abstract | claims | description |
---|---|---|
An apparatus is provided for low latency adaptive clocking, the apparatus comprising: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a bias generator coupled to the voltage divider and the third power supply rail; an oscillator coupled to the bias generator and the first power supply rail; and a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail. |
1. An apparatus comprising: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a bias generator coupled to the voltage divider and the third power supply rail; an oscillator coupled to the bias generator and the first power supply rail; and a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail.
2. The apparatus of claim 1, wherein the bias generator comprises an amplifier coupled to the first power supply rail.
3. The apparatus of claim 1, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.
4. The apparatus of claim 3, wherein the voltage regulator comprises a low dropout circuit.
5. The apparatus of claim 1, wherein the oscillator is a voltage controlled oscillator.
6. The apparatus of claim 1, wherein the oscillator comprises an LC oscillator.
7. The apparatus according to any one of claims 1 to 6, comprising a phase frequency detector coupled to the first power supply rail, wherein the phase frequency detector is to receive a reference clock and a feedback clock as inputs, and to generate one or more outputs indicating a phase difference between the reference clock and the feedback clock.
8. The apparatus of claim 7, comprising a frequency divider coupled to the oscillator and the phase frequency detector, wherein the frequency divider is to divide the output of the oscillator and to provide the feedback clock, and wherein the frequency divider is coupled to the first power supply rail.
9. The apparatus of claim 1, wherein the voltage divider comprises one or more programmable resistive devices.
10. The apparatus of claim 1, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the bias generator, such that an output of the bias generator modulates a frequency of the oscillator according to the injected sensed noise.
11. An apparatus comprising: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a digital loop filter coupled to the voltage divider and the third power supply rail; an oscillator coupled to the digital loop filter and the third power supply rail; a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a time-to-digital converter (TDC) coupled to the digital loop filter, wherein the TDC is coupled to the first power supply rail.
12. The apparatus of claim 11, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.
13. The apparatus of claim 12, wherein the voltage regulator comprises a low dropout circuit.
14. The apparatus of claim 11, wherein the oscillator comprises a digitally controlled oscillator.
15. The apparatus according to any one of claims 11 to 14, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the digital loop filter or the oscillator, such that a frequency of the oscillator is modulated by the sensed noise.
16. A system comprising: a memory; a processor coupled to the memory, the processor including: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a bias generator coupled to the voltage divider and the third power supply rail; an oscillator coupled to the bias generator and the first power supply rail; and a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a wireless interface to allow the processor to communicate with another device.
17. The system of claim 16, wherein the voltage divider comprises one or more programmable resistive devices.
18. The system of claim 16, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the bias generator, such that an output of the bias generator modulates a frequency of the oscillator according to the injected sensed noise.
19. The system according to any one of claims 16 to 18, wherein the oscillator comprises one of: a voltage controlled oscillator; or an LC oscillator.
20. A system comprising: a memory; a processor coupled to the memory, the processor including: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a digital loop filter coupled to the voltage divider and the third power supply rail; an oscillator coupled to the digital loop filter and the third power supply rail; a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a time-to-digital converter (TDC) coupled to the digital loop filter, wherein the TDC is coupled to the first power supply rail; and a wireless interface to allow the processor to communicate with another device.
21. The system of claim 20, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.
22. The system according to any one of claims 20 or 21, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the digital loop filter or the oscillator, such that a frequency of the oscillator is modulated by the sensed noise. |
Low latency adaptive clocking

Priority claim

This application claims priority to US Provisional Application Serial No. 62/562,335, entitled "Low latency analog adaptive clocking," filed on September 22, 2017, which application is incorporated by reference in its entirety.

Background

A clock signal can be generated by a phase-locked loop (PLL). Clock signals can be distributed throughout a processor to facilitate its operation. For example, state elements (eg, flip-flops, latches, etc.) located at different points on the processor die can operate synchronously by operating according to a clock signal. When a large, sudden current demand occurs, the voltage supply provided to the state elements on the die may "droop" for a short time (eg, a few nanoseconds) while the PLL continues to generate a clock signal at a fixed frequency. Note that other voltage droop events may last even longer. To ensure that the processor operates through these droop events, a high voltage margin is provided for the state elements even during normal operation (for example, when there is no voltage droop). That is, the processor is designed to operate at the highest specified frequency and, at the same time, at the lowest potential voltage. Because power has a quadratic dependence on voltage, a large amount of power may be wasted during normal operation to ensure functionality during infrequent voltage droops. Moreover, as processor speed and integration increase, the amount of power required may become a limiting factor. For example, the cost of designing and cooling processors that consume large amounts of power may become impractical.

Existing analog PLLs implement adaptive frequency scaling (AFS) to compensate for power supply voltage droop and overshoot. One such AFS technique is described in US Patent No. 6,922,111. Current analog implementations of the AFS technique directly modulate the VCO supply through resistive coupling to the digital power supply. Current analog implementations do not realize the full benefits of the AFS technique at lower voltages and frequencies.

Brief description of the drawings

The embodiments of the present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure; these embodiments, however, should not be taken to limit the disclosure to the specific embodiments, and are for explanation and understanding only.

FIG. 1 illustrates an analog phase-locked loop (PLL) with adaptive frequency scaling (AFS) applied to a bias generator, according to some embodiments of the present disclosure.

FIG. 2 illustrates an apparatus according to some embodiments, showing the bias generators and a delay stage of an oscillator, wherein the bias generators operate on the power supply provided by AFS.

FIGS. 3A-3B illustrate plots showing the improvement in timing margin using AFS for the bias generator, according to some embodiments.
FIG. 4 illustrates a digital PLL (DPLL) with AFS applied to a digitally controlled oscillator (DCO) and/or a loop filter, according to some embodiments of the present disclosure.

FIG. 5 illustrates a smart device or computer system or SoC (system on chip) with AFS for the bias generator and/or for the digital loop filter (DLF) and DCO, according to some embodiments of the present disclosure.

Detailed description

Relative to other techniques that trade off response time against the range of supported supply levels, the embodiments enable adaptive frequency scaling (AFS) to be used down to lower distribution supply voltages with an almost instantaneous response time. A significant amount of performance is otherwise left unused because supply droop slows clocks and data paths by different amounts. AFS addresses this problem by slowing down the PLL in response to any droop sensed on the noisy supply rail VccDist used by the clock distribution and data paths.

One AFS implementation in an analog PLL uses a voltage divider between the noisy distribution supply VccDist and the regulated PLL supply VccPLL to inject some of the noise on VccDist onto the supply of the voltage controlled oscillator (VCO) to effect a frequency change. This provides a very fast response time, but, due to the headroom requirements of the VCO, the range of VccDist over which AFS can be employed is limited to above a threshold voltage (eg, about 0.85V).

Various embodiments have many technical effects. For example, the low latency adaptive clocking apparatus of various embodiments uses AFS for clock-data compensation to minimize (or reduce) the frequency guard band. A voltage divider (of AFS) is used to sense the noise on VccDist and inject it onto the bias. The noise on the bias modulates the output frequency of the VCO clock (the output of the VCO). Thus, an adaptive clocking scheme with lower latency is obtained without using a traditional droop detector and its associated circuits. Other technical effects will be apparent from the drawings and embodiments.

In the following description, numerous details are discussed to provide a more thorough explanation of the embodiments of the present disclosure. It will be apparent, however, to those skilled in the art that the disclosure may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring the embodiments of the disclosure.

Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends, to indicate the primary direction of information flow. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or logic unit.
Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction, and may be implemented with any suitable type of signal scheme.

Throughout the specification, and in the claims, the term "connected" means a direct connection, such as an electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" includes plural references. The meaning of "in" includes "in" and "on."

The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology, and subsequently being reduced in layout area. The term "scaling" also generally refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to the adjustment (eg, slowing down or speeding up - ie, scaling down or scaling up, respectively) of a signal frequency relative to another parameter, for example, the power supply level. The terms "substantially," "close," "approximately," "near," and "about" generally refer to being within +/- 10% of a target value.

Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

For the purposes of the present disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

In the description and in the claims, the terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions.

For the purposes of the embodiments, the transistors in the various circuits and logic blocks described here are metal oxide semiconductor (MOS) transistors or derivatives thereof, where the MOS transistors include drain, source, gate, and bulk terminals. The transistors and/or MOS transistor derivatives also include tri-gate and fin field-effect transistors, gate-all-around cylindrical transistors, tunneling FETs (TFETs), square wire or rectangular ribbon transistors, ferroelectric FETs (FeFETs), or other devices implementing transistor functionality, such as carbon nanotube or spintronic devices. The source and drain terminals of a MOSFET are symmetrical, ie, they are identical terminals and are used interchangeably here. A TFET device, on the other hand, has asymmetric source and drain terminals. Those skilled in the art will appreciate that other transistors, for example, bipolar junction transistors (BJT PNP/NPN), BiCMOS, CMOS, etc., may be used without departing from the scope of the disclosure.
It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such.

FIG. 1 illustrates an apparatus 100 according to some embodiments of the present disclosure, the apparatus 100 comprising an analog PLL with AFS applied to a bias generator. The analog PLL includes a phase frequency detector (PFD) 101, a charge pump (CP) 102, a low-pass filter (LPF) comprising capacitor C1, bias generators (eg, N-bias generator (Nbias Gen.) 103 and P-bias generator (Pbias Gen.) 104), and a voltage controlled oscillator (VCO) 105. The output VCOClk of the VCO 105 is divided down by a certain ratio by the frequency divider 106, and the divided output FbClk (feedback clock) is received by the PFD 101, which compares the phase and frequency of the feedback clock FbClk with the phase and frequency of the reference clock RefClk. Accordingly, the PFD 101 generates up/down signals for the CP 102. The output v1 of the CP 102 is filtered using capacitor C1 and provided as input to the N-bias generator 103. The N-bias generator 103 generates Nbias (N bias) to bias another bias generator, referred to as the P-bias generator (Pbias Gen.) 104. The N-bias generator 103 also generates a version of v1, referred to as Vctrl. Nbias is used by the P-bias generator 104, which uses Nbias to generate Pbias (P bias). This Pbias is provided to the VCO 105 to control the oscillation frequency of the VCO 105 according to the output of the PFD 101. Like Vctrl (the control voltage), Pbias is also used to adjust the frequency of VCOClk.

In some embodiments, the apparatus 100 includes a voltage generator 111 (eg, a DC-DC converter, a low dropout (LDO) regulator, etc.) that uses an input power supply to provide the power VccPLL for the analog PLL. In some embodiments, AFS 112 comprises a voltage divider, as shown, with programmable and/or fixed resistances R1, R2, and R3. The outputs of the AFS are VccAFS and VccDist. The percentage of the noise on VccDist that is added to VccAFS depends on the ratio of the resistors R1, R2, and R3 of the AFS voltage divider.

In some embodiments, the apparatus 100 includes a clock distribution (Clk Distr.) 107 network (eg, flip-flops and buffers/inverters) that receives the output VCOClk, or a buffered version of it, and drives the buffered version to other locations on the chip. In some embodiments, power from a power generator or source (eg, DC-DC converter, LDO regulator 111) supplies VccPLL to power the P-bias generator 104 and the VCO 105, while the N-bias generator 103 is powered by VccAFS, and the clock distribution (Clk Distr.) 107 network is powered by VccDist. In this example, the data path comprising flip-flop 108, combinational logic (CL) 109, and flip-flop 110 is also powered by VccDist. The input data din is sampled, and the output data dout is provided, using the clock from the clock distribution 107.

In some embodiments, the supply noise is injected (using a voltage divider such as AFS 112 between VccPLL and VccDist) onto the supply of the N-bias generator instead of the VCO. In some embodiments, the modulation of Nbias directly affects the frequency of the VCO. Since the N-bias generator 103 consumes much less current than the VCO, it has less stringent headroom requirements and can support AFS down to lower voltages (eg, less than 0.85V).
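To make the divider ratio above concrete, here is a minimal sketch, assuming a simple three-resistor model in which VccAFS is the tap between R1 (to VccPLL) and R2 + R3 in series (to VccDist); the exact topology of FIG. 1 is not reproduced here, so this is an illustration under stated assumptions, not the patented circuit.

```python
# Minimal sketch of the AFS voltage-divider noise injection described above.
# Assumption: VccAFS is the tap of a resistive divider, with R1 connected to
# the quiet VccPLL rail and R2 + R3 connected to the noisy VccDist rail. It
# only illustrates how the R1:R2:R3 ratio sets the fraction of VccDist noise
# that appears on VccAFS.

def vcc_afs(vcc_pll, vcc_dist, r1, r2, r3):
    """Superposition of the two rails at the divider tap."""
    k = r1 / (r1 + r2 + r3)      # coupling coefficient from VccDist
    return (1 - k) * vcc_pll + k * vcc_dist

nominal = vcc_afs(1.0, 0.9, r1=2e3, r2=1e3, r3=1e3)
drooped = vcc_afs(1.0, 0.9 - 0.05, r1=2e3, r2=1e3, r3=1e3)  # 50 mV droop on VccDist
print(nominal - drooped)  # injected fraction of the droop (here 50%, ie 25 mV)
```

With programmable resistances, changing R1 relative to R2 + R3 tunes how aggressively VccDist droop is coupled onto the bias supply, which is the "strongest AFS setting" notion used in the plots of FIGS. 3A-3B.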
The embodiments here do not use a droop detector, which may add latency (eg, about 200-500 picoseconds), and thus provide a nearly instantaneous response to supply noise. In some embodiments, AFS 112 helps maintain or increase the timing margin in a path by slowing down the clock in response to voltage droop on the clock distribution / data path supply (VccDist). For example, analyzing the total margin in a timing path for different VccDist levels provides an indication of the lowest VccDist level at which AFS can be used. In some embodiments, both the N-bias generator and P-bias generator blocks/circuits are powered by VccAFS, and the VCO is powered by VccPLL.

Here, according to some embodiments, the AFS 112 is used for clock-data compensation to minimize (or reduce) the frequency guard band. The voltage divider (of AFS) is used to sense the noise on VccDist and inject it onto Nbias. The noise on Nbias modulates the output frequency of the VCO clock (the output of the VCO). In some embodiments, the VCO 105 is an inductor-capacitor (LC) oscillator (LCO). In an LCO, the frequency of VCOClk is adjusted by switching in and out a variable number of small capacitors, by a reference voltage and/or by using coarse and/or fine codes. These coarse and/or fine codes can be generated by converting Pbias (or Vctrl) into digital codes (eg, coarse and/or fine codes) for the varactors of the LCO.

FIG. 2 illustrates an apparatus 200 according to some embodiments, showing the bias generators 103 and 104 and a delay stage of the VCO 105, wherein the bias generators 103 and 104 operate on the power supply provided by the AFS 112. In this example, both the N-bias generator 103 and the P-bias generator 104 are powered by VccAFS, while the VCO 105 (one delay cell of which is shown here) is powered by VccPLL.

The N-bias generator 103 includes an amplifier 103a, a p-type device MP1, and n-type devices MN1 and MN2 coupled together as shown. The input v1 is received by the amplifier 103a, which adjusts the drive strength of transistor MN2 so that Vctrl and v1 are substantially equal. Transistor MN1 is biased by VccAFS. Transistor MP1 is diode-connected and provides Vctrl. In some embodiments, the entire circuit and devices of the N-bias generator 103 are powered by VccAFS. In some embodiments, the amplifier 103a is powered by VccPLL and the other devices are powered by VccAFS. The N-bias generator 103 provides one or two outputs - Vctrl and Nbias. Nbias is used to bias the n-type devices of subsequent circuits.

The P-bias generator 104 includes p-type transistor MP2 and n-type transistors MN3 and MN4 coupled together as shown. The circuit architecture of the P-bias generator 104 is similar to that of the N-bias generator 103, minus the amplifier 103a and its associated circuits. Transistor MN4 is biased by Nbias, transistor MN3 is biased by VccAFS, and transistor MP2 is diode-connected and powered by VccAFS.

Here, one delay stage of the VCO 105 is illustrated. Those skilled in the art will appreciate that multiple delay stages are coupled together in a ring to form an oscillator. The delay stage includes p-type transistors MP3, MP4, MP5, and MP6 coupled together as shown, and n-type devices MN5, MN5b, and MN6. The output of the delay stage is the differential output Out and Outb. Transistor MN6 is biased by Nbias, and Vctrl or Pbias is used to bias transistors MP4 and MP5.
Each delay stage receives the outputs (eg, differential outputs) of its neighboring delay stage as inputs In and Inb. In various embodiments, the delay stages of the VCO 105 are powered by VccPLL. In some embodiments, the N-bias generator 103 is powered by VccAFS and the P-bias generator 104 is powered by VccPLL. In some embodiments, the P-bias generator 104 is powered by VccAFS and the N-bias generator 103 is powered by VccPLL. In some embodiments, the amplifier 103a of the N-bias generator 103 circuit is powered by VccPLL.

FIGS. 3A-3B illustrate plots 300 and 320, respectively, showing the timing margin improvement using AFS for the bias generator, according to some embodiments. The results in the plots compare the timing margin in a given path between the AFS scheme on the VCO and the AFS scheme on the N-bias generator, at the strongest AFS setting (eg, the setting that injects the most noise). The results indicate that injecting noise into the supply of the VCO may not be usable below 0.85V, because the margin gradually approaches 0ps, while the proposed scheme of the various embodiments (for example, at the strongest AFS setting) can be used down to 0.8V or lower with sufficient timing margin. This allows the proposed scheme of the various embodiments to be used at much lower voltages as well.

FIG. 4 illustrates an apparatus 400 according to some embodiments of the present disclosure, which includes a digital PLL with AFS applied to a digitally controlled oscillator (DCO) 405 and/or a loop filter. A digital PLL, as opposed to an analog PLL, mainly uses digital circuits and signals to control clock frequency generation and retention.

Here, the term "analog signal" means any continuous signal for which the time-varying feature (variable) of the signal is a representation of some other time-varying quantity, ie, analogous to another time-varying signal. Here, the term "digital signal" means a physical signal that is a representation of a sequence of discrete values (a quantized discrete-time signal), for example of an arbitrary bit stream or of a digitized (sampled and analog-to-digital converted) analog signal.

In some embodiments, the digital PLL of apparatus 400 includes a time-to-digital converter (TDC) 401, a digital loop filter (DLF) 403, a digitally controlled oscillator (DCO) 405, and other circuits similar to those described with reference to FIG. 1. The TDC 401 receives RefClk and FbClk, and provides a digital stream as output TDCOut, a digital representation indicating the phase difference between RefClk and FbClk. The TDC may include a delay line having multiple delay stages (for example, buffers or inverters), where the output of each delay stage (and the input of the first delay stage) is sampled by a flip-flop using the reference clock as the sampling clock. The input to the first delay stage of the delay line is FbClk. Thus, FbClk is periodically sampled by RefClk. The outputs of the flip-flops are then combined to provide the digital stream TDCOut. TDCOut is then received by the DLF 403, which uses a filter equation to filter out any noise in TDCOut. The filter can be implemented using any suitable digital filter, such as a finite impulse response (FIR) filter or an infinite impulse response (IIR) filter.
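For illustration, the following is a minimal behavioral sketch of the TDC sampling and loop filtering just described; the stage delay, delay-line length, thermometer-code readout, and first-order IIR coefficient are all assumptions made for the sketch, not the actual circuit parameters.

```python
# Behavioral sketch of the TDC 401 and DLF 403 described above (an assumed
# model for illustration, not the actual circuit). FbClk propagates down a
# delay line; flip-flops sample every tap on the RefClk edge, producing a
# thermometer code whose 1->0 boundary encodes the FbClk-to-RefClk phase error.

STAGE_DELAY = 25e-12   # assumed delay per buffer stage (25 ps)
NUM_STAGES = 32        # assumed delay-line length

def tdc_sample(fbclk_edge_time, refclk_edge_time):
    """Return the thermometer code captured at the RefClk edge."""
    taps = []
    for i in range(NUM_STAGES + 1):  # tap 0 is the delay-line input
        arrival = fbclk_edge_time + i * STAGE_DELAY
        taps.append(1 if arrival <= refclk_edge_time else 0)
    return taps

def tdc_out(taps):
    """Collapse the thermometer code into a phase-error count (TDCOut)."""
    return sum(taps)

def dlf_filter(phase_errors, alpha=0.125):
    """First-order IIR smoothing, one possible 'filter equation' for DLF 403."""
    y = 0.0
    for e in phase_errors:
        y += alpha * (e - y)   # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        yield y

codes = [tdc_out(tdc_sample(0.0, t * 25e-12)) for t in (4, 5, 7, 6, 5)]
print(codes, list(dlf_filter(codes)))
```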
A controller (not shown as a separate circuit, but integrated in the DLF 403) generates the coarse code and fine code, which are control codes for changing the frequency of VCOClk from the DCO 405 by a large or a small amount, respectively. The DCO 405 may be any suitable digital oscillator, such as a delay line with adjustable loading (eg, capacitive loading) at the output of each delay stage of the delay line. These adjustable loads can be controlled by the coarse and/or fine codes (eg, added to or subtracted from the load). In some embodiments, the DCO 405 is an inductor-capacitor (LC) oscillator (LCO). In an LCO, the frequency of VCOClk is adjusted by switching in and out a variable number of small capacitors using the coarse and/or fine codes.

In some embodiments, VccPLL is used to power the TDC 401 (just as the PFD 101 of the analog PLL of FIG. 1 is powered by VccPLL). In some embodiments, the clock distribution is powered by VccDist. In some embodiments, the DLF 403 and DCO 405 are powered by VccAFS. Thus, the coarse and fine codes (digital signals) are adjusted according to the noise injected onto VccAFS. This noise then adjusts the frequency of the DCO in a manner that has a low-latency effect. In some embodiments, the noise on the DCO 405 supply modulates its frequency. In some embodiments, using VccAFS to supply the DLF 403 and DCO 405 removes the need for level shifters. A level shifter adds latency, and that penalty is removed here. In various embodiments, the AFS 112 (eg, voltage divider) senses the noise on VccDist and injects the sensed noise onto the DLF 403 via VccAFS. The sensed noise is then converted into the coarse and/or fine codes that are the output of the DLF 403. Thus, the frequency of VCOClk is modulated by the sensed noise.

FIG. 5 illustrates a smart device or computer system or SoC (system on chip) 1600 having an apparatus for low-latency adaptive clocking, according to an embodiment of the present disclosure. The apparatus for low-latency adaptive clocking may include the analog PLL based clocking architecture of FIG. 1 or the digital PLL based clocking architecture of FIG. 4. As described with reference to the various embodiments, by providing filtered supplies to the components of the PLL, the droop detector circuit is removed, allowing a quick response to any droop of the input supply.

FIG. 5 illustrates a block diagram of an embodiment of a mobile device in which flat surface interface connectors could be used. In some embodiments, the computing device 1600 represents a mobile computing device, such as a computing tablet, mobile phone or smartphone, wireless-enabled e-reader, or other wireless mobile device. It will be understood that certain components are shown generally, and not all components of such a device are shown in the computing device 1600.

In some embodiments, the computing device 1600 includes a first processor 1610 having an apparatus for low-latency adaptive clocking, according to some embodiments discussed. Other blocks of the computing device 1600 may also include an apparatus for low-latency adaptive clocking, according to some embodiments.
The various embodiments of the present disclosure may also comprise a network interface within 1670, such as a wireless interface, so that a system embodiment may be incorporated into a wireless device, for example a cell phone or personal digital assistant.

In some embodiments, the processor 1610 (and/or processor 1690) can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by the processor 1610 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user and/or with other devices, operations related to power management, and/or operations related to connecting the computing device 1600 to another device. The processing operations may also include operations related to audio I/O and/or display I/O.

In some embodiments, the computing device 1600 includes an audio subsystem 1620, which represents hardware (eg, audio hardware and audio circuits) and software (eg, drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, as well as microphone input. Devices for such functions can be integrated into the computing device 1600, or connected to the computing device 1600. In one embodiment, a user interacts with the computing device 1600 by providing audio commands that are received and processed by the processor 1610.

In some embodiments, the computing device 1600 includes a display subsystem 1630. The display subsystem 1630 represents hardware (eg, display devices) and software (eg, drivers) components that provide a visual and/or tactile display for a user to interact with the computing device 1600. The display subsystem 1630 includes a display interface 1632, which includes the particular screen or hardware device used to provide a display to a user. In one embodiment, the display interface 1632 includes logic separate from the processor 1610 to perform at least some processing related to the display. In one embodiment, the display subsystem 1630 includes a touch screen (or touch pad) device that provides both output and input to a user.

In some embodiments, the computing device 1600 includes an I/O controller 1640. The I/O controller 1640 represents hardware devices and software components related to interaction with a user. The I/O controller 1640 is operable to manage hardware that is part of the audio subsystem 1620 and/or display subsystem 1630. Additionally, the I/O controller 1640 illustrates a connection point for additional devices that connect to the computing device 1600, through which a user might interact with the system. For example, devices that can be attached to the computing device 1600 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, the I/O controller 1640 can interact with the audio subsystem 1620 and/or the display subsystem 1630. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of the computing device 1600. Additionally, audio output can be provided instead of, or in addition to, display output.
In another example, if the display subsystem 1630 includes a touch screen, the display device also acts as an input device, which can be at least partially managed by the I/O controller 1640. There can also be additional buttons or switches on the computing device 1600 to provide I/O functions managed by the I/O controller 1640.

In some embodiments, the I/O controller 1640 manages devices such as accelerometers, cameras, light sensors, or other environmental sensors, or other hardware that can be included in the computing device 1600. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In some embodiments, the computing device 1600 includes power management 1650, which manages battery power usage, charging of the battery, and features related to power saving operation. The memory subsystem 1660 includes memory devices for storing information in the computing device 1600. Memory can include nonvolatile (the state does not change if power to the memory device is interrupted) and/or volatile (the state is indeterminate if power to the memory device is interrupted) memory devices. The memory subsystem 1660 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of the computing device 1600.

Elements of the embodiments are also provided as a machine-readable medium (eg, memory 1660) for storing computer-executable instructions (eg, instructions to implement any of the processes discussed here). The machine-readable medium (eg, memory 1660) may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, phase change memory (PCM), or other types of machine-readable media suitable for storing electronic or computer-executable instructions. For example, embodiments of the disclosure may be downloaded as a computer program (eg, BIOS) that may be transferred from a remote computer (eg, a server) to a requesting computer (eg, a client) by way of data signals via a communication link (eg, a modem or network connection).

In some embodiments, the computing device 1600 includes connectivity 1670. The connectivity 1670 includes hardware devices (eg, wireless and/or wired connectors and communication hardware) and software components (eg, drivers, protocol stacks) to enable the computing device 1600 to communicate with external devices. The external devices can be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices.

The connectivity 1670 can include multiple different types of connectivity. To generalize, the computing device 1600 is illustrated with a cellular connectivity 1672 and a wireless connectivity 1674. The cellular connectivity 1672 refers generally to cellular network connectivity provided by wireless carriers, such as via GSM (Global System for Mobile Communications) or variations or derivatives, CDMA (Code Division Multiple Access) or variations or derivatives, TDM (Time Division Multiplexing) or variations or derivatives, or other cellular service standards.
The wireless connectivity (or wireless interface) 1674 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth, near field, etc.), local area networks (such as Wi-Fi), and/or wide area networks (such as WiMax), or other wireless communication.

In some embodiments, the computing device 1600 includes peripheral connections 1680. The peripheral connections 1680 include hardware interfaces and connectors, as well as software components (eg, drivers, protocol stacks), to make peripheral connections. It will be understood that the computing device 1600 could both be a peripheral device ("to" 1682) to other computing devices, as well as have peripheral devices ("from" 1684) connected to it. The computing device 1600 commonly has a "docking" connector to connect to other computing devices, for purposes such as managing (eg, downloading and/or uploading, changing, synchronizing) content on the computing device 1600. Additionally, a docking connector can allow the computing device 1600 to connect to certain peripherals that allow the computing device 1600 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, the computing device 1600 can make peripheral connections 1680 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), FireWire, or other types.

Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily in all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic "may," "might," or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the elements. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional elements.

Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.

While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description.
The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims.

In addition, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure. Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to the implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (ie, such specifics should be well within the purview of one skilled in the art). Where specific details (eg, circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments. All optional features of the apparatus described herein may also be implemented with respect to a method or process.

Example 1. An apparatus comprising: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a bias generator coupled to the voltage divider and the third power supply rail; an oscillator coupled to the bias generator and the first power supply rail; and a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail.

Example 2. The apparatus of example 1, wherein the bias generator includes an amplifier coupled to the second power supply rail.

Example 3. The apparatus of example 1, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.

Example 4. The apparatus of example 3, wherein the voltage regulator includes a low dropout circuit.

Example 5. The apparatus of example 1, wherein the oscillator is a voltage controlled oscillator.

Example 6. The apparatus of example 1, comprising a phase frequency detector coupled to the first power supply rail, wherein the phase frequency detector is to receive a reference clock and a feedback clock as inputs, and to generate one or more outputs indicating a phase difference between the reference clock and the feedback clock.

Example 7. The apparatus of example 6, comprising a frequency divider coupled to the oscillator and the phase frequency detector, wherein the frequency divider is to divide the output of the oscillator and to provide the feedback clock, and wherein the frequency divider is coupled to the first power supply rail.

Example 8. The apparatus of example 1, wherein the voltage divider includes one or more programmable resistive devices.

Example 9.
The apparatus of example 8, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the bias generator, such that the output of the bias generator adjusts the frequency of the oscillator according to the injected sensed noise.

Example 10. An apparatus comprising: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a digital loop filter coupled to the voltage divider and the third power supply rail; an oscillator coupled to the digital loop filter and the third power supply rail; a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a time-to-digital converter (TDC) coupled to the digital loop filter, wherein the TDC is coupled to the first power supply rail.

Example 11. The apparatus of example 10, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.

Example 12. The apparatus of example 11, wherein the voltage regulator includes a low dropout circuit.

Example 13. The apparatus of example 10, wherein the oscillator comprises a digitally controlled oscillator.

Example 14. The apparatus of example 10, wherein the oscillator includes an LC oscillator.

Example 15. A system comprising: a memory; a processor coupled to the memory, the processor including: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a bias generator coupled to the voltage divider and the third power supply rail; an oscillator coupled to the bias generator and the first power supply rail; and a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a wireless interface to allow the processor to communicate with another device.

Example 16. The system of example 15, wherein the voltage divider includes one or more programmable resistive devices.

Example 17. The system of example 15, wherein the voltage divider is to sense noise on the second power supply rail and to inject the sensed noise onto the bias generator, such that the output of the bias generator modulates the frequency of the oscillator according to the injected sensed noise.

Example 18.
A system comprising: a memory; a processor coupled to the memory, the processor including: a first power supply rail to provide a first power; a second power supply rail to provide a second power; a third power supply rail to provide a third power; a voltage divider coupled to the first, second, and third power supply rails; a digital loop filter coupled to the voltage divider and the third power supply rail; an oscillator coupled to the digital loop filter and the third power supply rail; a clock distribution network to provide an output of the oscillator to one or more logics, wherein the clock distribution network is coupled to the second power supply rail; and a time-to-digital converter (TDC) coupled to the digital loop filter, wherein the TDC is coupled to the first power supply rail; and a wireless interface to allow the processor to communicate with another device.

Example 19. The system of example 18, comprising a voltage regulator coupled to the first power supply rail, wherein the voltage regulator is to provide the first power to the first power supply rail.

Example 20. The system of example 18, wherein the oscillator includes one of: a digitally controlled oscillator; or an LC oscillator.

An Abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The appended claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. |
The present invention provides a partition-free multi-socket memory system architecture: a technique to increase memory bandwidth for throughput applications. In one embodiment, memory bandwidth can be increased, particularly for throughput applications, without increasing interconnect traces or pin count, by pipelining pages between one or more memory storage areas on half cycles of a memory access clock. |
1. An apparatus comprising: at least two processors coupled to at least two memories, wherein, in a first portion of a clock signal period, a first of the at least two processors is to read a first portion of data stored in a first of the at least two memories and a second portion of the data stored in a second of the at least two memories, and wherein, in the first portion of the clock signal period, a second of the at least two processors is to read a third portion of the data stored in the first of the at least two memories and a fourth portion of the data stored in the second of the at least two memories.
2. The apparatus of claim 1, further comprising a first buffer coupled to the first memory to store the first and third portions of the data after they have been read from the first memory.
3. The apparatus of claim 2, further comprising a second buffer coupled to the second memory to store the second and fourth portions of the data after they have been read from the second memory.
4. The apparatus of claim 3, wherein the first processor is to read the first portion of the data from a first portion of the first buffer and the second portion of the data from a first portion of the second buffer.
5. The apparatus of claim 4, wherein the second processor is to read the third portion of the data from a second portion of the first buffer and the fourth portion of the data from a second portion of the second buffer.
6. The apparatus of claim 1, further comprising an interconnect coupled to at least the first and second processors to communicate page status information corresponding to at least the first and second memories.
7. The apparatus of claim 1, wherein the first, second, third, and fourth portions of the data each have the same bit width.
8. The apparatus of claim 1, wherein at least the first and second processors are to perform three-dimensional (3D) graphics operations.
9. The apparatus of claim 1, wherein the first portion of the clock signal period is half of the clock signal period.
10. The apparatus of claim 1, wherein the first portion of the clock signal period is one clock period.
11. A processor comprising: first logic to provide page status information to a second processor, wherein the page status information includes whether a first page of a first memory is to be closed, and wherein, if the second processor indicates that the second processor is to access information from the first page, the first logic is to prevent the first page from being closed.
12. The processor of claim 11, further comprising execution logic to execute a single instruction multiple data (SIMD) instruction.
13. The processor of claim 11, wherein the page status information is to be communicated between the first and second processors via a dedicated interconnect.
14. The processor of claim 11, further comprising second logic to receive page status information from the second processor, wherein the page status information includes whether a second page of a second memory is to be closed, and wherein, if the processor is to access information from the second page, the second processor is to prevent the second page from being closed.
15. The processor of claim 14, wherein the processor and the second processor are to access information from the first and second memories, respectively, in parallel.
16. The processor of claim 14, further comprising third logic to cause a third page to be opened in a third memory if the processor or the second processor is to access information in the third page.
17. The processor of claim 11, further comprising three-dimensional (3D) graphics rendering logic.
18. The processor of claim 17, wherein the second processor includes 3D graphics rendering logic.
19. A system comprising: a plurality of processors coupled to a plurality of memories, wherein each of the plurality of processors is to access each of the plurality of memories in parallel; and a plurality of interconnects coupled to the plurality of processors to communicate page state information among the plurality of processors.
20. The system of claim 19, further comprising a plurality of memory controllers coupled to each of the plurality of processors.
21. The system of claim 20, wherein the plurality of memory controllers are to route accesses from each of the plurality of processors to the plurality of memories.
22. The system of claim 19, wherein each processor is to access a 1/N-wide data word from each of the plurality of memories, where "N" corresponds to the number of the plurality of processors.
23. The system of claim 22, wherein each of the plurality of memories is coupled to a buffer to store data to be accessed in parallel by the plurality of processors.
24. The system of claim 23, wherein the buffer is to store 16 bits simultaneously.
25. A method comprising: opening a plurality of pages of memory, each page in a different memory; accessing data from each of the plurality of pages of memory and providing the data to a plurality of processors in parallel; requesting to close at least one of the plurality of pages of memory, wherein the request is from one of the plurality of processors that does not control the at least one page of memory to another of the plurality of processors that does control the at least one page of memory; and granting the request to close the at least one of the plurality of pages of memory if no other processor of the plurality of processors is accessing the at least one of the plurality of pages of memory.
26. The method of claim 25, further comprising communicating an indication of the request to the plurality of processors.
27. The method of claim 26, wherein the indication is communicated to the plurality of processors through a plurality of dedicated interconnects coupled to the plurality of processors.
28. The method of claim 27, wherein the plurality of processors include a plurality of memory controllers to access the data from the plurality of memories.
29. The method of claim 27, wherein the plurality of memories include a plurality of buffers to temporarily store the data until it is accessed by the plurality of processors.
30. The method of claim 25, wherein the plurality of processors are graphics processors. |
Partition-free multi-socket memory system architecture

Technical field

Embodiments of the present invention relate generally to the field of information processing and, more specifically, to the field of multi-socket memory interfaces.

Background

As more applications continue to take advantage of the parallel processing capabilities of multiprocessing systems and microprocessors, the need for greater memory bandwidth keeps growing. Parallel applications may include graphics applications, financial applications, medical and biotechnology applications, or any other application that involves operating on large sets of data concurrently, for example through single instruction multiple data (SIMD) instructions. To some degree, more traditional, sequential central processing unit (CPU) workloads may also require or otherwise benefit from greater memory bandwidth and data bus sizes, depending on the sizes of the data structures they operate on.

For example, a graphics application may perform texturing operations or other special effects on multiple pixels of one or more polygons in parallel in order to render a three-dimensional (3D) graphics scene. The sizes of some textures or other large data structures may require, or otherwise create a need for, high bandwidth from one or more processors to one or more memory storage areas (eg, DRAM) in order to retrieve and store this data quickly. Some prior-art techniques have attempted to provide greater memory bandwidth by increasing the number of pins or bus traces from one or more processors or processing cores to one or more memories. Increasing interconnect widths, such as off-package bus widths, to increase bandwidth adversely affects system cost and can limit the applicability of the system to more general-purpose computing systems.

In some prior-art techniques, increased memory bandwidth can be achieved by adding more data pins to the package and/or increasing the bandwidth of each data pin (correspondingly increasing the switching frequency). However, there are practical (eg, economic) limits to increasing bandwidth by increasing the bus width (eg, by adding more pins) and/or increasing the bus frequency.

To further increase system bandwidth, some prior-art techniques may use multiple processors with a corresponding memory allocated to each processor. This creates a pairing between a processor and its allocated memory, which are typically interconnected by a high-bandwidth bus. The processor/memory pairs can then be interconnected via another bus, which may require additional pins but may not have the bandwidth to support sharing the data each processor obtains from its respective memory. Because it is difficult to share, in a convenient manner, information accessed by one processor from one memory with another processor, an application may attempt to partition the work it performs among the processor/memory pairs. Partitioning applications places a significant burden on application developers, because they must ensure that they are storing and accessing data in the correct processor/memory pair to avoid significant access latencies. Placing restrictions on applications, such as code/data partitioning, increases application development costs, inhibits portability, and prevents these applications from being more successful in the market.

Summary of the invention

An apparatus includes at least two processors coupled to at least two memories,
During a first portion of a clock signal period, a first of the at least two processors reads a first portion of data stored in a first of the at least two memories and a second portion of data stored in a second of the at least two memories; during the same first portion of the clock signal period, a second of the at least two processors reads a third portion of data stored in the first of the at least two memories and a fourth portion of data stored in the second of the at least two memories.

The present invention also provides a processor including first logic for providing page state information to a second processor, wherein the page state information includes whether a first page of a first memory is to be closed, and wherein, if the page state information indicates that the second processor is to access information from the first page, the first logic will prevent the first page from being closed.

The invention also provides a system comprising: a plurality of processors coupled to a plurality of memories, wherein each of the plurality of processors will access each of the plurality of memories in parallel; and a plurality of interconnects coupled to the plurality of processors to pass page state information between the plurality of processors.

The invention also provides a method comprising: opening a plurality of pages of memory, each page being in a different memory; accessing data from each of the plurality of pages of memory and providing the data to a plurality of processors in parallel; requesting to close at least one of the plurality of pages of memory, wherein the request is sent from one of the plurality of processors that does not control the at least one page of memory to another of the plurality of processors that does control the at least one page of memory; and granting the request to close the at least one of the plurality of pages of memory if no other processor of the plurality of processors is accessing the at least one page of memory.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the present invention are shown by way of example, and not limitation, in the figures of the accompanying drawings. Similar reference numerals in the drawings represent similar parts, wherein:

FIG. 1 shows a block diagram of a multi-processor system in which at least one embodiment of the present invention can be used;

FIG. 2 is a block diagram illustrating a two-socket system according to one embodiment, in which the memory controllers are external to their respective processors;

FIG. 3 is a block diagram showing a two-socket system according to one embodiment, in which the memory controllers are located inside their respective processors;

FIG. 4 illustrates a timing diagram corresponding to the two-socket systems of FIGS. 2 and 3 according to one embodiment;

FIG. 5 is a block diagram illustrating a four-socket system according to an embodiment;

FIG. 6 illustrates a timing diagram corresponding to the four-socket system of FIG. 5 according to one embodiment;

FIG. 7 is a flowchart of operations that can be used to perform at least one embodiment of the present invention;

FIG. 8 is a block diagram showing a configuration of a two-socket system in which at least one embodiment can be used;

FIG. 9 is a block diagram showing another configuration of a two-socket system in which at least one embodiment may be used;

FIG. 10 illustrates a shared interconnect system in which at least one embodiment may be used;

FIG. 11 illustrates a point-to-point interconnected computer system in which at least one embodiment of the present invention may be used; and
FIG. 12 illustrates a system in which one embodiment of the present invention may be used.

DETAILED DESCRIPTION

Embodiments of the invention relate to processing devices and systems, including those that can process parallel or "throughput" applications. Certain embodiments include at least two processing units (e.g., graphics processors) to process memory accesses on behalf of applications, such as 3D graphics applications, and at least two storage structures, such as DRAM devices, each coupled to the at least two processing units, wherein each of the at least two storage structures includes or is associated with one or more buffers that store information, the buffers having a storage width corresponding to the width of data to be read from each memory (e.g., 16 bits). In one embodiment, each buffer is divided, configurable in width, or otherwise coupled to two different processors (e.g., through their respective memory controllers), such that one part of each buffer (e.g., half of the buffer) stores data to be provided to one processor, while another part (e.g., the other half) is coupled to at least one other processor, so that each processor can access information from each memory. In one embodiment, the number of portions of the buffer can be configured based on the number of processors accessing data from it.

By providing each processor access to two or more storage structures, application software can store information in, and access information from, more than one storage structure, which gives software flexibility about where to store and access program data and other information. In addition, embodiments of the present invention not only allow software to access information from memory structures that do not correspond to a particular processor, but, while doing so, also maximize the memory interface bandwidth of each processor.

Embodiments of the present invention enable software applications to access and store information in multiple storage structures corresponding to multiple processors. In some cases, this can be useful when processing parallel instructions or applications that utilize single instruction multiple data (SIMD) or multiple instruction multiple data (MIMD) operations, because each SIMD or MIMD operation can access operand data elements from multiple memory structures regardless of the specific memory structure in which the data is located. This is useful for applications such as 3D graphics or financial applications that perform operations on large amounts of information simultaneously. However, it is also useful for some traditional, more sequential CPU applications, and for CPU applications that utilize information that may be stored in multiple different locations.

In some embodiments, where memory is organized or accessed according to segments such as "pages," a processor (or memory interface logic) that accesses the pages may maintain a structure (e.g., a "page table") to map the page size or organization of a particular memory to the page size or scheme of the processor or memory controller.
For example, in one embodiment in which a processor or memory controller can map physical pages of a particular memory to a set of virtual pages, the processor or memory controller can open or close these virtual pages in response to a program accessing the pages.

Because in some embodiments each processor or memory interface can access other memory structures that are controlled by, or otherwise correspond to, another processor's memory interface, some communication between the processors/memory controllers may be desirable in order to keep the page states (open/closed) of each processor or memory controller consistent. In one embodiment, an n-wide interconnect (where "n" can represent a variable number of channels/pins/lanes/traces, from 1 upward) can be used to pass page status between the processors and memory controllers, so that one processor does not close a page of memory that another processor may need to access. By passing page status between the processors or memory controllers that access one or more memories, unnecessary page open or close operations can be avoided, thereby improving access performance for each processor or memory controller. Moreover, in some embodiments, the n-wide interconnect may have relatively low bandwidth, so that excessive pins, power, or other resources are not required.
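The page-status exchange sketched above lends itself to a short illustration. The following C fragment is a minimal sketch, not the patented implementation; the names (page_state_t, request_page_close, NUM_PROCESSORS) are hypothetical, and a real design would implement the grant logic in hardware attached to the n-wide interconnect.

```c
#include <stdbool.h>

#define NUM_PROCESSORS   2
#define PAGES_PER_MEMORY 64

/* Per-controller view of one memory page, mirroring the open/closed
 * status passed between processors over the n-wide interconnect. */
typedef struct {
    bool open;                    /* page currently open in the DRAM   */
    bool in_use[NUM_PROCESSORS];  /* which processors are accessing it */
} page_state_t;

static page_state_t page_table[PAGES_PER_MEMORY];

/* A processor that does not control a page asks the controlling
 * processor to close it.  The request is granted only if no other
 * processor is still using the page, matching the grant condition
 * of the method described above. */
bool request_page_close(int requester, int page)
{
    for (int p = 0; p < NUM_PROCESSORS; p++) {
        if (p != requester && page_table[page].in_use[p])
            return false;          /* a peer still needs the page: deny */
    }
    page_table[page].open = false; /* grant: close the DRAM page        */
    return true;
}
```

Keeping this small table coherent is exactly what the low-bandwidth page-status interconnect is for: only open/close intents travel between sockets, never the data itself.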
Advantageously, embodiments of the present invention may allow applications to run on multiple processors regardless of the memory device in which the data is stored or is to be stored. This is especially useful in graphics applications where, for example, one graphics processor is rendering half of the screen's pixels and another graphics processor is rendering the other half. In this case, triangles that fall on the boundary may incur latency during filtering, because one processor will need to access adjacent texel information from one memory (texels corresponding to that processor's half of the screen) and the other processor will need to access adjacent texel information from the other memory (texels corresponding to the other processor's half of the screen). In this situation, a processor that needs information from a non-corresponding memory may need to request it through the corresponding processor, which would have to return the information to the requesting processor; this consumes bandwidth and would require a relatively high-bandwidth bus between the processors. Otherwise, software developers would have to place restrictions on where data is stored, which can be very difficult, especially when rendering triangles that cross the boundary.

A similar situation occurs when one processor is rendering a frame and another processor is rendering the next frame. In particular, special effects such as reflections sometimes rely on information from the immediately preceding frame. In this case, because information from the previous frame (corresponding to one processor/memory pair) is needed in the current frame (corresponding to the other processor/memory pair), the same latency problem as in the split-frame case exists. Embodiments of the present invention can handle these situations, such as split-frame rendering and alternate-frame rendering, without the bandwidth issues of some prior art, and the software does not need to know or care where the corresponding data is stored.

Handling these split-frame and alternate-frame rendering cases is possible in one embodiment because the processors automatically (without assistance from the OS or application) store information in alternating fashion between the memories in use (for example, one page of information at a time), and determine from the provided address which memory the data is to be accessed from.

In one embodiment, a page table maps the addresses provided by software to locations in the two memories corresponding to the two processors used to execute the throughput application. In particular, the page table uses bits of the address to access entries of the table, which contain the addresses of information stored in alternating locations in the two memories. Therefore, when software stores or accesses information, the page table automatically routes the access to the correct memory without requiring the software (OS or application) to understand or care where the information is actually stored. In this way, information can be accessed from either memory at burst speeds in alternating fashion, thereby maximizing the bandwidth of each processor's memory interface and avoiding the need for a relatively high-bandwidth bus to support interleaved memory/processor accesses.
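The alternating page mapping just described amounts to address interleaving at page granularity. The C sketch below illustrates the idea under stated assumptions (2 KB pages as in the example memory interface, two memories, a simple modulo placement); the names route_address and mem_target_t are hypothetical, and an actual page table could use an arbitrary lookup rather than a fixed modulo rule.

```c
#include <stdint.h>

#define PAGE_SHIFT   11u  /* 2 KB pages, as in the example interface         */
#define NUM_MEMORIES 2u   /* one memory per processor in the two-socket case */

/* Result of the lookup: which physical memory holds the page,
 * and the address within that memory. */
typedef struct {
    uint32_t memory;      /* 0 .. NUM_MEMORIES-1        */
    uint64_t local_addr;  /* address inside that memory */
} mem_target_t;

/* Route a software-visible address to the memory holding it.
 * Consecutive pages alternate between the memories, so burst
 * accesses draw on both memory interfaces at once. */
mem_target_t route_address(uint64_t addr)
{
    uint64_t page   = addr >> PAGE_SHIFT;
    uint64_t offset = addr & ((1u << PAGE_SHIFT) - 1u);
    mem_target_t t;
    t.memory     = (uint32_t)(page % NUM_MEMORIES);
    t.local_addr = ((page / NUM_MEMORIES) << PAGE_SHIFT) | offset;
    return t;
}
```

Because the routing is a pure function of the address, neither the OS nor the application has to know which physical memory a given page landed in.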
In some embodiments, multiple processors may provide data to a requesting application by managing requests in an efficient manner, for example by using consistency filtering. In one embodiment, a consistency filter may include one or more consistency tables, or other structures corresponding to or accessible by one or more processors, such that a request for data by an application running on one processor causes that processor to access a table indicating addresses of data that may currently be accessible by another processor (e.g., in the other processor's cache, buffer, or other structure, or in a currently open page of the other processor's corresponding memory). If the most recent version of the requested data resides in the cache of the other processor, the processor receiving the request may signal the other processor to return the requested data to the requesting application, or the processor receiving the request may retrieve the data from that processor through the n-wide inter-processor interconnect. In some embodiments, each processor may include multiple processors, in which case each processor may correspond to a processor socket.

In some embodiments, the techniques described above may be applied to systems or processors having two, four, eight, or more processors or cores. In addition, embodiments of the present invention can be applied to a number of different systems, processing configurations, or applications, including general-purpose computers, graphics game consoles, graphics card applications, and the like. In one embodiment, the techniques described herein involve one or more processors running 3D graphics or other applications such as financial applications, medical applications, imaging applications, and the like. In other embodiments, the techniques described herein may be used in conjunction with general-purpose CPUs running sequential or more traditional workloads. In still other embodiments, the techniques described herein can be used with hybrid processors designed to run both traditional CPU workloads and throughput applications, such as processors that include traditional CPU and graphics-specific logic ("CPU+GPU").

In one embodiment, the techniques described herein are used in conjunction with one or more processors having multiple CPU processor cores capable of executing SIMD instructions, coupled to an interconnect together with parallel application-specific logic (e.g., graphics texture sampling logic).

FIG. 1 illustrates a microprocessor in which at least one embodiment of the present invention may be used. FIG. 1 illustrates a processor that can be used for conventional CPU applications, throughput applications (e.g., 3D graphics applications), or a combination of the two. The processor 100 includes a plurality of processing cores 100-1 to 100-N, dedicated throughput application hardware 110 (for example, graphics texture sampling hardware), and memory interface logic 120, organized along a ring interconnect 130. In some embodiments, the processor 100 may include one or more last-level caches 135 that include information from the caches 101-1 to 101-N in each of the cores 100-1 to 100-N. In one embodiment, one or more of the processing cores 100-1 to 100-N are capable of performing SIMD operations.

In one embodiment, the memory controller may interface with a memory external to the processor 100, which may include DRAM, such as graphics DRAM 105. In one embodiment, the memory interface may have a certain width, such as 16 bits, and may access memory pages of a certain size (such as 2KB). In a system where more than one processor 100 can access one or more memories (e.g., DRAM) that are controlled by, or otherwise correspond to, another processor or memory controller, the processor 100 may also include logic 140 to send, receive, and process information to or from the other processors or memory controllers in order to maintain consistency of page states among the various processors accessing each memory. In one embodiment, the logic 140 may include registers or other storage areas, together with some control or decoding logic, used in combination with a page table to interpret the page status of the other processors or memory controllers that can access the same memory as the processor 100. The processor 100 may use this consistency information to decide whether to close a page of memory or open a new page of memory. In addition, the processor 100 may pass the state of certain pages of memory to the other processors or memory controllers that access the same memory.

In some embodiments, information such as graphics textures, or other information requiring a relatively large amount of memory bandwidth, can be accessed from another memory corresponding to another processor (not shown) without the application software having to know or care about the memory in which this information is stored. In one embodiment, the system's memory interface can aggregate its effective bandwidth by providing addresses to at least two memory storage structures, such as DRAMs or DRAM arrays (e.g., DIMMs), so that a first portion of the data width of the first memory is supplied to the first processor and a second portion of that data width is supplied to the second processor, while a first portion of the data width of the second memory is supplied to the first processor and a second portion of that data width is supplied to the second processor.

In some embodiments, the processor 100 may include more or fewer memory controllers than shown in FIG. 1. In addition, the memory controller of FIG. 1 may be inside the processor 100 or outside the processor 100.
For example, FIG. 2 is a block diagram illustrating a two-socket system, according to one embodiment, in which the memory controllers are external to their respective processors.

Specifically, FIG. 2 shows processors 200 and 205 coupled to respective memory controllers 210 and 215, which control the memories 220 and 225, respectively. As shown in FIG. 2, the processors 200 and 205 communicate with the memory controllers 210 and 215 through the interconnections 203, 207, 213, and 217, respectively. In addition, processors 200 and 205 pass page state information over link 208. In one embodiment, addresses are provided to the memories 220 and 225 and, in response, a data word is read from the addressed location of each memory into one or more buffers 230, 235, 240, and 245 located in the memory, near the memory, or in the memory controller. In one embodiment, the data word is 16 bits, but other sizes are possible depending on the width of the processor/memory controller/memory data bus. In one embodiment, the one or more buffers are organized into two parts (e.g., halves), such that while the processor 205 reads one half of one of the buffers 230, 235 corresponding to the memory controller 210 and one half of one of the buffers 240, 245 corresponding to the memory controller 215, the processor 200 can read the other half of that buffer of the memory controller 210 and the other half of that buffer of the memory controller 215.

In one embodiment, the buffer may be configurable into portions that correspond to the number of processors that may be accessing the memory corresponding to the buffer. For example, the buffer may be configurable into halves in a dual-processor system, into quarters in a four-processor system, and into eighths in an eight-processor system. In one embodiment, logic can be used to detect the number of processors accessing memory in the system and to divide the buffer automatically (dynamically) in response.

After one of the two buffers corresponding to each memory controller is read, the second buffer of each memory controller can be read in a similar manner on the next clock edge while, in one embodiment, the next data word is simultaneously read from memory into the previously read buffer of each of the memory controllers 210 and 215. This process can continue indefinitely, so that data can be continuously read from (or written to) the two memories by processors 200 and 205 every cycle or every half cycle (in the case of a double-pumped interface). In one embodiment, multiple pages in each memory may remain open at a time, so that a new page close/open cycle need not be performed for each access. However, if a new page does need to be opened, one of the processors may notify the other processor of the page to be opened or closed via link 208, so that a page being used by one of the processors will not be closed. In this way, the page states of the two processors can be kept consistent.
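A short sketch of the split-buffer access pattern may make this concrete. This is an illustrative software model, not the hardware design; BUFFER_BITS, mem_buffer_t, and read_portion are hypothetical names, and the two-buffer ping-pong is reduced to a single word for brevity.

```c
#include <stdint.h>

#define BUFFER_BITS 16u   /* width of the data word in the example above */

/* Model of one memory-side buffer divided into equal portions,
 * one portion per processor accessing that memory. */
typedef struct {
    uint16_t data;        /* one 16-bit word latched from the DRAM        */
    uint32_t portions;    /* 2, 4, 8, ... set from the detected CPU count */
} mem_buffer_t;

/* Extract the slice of the buffered word belonging to processor
 * `cpu`, so every processor can drain the same buffer in the same
 * cycle while the companion buffer is being refilled from memory. */
uint16_t read_portion(const mem_buffer_t *b, uint32_t cpu)
{
    uint32_t width = BUFFER_BITS / b->portions;   /* bits per processor */
    uint32_t shift = cpu * width;
    return (uint16_t)((b->data >> shift) & ((1u << width) - 1u));
}
```

With two processors and two memories, each processor assembles a full 16-bit word per cycle by combining its 8-bit portion from each memory's buffer, which is how the interface bandwidth of both memories is used at once.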
In one embodiment, the memory controllers 210 and 215 may be internal to the processors 200 and 205. FIG. 3 is a block diagram showing a two-socket system according to one embodiment, in which the memory controllers are inside their respective processors 300 and 305. In one embodiment, buffers 330, 335, 340, and 345 are located in the memories 320 and 325, or external to the memories, such as on a DIMM circuit board. In one embodiment, information may be written to or read from the memories 320 and 325 in a manner consistent with the technique described with reference to FIG. 2.

FIG. 4 shows a timing diagram related to FIG. 2 or FIG. 3, according to which at least one embodiment may operate. According to one embodiment, FIG. 4 shows addresses 401, 405 and data signals 410, 415, 420, and 425, corresponding to the halves of the data transferred from each memory to each processor shown in FIGS. 2 and 3. FIG. 4 makes clear that embodiments of the present invention allow data to be read every half clock cycle or, in some embodiments, every clock cycle.

The technique shown in the timing diagram of FIG. 4 can be extended to accommodate situations where more than two processors read from more than two different memories. FIG. 5 illustrates a four-socket system in which at least one embodiment of the present invention can be implemented. In the four-socket system of FIG. 5, any processor 500-1 to 500-4 can read from any memory 510-1 to 510-4 at the same time, so that software applications need not care where the data is located.

FIG. 6 illustrates a timing diagram corresponding to the four-socket system of FIG. 5 according to one embodiment. According to one embodiment, FIG. 6 shows addresses 601, 602, 603, 605 and data signals 610, 615, 620, 625, 630, 635, 640, 645, corresponding to the data transferred from each memory to each processor shown in FIG. 5. FIG. 6 makes clear that embodiments of the present invention allow data to be read every half clock cycle or, in some embodiments, every clock cycle.

FIG. 7 is a flowchart of operations that can be used to perform at least one embodiment of the present invention. In one embodiment, in operation 701, two addresses are provided to two different memories (e.g., cache, DRAM, etc.) from the first processor and the second processor, or from their corresponding memory controllers, respectively. In operation 705, a first width of information is retrieved from the location in each memory indicated by the address provided to that memory, and is temporarily stored in first and second buffers corresponding to the first and second memories, respectively. In operation 710, the first processor/memory controller may simultaneously read half of the first buffer and half of the second buffer, while the second processor simultaneously reads the other halves of the first and second buffers. In operation 715, while the processors are reading data from the first and second buffers, a second width of information is retrieved from other locations in the first and second memories indicated by addresses from the first and second processors/memory controllers, and is temporarily stored in third and fourth buffers corresponding to the first and second memories, respectively.
In operation 720, the first processor/memory controller may simultaneously read half of the third and fourth buffers, while the second processor simultaneously reads the other halves of the third and fourth buffers.

This operation can be repeated continuously for the entire page length of the data or, in some embodiments, for longer data, where subsequent pages can be opened without affecting the access rate of the read operation. Furthermore, in some embodiments, there may be fewer or more than two buffers corresponding to each of the two different memories. In one embodiment, the first and second widths of the data are each 16 bits; in other embodiments, they may be larger or smaller. Furthermore, in some embodiments, the operations described above may be extended to four, eight, or any number of processors or memory devices. In one embodiment, each processor is a graphics processor, but in some embodiments, all or some of the processors may be general-purpose processors, or some combination of general-purpose and graphics processors. In one embodiment, the operations described above can be used to improve the performance of throughput applications, such as graphics applications, financial applications, molecular modeling applications, or other applications that involve performing operations/instructions on multiple data elements simultaneously.

Embodiments of the present invention can be used in a variety of configurations on a variety of platforms, including game consoles and general-purpose computer platforms. In addition, the processors and memories used in connection with the various embodiments may be organized in a variety of ways, depending on the needs and limitations of a particular system or application.

FIG. 8 is a block diagram showing a configuration of a two-socket system in which at least one embodiment may be used. FIG. 8 illustrates processors 801 and 805 coupled to memories 810, 815, 820, and 825. The configuration of FIG. 8 may involve routing cross-interconnects 830, 835 in a multilayer circuit board, which is acceptable or required in some applications.

FIG. 9 is a block diagram showing another configuration of a two-socket system in which at least one embodiment may be used. FIG. 9 shows two processors 901, 905 coupled to four memories 910, 915, 920, 925. Because there are no cross interconnections, the configuration shown in FIG. 9 may not require routing interconnections through multiple layers. Depending on the needs of the platform or application, other configurations can be used. In addition, embodiments of the present invention can be used in many different systems with many different interconnection topologies, organizations, protocols, and so on.

For example, FIG. 10 illustrates a shared bus computer system (e.g., a front side bus (FSB) computer system) in which one embodiment of the present invention may be used. Any processor 1001, 1005, 1010, or 1015 may include asymmetric cores (differing in performance, power, operating voltage, clock speed, or ISA), and may access information from any local level-one (L1) cache memory 1020, 1025, 1030, 1035, 1040, 1045, 1050, 1055, each of which is within, or otherwise associated with, one of the processor cores 1023, 1027, 1033, 1037, 1043, 1047, 1053, 1057.
In addition, any processor 1001, 1005, 1010, or 1015 can access information through the chipset 1065 from the system memory 1060, or from a shared level-two (L2) cache 1003, 1007, 1013, 1017.

Embodiments of the present invention may exist in any processor or agent shown in FIG. 10. For example, logic 1019 may be incorporated in any or all of the processor cores 1023, 1027, 1033, 1037, 1043, 1047, 1053, 1057 to perform aspects of at least one embodiment. In particular, logic 1019 can be used to detect, transmit, and interpret signals from other agents in the system to determine whether to open or close a page of memory, depending on whether the page is currently being accessed by another agent. In other embodiments, the logic 1019 is distributed among multiple agents. In still other embodiments, the logic 1019 may include software, hardware, or some combination thereof.

In addition to the FSB computer system shown in FIG. 10, other system configurations may be used in conjunction with various embodiments of the present invention, including point-to-point (P2P) interconnect systems and ring interconnect systems. For example, the P2P system of FIG. 11 may include several processors, of which only two, processors 1170 and 1180, are shown by way of example. The processors 1170, 1180 may each include a local memory controller hub (MCH) 1172, 1182 to connect to memories 112, 114, respectively. The processors 1170, 1180 may use PtP interface circuits 1178, 1188 to exchange data through a point-to-point (PtP) interface 1150. The processors 1170 and 1180 may each use point-to-point interface circuits 1176, 1194, 1186, and 1198 to exchange data with the chipset 1190 through respective PtP interfaces 1152, 1154. The chipset 1190 may also exchange data with the high-performance graphics circuit 1138 through the high-performance graphics interface 1139.

Embodiments of the invention may be included in any processor or agent in FIG. 11. For example, logic 1199 may be incorporated in either or both of the processors 1170, 1180 to perform aspects of at least one embodiment. In particular, logic 1199 can be used to detect, transmit, and interpret signals from other agents in the system to determine whether to open or close a page of memory, depending on whether the page is currently being accessed by another agent. In other embodiments, the logic 1199 is distributed among multiple agents. In still other embodiments, the logic 1199 may include software, hardware, or some combination thereof.

Many different types of processing devices can benefit from such process redistribution techniques. For example, the processing units 600-1 to 600-N may be general-purpose processors (such as microprocessors) or microprocessor cores of a multi-core (single-die) microprocessor. Alternatively, digital signal processors, graphics processors, network processors, or any type of special-purpose processor used in a system with multiple parallel units or cores may benefit from processes being purposefully moved between processing units for thermal (or power) reasons. The processing units or processors may be identical or have at least partial functional overlap. That is, each processing unit has a common set of instructions or commands, so that there are at least some processes, if not all, that can be executed on more than one processing unit or processor.
In other embodiments, the processing units may be asymmetric, in that they have different performance capabilities, transistor counts, power consumption or thermal characteristics, clock frequencies, or ISAs, in any combination.

To help facilitate the processing and return of requested data, at least one embodiment may include a consistency filter to determine the best (e.g., fastest) way to retrieve the data requested by an application. For example, in one embodiment, the consistency filter may include a consistency table whose entries include information about data currently accessible by any processor in the system. In one embodiment, the consistency table for a processor includes a list of addresses indicating data that may be available in a cache, buffer, or other storage structure of another processor in the system, such that when an application requests data, the processor may first check its consistency table to see whether another processor currently has the data. If so, the processor serving the request can retrieve the data across the n-wide inter-processor interconnect. Because the table will only indicate some of the data available in the caches/buffers/etc. of the other processors (indeed, the table can vary in the amount of information it contains), the traffic on the n-wide inter-processor interconnect can be reduced, or at least controlled, according to the size or content of the consistency table.

FIG. 12 illustrates a system in which one embodiment of the present invention may be used, including a consistency filter. In FIG. 12, an application or thread 1240 running on the processor 1205 may request data by providing an address to the processor 1205. The processor 1205 may then access a consistency table 1245, stored in the processor or in some memory accessible to the processor, to determine whether the requested data is currently in a cache or buffer of the processor 1200. If, for example, the table indicates that the requested data is currently available in the processor 1200, the processor 1205 may retrieve the data from the processor 1200 across the interconnect 1208, thereby providing the data to the program in the most expedient manner possible. In one embodiment, the table is referenced using a portion of the address provided by the application or thread 1240 to the processor 1205. Furthermore, in at least one embodiment, a different table (or the same table) corresponds to each processor in the system and is maintained by creating an entry in the table for each requested address found in another processor. In addition, each entry may contain information indicating when the data was not found in another processor, or the entry may be deleted altogether. Various consistency-table maintenance schemes and algorithms can be used to keep track of the information to be shared between the processors across the interconnect 1208.

One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium that represents various logic in a processor and that, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
This representation, called an "IP core," can be stored on a tangible machine-readable medium ("tape") and supplied to various customers or production facilities for loading into the fabrication machines that actually make the logic or processors.

Thus, methods and apparatus for directing accesses to a multi-socket memory architecture have been described. It should be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those skilled in the art upon reading and understanding the above description. Accordingly, the scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
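Returning to the consistency-filter mechanism of FIG. 12, a minimal sketch of the table lookup may help. The indexing scheme, table size, and names (coherence_entry_t, coherence_lookup) are all hypothetical; the description only requires that a portion of the requested address index a table whose entries name a processor that may hold the data.

```c
#include <stdint.h>

#define TABLE_ENTRIES 256u

/* One consistency-table entry: an address known to be held in
 * another processor's cache or buffer, and the owning processor. */
typedef struct {
    uint64_t addr;
    int      owner;   /* -1 when the entry is empty */
} coherence_entry_t;

static coherence_entry_t coherence_table[TABLE_ENTRIES];

/* Check whether a requested address may currently be held by a peer.
 * Returns the owning processor id, or -1 so the request falls through
 * to the normal local-memory path. */
int coherence_lookup(uint64_t addr)
{
    /* index with a portion of the address, as described above */
    coherence_entry_t *e =
        &coherence_table[(addr >> 6) & (TABLE_ENTRIES - 1u)];
    return (e->owner >= 0 && e->addr == addr) ? e->owner : -1;
}
```

A hit sends the request across the inter-processor interconnect 1208; a miss, or a stale entry, simply costs one extra table probe before the ordinary memory access proceeds.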
Activation systems and methods initiate High-Definition Multimedia Interface (HDMI) communication between an HDMI source and an HDMI sink through an HDMI receptacle of the source. These systems and methods are especially suited for use with mobile sources that generally operate from a battery that cannot provide the +5V signal which the HDMI protocol requires sources to place on the +5V pin of their HDMI receptacles. These systems and methods automatically detect the insertion of an HDMI cable into the source's HDMI receptacle and subsequently generate and apply the required +5V signal to the +5V pin of the source's HDMI receptacle to initiate HDMI communication. Because they are directed to use in mobile sources, the embodiments are configured to minimize current drain.
I claim:

1. A high-definition multimedia interface (HDMI) activation system to facilitate communication between an HDMI source and an HDMI sink through an HDMI receptacle of said source, the system comprising: a capacitance detector coupled to at least one predetermined pin of said HDMI receptacle to enable detection of a capacitance increase when an HDMI cable is inserted into said receptacle; and a charge pump configured to apply an activation voltage to a +5V pin of said receptacle in response to said detection; whereby said communication is initiated.

2. The system of claim 1, wherein said predetermined pin is a hot plug detect (HPD) pin.

3. The system of claim 1, further including a voltage monitor coupled to a hot plug detect (HPD) pin of said receptacle to detect receipt of a high voltage level from said sink in response to said activation voltage.

4. The system of claim 3, wherein said voltage monitor is coupled to reactivate said capacitance detector and deactivate said charge pump in response to an absence of said high voltage level.

5. The system of claim 3, wherein said activation voltage is between 4.8 and 5.3 volts and said high voltage level is between 2.4 and 5.3 volts.

6. The system of claim 1, wherein said capacitance detector includes: a capacitor; a resistor coupled to provide a charging current to said capacitor; and a comparator coupled to sense a predetermined voltage drop across said capacitor when said predetermined pin is coupled to said capacitor after said capacitor has been charged; wherein said predetermined voltage drop indicates presence of said cable.

7. The system of claim 1, wherein said capacitance detector includes: a capacitor; a resistor coupled to provide a charging current to said capacitor; and a comparator coupled to sense a predetermined time for voltage across said capacitor to rise to a predetermined level after said capacitor has been discharged; wherein said predetermined time indicates presence of said cable.

8. The system of claim 7, further including a counter coupled to said comparator to sense said predetermined time.

9. The system of claim 1, wherein said charge pump includes: a capacitor; and a switch system arranged to couple a voltage to a first plate of said capacitor in a first operational phase and to couple said voltage to a second plate of said capacitor in a second operational phase; said first plate thereby pumped above said voltage.

10. The system of claim 9, wherein said charge pump includes a second capacitor and said switch system is arranged to couple the first plate of said capacitor to a first plate of said second capacitor in said second operational phase and to couple said voltage to a second plate of said second capacitor in said first operational phase.

11. The system of claim 9, wherein said source is a cell phone and said sink is a television monitor.

12. A high-definition multimedia interface (HDMI) activation system to facilitate communication between an HDMI source and an HDMI sink through an HDMI receptacle of said source, the system comprising: a capacitance detector coupled to a hot plug detect (HPD) pin of said HDMI receptacle to enable detection of a capacitance increase when an HDMI cable is inserted into said receptacle; a charge pump configured to apply an activation voltage to a +5V pin of said receptacle in response to said detection; and a voltage monitor coupled to said HPD pin to detect receipt of a high voltage level from said sink in response to said activation voltage; said communication thereby initiated.

13. The system of claim 12, wherein said voltage monitor is coupled to reactivate said capacitance detector and deactivate said charge pump in response to an absence of said high voltage level.

14. The system of claim 13, wherein said activation voltage is between 4.8 and 5.3 volts and said high voltage level is between 2.4 and 5.3 volts.

15. The system of claim 12, wherein said capacitance detector includes: a capacitor; a resistor coupled to provide a charging current to said capacitor; and a comparator coupled to sense a predetermined voltage drop across said capacitor when said HPD pin is coupled to said capacitor after said capacitor has been charged; wherein said predetermined voltage drop indicates presence of said cable.

16. The system of claim 12, wherein said capacitance detector includes: a capacitor; a resistor coupled to provide a charging current to said capacitor; and a comparator coupled to sense a predetermined time for voltage across said capacitor to rise to a predetermined level after said capacitor has been discharged; wherein said predetermined time indicates presence of said cable.

17. The system of claim 12, wherein said charge pump includes: a capacitor; and a switch system arranged to couple a voltage to a first plate of said capacitor in a first operational phase and to couple said voltage to a second plate of said capacitor in a second operational phase; said first plate thereby pumped above said voltage.

18. A method of activating high-definition multimedia interface (HDMI) communication between an HDMI source and an HDMI sink through an HDMI receptacle of said source, the method comprising the steps of: detecting a capacitance increase across at least one predetermined pin of said HDMI receptacle to enable detection of insertion of an HDMI cable into said receptacle; and, with a voltage, pumping at least one capacitor to thereby provide a greater activation voltage to a +5V pin of said receptacle in response to said detection; said communication thereby initiated.

19. The method of claim 18, further including the step of monitoring the voltage at a hot plug detect (HPD) pin of said receptacle to detect receipt of a high voltage level from said sink in response to said activation voltage.

20. The method of claim 19, wherein said activation voltage is between 4.8 and 5.3 volts and said high voltage level is between 2.4 and 5.3 volts.
ACTIVATION SYSTEMS AND METHODS TO INITIATE HDMI COMMUNICATION WITH MOBILE SOURCES

CROSS REFERENCES TO RELATED APPLICATIONS

[0001] This application claims the benefit of United States Provisional Application Serial No. 60/967,890 filed September 7, 2007.

BACKGROUND OF THE INVENTION

Field of the Invention

[0002] The present invention relates generally to High-Definition Multimedia Interface systems.

Description of the Related Art

[0003] High-Definition Multimedia Interface (HDMI) is a compact audio-video connector interface directed to the transmittal of uncompressed digital data streams. On a single cable, HDMI supports television (TV) and personal computer (PC) video formats, including standard, enhanced, and high-definition video, along with up to 8 channels of digital audio. Development of HDMI 1.0 began in early 2002 under the direction of the HDMI founders (Hitachi, Matsushita Electric Industrial (Panasonic), Philips, Silicon Image, Sony, Thomson (RCA), and Toshiba). The HDMI specification has been adopted by over 800 consumer electronics (CE) and PC companies, and HDMI products generally began shipping in the autumn of 2003.

[0004] HDMI devices are manufactured to adhere to various specification versions, each assigned a number such as 1.0, 1.2, or 1.3a. The HDMI 1.3 specification defines category 1 cables, which have been tested at a pixel clock rate of 74.5 MHz, and category 2 cables, which have been tested at a pixel clock rate of 340 MHz, to meet a set of required parameter specifications (inter-pair skew, far-end crosstalk, attenuation, differential impedance) or, alternatively, to meet non-equalized/equalized eye diagram requirements. HDMI cables manufactured with lower-quality construction and materials can generally meet the HDMI performance requirements at distances up to something on the order of 5 meters, whereas higher-quality cables can generally meet the requirements at distances up to something on the order of 15 meters.

[0005] Currently, there are three HDMI connector types. The type A connector has outer dimensions of 4.45 x 13.9 millimeters and provides 19 pins, with bandwidth to support current high-definition television (HDTV) modes. The type B connector has outer dimensions of 4.45 x 21.2 millimeters and provides 29 pins to double the bandwidth of type A and thereby support future high-resolution displays. A type C mini-connector is also provided to support mobile devices.

[0006] HDMI facilitates the exchange of video, audio, and auxiliary data in three modes called the Video Data Period, the Data Island Period, and the Control Period. Pixels of an active video line are transmitted during the Video Data Period. During the Data Island Period (which occurs during the horizontal and vertical blanking intervals), audio and auxiliary data are transmitted. The Control Period is positioned between these two periods.

[0007] One objective of the HDMI protocol is to reduce the several conventional cables that traditionally interconnect a digital source (i.e., a source of digital video and/or audio signals) and a digital sink (i.e., a device that responds to the digital video and/or audio signals) down to a single cable. HDMI was developed for consumer electronics products and thus contrasts with the earlier digital video interface (DVI) protocol that was developed for use by computers. DVI also provides a digital connection between sources and sinks, but it does not carry audio signals, which implies that an extra cable is required for an audio connection.
HDMI, however, is fully backward compatible with DVI, so that only a DVI-to-HDMI cable adaptor is required for use with a DVI system. This opens HDMI to a wide range of DVI-equipped products from a variety of manufacturers. In contrast to DVI, HDMI facilitates higher resolutions, connects both video and audio signals, supports two-way communication between source and sink, and its connectors are significantly smaller.

[0008] Similar to DVI, HDMI transports data via the transition minimized differential signaling (TMDS) encoding protocol. TMDS conveys data by transitioning between '1' and '0' states while, at the same time, minimizing the state transitions. Reducing the state transitions substantially reduces electromagnetic interference (EMI) levels on the HDMI cable. In addition, TMDS acts to minimize long strings of identical states, which can otherwise cause detection errors. In this process, incoming 8-bit data is encoded into a 10-bit transition-minimized, DC-balanced word. Three TMDS data channels (CH0, CH1, and CH2) are provided in an HDMI cable, with each channel consisting of a signal conductor, an inverse signal conductor, and a ground conductor. A fourth channel (also comprising a signal conductor, an inverse signal conductor, and a ground) is dedicated to carrying a TMDS clock signal.

[0009] Another cable conductor is dedicated to consumer electronics control (CEC), which allows a system user to command and control multiple CEC-enabled devices with one remote control, and allows individual CEC-enabled devices to command and control each other without user intervention. CEC has the capability of turning all remote controls in a system into universal remotes so that, for example, a single button can switch on all devices that are needed to play back content. In an exemplary scenario, a DVD player could turn on the sink device and associated surround sound systems that are needed for playback.

[0010] Other cable conductors are directed to the display data channel (DDC), which allows a source device (e.g., a DVD player) to determine the audio and visual capabilities of a sink device. A DDC query from the source device prompts the display to respond with associated display and interface information (e.g., manufacturer name, model number, acceptable data formats, and other display capabilities). DDC can, for example, automatically manage a display device so that a consumer need not alter settings to obtain the highest quality output. DDC is realized with a data conductor (SDA) and a clock conductor (SCL). A ground for both CEC and DDC is carried on a separate conductor. After successful completion of these DDC communications, the sink device can be enabled to receive clock and TMDS signals from the source device.

[0011] In order to facilitate DDC communications, another cable conductor, called hot plug detect (HPD), permits the source device to detect when a sink device has been connected to it. When an HDMI cable is mounted to the sink device, this device detects that the source device is providing +5V on the +5V conductor. In response, the sink device places a high voltage level on the HPD conductor. When the source device detects this signal on its HPD conductor, it then inaugurates the DDC communication.

BRIEF SUMMARY OF THE INVENTION

[0012] The present invention is generally directed to activation systems and methods for initiating HDMI communication with mobile devices.
The drawings and the following description provide an enabling disclosure, and the appended claims particularly point out and distinctly claim disclosed subject matter and equivalents thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] FIG. 1 is a view of an HDMI cable connecting exemplary HDMI sources and sinks;

[0014] FIG. 2 is an enlarged view of an HDMI plug within the ellipse 2-2 which shows its insertion into an HDMI receptacle;

[0015] FIG. 3 is a view along the plane 3-3 of FIG. 2 which illustrates the layout of HDMI pins in the HDMI plug;

[0016] FIG. 4 is a view along the plane 4-4 of FIG. 1 which illustrates conductors inside an HDMI cord;

[0017] FIGS. 5A-5C illustrate various insertions of an HDMI cable into the receptacle of a mobile HDMI source;

[0018] FIG. 6 is a diagram that illustrates structure and processes of an HDMI activation system embodiment;

[0019] FIGS. 7A and 7B are schematics of capacitance detector embodiments for use in the system of FIG. 6; and

[0020] FIGS. 8A and 8B are schematics of charge pump embodiments for use in the system of FIG. 6.

DETAILED DESCRIPTION OF THE INVENTION

[0021] FIGS. 1-8B illustrate activation system embodiments that are configured to initiate HDMI communication between an HDMI source and an HDMI sink through an HDMI receptacle of the source. These systems and methods are especially suited for use with mobile sources that generally operate with a supply voltage that is less than the +5V signal which the HDMI protocol requires sources to place on the +5V pin of their HDMI receptacles. Accordingly, HDMI communication will not be initiated when the mobile source is connected to an HDMI sink. However, the activation embodiments illustrated in FIGS. 1-8B will automatically detect the insertion of an HDMI cable into the source's HDMI receptacle and will subsequently generate and apply the required +5V signal to the +5V pin of the source's HDMI receptacle to initiate HDMI communication. Because these embodiments are directed to use in mobile sources, they are configured to minimize current drain.

[0022] In particular, FIG. 1 illustrates HDMI systems 20 in which any of various digital audio/video sources 21 can be coupled to any of various digital audio/video sinks 22 through an HDMI cable 24 that is formed with two HDMI plugs 25 joined together by an HDMI cord 26. As shown, examples of the sources 21 are computers, set-top boxes, digital video disc (DVD) players, Blu-ray disc players, video game consoles, and audio/video (A/V) receivers. As also shown, examples of the sinks 22 are computer monitors, high definition (HD) televisions, video projectors, and digital audio devices.

[0023] FIG. 2 is an enlarged view of the HDMI plug 25 within the ellipse 2-2 of FIG. 1. This enlarged view is also partially sectioned to illustrate that the plug has spring-like conductive pins 28 arranged at the top and bottom of a socket. FIG. 2 also shows an HDMI receptacle 30 which carries conductive spikes 31 on the top and bottom of a tongue located within a recess. An insertion arrow 32 illustrates insertion of the plug 25 into the recess, which inserts the receptacle's tongue into the plug's socket with each of the spikes 31 contacting a respective one of the pins 28. For simplicity of description, all of the pins 28 and spikes 31 will, from this point on, be simply referred to as pins.

[0024] FIG. 3 is a view along the plane 3-3 in FIG. 2 that illustrates the layout of the HDMI pins 28 within the socket of the HDMI plug 25 of FIG. 2.
The layout corresponds with the 19-pin layout of both type A and type C HDMI plugs. Each of the three data channels (channels 0, 1, and 2) and the associated TMDS clock is formed by a signal pin (denoted as +), an inverse signal pin (denoted as -), and a ground pin. These pins are located at the center and the right side of the plug 25. CEC, DDC data and clock, HPD, +5V, and DDC/CEC ground are each carried on respective pins located at the left side of the plug 25.

[0025] FIG. 4, which is an enlarged view along the plane 4-4 in FIG. 1, shows that the signal, inverse signal, and ground of each of the data channels and the TMDS clock are bundled within their own foil (e.g., mylar) wrap 33. The wraps, in turn, are carried within a cord jacket 34. FIG. 4 also shows that the SDA and SCL portions of the DDC are carried within another foil wrap in the center of the cord 26, and the other HDMI signals 36 (CEC, hot plug detect, +5V, DDC/CEC ground, and a spare) are spaced about the interior of the cable.

[0026] In a particular embodiment, FIG. 1 indicates that a digital audio/video source such as a DVD player may be connected through an HDMI cable 24 to a digital audio/video sink such as an HD television. In a typical connection scenario, both the DVD player and the HD television would normally be plugged into an electrical energizing source prior to application of the HDMI cable. Accordingly, they can generate various internal voltages and have them available for use. One of these voltages can be +5 volts so that, in accordance with the HDMI protocol, the DVD player can place +5 volts on the +5V pin of the HDMI cable (the +5 volts is specified in the HDMI protocol to be between 4.8 and 5.3 volts with a maximum current capability of 50 milliamps).

[0027] When the HDMI cable is inserted between the devices, the HD television monitors its +5V pin (and, in accordance with the HDMI protocol, pulls less than 10 milliamps from this pin). If +5V is not detected, the HD television is required by the HDMI protocol to place a low voltage level between 0 and 0.4 volts on its HPD pin (e.g., via a resistor coupled to ground). If +5V is detected, the HD television is required to place a high voltage level between 2.4 and 5.3 volts on its HPD pin. When the DVD player detects the high voltage level on its HPD pin, it can then initiate the DDC communication process described above in the background section. After successful completion of these DDC processes, the HD television can then receive clock and TMDS signals from the DVD player over the clock pins and the channel 0, 1, and 2 pins that are shown in FIG. 3. These signals enable the HD television to generate video and audio signals.

[0028] In contrast to sources such as the DVD player, mobile sources typically operate on battery voltages in the range of 2.5 to 4.5 volts and are thus unable to provide a voltage on the +5V line of the HDMI plug 25 that will be recognized by a sink device. FIG. 6, however, illustrates an automatic activation system embodiment 50 that can modify a mobile source to address this problem. Before describing the system 50, attention is directed to FIGS. 5A-5C, which illustrate possible couplings of an HDMI cable to a mobile source in the form of a cell phone 40. In FIG. 5A, an insertion arrow 41 indicates that an HDMI cable with pin potentials of zero volts is inserted into the HDMI receptacle 30 of the cell phone 40. In FIG. 5B, an insertion arrow 42 indicates that an HDMI cable with pin potentials greater than zero volts (i.e., the pins carry a charge) is inserted into the receptacle 30.
Finally, an insertion arrow 43 in FIG. 5C indicates that an HDMI cable with a sink device, such as an HD television 44, attached is inserted into the receptacle 30.

[0029] Attention is now returned to FIG. 6, which shows the activation system 50 installed in a mobile source in the form of the cell phone 40 of FIGS. 5A-5C. The system 50 includes a capacitance detector 46, an HPD pin voltage monitor 47, and a charge pump 48. The capacitance detector and the HPD pin voltage monitor are coupled to the HPD pin of an HDMI receptacle 30 of the cell phone 40, and the charge pump 48 is coupled to the +5V pin of the receptacle 30.

[0030] Because the cell phone 40 operates on a battery voltage in the range of 2.5 to 4.5 volts, it cannot provide a voltage on the +5V line of the HDMI plug 25 that will be recognized by a sink. The activation system 50 of FIG. 6, however, modifies the cell phone so that it can successfully complete HDMI communications with an HDMI sink. Operational processes of this automatic activation system are shown as processes 50 through 55 in the cell phone 40, and these processes are briefly described under the heading "process flow" in FIG. 6.

[0031] In an initial process 50, the capacitance detector 46 monitors the HPD pin to detect an increase in capacitance at this pin. During this monitoring process, the charge pump 48 is not activated, to thereby reduce current drain from the battery of the cell phone. In process 51, the capacitance detector responds to a recognized capacitance increase by turning on the charge pump 48. As it is no longer needed, the capacitance detector is preferably turned off, also to reduce current drain. The charge pump now generates a +5 volt signal and applies it to the +5V pin, as indicated by process 52 and as required by the HDMI protocol.

[0032] Because the capacitance detector 46 is configured to sense an added capacitance, it can sense any of the insertions 41-43 illustrated in FIGS. 5A-5C, because it detects the inserted capacitance formed by the HPD line through the HDMI cord (26 in FIGS. 1 and 4) and the HDMI pins at each end of this line. At this point, the system 50 has established that one of these insertions has taken place. To determine whether an HDMI sink is at the other end of the inserted HDMI cable, the voltage monitor 47 now monitors the HPD pin to detect a rise to the high voltage level between 2.4 and 5.3 volts, which is the signal that a sink places on the HPD pin in response to the +5 volts that it senses on the +5V pin. This is indicated in FIG. 6 as process 53.

[0033] If the voltage monitor 47 observes that the HPD pin is at the high voltage level, the system 50 knows an HDMI sink is present and, accordingly, process 54 keeps the charge pump enabled to maintain contact with the sink. The cell phone 40 then inaugurates the DDC communications described above in the background section. After successful completion of these DDC communications, the sink device can be enabled to receive clock and TMDS signals from the cell phone over the clock pins and the channel 0, 1, and 2 pins shown in FIG. 3.
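The detection-activation-monitor loop of processes 50-55, including the timeout path described in the next paragraph, can be sketched as a small polling state machine. This is an illustrative software model under assumed names (activation_step, cable_capacitance_seen, HPD_TIMEOUT_TICKS, and the extern hooks are all hypothetical); the patent describes dedicated hardware, not firmware.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { ST_DETECT, ST_WAIT_HPD, ST_ACTIVE } act_state_t;

/* Hypothetical hardware hooks; a real design would poll the
 * comparator output, the HPD level, and the pump enable line. */
extern bool cable_capacitance_seen(void);
extern bool hpd_is_high(void);
extern void charge_pump_enable(bool on);
extern void cap_detector_enable(bool on);

#define HPD_TIMEOUT_TICKS 1000u

/* One polling step of the activation flow: detect inserted
 * capacitance, drive the +5V pin, then confirm the sink's HPD
 * reply or fall back to the low-power detection state. */
act_state_t activation_step(act_state_t s, uint32_t *ticks)
{
    switch (s) {
    case ST_DETECT:
        if (cable_capacitance_seen()) {
            cap_detector_enable(false);  /* save battery current    */
            charge_pump_enable(true);    /* drive the +5V pin       */
            *ticks = 0;
            return ST_WAIT_HPD;
        }
        return ST_DETECT;
    case ST_WAIT_HPD:
        if (hpd_is_high())
            return ST_ACTIVE;            /* sink present: start DDC */
        if (++*ticks > HPD_TIMEOUT_TICKS) {
            charge_pump_enable(false);   /* no sink: power down     */
            cap_detector_enable(true);
            return ST_DETECT;
        }
        return ST_WAIT_HPD;
    case ST_ACTIVE:
        if (!hpd_is_high()) {            /* cable pulled            */
            charge_pump_enable(false);
            cap_detector_enable(true);
            return ST_DETECT;
        }
        return ST_ACTIVE;
    }
    return ST_DETECT;
}
```

Note how each state keeps exactly one of the detector and the pump powered, which reflects the current-drain goal stated for mobile sources.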
[0034] If, however, the voltage monitor 47 observes that the HPD pin does not rise to the high voltage level within a predetermined time (i.e., it remains at the low voltage level between 0 and 0.4 volts), then it is known that an HDMI sink is not present and process 55 turns off the charge pump 48 and reactivates the capacitance detector 46. The system 50 is now returned to a state in which it minimizes current drain while it continues to sense insertion of an added capacitance. Although the capacitance detector 46 is coupled in FIG. 6 to detect inserted capacitance at the HPD pin of the receptacle 30, it is noted that it may be coupled to detect inserted capacitance at other pins in other activation system embodiments. [0035] FIG. 7A illustrates an embodiment 46A of the capacitance detector 46 of FIG. 6. This embodiment includes a resistive voltage divider 61, a comparator 62, and a flip-flop 63. The voltage divider is coupled to one input port of the comparator. A capacitor 64 is coupled to the other comparator input port with a resistor 65 coupled between the capacitor and an exemplary source supply voltage of 1.8 V. A switch 68 grounds the HPD pin (shown in FIG. 6) and then momentarily couples the HPD pin to the capacitor 64. The output of the comparator is coupled to a flip-flop 63 whose output forms a cable detection signal. [0036] FIG. 7A also shows a reset signal that resets the flip-flop 63 and momentarily sets the switch 68 to ground to thereby discharge any voltage at the HPD pin (e.g., voltage indicated on the HDMI cable in FIG. 5B). As long as the switch 68 is set to ground, the capacitor 64 is charged to 1.8 V and thus holds an electric charge Q equal to 1.8 volts times the capacitance of the capacitor 64 (Q = CV). When the switch 68 couples the HPD pin to the capacitor 64, the total capacitance increases but the charge instantaneously remains the same. Because a charge Q in a capacitance C generates a voltage V = Q/C, and because C has suddenly increased, the voltage must decrease. [0037] If no HDMI cable is attached to the HPD pin, the added capacitance is quite small so that the voltage decrease is also quite small, as indicated by the broken-line path for "capacitor 64" in FIG. 7A. This broken-line path does not drop below the 1.6 V reference of the resistive voltage divider 61, so the comparator 62 does not change state. If, however, the increase in capacitance is substantial because an HDMI cable is attached to the HPD pin (e.g., as in FIGS. 5A-5C), the voltage will momentarily drop below the 1.6 V reference and the output of the comparator 62 goes high as indicated in the "comparator" plot of FIG. 7A. In response to the comparator output, the flip-flop 63 provides a cable detection signal. Accordingly, the process 51 of FIG. 6 causes the charge pump 48 to be turned on and the capacitance detector 46 to be turned off (to thereby reduce current drain). [0038] It is noted that, in the embodiment 46A, the resistor 65 provides a charging current to the capacitor 64 and the comparator 62 is coupled to sense a predetermined voltage drop across the capacitor when the HPD pin is coupled to the capacitor after the capacitor has been charged. The predetermined voltage drop thus indicates presence of the HDMI cable. [0039] FIG. 7B illustrates an embodiment 46B of the capacitance detector 46 of FIG. 6. This embodiment includes elements of the embodiment 46A of FIG. 7A with like elements indicated by like reference numbers.
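Before turning to the details of embodiment 46B, the charge-sharing test of embodiment 46A can be checked numerically. In the Python sketch below, only the 1.8 V supply and the 1.6 V reference come from the text; the capacitance values are assumptions chosen for illustration.

C64 = 100e-12       # assumed value of capacitor 64 (100 pF)
C_CABLE = 150e-12   # assumed capacitance added by an inserted HDMI cable
C_STRAY = 2e-12     # assumed stray capacitance with no cable attached
V_SUPPLY = 1.8      # charging supply from the text
V_REF = 1.6         # divider reference from the text

def node_voltage_after_switch(c_added):
    # The charge Q = C64 * 1.8 V is conserved when the switch adds c_added,
    # so the node voltage drops to Q / (C64 + c_added).
    q = C64 * V_SUPPLY
    return q / (C64 + c_added)

for label, c in (("no cable", C_STRAY), ("cable inserted", C_CABLE)):
    v = node_voltage_after_switch(c)
    print(label, round(v, 3), "V -> detection:", v < V_REF)

With these assumed values the node dips only to about 1.76 V without a cable, but to about 0.72 V with one, so only the cable case crosses the 1.6 V reference and trips the comparator.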
In the embodiment 46B, however, the comparator inputs are interchanged, the switch 68 is flipped horizontally, and the flip-flop 63 is replaced with a counter 73. When the switch 68 is grounded, it resets the charge in the capacitor 64 to zero. When the switch couples the HPD pin to the "node", the voltage at this node rises. If the only capacitance present is that of the capacitor 64, the node voltage rises rapidly to 1.8 V as indicated by the broken line in the node plot. If, however, the HDMI cable is present (as in FIGS. 5A-5C), the node voltage rises slowly to 1.8 V as indicated by the solid line in the node plot. The count of the counter 73 (indicated by X's) is thus significantly higher when the cable is present, and this elevated count produces a cable detection signal. [0040] It is noted that, in the embodiment 46B, the resistor 65 provides a charging current to the capacitor 64 and the comparator 62 is coupled to sense a predetermined time for the voltage across the capacitor to rise to a predetermined level after the capacitor has been discharged. The predetermined time indicates presence of the HDMI cable. [0041] FIG. 8A illustrates an embodiment of the charge pump 48 of FIG. 6. In this embodiment, charge pump drivers 82 are clocked to drive charge pump switches 83 which interconnect capacitors 84 in switching modes to thereby generate a desired output voltage Vout from an available input voltage Vin. The output voltage (or a reduced version of the output voltage that is realized with divider 86) is compared with a reference voltage Vref in a comparator 87 to generate a feedback correction signal 88 that controls (i.e., activates) the switch drivers in a feedback loop 89. In different pump modes, the charge pump injects charges through first plates of capacitors and then applies voltages to second plates of these capacitors to thereby generate an output voltage Vout that is greater than the input voltage Vin. [0042] FIG. 8B illustrates an arrangement embodiment 90 of the switches and capacitors of FIG. 8A. Attention is initially directed to a capacitor C1 and associated switches 92, 93 and 94. When switches 92 and 93 close in a first operational phase φ1, capacitor C1 is charged to the input voltage Vin at input port 91. When switch 94 closes in a subsequent second operational phase φ2, the input voltage Vin is applied to the bottom plate of capacitor C1 so that this capacitor's top plate is elevated to 2Vin. In alternating operational phases, capacitor C1 is thus continuously pumped to establish the voltage 2Vin at its top plate at the beginning of each second operational phase φ2. [0043] In each second operational phase φ2, switches 95 and 96 couple the top plate of capacitor C2 to the top plate of capacitor C1 so that a voltage 2Vin is applied to the top plate of capacitor C2 at the beginning of this phase. Charges are thus transferred from capacitor C1 to capacitor C2 during the remainder of the second operational phase φ2. In alternating operational phases, capacitor C2 is thus continuously pumped to establish the voltage 2Vin across it. [0044] In each first operational phase φ1, switches 97 and 98 then couple the top plate of capacitor C2 to the top plate of output capacitor C3 while applying the input voltage Vin to the bottom plate of output capacitor C3. This final operation continuously pumps the output capacitor C3 to establish an output voltage at the output port 99 that substantially equals 3Vin. The output voltage Vout is thus pumped above the input voltage Vin.
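The voltage tripling of FIGS. 8A-8B reduces to simple addition once the capacitors are taken as lossless and fully settled. The Python lines below restate that arithmetic; the 1.8 V input is an arbitrary illustrative value, and the stacking steps reflect one reading of the switching sequence described above.

V_IN = 1.8                      # illustrative input; any Vin scales the same way

v_c1_top = V_IN                 # phase 1: C1 charges to Vin via switches 92/93
v_c1_pumped = v_c1_top + V_IN   # phase 2: switch 94 lifts C1's bottom plate
v_c2_top = v_c1_pumped          # phase 2: switches 95/96 transfer 2*Vin onto C2
v_out = v_c2_top + V_IN         # phase 1: switches 97/98 stack C2 on a Vin pedestal
print(v_out, "=", 3 * V_IN)     # Vout substantially equals 3*Vin

In the variant described next, where switch 94 is eliminated, the v_c1_pumped step disappears and the same arithmetic yields an output of 2*Vin.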
[0045] In another charge pump embodiment, the switch 94 may be eliminated. Now, in each second operational phase φ2, switches 95 and 96 couple the top plate of capacitor C2 to the top plate of capacitor C1 so that a voltage Vin is established at the top plate of capacitor C2. In each first operational phase φ1, switches 97 and 98 then couple the top plate of capacitor C2 to the top plate of output capacitor C3 while applying the input voltage Vin to the bottom plate of output capacitor C3. This final operation continuously pumps the output capacitor C3 to establish an output voltage at the output port 99 that substantially equals 2Vin. [0046] Activation system embodiments have been disclosed to initiate HDMI communication between an HDMI source and an HDMI sink through an HDMI receptacle of the source. These systems and methods are especially suited for use with mobile sources that generally operate from a battery that cannot provide the +5V signal which the HDMI protocol requires sources to place on the +5V pin of their HDMI receptacles. These activation embodiments automatically detect the insertion of an HDMI cable into the source's HDMI receptacle and will subsequently generate and apply the required +5V signal to the +5V pin of the source's HDMI receptacle to initiate HDMI communication. Because they are directed to use in mobile sources, the embodiments are configured to minimize current drain. [0047] The embodiments of the invention described herein are exemplary and numerous modifications, variations and rearrangements can be readily envisioned to achieve substantially equivalent results, all of which are intended to be embraced within the spirit and scope of the appended claims. |
Methods and apparatus for detecting local maximums in a two-dimensional data set. Apparatus is provided for detecting a local maximum in a two-dimensional data set, where a stream of data elements represents the data set. The apparatus includes first detection logic that receives the data stream and operates to detect a first data element that represents a peak in a first dimension of the data set. The apparatus also includes second detection logic that receives the data stream and operates to detect a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. |
CLAIMS 1. Apparatus for detecting a local maximum in a two-dimensional data set, wherein the data set is represented by a stream of data elements, the apparatus comprising: first detection logic that receives the data stream and operates to detect a first data element that represents a peak in a first dimension of the data set; and second detection logic that receives the data stream and operates to detect a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. 2. The apparatus of claim 1, further comprising logic to process the data set to produce the stream of data elements. 3. The apparatus of claim 1, wherein the first detection logic further comprises flag logic to associate a flag with the first data element. 4. The apparatus of claim 1, wherein the second detection logic further comprises logic to process the flag to determine if the first and second data elements are the same element. 5. The apparatus of claim 1, further comprising output logic that outputs information about the local maximum. 6. The apparatus of claim 5, wherein the information about the local maximum comprises an identifier that identifies a location of the local maximum in the data set. 7. The apparatus of claim 1, wherein the first detection logic comprises first register logic that operates to receive the data stream and output selected data elements that are adjacent in the first dimension of the data set. 8. The apparatus of claim 7, wherein the first detection logic comprises comparator logic that operates to compare the selected data elements to determine the first data element, and wherein the comparator logic has an output that is coupled to the flag logic. 9. The apparatus of claim 1, wherein the second detection logic comprises register logic that operates to receive the data stream and output selected data elements that are adjacent in the second dimension of the data set. 10. The apparatus of claim 9, wherein the second detection logic comprises comparator logic to compare the selected data elements to determine the second data element. 11. The apparatus of claim 1, wherein the two-dimensional data set comprises rows and columns of data elements, and wherein the first dimension of the data set is defined by the number of columns, and the second dimension of the data set is defined by the number of rows. 12. Apparatus for detecting a local maximum in a two-dimensional data set, wherein the data set is represented by a stream of data elements, the apparatus comprising: means for receiving the data stream; means for detecting a first data element that represents a peak in a first dimension of the data set; and means for detecting a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. 13. The apparatus of claim 12, further comprising means for processing the data set to produce the stream of data elements. 14. The apparatus of claim 12, wherein the means for detecting the first data element further comprises means for associating a flag with the first data element. 15. The apparatus of claim 12, wherein the means for detecting the second data element further comprises means for processing the flag to determine if the first and second data elements are the same element. 16.
The apparatus of claim 12, further comprising means for outputting information about the local maximum. 17. The apparatus of claim 16, wherein the information about the local maximum comprises an identifier that identifies a location of the local maximum in the data set. 18. The apparatus of claim 12, wherein the means for detecting the first data element comprises: means for storing a portion of the data stream; and means for outputting selected data elements from the stored portion of the data stream that are adjacent in the first dimension of the data set. 19. The apparatus of claim 18, wherein the means for detecting the first data element comprises means for comparing the selected data elements to determine the first data element. 20. The apparatus of claim 12, wherein the means for detecting the second data element comprises: means for storing a portion of the data stream; and means for outputting selected data elements from the stored portion of the data stream that are adjacent in the second dimension of the data set. 21. The apparatus of claim 20, wherein the means for detecting the second data element comprises means for comparing the selected data elements to determine the second data element. 22. The apparatus of claim 12, wherein the two-dimensional data set comprises rows and columns of data elements, and wherein the first dimension of the data set is defined by the number of columns, and the second dimension of the data set is defined by the number of rows. 23. A method for detecting a local maximum in a two-dimensional data set, wherein the data set is represented by a stream of data elements, the method comprising: receiving the data stream; detecting a first data element in the data stream that represents a peak in a first dimension of the data set; associating a flag with the first data element; detecting a second data element in the data stream that represents a peak in a second dimension of the data set; and detecting a local maximum if the flag is associated with the second data element. 24. The method of claim 23, further comprising processing the data set to produce the stream of data elements. 25. The method of claim 23, further comprising outputting information about the local maximum. 26. The method of claim 23, wherein the information about the local maximum comprises an identifier that identifies a location of the local maximum in the data set. 27. The method of claim 23, wherein the step of detecting the first data element comprises: storing a portion of the data stream; and outputting selected data elements from the stored portion of the data stream that are adjacent in the first dimension of the data set. 28. The method of claim 27, wherein the step of detecting the first data element comprises comparing the selected data elements to determine the first data element. 29. The method of claim 23, wherein the step of detecting the second data element comprises: storing a portion of the data stream; and outputting selected data elements from the stored portion of the data stream that are adjacent in the second dimension of the data set. 30. The method of claim 29, wherein the step of detecting the second data element comprises comparing the selected data elements to determine the second data element. 31.
The method of claim 23, wherein the two-dimensional data set comprises rows and columns of data elements, and wherein the first dimension of the data set is defined by the number of columns, and the second dimension of the data set is defined by the number of rows. 32. A computer-readable media comprising instructions, which when executed by a processor, operate to detect a local maximum in a two-dimensional data set, wherein the data set is represented by a stream of data elements, the computer-readable media comprising: instructions for receiving the data stream; instructions for detecting a first data element that represents a peak in a first dimension of the data set; and instructions for detecting a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. 33. The computer-readable media of claim 32, further comprising instructions for processing the data set to produce the stream of data elements. 34. The computer-readable media of claim 32, wherein the instructions for detecting the first data element further comprise instructions for associating a flag with the first data element. 35. The computer-readable media of claim 32, wherein the instructions for detecting the second data element further comprise instructions for processing the flag to determine if the first and second data elements are the same element. 36. The computer-readable media of claim 32, further comprising instructions for outputting information about the local maximum. 37. The computer-readable media of claim 36, wherein the information about the local maximum comprises an identifier that identifies a location of the local maximum in the data set. 38. The computer-readable media of claim 32, wherein the instructions for detecting the first data element comprise: instructions for storing a portion of the data stream; and instructions for outputting selected data elements from the stored portion of the data stream that are adjacent in the first dimension of the data set. 39. The computer-readable media of claim 38, wherein the instructions for detecting the first data element comprise instructions for comparing the selected data elements to determine the first data element. 40. The computer-readable media of claim 32, wherein the instructions for detecting the second data element comprise: instructions for storing a portion of the data stream; and instructions for outputting selected data elements from the stored portion of the data stream that are adjacent in the second dimension of the data set. 41. The computer-readable media of claim 40, wherein the instructions for detecting the second data element comprise instructions for comparing the selected data elements to determine the second data element. 42. The computer-readable media of claim 32, wherein the two-dimensional data set comprises rows and columns of data elements, and wherein the first dimension of the data set is defined by the number of columns, and the second dimension of the data set is defined by the number of rows. |
METHODS AND APPARATUS FOR DETECTING LOCAL MAXIMUMS IN A TWO-DIMENSIONAL DATA SET BACKGROUND I. FIELD [0001] The present invention relates generally to signal processing systems, and more particularly, to a system for detecting local maximums in a two-dimensional data set. II. DESCRIPTION OF THE RELATED ART [0002] Telecommunications is one area where signal processing has become especially important. For example, in a wireless telecommunications network based on code division multiple access (CDMA) technology, a large number of users communicate within the network using a variety of wireless devices that are sometimes referred to as terminals. These terminals include wireless telephones, pagers, email devices, personal digital assistants (PDA), and others. The network uses data encoding and sophisticated base station receivers to allow communication services to be provided to selected terminals within a predetermined area, or cell. For example, transmissions from each terminal in a cell may be uniquely encoded and transmitted to a base station receiver. In order to receive the transmitted information, the receiver may be tuned for each transmitting terminal to filter out unwanted noise. To accomplish this, the receiver may process the received transmissions to produce a two-dimensional data array, sometimes referred to as a "search space." One example of a search space provides the received transmissions in a two-dimensional data array where energy values in selected frequencies are associated with variations in a decoding sequence. Typically, the search space includes local data maximums (peaks) that correspond to a frequency and a decoding sequence variation that are associated with a particular transmitting terminal. By detecting local peaks (i.e., frequency and sequence) in the two-dimensional data array, it is possible to use this information to tune the receiver to accurately receive data transmissions from selected transmitting terminals. Current communication systems store the two-dimensional data array in a memory and repeatedly access the memory to compare data elements with their neighbor elements to detect local peaks. For example, if one wants to determine if a particular element of the two-dimensional array is a local maximum, then that element is compared to its four surrounding neighbor elements. This results in at least five memory accesses, which may be duplicated when detecting whether or not any of the neighbor elements represent local peaks in the data array. Thus, current systems are very inefficient because they require duplicate memory accesses to detect local peaks in the data array. This operation is especially problematic if the system is utilizing memory having relatively low bandwidth. For example, the amount of data is generally too large to store in a cache memory or register bank, and the bandwidth of external memory is typically much lower than that of internal memory. Therefore, what is needed is a system that operates to efficiently detect local maximums in a two-dimensional data array without performing duplicate memory accesses as required by current systems. SUMMARY [0007] In one or more embodiments, a peak detection system is provided that operates to detect local maximums in a two-dimensional data array. The system is suitable for use in any type of system where it is necessary to detect local data maximums in a data array while conserving memory bandwidth.
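To make the memory-access cost criticized in the background concrete, the following Python sketch restates the naive per-element test; it is illustrative only, with each indexing operation standing in for one memory access.

def naive_local_maxima(array):
    # Test every interior element against its four neighbors: five reads
    # per element, and the same elements are re-read for their neighbors.
    rows, cols = len(array), len(array[0])
    peaks = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            center = array[r][c]                  # read 1
            if (center > array[r][c - 1] and      # read 2
                    center > array[r][c + 1] and  # read 3
                    center > array[r - 1][c] and  # read 4
                    center > array[r + 1][c]):    # read 5
                peaks.append((r, c))
    return peaks

print(naive_local_maxima([[0, 1, 0], [1, 5, 1], [0, 1, 0]]))  # [(1, 1)]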
In one embodiment, the system reads the data elements of a two-dimensional data array out of a memory in row or column order, which results in the data taking on a streaming characteristic. The data then undergoes a series of delays that take advantage of the structure of the two-dimensional array to allow the data elements to be easily compared to detect local maximums. The detection system operates to utilize memory bandwidth very efficiently because the data elements of the array are read out of the memory only once. Thus, the system is suitable for use with any type of communication system that needs to detect local maximums in a search space to tune a receiver. In one embodiment, apparatus is provided for detecting a local maximum in a two-dimensional data set, where a stream of data elements represents the data set. The apparatus comprises first detection logic that receives the data stream and operates to detect a first data element that represents a peak in a first dimension of the data set. The apparatus also comprises second detection logic that receives the data stream and operates to detect a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. In another embodiment, apparatus is provided for detecting a local maximum in a two-dimensional data set, where a stream of data elements represents the data set. The apparatus comprises means for receiving the data stream, and means for detecting a first data element that represents a peak in a first dimension of the data set. The apparatus also comprises means for detecting a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. [00010] In another embodiment, a method is provided for detecting a local maximum in a two-dimensional data set, where a stream of data elements represents the data set. The method comprises receiving the data stream, and detecting a first data element in the data stream that represents a peak in a first dimension of the data set. The method also comprises associating a flag with the first data element, and detecting a second data element in the data stream that represents a peak in a second dimension of the data set. The method also comprises detecting a local maximum if the flag is associated with the second data element. [00011] In yet another embodiment, a computer-readable media is provided that comprises instructions, which when executed by a processor, operate to detect a local maximum in a two-dimensional data set, wherein the data set is represented by a stream of data elements. The computer-readable media comprises instructions for receiving the data stream, and instructions for detecting a first data element that represents a peak in a first dimension of the data set. The computer-readable media also comprises instructions for detecting a second data element that represents a peak in a second dimension of the data set, wherein a local maximum is detected if the first and second data elements are the same element. BRIEF DESCRIPTION OF THE DRAWINGS [00012] The foregoing aspects and the attendant advantages of the embodiments described herein will become more readily apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein:
[00013] FIG. 1 shows a communication system that includes one embodiment of a detection system that operates to detect local maximums in a data array; [00014] FIG. 2 shows a functional diagram of one embodiment of a signal pre-processor; [00015] FIG. 3 shows a data array produced by the pre-processor shown in FIG. 2; [00016] FIG. 4 shows one embodiment of a detection system to detect local maximums in a two-dimensional data array; [00017] FIG. 5 shows a detail diagram of one embodiment of a horizontal detector for use in one embodiment of a peak detection system; [00018] FIG. 6 shows a detail diagram of one embodiment of a vertical detector for use in one embodiment of a peak detection system; and [00019] FIG. 7 shows one embodiment of a method for operating a detection system for detecting local maximums in a two-dimensional data set. DETAILED DESCRIPTION [00020] The following detailed description describes a peak detection system, including methods and apparatus for detecting local maximums in a data array. It should be understood that the described peak detection system could also be used in conjunction with virtually any type of data processing system including, but not limited to, wireless communication systems, wired communication systems, telecommunication systems, networking systems, or any other type of system where detection of local maximums in a data set is needed. [00021] FIG. 1 shows a communication system 100 that includes one embodiment of a peak detection system that operates to detect local maximums in a data array. The communication system 100 comprises a satellite 108 that is in communication with terminals 102, 104, and 106. The satellite 108 receives signals transmitted from the terminals 102, 104, and 106, and re-transmits these signals to a receiver 110. [00022] In one embodiment, the system 100 operates using CDMA technology so that data from the transmitting terminals 102, 104, 106 is encoded and spread to look like a noise signal. Thus, it is the job of the receiver 110 to decode the received noise signals to obtain the transmitted data. It should be noted that the system 100 represents just one configuration and that other configurations are possible. For example, in another configuration, the terminals 102, 104, 106 communicate directly with the receiver 110. [00023] The receiver 110 comprises a signal pre-processor 112 that receives the signals transmitted from the satellite 108. The pre-processor operates to process the received signals 118 and form a data array that represents the data transmitted from the terminals 102, 104, and 106. The data array is input to one embodiment of a peak detection system 114 that operates to detect local maximums in the data array. After the local maximums are detected, the detection system 114 transmits information about the detected local maximums to a discriminator 116. The discriminator 116 uses the information to process the received signals so that data transmitted from each terminal (102, 104, 106) can be recovered from the received signals. [00024] The system 100 in this example comprises a satellite communication system; however, embodiments of the peak detection system 114 are suitable for use with ground-based communication systems, or any other type of processing system that needs to determine local maximums in a data set. [00025] FIG. 2 shows a functional diagram of one embodiment of the pre-processor 112.
The pre-processor 112 comprises correlator logic 202, Fast Fourier Transform (FFT) logic 204, memory 206, and a pseudorandom noise (PN) generator 208. [00026] It should be understood that the elements of the pre-processor 112 shown in FIG. 2 are for illustrative purposes only, and that implementation of the pre-processor 112 could be achieved in one of any number of ways using greater or fewer functional elements. For example, the correlator logic 202, FFT logic 204, and PN generator 208 could all be implemented in a computer program executed by one or more processors. [00027] During operation of the pre-processor 112, the correlator 202 correlates the received signals 118 with pseudorandom noise sequences generated by the PN generator 208. For example, in one embodiment, the PN generator 208 generates 128 PN sequences that are correlated with the received signals 118 by the correlator 202. However, it should be noted that any number of sequences can be generated to correlate with the received signals. [00028] The correlator 202 produces 128 correlated sequences 210 that are input to the FFT logic 204. The purpose of the correlator is to unscramble the data, but the frequency content of the data still needs to be determined. The FFT logic 204 performs FFTs on the input sequences and produces 128 FFT outputs that are stored in the memory 206. For example, the FFT logic 204 transforms the input sequences 210 to frequency domain signals 212. In one embodiment, the FFT logic 204 transforms an input sequence into 1024 bins where the value associated with each bin represents the energy at a particular frequency. For example, the overall bandwidth is divided by the number of bins to determine the bandwidth represented by each bin. [00029] Thus, as a result of the operation of the correlator 202, PN generator 208, and FFT logic 204, the memory 206 contains a two-dimensional data set that represents the frequency energy of the received signals after being correlated with selected PN sequences. This two-dimensional data set includes local maximums that represent transmitted energy from one or more transmitting terminals. In one or more embodiments, the detection system described herein operates to detect local maximums so that the data transmitted from the transmitting terminals can be received and recovered. For example, the local maximums correspond to the frequency and sequence variations associated with transmitting terminals. This information is used to tune the receiver to accurately receive data transmitted from the terminals. [00030] FIG. 3 shows a data array 300 produced by the pre-processor 112 and stored in the memory 206. The data array 300 comprises a number of Rows (R) and Columns (C), where each element in the data array represents energy in a selected frequency region (bin) for one of the correlated sequences 210 output from the correlator logic 202. For example, there are as many Rows as correlation output sequences 210, and within each row, the Columns represent frequency regions. In one embodiment, there are 128 correlation sequences 210 and 1024 frequency bins per FFT output 212, and so the array 300 includes 128 Rows and 1024 Columns. [00031] In one embodiment, each element of the data array 300 comprises a data element that is 32-bits wide. For example, one data element is illustrated by data element 302, which is located at (1,1) in the data array 300. The element 302 comprises a data portion 304, an identifier (ID) 306, and a flag 308.
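Before the fields of element 302 are described, the correlate-then-FFT pipeline just discussed can be illustrated at reduced scale. In the Python sketch below, the shrunken dimensions, the random stand-in signal, and the element-wise despreading step are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
NUM_SEQUENCES, NUM_BINS = 8, 64            # stand-ins for 128 and 1024
received = rng.standard_normal(NUM_BINS)   # stand-in for received signals 118
pn = rng.choice([-1.0, 1.0], size=(NUM_SEQUENCES, NUM_BINS))  # PN generator 208

correlated = pn * received                 # correlator 202: despread per sequence
search_space = np.abs(np.fft.fft(correlated, axis=1)) ** 2    # FFT logic 204
print(search_space.shape)                  # (Rows, Columns) = (sequences, bins)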
The data portion 304 represents the energy determined by the FFT logic 204 for that frequency bin of that sequence. The ID 306 represents the location of the element in the data array 300. For example, the ID indicates the frequency bin and sequence number of the data element. This information is used by the detection system to determine the location of detected local peaks in the data array 300. The flag 308 is used by the detection system during the process of detecting local maximums and its purpose is discussed in more detail in another section of this document. [00032] FIG. 4 shows one embodiment of the peak detection system 114 that operates to detect local maximums in a two-dimensional data array. The detection system comprises horizontal detection logic 402, vertical detection logic 404, a clock 406, and output logic 408. The system 114 also comprises a memory controller 410 and a processor 412. [00033] The memory controller 410 and the processor 412 operate to access a data array stored in a memory, for example the array 300. In one embodiment, the processor 412 comprises a CPU, gate array, hardware logic, software, or any combination of hardware and software. The memory controller 410 comprises any suitable hardware and/or software to allow the system 114 to access a data array via the control signal 416. The clock 406 is used to synchronize the operation of the detection system 114. For example, the memory controller 410 accesses a memory to read out the two-dimensional data array in the form of a pre-processed data stream 414. For example, with reference to the data array 300, the data is read out element by element across one row before proceeding down to the next row. Thus, the pre-processed data stream 414 is formed. [00034] The horizontal detection logic 402 processes the pre-processed data stream 414 to detect local maximums in one dimension. For example, one dimension (horizontal) is defined to represent the data along each row of the data array 300. The horizontal detection logic 402 operates to detect local maximums by comparing adjacent row elements in the data stream 414 and flagging any data elements in the data stream 414 that are detected as horizontal maximums. [00035] After processing by the horizontal detection logic 402, the pre-processed data stream then flows to the vertical detection logic 404. The vertical detection logic 404 processes the pre-processed data stream 414 to detect local maximums in another dimension referred to as the vertical dimension. For example, with reference to the data array 300, each column represents the vertical dimension and the vertical detection logic 404 detects local maximums down each column. In one embodiment, the vertical detection logic 404 utilizes delay elements so that adjacent elements within each column of the data array can be compared to each other and local maximums in the vertical dimension can be detected. [00036] Once local maximums in the horizontal and vertical directions have been detected, the information is passed to the output logic 408. The output logic 408 receives information about which elements of the pre-processed data stream are local maximums in both the vertical and horizontal dimensions. For example, if a data element is found to be a local maximum by the horizontal detection logic 402, then a flag associated with that data element is set.
If that same data element is found to be a local maximum by the vertical detection logic 404, the flag associated with that element is tested. If the flag is set, information about that data element is sent to the output logic. For example, peak information including the data value and its identifier is sent to the output logic 408, which forwards the peak information 418 to the next stage of the receiver, i.e., the discriminator. Thus, the detection system operates to detect local maximums in a data array, and provide the detected peak information to the next stage of a receiver. [00037] FIG. 5 shows a detail diagram of one embodiment of the horizontal detector 402 for use in one embodiment of a peak detection system. The horizontal detector 402 comprises registers 502, 504, and 506, comparators 510 and 512, AND logic 514, and flag logic 508. [00038] The registers 502, 504, 506 preferably comprise hardware but may comprise hardware, software, or any combination thereof. The registers 502, 504, 506 each provide storage for one data element of the pre-processed data stream 414. The registers 502, 504, 506 all receive a clock signal derived from the clock 406 so that the registers operate in a synchronous fashion. [00039] The comparators 510, 512 preferably comprise hardware, but may comprise hardware, software, or any combination thereof. The comparators 510, 512 have inputs "A" and "B" to receive values that are compared to each other to produce an output. The comparator 510 produces an output value of "1" if the value at its B input is greater than the value at its A input (B>A). The comparator 512 produces an output value of "1" if the value at its A input is greater than the value at its B input (A>B). [00040] The outputs of the comparators are input to the AND logic 514, which produces an output value of "1" if both inputs are "1." The output value from the AND logic 514 is input to the flag logic 508. [00041] During operation of the horizontal peak detector 402, the pre-processed data stream 414 is input to the register 502. Clock pulses provided by the clock signal cause the pre-processed data stream 414 to sequentially shift through the registers 502, 504, and 506. After each shift, the comparators 510, 512 compare adjacent data values in the pre-processed data stream 414. If the data value stored at register 504 is greater than the values stored at registers 502 and 506, then a horizontal peak is detected. The comparators 510 and 512 output values of "1" that cause the AND logic 514 to output a value of "1." The output from the AND logic 514 is input to the flag logic 508, which operates to set a flag that is associated with the data element stored at the register 504. [00042] At the next clock cycle, the data value stored at the register 504 transitions to the register 506. The data value that transitions to register 506 includes any flag that may have been set by the flag logic 508. For example, referring to the data element 302, if this element is detected to be a horizontal maximum, the flag 308 would be set. Thus, the data value can be identified as a local horizontal peak value because the flag has been set. [00043] The system continues to clock the pre-processed data stream 414 through the detector 402 until all, or a portion, of the data elements have passed through the registers 502, 504, and 506. As a result, a pre-processed data stream with flag values 516 is produced.
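In software terms, the register chain of FIG. 5 is a three-element window that slides along each row of the stream. The Python sketch below mirrors that behavior; the handling of row-edge elements is an assumption, since the figure description does not detail it.

def flag_horizontal_peaks(stream, num_cols):
    # `stream` is the row-major element stream 414; returns (value, flag)
    # pairs, i.e., the flagged stream 516.
    flagged = [[value, False] for value in stream]
    for i in range(1, len(flagged) - 1):
        left, mid, right = flagged[i - 1][0], flagged[i][0], flagged[i + 1][0]
        # Both comparators asserting corresponds to the middle register's
        # value exceeding both horizontal neighbors.
        interior = (i % num_cols) not in (0, num_cols - 1)  # assumed edge rule
        if interior and mid > left and mid > right:
            flagged[i][1] = True   # flag logic 508 sets the element's flag
    return [tuple(e) for e in flagged]

print(flag_horizontal_peaks([1, 3, 2, 0, 4, 1], num_cols=3))
# [(1, False), (3, True), (2, False), (0, False), (4, True), (1, False)]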
In the data stream 516, any data element that has been determined to represent a local horizontal maximum will have its associated flag set. The data stream 516 is then input to the vertical detection logic 404. [00044] In one embodiment, the detection logic 402 operates as the result of the execution of instructions stored in a memory to perform the functions described herein. For example, the memory may be part of the processor 412. The instructions may be stored in the memory during manufacture of the detection system 114. In one embodiment, the instructions are stored on a computer-readable media, such as a floppy disk, hard disk, CD-ROM, flash memory, or any other type of computer-readable media. The instructions on the computer-readable media may be retrieved and executed by the detection system 114. In one embodiment, the instructions are downloaded from the computer-readable media into the detection system 114 and stored in the memory for later execution. Thus, in one embodiment, the detection system 114 operates to execute instructions stored on a computer-readable media to perform the functions described herein. [00045] FIG. 6 shows a detail diagram of one embodiment of the vertical detector 404 for use in one embodiment of a peak detection system. The vertical detector 404 comprises shift registers 602, 604, and 606 and comparators 608 and 610. Also shown is output logic 408, which comprises AND logic 612. [00046] The registers 602, 604, 606 preferably comprise hardware logic, but may comprise hardware, software, or any combination thereof. The registers 602, 604, 606 each comprise "C" stages to provide storage for "C" data elements of the pre-processed data stream 516. The value of "C" is equivalent to the number of columns in the pre-processed data array. For example, in one embodiment, the number of columns is 1024, which is related to the number of bins associated with the output of the FFT logic 204. The registers 602, 604, 606 all receive a clock signal derived from the clock 406 so that the registers operate in a synchronous fashion. For example, a data value at the input of register 602 will appear at the output of that register after 1024 clock cycles of the clock input. [00047] The comparators 608, 610 preferably comprise hardware logic, but may comprise hardware, software, or any combination thereof. The comparators 608, 610 have inputs "A" and "B" to receive values that are compared to each other to produce an output. The comparator 608 produces an output value of "1" if the value at its B input is greater than the value at its A input (B>A). The comparator 610 produces an output value of "1" if the value at its A input is greater than the value at its B input (A>B). [00048] The outputs of the comparators 608, 610 are input to the AND logic 612, which produces an output value (E) equal to "1" if all three of its inputs are "1." The third input to the AND logic 612 is a flag value associated with the data element output from register 604. The output value (E) from the AND logic 612 is used to indicate that a local maximum has been detected in the pre-processed data stream 516. For example, if the peak detection system 114 is used in a receiver, the output value (E) and local peak (LP) value may be provided to another circuit in the receiver, such as discriminator 116. [00049] During operation of the vertical peak detector 404, the pre-processed data stream with flags 516 is input to the register 602.
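The clocked operation of this detector is walked through in the paragraphs that follow. As a software counterpart, the Python sketch below replaces the C-stage shift registers with two row-length delay lines; the flagged input format matches the horizontal-detector sketch above, and the choice to never test edge rows is an assumption about boundary handling.

from collections import deque

def detect_local_maxima(flagged_stream, num_cols):
    # `flagged_stream` holds (value, flag) pairs in row-major order; returns
    # stream indices where a horizontally flagged element also exceeds the
    # elements directly above and below it (the role of AND logic 612).
    recent = deque(maxlen=num_cols)  # first row-length delay line
    older = deque(maxlen=num_cols)   # second row-length delay line
    peaks = []
    for i, (value, flag) in enumerate(flagged_stream):
        if len(older) == num_cols:
            mid_value, mid_flag = recent[0]   # element under test (index i - C)
            above_value = older[0][0]         # its neighbor one row up
            if mid_flag and mid_value > above_value and mid_value > value:
                peaks.append(i - num_cols)    # stream index of the local peak
        if len(recent) == num_cols:
            older.append(recent[0])
        recent.append((value, flag))
    return peaks

stream = [(0, False), (1, True), (0, False),
          (1, False), (5, True), (1, False),
          (0, False), (1, True), (0, False)]
print(detect_local_maxima(stream, num_cols=3))  # [4] -> the element of value 5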
Clock pulses provided by the clock signal cause each element of the pre-processed data stream 516 to sequentially shift through the registers 602, 604, and 606. However, it takes "C" clock cycles to shift a data element completely through each of the registers 602, 604, and 606. After each shift, the comparators 608, 610 compare data values in the pre-processed data stream 516. The data values being compared are values that are vertically adjacent in the data array. For example, the data values are adjacent values in the Columns of the data array 300. If the data value stored at register 604 is greater than the values stored at registers 602 and 606, then a vertical peak is detected. The comparators 608 and 610 output values of "1" that cause the AND logic 612 to output (E) a value of "1" if the flag value (Flag) associated with the data element output from register 604 is set. [00050] The system continues to clock the pre-processed data stream 516 through the detector 404 until all, or a portion, of the data elements have passed through the registers 602, 604, and 606. As a result, the pre-processed data stream with flag values 516 is processed to detect local maximums in a data array stored in memory. [00051] In one embodiment, the detection logic 404 operates as the result of the execution of instructions stored in a memory to perform the functions described herein. For example, the memory may be part of the processor 412. The instructions may be stored in the memory during manufacture of the detection system 114. In one embodiment, the instructions are stored on a computer-readable media, such as a floppy disk, hard disk, CD-ROM, flash memory, or any other type of computer-readable media. The instructions on the computer-readable media may be retrieved and executed by the detection system 114. In one embodiment, the instructions are downloaded from the computer-readable media into the detection system 114 and stored in the memory for later execution. Thus, in one embodiment, the detection system 114 operates to execute instructions stored on a computer-readable media to perform the functions described herein. [00052] FIG. 7 shows one embodiment of a method 700 for detecting local maximums in a two-dimensional data set. The method 700 is suitable for use with one or more embodiments of a peak detection system as described herein. For the following description, it will be assumed that a receiver receives a radio signal that comprises transmissions from one or more transmitting terminals. The receiver includes one embodiment of a peak detection system as described herein. The receiver also comprises a pre-processor that operates to pre-process the received signal. For example, the receiver may be the receiver 110 shown in FIG. 1. [00053] At block 702, a two-dimensional data array is generated that represents a received signal, which comprises transmissions from one or more transmitting terminals. The data array is stored in a memory at the receiver. For example, a pre-processor included in the receiver processes the received signal and generates the two-dimensional array that is stored in the memory. For example, in one embodiment, the pre-processor is the pre-processor 112 shown in FIG. 2. [00054] At block 704, shift registers are initialized to begin the peak detection process.
For example, in one embodiment, the peak detection system comprises horizontal and vertical detection circuits that include shift registers that are used to shift the data array to determine horizontal and vertical peaks. These registers are initialized (i.e., cleared or preset) or otherwise set up to handle the peak detection process. [00055] At block 706, the memory is accessed to read out the data array. The array is read out element by element in an orderly fashion to create a data stream. For example, the elements are read out across each row until all, or a portion of, the rows have been read out. [00056] At block 708, a detection process is started that shifts elements of the data stream into the detection system. For example, a clock is used to shift each element into the detection system in a synchronous and orderly fashion. [00057] At block 710, three horizontal elements of the data array are tested to determine if a local horizontal peak exists. For example, in one embodiment, the horizontal peak detector shown in FIG. 5 is used to compare data elements output from three registers (502, 504, 506) to detect a local horizontal peak. [00058] At block 712, a test is performed to determine if a local horizontal peak has been detected. For example, referring to FIG. 5, comparators 510 and 512 compare a middle data element with two adjacent data elements. If the middle data element is greater than the adjacent elements, then a peak is detected and the method proceeds to block 714. If a peak is not detected, the method proceeds to block 716. [00059] At block 714, a flag is set that is associated with the detected horizontal peak. For example, the flag logic 508 sets a flag that is part of the data element of the detected peak. [00060] At block 716, three vertical elements of the data array are tested to determine if a local vertical peak exists. For example, in one embodiment, the vertical peak detector shown in FIG. 6 is used to compare data elements output from three registers (602, 604, 606) to detect a local vertical peak. [00061] At block 718, a test is performed to determine if a local vertical peak has been detected. For example, referring to FIG. 6, comparators 608 and 610 compare a middle data element with two vertically adjacent data elements. If the middle data element is greater than the vertically adjacent elements, then a vertical peak is detected and the method proceeds to block 720. If a peak is not detected, the method proceeds to block 724. [00062] At block 720, a test is performed to determine if a flag associated with the detected vertical peak is set. If the flag is set, a local peak has been detected in the data array. For example, the flag is set at block 714 if a horizontal peak is detected at block 712. If the flag is set, the method proceeds to block 722, and if not, the method proceeds to block 724. [00063] At block 722, a local peak has been detected in the data array and information about this peak is output from the detection system. For example, in one embodiment, the detection system may be part of a signal receiver and peak information detected by the detector is output to a discriminator circuit, as shown in FIG. 1. The output information contains the value of the data element that is detected to be a peak, and identifier information that identifies the location of the peak in the data array. [00064] At block 724, the next element of the data array is shifted into the detection system for processing.
For example, a clock signal is used to shift another data element from the data stream into the detection system. The method 700 continues until all, or a portion of, the data elements in the data array have been shifted into and processed by the detection system. [00065] Thus, the method 700 describes how local peaks are detected in a data array to determine the frequency and sequence variation associated with data transmitted from a transmitting terminal. The method is suitable for use in any type of processing system that needs to detect local peaks in a data array. It should be noted that additions, changes, deletions, or combinations of the method steps may be performed without deviating from the scope of the embodiments. [00066] In another embodiment, the location of local peaks in the data array is separately accounted for. For example, counters or other types of circuitry may be used to keep track of the location of local peaks in the data array. After the array is processed, the counter values are used to indicate the location of the detected peaks. [00067] A peak detection system has been described that operates to detect local peaks in a two-dimensional data set. Accordingly, while one or more embodiments of a peak detection system have been illustrated and described herein, it will be appreciated that various changes can be made to the embodiments without departing from their spirit or essential characteristics. Therefore, the disclosures and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims. |
Implementations relate to systems and methods for real-time image recognition and mobile visual searching. A mobile device, such as a cellular phone, acquires an image and pre-processes the acquired image to generate a visual search query based on objects detected in the acquired image. The visual search query includes the acquired image or a query image extracted therefrom and metadata associated with the detected objects. The mobile device wirelessly communicates the visual search query to a remote server, and in response to the visual search query, the remote server recognizes an object in the query image based on the associated metadata. The remote server then generates information content based on the recognized object and communicates the information content to the mobile device to be presented via the mobile device. |
1.A mobile device, comprising:A wireless interface to a server;A context data sensor; andA processor in communication with the wireless interface, the processor being configured to initiate a process, the process comprising:Obtain an image, including acquiring, via the context data sensor, context data associated with the acquired image;Detect one or more objects in the acquired image;Generate metadata associated with at least one of the detected objects, wherein generating the metadata comprises:Highlight or indicate the detected object; andReceive input that selects one of the highlighted or indicated objects,Wherein the metadata is based on the selected object and includes a determined object category of the selected object or a generated feature vector of the selected object;Extract a query image from the acquired image based on the selected object;Generate a visual search query, wherein the visual search query includes the query image, the metadata, and the context data;Transmit the visual search query to the server via the wireless interface; andIn response to the visual search query, receive and present content associated with the selected object.2.The mobile device of claim 1, wherein generating the metadata further comprises:Classify one or more of the detected objects as an object class;Highlight or indicate the classified object;Receive input indicative of a selected one of the classified objects; andGenerate the metadata based on the object class of the selected object.3.The mobile device of claim 2, wherein classifying one or more of the detected objects further comprises:Generate a feature vector based on at least one of the detected objects;Compare the feature vector with a set of image coefficients of a training image in the object class to determine a matching training image in the object class that matches the at least one detected object, wherein the set is stored in the mobile device; andClassify the detected objects based on the matched training images.4.The mobile device of claim 3, wherein the processor performs the operations of acquiring, detecting, highlighting and classifying in real time.5.The mobile device of claim 3, wherein the object category includes a logo, a design, a face, a landmark, a costume, a sign, a natural object, or an artificial object.6.The mobile device of claim 1, wherein generating the metadata based on the selected object further comprises:Classify the selected object as an object category; andGenerate the metadata based on the object category of the selected object.7.The mobile device of claim 6, wherein transmitting the visual search query to the server further comprises:Generate a destination address for the visual search query based on the object category of the selected object; andTransmit the visual search query to the server based on the destination address.8.The mobile device of claim 1, wherein extracting a query image from the acquired image further comprises cropping the acquired image, compressing the acquired image, scaling the acquired image, or converting the acquired image to grayscale.9.The mobile device of claim 1, wherein generating the metadata further comprises:Generate a feature vector based on at least one of the detected objects;Compare the feature vector with a set of image coefficients of a training image to determine a matching training image that matches the detected object, wherein the set of image coefficients is stored in the mobile device;Identify the detected
object based on the matched training image; andGenerate the metadata associated with the recognized object based on the matched training image.10.The mobile device of claim 1, wherein the context data comprises at least one of: Global Positioning System (GPS) positioning, assisted GPS (A-GPS) positioning, Galileo system positioning, tower trilateration fix, textual information, auditory information, accelerometer reading, gyroscope reading, or temperature reading.11.The mobile device of claim 1, wherein the content includes a name, a price, a manufacturer, a comment, a coupon, or an advertisement.12.A method of performing image recognition, comprising:Acquiring, by a mobile device, an image, including acquiring contextual data associated with the acquired image via a contextual data sensor of the mobile device;Detecting one or more objects in the acquired image;Generating metadata associated with at least one of the detected objects, wherein generating the metadata comprises:Highlighting or indicating the detected object; andReceiving input that selects one of the highlighted or indicated objects,Wherein the metadata is based on the selected object and includes a determined object category of the selected object or a generated feature vector of the selected object;Extracting a query image from the acquired image based on the selected object;Generating a visual search query, wherein the visual search query includes the query image, the metadata, and the contextual data;Transmitting the visual search query wirelessly; andIn response to the visual search query, receiving and presenting content associated with the selected object.13.The method of claim 12, wherein generating the metadata further comprises:Classifying one or more of the detected objects as an object class;Highlighting or indicating the classified object;Receiving input indicative of a selected one of the classified objects; andGenerating the metadata based on the object class of the selected object.14.The method of claim 13, wherein classifying one or more of the detected objects further comprises:Generating a feature vector based on at least one of the detected objects;Comparing the feature vector with a set of image coefficients of a training image in the object class to determine a matching training image in the object class that matches the at least one detected object, wherein the set is stored in the mobile device; andClassifying the detected objects based on the matched training images.15.The method of claim 14, wherein said mobile device performs said acquiring, detecting, highlighting and classifying operations in real time.16.The method of claim 12, wherein generating the metadata based on the selected object further comprises:Classifying the selected object as an object category; andGenerating the metadata based on the object category of the selected object.17.The method of claim 16, wherein wirelessly transmitting the visual search query further comprises:Generating a destination address for the visual search query based on the object category of the selected object; andTransmitting the visual search query to the destination based on the destination address.18.A system for performing image recognition, comprising:Means for acquiring an image by a mobile device comprising means for acquiring contextual data associated with the acquired image via a contextual data sensor of the mobile device;Means for detecting one or more objects in the acquired image;Means for generating metadata associated with at least one of
the detected objects, wherein the means for generating the metadata comprises:Means for highlighting or indicating said detected object; andMeans for receiving an input selecting one of a highlighted or indicated object, wherein the metadata is based on the selected object and including the determined object category of the selected object or the generated selection The eigenvector of the object;Means for extracting a query image from the acquired image based on the selected object;Means for generating a visual search query, wherein the visual search query includes the query image, the metadata, and the context data;Means for wirelessly transmitting the visual search query; andMeans for receiving and presenting informational content associated with the selected object in response to the visual search query.19.The system of claim 18, wherein the means for generating metadata further comprises:Means for classifying one or more of the detected objects as an object class;Means for highlighting or indicating the classified object;Means for receiving input indicative of a selected one of the sorted objects; andMeans for generating the metadata based on the object class of the selected object.20.The system of claim 19, wherein the means for sorting one or more of the detected objects further comprises:Means for generating a feature vector based on at least one of the detected objects;Means for comparing the feature vector with a set of image coefficients of a training image in the object class to determine a matching training image in the object class that matches the at least one detected object; as well asMeans for classifying the detected objects based on the matched training images.21.The system of claim 20, wherein the means for acquiring, detecting, highlighting and classifying is performed in real time.22.The system of claim 19, wherein the means for generating the metadata based on the selected object further comprises:Means for classifying the selected object as an object category; andMeans for generating the metadata based on the object class of the selected object.23.The system of claim 22, wherein the means for wirelessly transmitting the visual search query further comprises:Means for generating a destination address for the visual search query based on the object category of the selected object; andMeans for transmitting the visual search query to a destination based on the destination address.24.A system for performing image recognition, comprising:Server configured to:Receiving a visual search query from a mobile device, wherein the visual search query includes an image and metadata associated with at least one of the images, wherein the metadata is based on at least one of the image in the mobile device Detected object and comprising a determined object class of the detected object or a generated feature vector of the detected object that is detected by the mobile device at the mobile device Input selected, wherein the visual search query further comprises contextual data associated with the image,Identify an object in the image based on the metadata,Generate informational content based on the recognized object and the contextual data, andThe content is delivered in response to the visual search query.25.The system of claim 24, wherein the contextual data comprises at least one of: Global Positioning System (GPS) positioning, Auxiliary Global Positioning System A-GPS positioning, Galileo positioning, tower trilateration method, textual information, auditory information, 
acceleration Meter reading, gyro reading, or temperature reading.26.The system of claim 24, wherein the server is further configured to:Comparing the image with a training image to determine a matching training image that matches the image, wherein the training image is selected based on the metadata, andIdentify the object in the image based on the matched training image.27.The system of claim 24, wherein the server is further configured to:Detect the object in the image based on the metadata,Generate a feature vector of the object,Compare the feature vector with image coefficients of a training image to determine a matching training image that matches the object, wherein the image coefficients are selected based on the metadata, andThe object is identified based on the matched training image.28.The system of claim 24, wherein the object comprises a logo, a design, a face, a landmark, a garment, a sign, a natural object, or an artificial object.29.The system of claim 24, wherein the informational content includes a name, a price, a manufacturer, a comment, a coupon, or an advertisement.30.The system of claim 24, wherein the server is further configured to:Store the visual search query, andAssociate the informational content with the visual search query.31.A method of performing image recognition includes:Receiving a visual search query from a mobile device, wherein the visual search query includes an image and metadata associated with at least one of the images, wherein the metadata is based on at least one of the image in the mobile device Detected object and comprising a determined object class of the detected object or a generated feature vector of the detected object that is detected by the mobile device at the mobile device Inputting a selection, wherein the visual search query further comprises contextual data associated with the image;Identify an object in the image based on the metadata;Generate informational content based on the recognized object and the contextual data; andThe content is delivered in response to the visual search query.32.The system of claim 31, wherein the contextual data comprises at least one of: Global Positioning System GPS Positioning, Auxiliary Global Positioning System A-GPS Positioning, Galileo System Positioning, Tower Trilateration Method Fixed Point, Text Information, Auditory Information, Acceleration Meter reading, gyro reading, or temperature reading.33.The method of claim 31, wherein identifying objects in the image further comprises:Comparing the image with a set of training images to determine a matching training image that matches the image, wherein the set of training images is selected based on the metadata; andIdentify the object in the image based on the matched training image.34.A system for performing image recognition, comprising:Means for receiving a visual search query from a mobile device, wherein the visual search query includes an image and metadata associated with at least one of the images, wherein the metadata is based on at least one of The detected object generated from the detected object and including the determined object type of the detected object or the generated feature vector of the detected object, Selected by the input received at the device, wherein the visual search query further comprises contextual data associated with the image;Means for identifying an object in the image based on the metadata;Means for generating informational content based on the recognized object and the contextual data; andMeans for 
transmitting the content in response to the visual search query.35.The system of claim 34, wherein the contextual data comprises at least one of: Global Positioning System GPS Positioning, Auxiliary Global Positioning System A-GPS Positioning, Galileo System Positioning, Tower Trilateration Method Fixed Point, Textual Information, Auditory Information, Acceleration Meter reading, gyro reading, or temperature reading.36.The system of claim 34, wherein the means for identifying an object in the image further comprises:Means for comparing the image with a set of training images to determine a matching training image that matches the image, wherein the set of training images is selected based on the metadata; andMeans for identifying the object in the image based on the matched training image. |
System and method for image recognition using a mobile device

Divisional application information

This application is a divisional application. The parent is the invention patent application filed on April 14, 2010, having application number 201080016836.0 and entitled "System and Method for Image Recognition Using a Mobile Device".

Claim of priority under 35 U.S.C. § 119

This patent application claims the benefit of U.S. Provisional Patent Application Serial No. 61/043,400, filed on April 14, 2009 by Ricardo dos Santos, Yong Chang, Joseph Huang, Hsiang-Tsun Li, and Dev Yamakawa, entitled "Systems and Methods for Image Recognition Using Mobile Devices", which is assigned to the same entity as the present application and is hereby expressly incorporated herein by reference.

Technical field

This application relates to the use of mobile devices for image recognition.

Background

The present technology relates generally to methods and apparatus for performing image recognition and visual search using mobile devices, and more particularly to platforms and techniques for preprocessing an image acquired on a mobile device to extract a reduced set of image parameters, which may be transmitted to a network recognition system to identify an object of interest and to search for relevant content based on the identification.

Advances in cellular communication technologies and mobile communication devices, such as the integration of camera and video recording technologies onto such devices and the integration of email and short messaging services into cellular networks, have added greater flexibility, processing power, and communication capabilities to existing mobile communication devices. As a result, such mobile communication devices have become more prevalent in the consumer market, and many consumers now rely on their mobile communication devices (e.g., cellular phones) to take pictures and capture video, exchange messages in their social networks, make purchasing decisions, conduct financial transactions, and perform other activities.

The targeting, delivery, and pricing of advertisements and other content are commonly based on the click-through and conversion rates of the content, which are affected by the relevance of the advertising content and the timing of its delivery. For example, in Japan, a large number of consumers have used their cellular phones to photograph a barcode in a printed advertisement in order to obtain information associated with the advertised product or service, and if relevant advertising content is quickly sent to a potential consumer's cellular phone, that content may achieve a higher conversion rate. A potential consumer uses his or her cellular phone to take a photo of the printed advertisement, and the phone then sends a Multimedia Messaging Service (MMS) message with the photo to a server. The server performs a one-to-one match of the picture against an advertisement database, and after about thirty to sixty seconds, sends back a Short Messaging Service (SMS) message containing a web link associated with the printed advertisement.
However, such advertisement and content matching and delivery systems require a significant amount of bandwidth to transmit pictures of printed advertisements and expend significant resources matching the images against the entire advertisement database.

Summary

The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.

In accordance with the present teachings in one or more aspects, a method and apparatus for performing image recognition and mobile visual search are provided, in which a user of a mobile device acquires an image via the mobile device and receives informational content associated with the image. In one or more embodiments, the mobile device may detect, classify, and/or recognize one or more objects based on clusters of salient features in the acquired image, and generate a visual search query based on the objects. The visual search query may include the acquired image, or a query image extracted therefrom, along with metadata associated with the objects. The mobile device may wirelessly communicate the visual search query to a remote server, the remote server may generate informational content in response to the visual search query, and the mobile device may thereafter receive and present the informational content.

According to one embodiment, the mobile device may detect objects, highlight the objects to a user, and receive input indicating at least one selected object. Objects may include, for example, logos, designs, faces, landmarks, garments, signs, natural or man-made objects, and the like. The mobile device may then generate a feature vector based on the selected object and compare the feature vector with a set of image coefficients of training images to determine a matching training image that matches the selected object. The set of image coefficients may be stored in the mobile device. The mobile device may then classify and/or identify the selected object based on the matching training image, and generate the metadata based on the matching training image. The mobile device may also extract the query image from the acquired image based on the selected object, for example by cropping the acquired image, compressing the acquired image, scaling the acquired image, or converting the acquired image to gray scale.

According to one embodiment, the mobile device may include a sensor that obtains contextual data associated with the acquired image, and may include the contextual data in the visual search query. The contextual data may include, for example, a Global Positioning System (GPS) fix, an assisted GPS (A-GPS) fix, a Galileo system fix, tower trilateration, user-entered textual or auditory information, accelerometer readings, gyroscope readings, temperature readings, and the like.

According to one embodiment, the mobile device may wirelessly communicate the visual search query to a remote server in the image recognition system.
Upon receiving the visual search query containing the query image and the metadata associated with at least one object in the query image, the remote server may recognize the object in the query image based on the associated metadata. For example, the remote server may select a set of training images based on the associated metadata, compare the query image with the set of training images to determine a matching training image that matches the query image, and identify the object in the query image based on the matching training image. The remote server may then generate informational content based on the identified object and deliver the informational content in response to the visual search query. Informational content may include, for example, a name, price, manufacturer, review, coupon, or advertisement.

According to one embodiment, the remote server may receive a visual search query that includes, in addition to the query image and the associated metadata, contextual data associated with the query image. In this embodiment, the remote server may generate the informational content based on the recognized object and the contextual data, and thereafter transmit the informational content to the mobile device in response to the visual search query.

According to one embodiment, in an aspect, because the mobile device preprocesses the acquired image prior to wirelessly transmitting the visual search query, the mobile device may extract and send the relevant portion of the acquired image instead of the entire acquired image, thus increasing the speed of transmitting visual search queries and reducing communication bandwidth requirements. In addition, the remote server may utilize the metadata and/or contextual data associated with the query image to aid in identifying objects of interest in the query image, which enables the remote server to narrow the scope of the visual search and thus improve the accuracy, speed, and efficiency of the remote server and the overall image recognition system. In addition, the remote server may use the associated metadata and/or contextual data to focus or otherwise tailor the informational content, which may enable the remote server, and thus the image recognition system, to provide relevant information in real time or near real time in response to the visual search query.

To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features described below and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are, however, indicative of but a few of the various ways in which the principles of the various aspects may be employed, and this description is intended to include all such aspects and their equivalents.

Brief description of the drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate aspects of the present teachings and, together with the description, serve to explain the principles of the invention.
In the drawings:

Figure 1 illustrates an exemplary image recognition system consistent with one aspect of the present teachings, including a mobile device having a portable image sensor and a remote server in a backend of the image recognition system;

Figure 2 illustrates an exemplary configuration of a mobile device in accordance with one embodiment of the present teachings;

Figure 3 illustrates an exemplary configuration of a backend of an image recognition system that facilitates and participates in mobile visual search and image recognition in accordance with one embodiment of the present teachings;

Figure 4 illustrates a process flow diagram performed by a mobile device to enable mobile visual search and facilitate image recognition in accordance with another embodiment of the present teachings;

Figure 5 illustrates a process flow diagram performed by an image recognition system to enable mobile visual search and facilitate image recognition in accordance with yet another embodiment of the present teachings; and

Figures 6A-6D illustrate a processing sequence of an exemplary mobile visual search in accordance with yet another embodiment of the present teachings.

Detailed description

Reference will now be made in detail to embodiments of the present teachings, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts.

Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the aspects may be practiced without these specific details.

In this description, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion.

In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, the phrase "X uses A or B" is intended to mean any of the natural inclusive permutations; it is satisfied by any of the following instances: X uses A; X uses B; or X uses both A and B. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.

Furthermore, various aspects or features are presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, and so on, and/or may not include all of the devices, components, modules, and so on discussed in connection with the figures. A combination of these approaches may also be used.

Aspects of the present teachings relate to systems and methods for performing visual search and image recognition via a mobile device. More specifically, in one or more aspects, and as generally shown, for example, in FIG. 1, platforms and techniques are provided in which a mobile visual search is initiated via a mobile device 130 and an image is recognized in a timely manner by an image recognition system 120. According to one embodiment, and as generally shown in FIGS. 1 and 2, for example, the mobile device 130 acquires and preprocesses an image 100 to initiate a mobile visual search. The mobile device 130 may detect one or more objects based on clusters of salient features, or features of interest, in the image 100, highlight the detected objects, such as the pattern 115, and compare the objects with training images to classify or identify the objects. The mobile device may extract a sub-image from the acquired image based on the classified or recognized object. The mobile device 130 may also generate metadata based on the classified or recognized objects and obtain contextual data (e.g., a Global Positioning System (GPS) fix) associated with the image 100. The mobile device 130 may generate a visual search query that includes the acquired image, or sub-images extracted therefrom, together with the associated metadata and/or contextual data, and transmit the visual search query via a wireless connection 132 and a wireless service provider 150 to a remote server 140. In one or more instances, the extracted sub-images have a smaller file size than the file size of the acquired image, and a visual search query containing an extracted sub-image rather than the entire acquired image is therefore transmitted. This image reduction increases the speed at which visual search queries are delivered and reduces the communication bandwidth required to reach the server or other destination.

According to one embodiment, and as shown for example in FIGS. 1 and 3, the remote server 140 of the image recognition system 120 may receive the visual search query and generate informational content to be presented via the mobile device 130. Upon receiving the visual search query containing the query image and the metadata and/or contextual data associated with the query image, the remote server 140 may recognize at least one object in the query image based on the associated metadata. The remote server may generate the informational content based on the identified object and the associated contextual data, and then transmit the informational content to the mobile device 130. Thereafter, the mobile device 130 may present the informational content in response to the visual search query.

Benefiting from the metadata and/or contextual data associated with the query image, the remote server 140 may narrow the scope of the visual search and thus improve the accuracy, speed, and efficiency of the remote server 140 and the entire image recognition system 120. In addition, the remote server 140 may use the associated metadata and/or contextual data to tailor the informational content, which may enable the remote server 140, and thus the image recognition system 120, to provide relevant information in real time or near real time in response to the visual search query.

The image 100, or the patterns 115 within the image 100, captured by the mobile device 130 may contain one or more clusters of salient features (e.g., features, objects of interest, etc.) that correspond to one or more objects.
Objects may include, for example and without limitation, logos, designs, faces, landmarks, garments (e.g., t-shirts, hats, shoes, pockets, etc.), signs (e.g., street signs, hotel signs, etc.), barcodes, newspapers, posters, billboards, paintings, sketches, backdrops on which images are displayed or projected, retail care instructions, digital video disc (DVD) boxes, tickets, compact disc (CD) boxes, baseball cards, soda cans, and the like, or any combination thereof. In one example, the image 100 or the pattern 115 may be two-dimensional, even if the surface of the object captured in the image 100 is not flat and/or two-dimensional. FIG. 1 shows one embodiment of an image recognition system 120 in which an image 100 containing one or more patterns 115 is captured by a mobile device 130 having a portable image sensor.

The image recognition system 120 may be provided to implement a visual search and deliver informational content associated with the objects in the image 100 and/or the patterns 115 within the image 100. Informational content associated with an object may include visual, auditory, or other sensory content, or a location descriptor that makes it possible to access such content. For example, the content may be in the form of images, text, streaming or non-streaming video, streaming or non-streaming audio, universal resource locators (URLs), wireless application protocol (WAP) pages, hypertext markup language (HTML) pages, extensible markup language (XML) documents, executable programs, file names, Internet protocol (IP) addresses, telephone calls, devices, or other content. The content may be delivered to the mobile device 130 via a communication protocol such as, without limitation, email, multimedia messaging service (MMS), enhanced messaging service (EMS), short messaging service (SMS), WAP push, application push (e.g., push registry, etc.), a standard telephony protocol, or standard Internet protocols such as transmission control protocol (TCP), IP, user datagram protocol (UDP), hypertext transfer protocol (HTTP), and file transfer protocol (FTP).

As shown in FIG. 1, the image recognition system 120 includes a mobile device 130 that captures, generates, retrieves, or otherwise reproduces an image 100, a sub-image of which contains a pattern 115 including one or more objects, and that generates a visual search query based on the objects. The image 100 is an electronic representation of the objects captured by the mobile device 130. For example, the image 100 may be a data structure comprising a two-dimensional array of pixel information. Examples of the mobile device 130 include any mobile electronic device, such as, without limitation, a cellular phone ("cell phone"), a personal digital assistant (PDA), a digital camera, or any other suitable device capable of operating on a wireless access network (e.g., a Wi-Fi network), or a collection of two or more such devices communicatively coupled together, for example a digital camera in wired or wireless communication with a PDA.

The mobile device 130 includes a portable image sensor (e.g., the image sensor 200 shown in FIG. 2), which may be any electronic device capable of producing the image 100. For example, the portable image sensor may include a charge-coupled device (CCD) sensor or a complementary metal-oxide-semiconductor (CMOS) sensor, together with a set of optical lenses that convey a light pattern onto the sensor and thereby produce the image 100.
In one embodiment, the portable image sensor is built into the mobile device 130. In operation, a user aims the portable image sensor of the mobile device 130 in the general direction of a target, and the mobile device 130 generates the image 100 after capturing the area covering the target. The mobile device 130 may also retrieve one or more stored images, or capture one or more frames of video, to produce the image 100. For example, instead of generating an image using the portable image sensor, the mobile device 130 may retrieve an image stored in the mobile device 130 or received via a communication protocol (e.g., email, MMS, EMS, SMS, HTTP, UDP, etc.) to produce the image 100. In one embodiment, the retrieved images or captured frames may include visual search results and/or user annotations from a previously performed visual search, and the mobile device 130 may display the visual search results and/or user annotations independently of, or in combination with (e.g., overlaid on), the image 100. As will be described in more detail below, the mobile device 130 may detect objects in the image 100 and highlight or otherwise indicate to the user, in real time or on a near real-time basis, one or more of the objects. In aspects integrated in the mobile device 130, object detection may be performed or augmented using an application program interface (API) available from Qualcomm of San Diego, California. Other image detection and recognition APIs or services may be used to integrate object detection into the mobile device 130, such as those available for the Java Platform, Micro Edition (Java ME™) from Sun Microsystems, Symbian™ OS from Symbian Ltd., Flash Lite™ from Adobe Systems, Windows Mobile™ from Microsoft Corporation, iPhone™ OS from Apple Inc., and implementations under Android™ from the Open Handset Alliance.

The mobile device 130 may also include the capability to detect location, position, orientation, movement, and/or other contextual data associated with the mobile device 130 when generating the image 100. Detection and identification of the location or position of the mobile device 130 may be performed, for example, using various positioning services such as the Global Positioning System (GPS), assisted GPS (A-GPS), cellular base-station triangulation or trilateration based on the registered base station, the European Galileo positioning system, or other positioning or location services or technologies. Detection and identification of the orientation or movement of the mobile device 130 may be performed, for example, using built-in sensors (e.g., the sensor 290 shown in FIG. 2), which may include, for example, a GPS unit, an accelerometer, a gyroscope, and/or other orientation and motion detection services or technologies. The mobile device 130 may further include a user input interface (e.g., a keypad, a microphone, etc.) that may receive textual or auditory information entered by a user and provide that information as contextual data. The mobile device 130 may also include other types of sensors, such as a temperature sensor, that may provide other types of contextual data. As shown in FIG. 1, the mobile device 130 may communicate, via a wireless connection 132 and one or more base stations 135, with a wireless service provider 150 supported by one or more wireless servers operating within the image recognition system 120.
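To make the query-assembly steps above concrete, the sketch below crops and compresses a selected region of a capture and bundles it with metadata and contextual sensor readings into a single payload. It is a minimal sketch only: Pillow is assumed as the imaging library, and the class, field names, and JSON layout are illustrative, not anything specified by the present teachings.

```python
import base64
import json
from dataclasses import dataclass
from io import BytesIO

from PIL import Image  # Pillow is assumed for the cropping/compression steps


@dataclass
class VisualSearchQuery:
    """Illustrative container for the three parts of a visual search query."""
    query_image_jpeg: bytes   # cropped, scaled, compressed sub-image, not the full capture
    metadata: dict            # e.g., object category determined on the device
    context: dict             # e.g., GPS fix, accelerometer/gyroscope/temperature readings

    def to_payload(self) -> str:
        return json.dumps({
            "image": base64.b64encode(self.query_image_jpeg).decode("ascii"),
            "metadata": self.metadata,
            "context": self.context,
        })


def build_query(capture: Image.Image, bbox, metadata: dict, context: dict) -> VisualSearchQuery:
    """Crop the selected object, scale it down, convert to gray level, and
    JPEG-compress it so the transmitted query is far smaller than the capture."""
    crop = capture.crop(bbox)              # bbox = (left, top, right, bottom)
    crop.thumbnail((320, 320))             # scale while preserving aspect ratio
    buf = BytesIO()
    crop.convert("L").save(buf, "JPEG", quality=70)   # gray level plus compression
    return VisualSearchQuery(buf.getvalue(), metadata, context)


# Example: a logo selected at a known bounding box, with a GPS fix as context.
query = build_query(
    Image.open("capture.jpg"),
    bbox=(120, 80, 360, 240),
    metadata={"object_category": "logo"},
    context={"gps": {"lat": 32.7157, "lon": -117.1611}, "temperature_c": 21.0},
)
payload = query.to_payload()   # handed to the wireless interface for transmission
```

Transmitting the compact payload produced here, rather than the full capture, is what yields the bandwidth and latency savings described above.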
The wireless service provider 150 may in turn communicate with a set of resources including, for example, a user database that stores user-related subscription, configuration, positioning, and other information.

In one embodiment, the image recognition system 120 may further include a remote server 140 that operates in conjunction with the mobile device 130 and the wireless service provider 150 to enable visual search and to deliver informational content related to the objects in the image 100 in real time, on a near real-time basis, or otherwise. The remote server 140 includes one or more servers 142, 144, and 146, which may be coupled by a connection 148 spanning one or more communication networks, such as a local area network (LAN), an intranet, or the Internet. For example, the remote server 140 may include one or more messaging servers 142 to handle communications with the wireless service provider 150 and/or the mobile device 130 and, in response to a visual search query that may include image data, metadata, and/or contextual data associated with the image 100, to deliver content to the mobile device 130 or provide access to content; the remote server 140 may include a content server 144 to store and provide informational content; and the remote server 140 may include an image recognition server 146 to determine what informational content to deliver and/or how to deliver it. In one embodiment, the messaging server 142, the content server 144, and the image recognition server 146 may reside at different physical locations and be communicatively coupled via connections 148 over the Internet. For example, the messaging server 142 and the image recognition server 146 may physically reside at a location managed by a cellular telephone company that also manages the wireless service provider 150, while the content server 144 may physically reside at an advertising sales network, merchant, content provider, media provider, or other source of content to be delivered to the mobile device 130.

The remote server 140 may be coupled to the wireless service provider 150 via one or more communication links 170, which may include wireline links (e.g., T1 or T3 lines, etc.), wireless links, optical links, or other modes of communicative coupling. The wireless service provider 150 may provide cellular telephone or other digital communication services to users of electronic devices such as the mobile device 130. For example, the wireless service provider 150 may be a cellular telephone service provider (e.g., Sprint Nextel Corporation, etc.), a personal communications service (PCS) provider, or another wireless service provider. The wireless service provider 150 may include a network of one or more wireless servers and base stations 135. The mobile device 130 may communicate with a wireless server of the wireless service provider 150 via the base station 135 over the wireless connection 132 using a multi-tiered (e.g., client-server) software architecture. Thus, the mobile device 130 may communicate with the remote server 140 via the wireless service provider 150, and the remote server 140 may deliver related informational content to the mobile device 130 via the wireless service provider 150. Delivering the informational content may include presenting the informational content to a user of the image recognition system 120.
For example, the informational content may be transmitted to the mobile device 130 to be presented to the user, for example, on a visual display or through an audio speaker.

An exemplary configuration of the mobile device 130 in accordance with one or more embodiments of the present teachings will now be described with reference to FIG. 2. The mobile device 130 (as shown in FIG. 1) may include at least one antenna 202 (e.g., a transmit receiver or a group of such receivers comprising an input interface) that receives signals (e.g., signals concerning mobile origination, handshakes, handshake responses, mobile application data transfers, data events, data event responses, etc.), and a receiver 204 that performs a number of actions on the received signals (e.g., filtering, amplifying, downconverting, etc.). The antenna 202 may, for example, transmit or receive responses to handshake requests, data event requests, and the like. The antenna 202 and the receiver 204 may also be coupled to a demodulator 206 that demodulates the received signals and provides them to a processor 208 for processing. The mobile device 130 may additionally include a memory 210 comprising one or more computer-readable media, operatively coupled to the processor 208, that may store instructions to be executed and data to be transmitted, received, processed, and the like.

The processor 208 may analyze information received by the antenna 202 and/or a user input interface (not depicted) of the mobile device 130, and/or generate information for transmission by a transmitter 218 via a modulator 216. In addition, the processor 208 may control and/or reference one or more resources or components of the mobile device 130, including, for example, the image sensor 200, the demodulator 206, the memory 210, the modulator 216, the transmitter 218, an image detection unit 250, an image recognition unit 260, and a sensor 290. The processor 208 may also execute a runtime environment 212 (e.g., a Qualcomm runtime environment, Java ME™ from Sun Microsystems, Symbian™ OS from Symbian Ltd., Flash Lite™ from Adobe Systems, Windows Mobile™ from Microsoft Corporation, iPhone™ OS from Apple Inc., Android™ from the Open Handset Alliance, etc.) and a set of applications 214, or other software, modules, applications, logic, code, and the like.

In one embodiment, the mobile device 130 includes the memory 210 to store computer-readable data (e.g., the image 100 as shown in FIG. 1, an image coefficient library 262, etc.) and computer-executable software instructions (e.g., image detection/recognition software 270, the runtime environment 212, the application set 214, etc.). The memory 210 may include one or more of solid-state memory (e.g., read-only memory, random access memory, flash memory, etc.), magnetic hard drives, and optically readable media such as compact discs (CDs) or digital video discs (DVDs). The mobile device 130 may also include at least one processor 208 to execute the software instructions stored in the memory 210. The instructions are executed to configure the processor 208 to control and/or perform, for example, the functions of the image sensor 200, the image detection unit 250, and the image recognition unit 260, as will be described in more detail below with respect to FIG. 4.

In one embodiment, the image sensing capabilities and the image detection and/or recognition functionality are shown as involving processing performed by the image sensor 200, the image detection unit 250, and the image recognition unit 260 of the mobile device 130.
For example, the image sensor 200 may include a CCD sensor or a CMOS sensor, together with a set of optical lenses that convey a light pattern onto the sensor and thereby generate the image 100. In operation, the user aims the image sensor 200 of the mobile device 130 in the general direction of a target, and the image sensor 200 generates the image 100 after capturing the area covering the target. The mobile device 130 may also retrieve one or more stored images or capture one or more frames of video to produce the image 100. In one embodiment, the image sensor 200 is built into the mobile device 130. The functionality of image detection and image recognition, however, may reside entirely in the mobile device 130, entirely in the remote server 140, or in any combination thereof. For example, the image detection unit 250 and the image recognition unit 260 may be implemented as one or more sets of image processing software (e.g., the image detection/recognition software 270) stored in the memory 210 of the mobile device 130 and executable by the processor 208.

In one embodiment, the image detection/recognition software 270 may provide the mobile device 130 and its components with functional interfaces to the image sensor 200, the image detection unit 250, and/or the image recognition unit 260. The image detection/recognition software 270 may include algorithms for detecting one or more categories of objects in an image and/or identifying an object in the image based on clusters of salient features. The algorithms may include, for example, scale-invariant feature transforms (e.g., SIFT, SIFT++, LTI-lib SIFT, etc.), speeded-up robust features (e.g., SURF, SURF-d, etc.), augmented-reality algorithms (e.g., BazAR, etc.), and other image detection and recognition algorithms known to those skilled in the art. The image detection/recognition software 270 may also include algorithms for detecting or classifying the categories of one or more objects in the image based on the clusters of salient features corresponding to the objects, such as biologically inspired visual cortex networks (e.g., HMAX, etc.) and other object classification algorithms known to those skilled in the art. Object categories may include, for example, natural objects such as faces, animals, plants, terrestrial features, and the like. Object categories may also include, for example, man-made objects such as logos, designs, buildings, landmarks, garments, signs, vehicles, and the like. Although the terms "category of objects" and "object category" are used herein to describe a collection of objects that share certain properties, other similar terms known to those skilled in the art may be used interchangeably, such as class of objects, kind of objects, type of objects, and so on.

In one embodiment, the mobile device 130 may detect the objects in the image using one or more algorithms, detect the categories of the objects using the same or different algorithms, and/or identify the objects using the same or different algorithms. In one embodiment, the mobile device 130 may select the identification algorithm based on the detected object category. For example, the mobile device 130 may use HMAX to detect and classify an object in the image 100, and then use SIFT to identify an object in the image 100 that has been classified as a man-made object, as sketched below.
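As a concrete illustration of that two-stage flow (classify first, then identify with a keypoint method), here is a minimal sketch using OpenCV's SIFT implementation. The function name and thresholds are illustrative assumptions, and the classification stage (e.g., HMAX) is assumed to have already nominated the candidate training image; this is a sketch of the general technique, not the patent's implementation.

```python
import cv2  # OpenCV, assumed here as one plausible host for SIFT-style matching

def identify_object(query_gray, training_gray, min_matches: int = 12) -> bool:
    """Second-stage identification: SIFT keypoints plus Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, des_q = sift.detectAndCompute(query_gray, None)
    _, des_t = sift.detectAndCompute(training_gray, None)
    if des_q is None or des_t is None:
        return False                       # no salient keypoints found
    matches = cv2.BFMatcher().knnMatch(des_q, des_t, k=2)
    # The ratio test keeps only distinctive keypoint correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good) >= min_matches
```

In practice the ratio threshold and the minimum match count trade precision against recall, which is why a cheap classification stage to prune candidates first is attractive on a mobile device.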
In one embodiment, the image detection/recognition software 270 may include an algorithm for detecting logos. Logos appear on almost every product for marketing purposes, and a logo detection algorithm facilitates mobile visual search by detecting a logo pattern and its boundaries within an image. A logo typically has high contrast but a limited number of brightness and/or color levels, and thus the brightness and/or chrominance histograms of a logo tend to have two dominant peaks. Based on these characteristics, a logo pattern can be detected efficiently by, for example, obtaining histograms of the brightness (or luma, in the case where the RGB components are gamma-compressed) and chrominance components using Equation 1, as shown in Table 1.

Table 1 - Equation 1: Luminance and chrominance component computation

The histograms of the luminance and chrominance components may have any number of bins. In one example, a 16-bin histogram provides sufficient resolution to distinguish the dominant peaks of a logo pattern. After obtaining the histograms of the luminance and chrominance components, the logo detection algorithm can locate the strongest peaks in the histogram, typically two. The logo detection algorithm verifies that the two strongest peaks, denoted (peak1, bin1) and (peak2, bin2) and located at different bins of the histogram, satisfy the criteria given in Table 2.

Table 2

After detecting the logo pattern, the logo detection algorithm can detect the borders of the logo pattern using a one-dimensional (1-D) projection algorithm. The 1-D projection algorithm can use, for example, Equation 2, provided in Table 3, to obtain the deltas in the X and Y directions for the maximum and minimum connected components.

Table 3 - Equation 2: 1-D projection algorithm

The logo detection algorithm may determine the borders of the logo pattern based on the X-projection and the Y-projection. In one example, the logo detection algorithm can determine the logo pattern boundaries efficiently and with high confidence because of the pronounced deltas in the X-projection and Y-projection waveforms of a logo pattern. A logo detection algorithm stored in the image detection/recognition software 270 may be used by the image detection unit 250 to detect and/or locate one or more logos within the image 100, as will be described in greater detail below with respect to FIG. 4.

In one embodiment, the image processing software may access an image coefficient library 262, which may store the image coefficients of possible image candidates, or training images. Each of the training images may have a corresponding vector of coefficients, or image coefficients, that uniquely represents the training image. The image coefficients may comprise a set of numbers forming a signature of the corresponding training image, and the size of the image coefficients generally corresponds to the category of the training image. For example, the image coefficients of a logo (e.g., a BREW GAMING MONKEY™ logo, etc.) may have a size of about 22 × 18 × 32 bytes, or about 12 kilobytes, while the image coefficients of a person's face may have a size of more than one megabyte. The training images may be classified based on the objects contained therein using a classification algorithm (e.g., HMAX, K-nearest neighbors, support vector machines, neural networks, randomized trees, or other classification algorithms known to those skilled in the art).
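For concreteness, the following is a minimal sketch of the logo detection just described: a two-peak histogram test followed by a 1-D projection for the borders. Because Equation 1 (Table 1) and the peak criteria of Table 2 are not reproduced in the available text, the sketch assumes standard BT.601 luma weights and illustrative threshold values, and it assumes the weaker of the two peaks is the logo foreground.

```python
import numpy as np

def detect_logo_region(rgb: np.ndarray, bins: int = 16,
                       peak_frac: float = 0.5, min_sep: int = 2):
    """Two-peak luminance-histogram test plus 1-D projection (illustrative)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    luma = 0.299 * r + 0.587 * g + 0.114 * b          # assumed BT.601 luma weights

    hist, edges = np.histogram(luma, bins=bins, range=(0, 256))
    top = np.argsort(hist)[::-1][:2]                  # the two strongest peaks
    bin1, bin2 = int(top[0]), int(top[1])
    peak1, peak2 = int(hist[bin1]), int(hist[bin2])

    # Assumed stand-in for the Table 2 criteria: dominant, well-separated peaks.
    if peak1 + peak2 < peak_frac * luma.size or abs(bin1 - bin2) < min_sep:
        return None                                   # histogram is not logo-like

    # 1-D projection: flag pixels near the weaker peak (assumed foreground),
    # then project the mask onto the X and Y axes to find the borders.
    centers = (edges[:-1] + edges[1:]) / 2
    mask = np.abs(luma - centers[bin2]) <= (256.0 / bins) / 2
    cols = np.flatnonzero(mask.any(axis=0))
    rows = np.flatnonzero(mask.any(axis=1))
    if cols.size == 0 or rows.size == 0:
        return None
    return int(cols[0]), int(rows[0]), int(cols[-1]), int(rows[-1])  # border box
```

A chrominance histogram could be tested the same way; the sharp rise and fall of the projection waveforms at the logo edges is what makes the border estimate reliable.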
The image coefficients of the training images may be stored in the image coefficient library 262, and may further be indexed in the image coefficient library 262 based on the categories of the objects in the training images, metadata (e.g., object categories, trademarks, etc.), and/or contextual data (e.g., GPS fixes, location identifiers, etc.). The image coefficients of the training images stored in the image coefficient library 262 may be used by the image detection unit 250 and the image recognition unit 260 to classify, recognize, or otherwise identify one or more objects within the image 100 and/or the pattern 115, as will be described in more detail below with respect to FIG. 4.

The image processing software in the mobile device 130 may further include image editing software that may be used to crop, compress, scale, convert to gray scale, or otherwise process the image 100 captured by the image sensor 200 to extract or otherwise generate a sub-image containing the pattern 115. For example, the image 100 may be cropped or otherwise processed based on the detected, classified, and/or recognized objects. Alternatively or in addition, the image 100 may be cropped or otherwise processed according to instructions received from the user of the mobile device 130, or according to computer-readable instructions previously received by the mobile device 130. The image processing software may be written in any suitable programming language and/or development environment (e.g., Java ME™, Symbian™ OS, Flash Lite™, Windows Mobile™, iPhone™ OS, Android™). Alternatively or additionally, the image detection unit 250 and the image recognition unit 260 may be implemented as hardware in the mobile device 130. The hardware may include electronic circuitry comprising passive and/or active electronic components. For example, in one embodiment, the hardware may be implemented in at least one application-specific integrated circuit (ASIC).

An exemplary configuration of a backend 300 of the image recognition system 120, consistent with an embodiment of the present teachings, will now be described with reference to FIG. 3; the backend includes a remote server 140 and a wireless service provider 150 that may facilitate and/or participate in image recognition and visual search. In one embodiment, the backend 300 may include the wireless service provider 150 having a receiver 310 that receives input from one or more mobile devices (e.g., the mobile device 130 shown in FIG. 1) via a receive antenna 306, and a transmitter 322 that transmits one or more signals, modulated by a modulator 320, to the mobile devices via a transmit antenna 308. The receiver 310 may receive information from the receive antenna 306 and may further include a signal receiver (not shown) that receives feedback data related to unreceived or undecodable packets. In addition, the receiver 310 is operatively associated with a demodulator 312 that demodulates the received information. A processor 314 may analyze the demodulated symbols and information provided by the demodulator 312.

The processor 314 is further coupled to a memory 316 that may store one or more applications 318 that facilitate and/or participate in remote communications between the mobile devices, the wireless service provider 150, and/or the remote server 140. For example, the applications 318 may include a primary application configured to initiate handshakes and send data event requests (e.g., regarding diagnostic information, data analysis, etc.) to a recipient application operating on a mobile device. Alternatively, the applications 318 may include a secondary application that may receive handshake requests and authenticate an initiating application on a mobile device. The applications 318 may further include rules for generating and/or verifying identifiers that identify the applications 318 to a corresponding application on a mobile device, or a mobile device to the applications 318, or that identify a particular round-trip communication. In addition, the rules may specify policies for retransmitting unacknowledged transmissions, re-initiating handshake requests and/or responses, terminating communications, and the like. Accordingly, the applications 318 may participate in mobile communications with one or more applications residing on a mobile device (e.g., the application set 214 shown in FIG. 2), and/or in any other suitable activities related to performing the various actions and functions of the methods described herein.

In one embodiment, the backend 300 may include the remote server 140, which operates in conjunction with a mobile device (e.g., the mobile device 130) and the wireless service provider 150 to enable image recognition and visual search. The remote server 140 may include a messaging server 142 to handle communications with the mobile devices and/or the wireless service provider 150 and to deliver content to the mobile devices, or provide access to content, in response to visual search queries. For example, the messaging server 142 may receive a visual search query, which may include the image 100 or one or more sub-images (e.g., the pattern 115, etc.) extracted from the image 100, together with the metadata and/or contextual data associated with the image 100 and generated by the mobile device, and may then forward the visual search query to the image recognition server 146. As another example, the messaging server 142 may receive visual search results, which may include informational content related to the image 100 or the extracted sub-images, generated by the content server 144 in response to the visual search query, and may then transmit the visual search results to the wireless service provider 150 for delivery to the mobile device.

The remote server 140 may include, or be in communication with, the image recognition server 146 to recognize or otherwise identify one or more objects within the image 100 or the extracted sub-images based on the image data, the metadata and/or contextual data associated with the image 100, and/or user feedback on search results previously provided for similar visual search queries. User feedback on search results may, for example, include binary responses (e.g., yes/no, true/false, good/bad, etc.) or scaled responses (e.g., a rating on a scale from 1 to 10 regarding the accuracy or relevance of the search results), user annotations of the search results, user follow-up actions responsive to the search results (e.g., clicking on links or advertisements provided in the search results, etc.), and the like. The image recognition server 146 may further generate a semantic search query based on at least one identified object, the metadata and/or contextual data associated with the image 100, and any user feedback on previously provided search results.
In one embodiment, the image recognition server 146 includes a processor 360 and a memory 362 comprising one or more computer-readable media, operatively coupled to the processor 360, that may store instructions to be executed and data to be transmitted, received, processed, and the like. The memory 362 may include one or more of solid-state memory, magnetic hard drives, optically readable media such as CDs or DVDs, and the like. The instructions stored therein are executed to configure the processor 360 to control and/or perform, for example, visual search and image recognition in conjunction with the other components of the image recognition system 120. For example, the memory 362 may store image recognition software 364 and an image data and coefficient library 366. The image recognition software 364 may access the image data and coefficient library 366, which may store and index image data and/or the coefficients of possible image candidates, or training images. The training images may be classified based on the objects contained in the training images using a classification algorithm (e.g., HMAX, K-nearest neighbors, support vector machines, neural networks, randomized trees, or other classification algorithms known to those skilled in the art). The image data and coefficient library 366 may index the training images based on the categories of the objects in the training images, metadata (e.g., object categories, trademarks, etc.), and contextual data (e.g., GPS fixes, location identifiers, etc.) associated with the training images. Each of the training images has a data and/or coefficient vector that uniquely represents the training image; this vector may be stored in the image data and coefficient library 366 and used by the image recognition software 364 to identify one or more objects within the image 100 or the pattern 115, as will be described in greater detail below with respect to FIG. 5.
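As an illustration of how such indexing can narrow a search, the sketch below keys an in-memory stand-in for the library 366 by object category and coarse location, and matches only within that bucket. The data layout, names, and distance threshold are assumptions made for illustration, not the patent's data structures.

```python
import numpy as np

# Illustrative stand-in for the image data and coefficient library 366:
# coefficient vectors keyed by (object category, coarse location).
LIBRARY: dict[tuple, dict[str, np.ndarray]] = {
    ("logo", "san_diego"): {
        "BREW GAMING MONKEY": np.zeros(128),   # placeholder coefficient vector
    },
}

def recognize(query_vec: np.ndarray, metadata: dict, context: dict,
              max_dist: float = 0.8):
    """Narrow the candidate set with metadata/context, then match by distance."""
    key = (metadata.get("object_category"), context.get("region"))
    candidates = LIBRARY.get(key, {})
    best_name, best_dist = None, max_dist
    for name, coeff in candidates.items():
        dist = float(np.linalg.norm(query_vec - coeff))   # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name   # None means no training image matched closely enough
```

Narrowing the candidate set this way lets the matching cost scale with the size of one bucket rather than with the whole library, which is the accuracy and speed benefit the metadata and contextual data are described as providing.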
The remote server 140 may further include, or communicate with, the content server 144 to store, index, and provide informational content, such as product information (e.g., name, price, manufacturer, specifications, reviews, advertisements, coupons, promotions, etc.), links to product information, action links (e.g., links to online retailers for comparison shopping, saving to a wish list, sharing with friends, instant purchase, etc.), celebrity information (e.g., names, profiles, products, and/or services associated with celebrities), landmark information (e.g., names, history, products, and/or services associated with landmarks), or any combination thereof. The content server 144 may provide relevant informational content in response to, for example, a semantic search query generated by the image recognition server 146 based on at least one object, metadata, and/or contextual data associated with the image 100.

In one embodiment, the content server 144 includes a processor 340 and a memory 342 comprising one or more computer-readable media, operatively coupled to the processor 340, that may store instructions to be executed and data to be transmitted, received, processed, and the like. The memory 342 may include one or more of solid-state memory, magnetic hard drives, optically readable media such as CDs or DVDs, and the like. The instructions stored therein are executed to configure the processor 340 to search for and provide relevant content based on the image 100 or the pattern 115, in conjunction with the other components of the image recognition system 120. For example, the memory 342 may store instructions for a search engine 344 and a content database 346.

The search engine 344 may locate and provide relevant content in response to search queries from the mobile device and/or the image recognition server 146. In the illustrated embodiment, the content server 144 may crawl the content database 346 and/or other computer-readable data storage media coupled to the remote server 140 to locate and index the informational content stored therein before receiving a search query. Thus, the search engine 344 may locate the relevant content by accessing the index in response to the search query. The content server 144 may thereby determine what content to deliver to the mobile device and/or how to deliver the content (e.g., the content and communication protocols), based on the semantic search queries generated by the image recognition server 146, as will be described in more detail below with respect to FIG. 5.

FIGS. 4 and 5 illustrate methods and/or flowcharts in accordance with one or more aspects of the present teachings. For ease of explanation, the methods are depicted and described as a series of acts. It is to be understood and appreciated, however, that the innovation is not limited by the order of the acts and/or actions described. For example, acts may occur in various orders and/or concurrently, and with other acts not presented and described herein. In addition, not all illustrated acts may be required to implement a method in accordance with the claimed subject matter. Furthermore, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram, or as events. Additionally, it should be further appreciated that the methods disclosed below and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computers. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or medium.

FIG. 4 illustrates a flowchart of a process that may be performed by the mobile device 130 (shown in FIGS. 1 and 2), in accordance with one or more implementations of the present teachings, to enable mobile visual search and facilitate image recognition using the image recognition system 120 (shown in FIG. 1). At 410, the mobile device 130 may initiate visual search and image recognition by acquiring an image (e.g., the image 100 as shown in FIG. 1, the image 600 as shown in FIG. 6A, etc.). For example, a user of the mobile device 130 aims the image sensor 200 of the mobile device 130 in the general direction of a target, and the mobile device 130 may capture, generate, acquire, or otherwise reproduce an image representing the target. The mobile device 130 may also retrieve one or more stored images, or capture one or more frames of video, to produce an image. For example, instead of generating an image using the image sensor 200, the mobile device 130 may retrieve an image stored in the mobile device 130 or received via a communication protocol (e.g., email, MMS, EMS, SMS, HTTP, UDP, etc.) to produce the image.
In one embodiment, the retrieved images or captured frames may include visual search results and/or user annotations from a previously performed visual search, and the mobile device 130 may display the visual search results and/or user annotations independently of, or in combination with (e.g., overlaid on), the image 100.

Next, at 415, the mobile device 130 may detect the existence and location of one or more objects based on clusters of salient features corresponding to the objects in the acquired image. In one embodiment, the mobile device 130 may begin to detect objects without an affirmative input or other action (e.g., pressing a shutter button) from the user; instead, the mobile device 130 may compare continuously acquired images to determine when the image sensor 200 is, or has been, stationary for a threshold period of time, and may then detect the objects. In another embodiment, the mobile device 130 may begin to detect objects after an affirmative input or other action from the user.

In one embodiment, object detection may be performed or enhanced by using one or more image detection algorithms stored in the mobile device 130 (e.g., detection algorithms stored in and implemented by the image detection/recognition software 270 and the image detection unit 250 shown in FIG. 2). An object may be detected, for example, by locating keypoints (e.g., lines, edges, ridges, corners, blobs, T-junctions, or other salient features) in the image and then generating keypoint vectors based on the points or regions around the keypoints. Using the keypoint vectors, the mobile device 130 can locate the objects in the image, and then, for each of the objects, the mobile device 130 can generate a feature vector that uniquely represents the corresponding object. Other image detection algorithms may be used, including, for example, HMAX, SIFT, SIFT++, LTI-lib SIFT, SURF, SURF-d, BazAR, or other image detection algorithms known to those skilled in the art.
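The keypoint-and-descriptor pipeline described above can be approximated with an off-the-shelf detector. The following Python sketch uses OpenCV's ORB detector purely as a stand-in for the SIFT/SURF family named in the text; the file path and the bounding-box pooling are illustrative assumptions, not part of this description.

    import cv2

    # "frame.png" is a placeholder path for the acquired image.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # ORB locates salient keypoints (corner- and blob-like features) and
    # computes a binary descriptor for the patch around each keypoint.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(img, None)

    def object_feature_vector(keypoints, descriptors, box):
        """Crude per-object feature vector: pool the descriptors of keypoints
        that fall inside a detected object's bounding box (x, y, w, h)."""
        x, y, w, h = box
        idx = [i for i, kp in enumerate(keypoints)
               if x <= kp.pt[0] < x + w and y <= kp.pt[1] < y + h]
        return descriptors[idx].mean(axis=0) if idx else None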
In one embodiment, object detection may be performed for various categories of objects, such as logos, designs, faces, landmarks, garments, tokens, objects, and the like. In one aspect, object detection may be performed for only one or more preselected or user-selected categories of objects. For example, object detection may utilize a logo detection algorithm stored in the image detection/recognition software 270 to detect and/or locate only a logo or a logo-like pattern in the image. Alternatively or in addition, the mobile device 130 may generate the feature vectors corresponding to the detected objects in real time and compare the feature vectors with the image coefficients of possible image candidates or training images for a selected category of objects stored in the image coefficient library 262 to determine the categories of the detected objects. Each of the training images has a corresponding coefficient vector that uniquely represents the features in the training image. In one embodiment, the mobile device 130 may compare the feature vector of a detected object with the coefficient vectors of the training images of a selected category by calculating distances between the vectors (e.g., Mahalanobis distance, Euclidean distance, etc.) to determine the category of the detected object.

In one embodiment, the mobile device 130 may detect objects in the acquired image before the captured image is enhanced for viewing by a human (e.g., before enhancing the sharpness, brightness, and color dynamic range of the image) and an enhanced image is displayed on the viewfinder or display of the mobile device 130. Although an enhanced image may be more aesthetically pleasing to the user, such enhancements may hinder or even prevent the mobile device 130 from accurately and efficiently detecting objects in the image.

At 420, the mobile device 130 may highlight or otherwise indicate the detected objects in the image by overlaying indicators on the image. The indicators may take various forms of augmented reality graphics, for example, indicators surrounding the pattern 115 as shown in FIG. 1 and the patterns 610 through 620 as shown in FIGS. 6A and 6B, bounding boxes, bull's-eyes, hyperlinks, and the like. If the mobile device 130 has determined the categories of the detected objects at 415, the mobile device 130 may highlight only the detected objects categorized into one or more preselected or user-selected categories. Next, at 425, the mobile device 130 may receive an input from the user to select at least one of the highlighted objects, for example, the selected pattern 610 as shown in FIG. 6B. The user input may include an affirmative input or other action from the user via a user input interface. The user input may also include the user holding the mobile device 130 stationary, causing the image sensor 200 to focus on one of the detected objects for a threshold period of time.

Next, at 430, the mobile device 130 may categorize, recognize, or otherwise determine the characteristics of the selected object. The mobile device 130 may optionally refine the generated feature vector corresponding to the selected object. The mobile device 130 may determine the category of the selected object by comparing the feature vector of the selected object with the image coefficients of the training images for one or more categories stored in the image coefficient library 262. If the mobile device 130 has already classified the selected object (at 415), the mobile device 130 may retain the category of the selected object without further classification. In one embodiment, the mobile device 130 may compare the feature vector of the selected object with the image coefficients of the training images stored in the image coefficient library 262 to identify or otherwise determine the characteristics of the selected object. In one embodiment, the mobile device 130 may compare the feature vector of the selected object with the coefficient vectors of the training images by calculating distances between the vectors (e.g., Mahalanobis distance, Euclidean distance, etc.) to find a training image that matches the selected object, as sketched in the example below. If the mobile device 130 finds a training image that matches the selected object, the mobile device 130 may identify the selected object based on the matching training image. The dimensionality of the feature vector is directly related to the processing power and time needed to match the feature vector, so it may be desirable to minimize the number of dimensions of the feature vector. However, the feature vector should have enough dimensions to remain distinctive and robust to noise, detection errors, and geometric and photometric deformations.
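To make the distance-based matching step concrete, the sketch below compares a query feature vector against stored training coefficient vectors using the two distances named in the text. It is a plain NumPy illustration and assumes the inverse covariance matrix is available when the Mahalanobis distance is used.

    import numpy as np

    def euclidean(u, v):
        return float(np.linalg.norm(u - v))

    def mahalanobis(u, v, cov_inv):
        d = u - v
        return float(np.sqrt(d @ cov_inv @ d))

    def best_match(query, training_vectors, cov_inv=None):
        """Return (index, distance) of the training vector closest to `query`,
        using Mahalanobis distance when `cov_inv` is given, else Euclidean."""
        best_i, best_d = -1, float("inf")
        for i, t in enumerate(training_vectors):
            d = mahalanobis(query, t, cov_inv) if cov_inv is not None else euclidean(query, t)
            if d < best_d:
                best_i, best_d = i, d
        return best_i, best_d

    # Usage: the query matches the first of two toy training vectors.
    train = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    print(best_match(np.array([0.9, 0.1]), train))  # -> (0, ~0.14)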
At 435, the mobile device 130 may generate metadata associated with the selected object based on the training image that matches the selected object. For example, if the mobile device 130 matches the selected object with a training image of the BREW GAMING MONKEY™ logo, the mobile device 130 may generate metadata indicating that the selected object is a BREW GAMING MONKEY™ logo or relates to a BREW GAMING™ product. Alternatively, if the mobile device 130 is unable to match the selected object with a training image, the mobile device 130 may generate metadata containing the feature vector of the selected object.

At 440, the mobile device 130 may retrieve contextual data associated with the acquired image. The mobile device 130 may retrieve the location, orientation, movement, and/or other contextual data associated with the mobile device 130 as the image is captured or processed to detect the objects. For example, the contextual data can include the GPS location at which the image was acquired. As another example, the contextual data may include the orientation of the mobile device 130 (e.g., facing up at a billboard, facing down at a magazine, etc.) or the ambient temperature when the image was acquired. As yet another example, the contextual data may contain textual or auditory information entered by the user, such as a text or voice message like "at the US Open," passive information such as background noise, or a question such as "who is the lady on the left?" or "what is the lady on the left wearing?" In one embodiment, the mobile device 130 may acquire the contextual data at 440 independently of, or concurrently with, any of the processes performed at 410 through 435.

Next, at 445, the mobile device 130 may generate a visual search query based on the acquired image and transmit the visual search query to the backend 300 of the image recognition system 120. The visual search query may include a destination address of a processor or server in the backend 300, or of a process running therein, and the mobile device 130 may determine the destination address based on the category of the selected object. In one embodiment, the visual search query may include the acquired image, or sub-images extracted from the acquired image based on the selected object, as well as the metadata and/or contextual data associated with the acquired image or the extracted sub-images. The mobile device 130 may crop, compress, scale, convert to grayscale, or otherwise process the acquired image to extract or otherwise generate at least one sub-image based on the selected object.

For example, as illustrated in FIGS. 1 and 6C, if the selected object is identified at 430 as a BREW GAMING MONKEY™ logo, the mobile device 130 may crop or otherwise process the captured image to extract a sub-image containing the logo or the item to which the logo is attached (e.g., the t-shirt 630, an advertisement, a coupon, a hat, a pair of shoes, etc.). Alternatively or in addition, the mobile device 130 may crop or otherwise process the acquired image according to input received from the user of the mobile device 130 or according to computer-readable instructions previously received by the mobile device 130.
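As a concrete, hypothetical illustration of assembling such a visual search query, the Python sketch below crops the selected object from the acquired image, shrinks and compresses it, and bundles it with metadata and contextual data. The field names and file paths are assumptions; the description does not fix a wire format.

    from PIL import Image

    def build_visual_search_query(image_path, box, metadata, context,
                                  out_path="subimage.jpg"):
        """Crop the selected object, downscale and compress the sub-image to
        shrink the payload, and bundle it with metadata and contextual data."""
        img = Image.open(image_path).convert("L")  # convert to grayscale
        sub = img.crop(box)                        # box = (left, top, right, bottom)
        sub.thumbnail((320, 320))                  # scale down in place
        sub.save(out_path, "JPEG", quality=70)     # compress
        return {
            "image_file": out_path,
            "metadata": metadata,                  # e.g. {"logo": "BREW GAMING MONKEY"}
            "context": context,                    # e.g. {"gps": (37.42, -122.08)}
        }

    # Usage with placeholder values for the captured image and bounding box.
    query = build_visual_search_query(
        "captured.jpg", (40, 60, 220, 260),
        metadata={"category": "logo"},
        context={"gps": (37.42, -122.08)},
    )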
After generating a visual search query that includes the acquired image or extracted sub-image and the metadata and/or contextual data associated therewith, the mobile device 130 may transmit the visual search query to the backend 300 of the image recognition system 120. Because the extracted sub-image has a smaller file size than the acquired image, transmitting a visual search query containing the extracted sub-image instead of the entire acquired image can increase the speed at which the visual search query is sent. In addition, transmitting visual search queries that include the extracted sub-images instead of the entire captured image may also reduce the communication bandwidth required between the mobile device 130 and the server or other destination.

At 450, the mobile device 130 may receive visual search results from the backend 300 in response to the visual search query and present them to the user. The mobile device 130 may also store the visual search results and/or associate the visual search results with the visual search query, and may receive and store annotations from the user regarding the visual search results. Thereafter, the mobile device 130 may transmit the visual search results, the visual search query, and/or the user annotations via a communication protocol. The visual search results may include informational content associated with the selected object in the acquired image. For example, if the selected object is a logo (e.g., the logo in the selected pattern 610 as shown in FIG. 6B), the informational content may include product information (e.g., the product logo 650 and product type 660 shown in FIG. 6D), links leading to product information (e.g., the information link 670), related products (e.g., the related products 690 and advertisements 695), links to online retailers for comparison shopping, saving to a wish list, sharing with friends, or instant purchase (e.g., the purchase link 680), and the like, or any combination thereof. If the selected object is a celebrity's face, the informational content may, for example, include the celebrity's name, his or her profile, products and/or services associated with the celebrity, and other related information, or any combination thereof. If the selected object is a landmark, the informational content may include the name of the landmark, its history, products and/or services associated with the landmark, and other related information, or any combination thereof. In one embodiment, the mobile device 130 may receive requests for feedback regarding the visual search results from the backend 300, such as a confirmation dialog box allowing the user to rate the accuracy or relevance of the search results, an input dialog box allowing the user to annotate the search results, and the like. The above lists of various categories and types of images, metadata, contextual data, visual search queries and results, informational content, and user feedback mechanisms are for illustrative purposes only and are not intended to limit the present teachings in any way.

FIG. 5 illustrates a flowchart of a process that may be performed by the backend 300 (shown in FIG. 3) to perform a visual search and facilitate image recognition using the image recognition system 120 (shown in FIG. 1), in accordance with one embodiment of the present teachings.
At 510, the remote server 140 (shown in FIGS. 1 and 3) in the backend 300 may receive a visual search query via the wireless connection 132 and the wireless service provider 150, or via other data transmission means known to those skilled in the art. As described above, the visual search query may include an image containing at least one object of interest, and metadata and/or contextual data associated with the image. For purposes of illustration, an exemplary visual search query generated based on the image 600 (shown in FIG. 6C) may include an image of the t-shirt 630, metadata indicating that the image is associated with BREW GAMING™, and contextual data indicating the GPS location at which the image was acquired.

Next, at 515, the remote server 140 may recognize or otherwise identify the object of interest in the image based on the visual search query. The metadata and/or contextual data associated with the image, as well as any user feedback associated with search results previously provided for similar visual search queries, may assist the remote server 140 in identifying the object of interest, allowing the remote server 140 to focus or otherwise limit the scope of the visual search and thus improve the accuracy, speed, and/or efficiency of the image recognition system 120. In one embodiment, the remote server 140 may execute the image recognition software 364 stored in the image recognition server 146 to perform a one-to-one match of the image against the image data (e.g., image raster data, image coefficients, etc.) stored in the image data and coefficient library 366. The remote server 140 may focus the one-to-one matching based on the metadata and/or contextual data associated with the image. For example, upon receiving the exemplary visual search query generated based on the image 600, the remote server 140 may focus the one-to-one matching of the t-shirt 630 against the stored image data associated with BREW GAMING™.

In lieu of or in addition to the one-to-one matching, at 515 the remote server 140 may execute the image recognition software 364 to detect at least one object of interest in the image and calculate a feature vector that uniquely represents the object of interest. The remote server 140 may identify the object of interest based on the calculated feature vector by comparing the feature vector with the image coefficients of possible image candidates or training images stored in the image data and coefficient library 366. In one embodiment, the remote server 140 may compare the calculated feature vector with the coefficient vectors of the training images by calculating distances between the vectors (e.g., Mahalanobis distance, Euclidean distance, etc.) to find a matching training image. The remote server 140 may then recognize the object of interest based on the matching training image. The remote server 140 may focus the vector matching based on the metadata and/or contextual data associated with the image. For example, upon receiving the exemplary visual search query generated based on the image 600, the remote server 140 may focus the matching of the feature vector calculated from the t-shirt 630 against the stored image coefficients associated with BREW GAMING™.
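One way to picture how metadata can focus the server-side matching is the hypothetical sketch below: candidate training entries are first filtered by the query's metadata, and only the surviving candidates are compared by vector distance. The library schema is assumed for illustration.

    import numpy as np

    def focused_match(query_vec, library, metadata=None):
        """`library` is a hypothetical list of entries of the form
        {"vec": ndarray, "meta": {...}, "label": str}. Matching is restricted
        to entries whose metadata agrees with the query's metadata."""
        candidates = [e for e in library
                      if metadata is None
                      or all(e["meta"].get(k) == v for k, v in metadata.items())]
        if not candidates:
            candidates = library  # fall back to an unfocused search
        best = min(candidates,
                   key=lambda e: np.linalg.norm(e["vec"] - query_vec))
        return best["label"]

    # Usage: the brand tag narrows the search to one candidate.
    lib = [
        {"vec": np.array([1.0, 0.0]), "meta": {"brand": "BREW GAMING"}, "label": "monkey logo"},
        {"vec": np.array([0.9, 0.1]), "meta": {"brand": "other"}, "label": "other logo"},
    ]
    print(focused_match(np.array([0.9, 0.1]), lib, {"brand": "BREW GAMING"}))  # -> "monkey logo"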
At 520, the remote server 140 may generate, in response to the visual search query, a visual search result containing informational content based on the recognized object of interest. The remote server 140 may perform a semantic search based on the identified object, the metadata and/or contextual data associated with the image, and any user feedback associated with search results previously provided for similar visual search queries, to retrieve informational content associated with and/or related to the identified object. By using the associated metadata and/or contextual data to focus or otherwise limit the scope of the semantic search, the remote server 140, and thus the image recognition system 120, may provide more accurate and/or relevant informational content in response to the visual search query.

In one embodiment, the remote server 140 may execute the search engine 344 stored in the content server 144 to perform a semantic search of the informational content stored in the content database 346. The remote server 140 may focus the semantic search based on the metadata and/or contextual data associated with the image. For example, upon receiving the exemplary visual search query generated based on the image 600 and recognizing that the t-shirt 630 contains an image of the BREW GAMING MONKEY™ logo, the remote server 140 may perform a semantic search on BREW GAMING™ to retrieve related informational content, such as product information (e.g., the product logo 650 and product type 660 shown in FIG. 6D), links leading to product information (e.g., the information link 670), related products (e.g., the related products 690), links to online retailers for comparison shopping, saving to a wish list, sharing with friends, or instant purchase (e.g., the purchase link 680), and the like, or any combination thereof. As a further example, the remote server 140 may focus the semantic search using the associated contextual data (e.g., GPS location, text entered by the user, auditory information, etc.) to retrieve related informational content based on the GPS location, for example, an advertisement 695 for the related product 690 at a store near the GPS location (as shown in FIG. 6D), coupons and promotions available at a nearby store corresponding to the GPS location, and the like; a minimal sketch of such a context-focused search follows this passage. The above lists of various types of search queries, images, objects of interest, metadata, contextual data, visual search queries and results, and informational content are for illustrative purposes only and are not intended to limit the present teachings in any way.

Next, at 525, the remote server 140 may transmit or otherwise provide the visual search results containing the relevant informational content to the mobile device 130 via the wireless connection 132 and the wireless service provider 150, or via other data transmission means known to those skilled in the art. The remote server 140 may also transmit a request for user feedback regarding the visual search results, such as a confirmation dialog box allowing the user to rate the accuracy or relevance of the search results, an input dialog box allowing the user to annotate the search results, and the like. At 530, the remote server 140 may log the visual search queries for any purpose, such as fee collection, reporting, data mining, user or product profiling, future sales, and the like. In addition, the remote server 140 may log, for any purpose, the visual search results, whether or not associated with the corresponding visual search queries. The remote server 140 may also log user feedback on the visual search results, and/or associate the user feedback with the visual search results, for any purpose, such as training the image recognition software 364 and/or the search engine 344, fee collection, reporting, data mining, user or product profiling, future sales, and the like.
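A minimal sketch of the context-focused semantic search referenced above follows. It filters content entries by the recognized object's tag and ranks them by distance from the query's GPS location; the content schema and the radius are illustrative assumptions.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(a, b):
        """Great-circle distance between two (lat, lon) pairs, in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))

    def semantic_search(content_db, object_label, gps=None, radius_km=5.0):
        """Return content entries tagged with the recognized object; when a GPS
        location is given, keep nearby entries and sort them by distance."""
        hits = [c for c in content_db if object_label in c.get("tags", [])]
        if gps is not None:
            hits = [c for c in hits
                    if "gps" not in c or haversine_km(gps, c["gps"]) <= radius_km]
            hits.sort(key=lambda c: haversine_km(gps, c["gps"]) if "gps" in c else float("inf"))
        return hits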
In addition, the remote server 140 may cache the visual search results to quickly provide search results and minimize redundant processing in response to future visual search queries that are identical or nearly identical to previously processed visual search queries. The remote server 140 may further record statistical data associated with the processing of visual search queries by the image recognition system 120, such as search times, confidence levels in the relevance of the informational content in the visual search results, and the like.

When the embodiments described herein are implemented in software, firmware, middleware, microcode, and/or program code or code segments, they may be stored in a computer-readable storage medium, such as a storage component. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like may be passed, forwarded, or transmitted using any suitable means including memory sharing, message passing, token passing, network transmission, and the like. For software implementations, the techniques described herein can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. A memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.

The foregoing description is illustrative, and variations in configuration and implementation may occur to persons skilled in the art. For example, the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such tangible computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media.

The techniques described herein may be used for various wireless communication systems, for example, CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and other systems. The terms "system" and "network" are often used interchangeably. A CDMA system may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, and the like. UTRA includes Wideband CDMA (W-CDMA) and other variants of CDMA. Further, cdma2000 covers the IS-2000, IS-95, and IS-856 standards. A TDMA system may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA system may implement a radio technology such as Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, and the like. UTRA and E-UTRA are part of the Universal Mobile Telecommunication System (UMTS). 3GPP Long Term Evolution (LTE) is a release of UMTS that uses E-UTRA, which employs OFDMA on the downlink and SC-FDMA on the uplink. UTRA, E-UTRA, UMTS, LTE, and GSM are described in documents from the organization named "3rd Generation Partnership Project" (3GPP). Additionally, cdma2000 and UMB are described in documents from the organization named "3rd Generation Partnership Project 2" (3GPP2). Further, such wireless communication systems may additionally include peer-to-peer (e.g., mobile-to-mobile) ad hoc network systems, which often use unpaired unlicensed spectrum, 802.xx wireless LAN, BLUETOOTH, and any other short- or long-range wireless communication techniques.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. Although the methods have been described by way of example, the steps of the methods may be performed in an order different from that illustrated, or concurrently. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a mobile device. In the alternative, the processor and the storage medium may reside as discrete components in a mobile device. Resources described as singular or integrated may in one embodiment be plural or distributed, and resources described as plural or distributed may in other embodiments be combined. Accordingly, it is intended that the scope of the present teachings be limited only by the accompanying claims. |
A method and an apparatus are provided. The apparatus is a hardware module that controls a power mode of a plurality of modules. The apparatus receives an indication of a desired operational frequency. Based on the received indication, the apparatus determines to switch from a first power mode associated with a first set of modules to a second power mode corresponding to the desired operational frequency and associated with a second set of modules. The apparatus enables modules in the second set of modules that are unassociated with the first power mode, stops traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode, routes traffic through the second set of modules, and disables modules in the first set of modules that are unassociated with the second power mode. |
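The switching sequence the abstract describes (enable the new mode's extra modules, wait for them to settle, briefly stop traffic, reroute, then disable the old mode's extra modules) can be pictured with the following Python sketch. The module sets loosely follow the claims below, but the driver object, policy thresholds, and timing constants are hypothetical.

    import time

    # Hypothetical module sets per power mode, loosely following the claims.
    MODE_MODULES = {
        "ultra_low": {"low_power_cdc", "low_power_rx"},
        "low":       {"low_power_cdc", "medium_power_rx"},
        "medium":    {"high_power_cdc", "i2v", "pll", "bias_gen", "medium_power_rx"},
        "high":      {"high_power_cdc", "i2v", "pll", "ldo", "bias_gen",
                      "vref_gen", "high_power_rx"},
    }

    def mode_for_frequency(mhz):
        """Illustrative policy: lowest-power mode that supports the frequency."""
        if mhz <= 200:
            return "ultra_low"
        if mhz <= 800:
            return "low"
        if mhz <= 1600:
            return "medium"
        return "high"

    def switch_power_mode(phy, current_mode, desired_freq_mhz, settle_s=1e-6):
        """`phy` is a hypothetical driver exposing enable/disable/stop_traffic/
        route_traffic/resume_traffic. Order matters: both module sets stay up
        until traffic has been rerouted."""
        new_mode = mode_for_frequency(desired_freq_mhz)
        old, new = MODE_MODULES[current_mode], MODE_MODULES[new_mode]
        for m in sorted(new - old):
            phy.enable(m)             # bring up modules unique to the new mode
        time.sleep(settle_s)          # in hardware: a fixed timer, not a sleep
        phy.stop_traffic()            # the actual stall is on the order of 10-20 ns
        phy.route_traffic(new)        # switch the datapath to the new module set
        for m in sorted(old - new):
            phy.disable(m)            # tear down modules unique to the old mode
        phy.resume_traffic()
        return new_mode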
WHAT IS CLAIMED IS:

1. A method of a hardware module for controlling a power mode of a plurality of modules, comprising:
receiving an indication of a desired operational frequency;
determining to switch from a first power mode to a second power mode based on the received indication of the desired operational frequency, the first power mode being associated with a first set of modules of the plurality of modules, the second power mode being associated with a second set of modules of the plurality of modules, the second power mode corresponding to the desired operational frequency;
enabling modules in the second set of modules that are unassociated with the first power mode;
stopping traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode;
routing traffic through the second set of modules; and
disabling modules in the first set of modules that are unassociated with the second power mode.

2. The method of claim 1, wherein the enabling the modules comprises turning on the modules and the disabling the modules comprises turning off the modules.

3. The method of claim 1, wherein the enabling the modules comprises changing a state of the modules from a lower-power standby state to a higher-power operational state, and the disabling the modules comprises changing a state of the modules from a higher-power operational state to a lower-power standby state.

4. The method of claim 1, wherein the traffic is stopped for approximately 10 ns to 20 ns.

5. The method of claim 1, further comprising waiting for the time period until the second set of modules reaches a steady state.

6. The method of claim 1, wherein the hardware module and the first and second sets of modules are within a double data rate (DDR) physical (PHY) hardware module.

7. The method of claim 1, wherein the plurality of modules is associated with a double data rate (DDR) dynamic random access memory (DRAM).

8. The method of claim 1, wherein the plurality of modules comprises a first calibrated delay circuit (CDC) and a second CDC in parallel with the first CDC, the first set of modules comprises the first CDC, and the second set of modules comprises the second CDC, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first CDC.

9. The method of claim 8, wherein the second CDC supports a higher power mode than the first CDC.

10. The method of claim 8, wherein the second CDC supports a lower power mode than the first CDC.

11. The method of claim 1, wherein the plurality of modules comprises a first input receiver and a second input receiver in parallel with the first input receiver, the first set of modules comprises the first input receiver, and the second set of modules comprises the second input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first input receiver.

12. The method of claim 11, wherein the second input receiver supports a higher power mode than the first input receiver.
13. The method of claim 11, wherein the second input receiver supports a lower power mode than the first input receiver.

14. The method of claim 1, wherein the plurality of modules comprises at least one of a plurality of calibrated delay circuits (CDCs), a plurality of input receivers, a low-dropout (LDO) regulator, a current-to-voltage converter, a phase lock loop (PLL), a bias current generator, or a reference voltage generator.

15. The method of claim 14, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power input receiver.

16. The method of claim 14, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

17. The method of claim 14, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

18. The method of claim 14, wherein the first power mode comprises a low power mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.
19. The method of claim 14, wherein the first power mode comprises a low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the high-power CDC, the current-to-voltage converter, the PLL, and the bias current generator, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power CDC.

20. The method of claim 14, wherein the first power mode comprises a low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

21. The method of claim 14, wherein the first power mode comprises a medium performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

22. The method of claim 14, wherein the first power mode comprises a medium performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, and the bias current generator.
23. The method of claim 14, wherein the first power mode comprises a medium performance mode and the second power mode comprises a high performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.

24. The method of claim 14, wherein the first power mode comprises a high performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

25. The method of claim 14, wherein the first power mode comprises a high performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
26. The method of claim 14, wherein the first power mode comprises a high performance mode and the second power mode comprises a medium performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver.

27. The method of claim 1, wherein the modules are enabled in a particular sequence.

28. A hardware module apparatus for controlling a power mode of a plurality of modules, comprising:
a plurality of modules;
means for receiving an indication of a desired operational frequency;
means for determining to switch from a first power mode to a second power mode based on the received indication of the desired operational frequency, the first power mode being associated with a first set of modules of the plurality of modules, the second power mode being associated with a second set of modules of the plurality of modules, the second power mode corresponding to the desired operational frequency;
means for enabling modules in the second set of modules that are unassociated with the first power mode;
means for stopping traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode;
means for routing traffic through the second set of modules; and
means for disabling modules in the first set of modules that are unassociated with the second power mode.

29. The apparatus of claim 28, wherein the means for enabling the modules is configured to turn on the modules and the means for disabling the modules is configured to turn off the modules.

30. The apparatus of claim 28, wherein the means for enabling the modules is configured to change a state of the modules from a lower-power standby state to a higher-power operational state, and the means for disabling the modules is configured to change a state of the modules from a higher-power operational state to a lower-power standby state.

31. The apparatus of claim 28, wherein the traffic is stopped for approximately 10 ns to 20 ns.

32. The apparatus of claim 28, further comprising means for waiting for the time period until the second set of modules reaches a steady state.

33. The apparatus of claim 28, wherein the hardware module and the first and second sets of modules are within a double data rate (DDR) physical (PHY) hardware module.

34. The apparatus of claim 28, wherein the plurality of modules is associated with a double data rate (DDR) dynamic random access memory (DRAM).
35. The apparatus of claim 28, wherein the plurality of modules comprises a first calibrated delay circuit (CDC) and a second CDC in parallel with the first CDC, the first set of modules comprises the first CDC, and the second set of modules comprises the second CDC, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first CDC.

36. The apparatus of claim 35, wherein the second CDC supports a higher power mode than the first CDC.

37. The apparatus of claim 35, wherein the second CDC supports a lower power mode than the first CDC.

38. The apparatus of claim 28, wherein the plurality of modules comprises a first input receiver and a second input receiver in parallel with the first input receiver, the first set of modules comprises the first input receiver, and the second set of modules comprises the second input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first input receiver.

39. The apparatus of claim 38, wherein the second input receiver supports a higher power mode than the first input receiver.

40. The apparatus of claim 38, wherein the second input receiver supports a lower power mode than the first input receiver.

41. The apparatus of claim 28, wherein the plurality of modules comprises at least one of a plurality of calibrated delay circuits (CDCs), a plurality of input receivers, a low-dropout (LDO) regulator, a current-to-voltage converter, a phase lock loop (PLL), a bias current generator, or a reference voltage generator.

42. The apparatus of claim 41, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power input receiver.

43. The apparatus of claim 41, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
44. The apparatus of claim 41, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

45. The apparatus of claim 41, wherein the first power mode comprises a low power mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.

46. The apparatus of claim 41, wherein the first power mode comprises a low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the high-power CDC, the current-to-voltage converter, the PLL, and the bias current generator, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power CDC.

47. The apparatus of claim 41, wherein the first power mode comprises a low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
48. The apparatus of claim 41, wherein the first power mode comprises a medium performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

49. The apparatus of claim 41, wherein the first power mode comprises a medium performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, and the bias current generator.

50. The apparatus of claim 41, wherein the first power mode comprises a medium performance mode and the second power mode comprises a high performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.

51. The apparatus of claim 41, wherein the first power mode comprises a high performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
52. The apparatus of claim 41, wherein the first power mode comprises a high performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.

53. The apparatus of claim 41, wherein the first power mode comprises a high performance mode and the second power mode comprises a medium performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver.

54. The apparatus of claim 28, wherein the modules are enabled in a particular sequence.

55. An integrated circuit hardware module apparatus for controlling a power mode of a plurality of modules, comprising:
a plurality of modules; and
a frequency power manager configured to:
receive an indication of a desired operational frequency;
determine to switch from a first power mode to a second power mode based on the received indication of the desired operational frequency, the first power mode being associated with a first set of modules of the plurality of modules, the second power mode being associated with a second set of modules of the plurality of modules, the second power mode corresponding to the desired operational frequency;
enable modules in the second set of modules that are unassociated with the first power mode;
stop traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode;
route traffic through the second set of modules; and
disable modules in the first set of modules that are unassociated with the second power mode.

56. The apparatus of claim 55, wherein the frequency power manager is configured to enable the modules by turning on the modules, and to disable the modules by turning off the modules.

57. The apparatus of claim 55, wherein the frequency power manager is configured to enable the modules by changing a state of the modules from a lower-power standby state to a higher-power operational state, and to disable the modules by changing a state of the modules from a higher-power operational state to a lower-power standby state.
58. The apparatus of claim 55, wherein the traffic is stopped for approximately 10 ns to 20 ns.
59. The apparatus of claim 55, wherein the frequency power manager is configured to wait for the time period until the second set of modules reaches a steady state.
60. The apparatus of claim 55, wherein the hardware module and the first and second sets of modules are within a double data rate (DDR) physical (PHY) hardware module.
61. The apparatus of claim 55, wherein the plurality of modules is associated with a double data rate (DDR) dynamic random access memory (DRAM).
62. The apparatus of claim 55, wherein the plurality of modules comprises a first calibrated delay circuit (CDC) and a second CDC in parallel with the first CDC, the first set of modules comprises the first CDC, and the second set of modules comprises the second CDC, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first CDC.
63. The apparatus of claim 62, wherein the second CDC supports a higher power mode than the first CDC.
64. The apparatus of claim 62, wherein the second CDC supports a lower power mode than the first CDC.
65. The apparatus of claim 55, wherein the plurality of modules comprises a first input receiver and a second input receiver in parallel with the first input receiver, the first set of modules comprises the first input receiver, and the second set of modules comprises the second input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first input receiver.
66. The apparatus of claim 65, wherein the second input receiver supports a higher power mode than the first input receiver.
67. The apparatus of claim 65, wherein the second input receiver supports a lower power mode than the first input receiver.
68. The apparatus of claim 55, wherein the plurality of modules comprises at least one of a plurality of calibrated delay circuits (CDCs), a plurality of input receivers, a low-dropout (LDO) regulator, a current-to-voltage converter, a phase lock loop (PLL), a bias current generator, or a reference voltage generator.
69. The apparatus of claim 68, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power input receiver.
70. The apparatus of claim 68, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
71. The apparatus of claim 68, wherein the first power mode comprises an ultra-low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
72. The apparatus of claim 68, wherein the first power mode comprises a low power mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the low-power CDC and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.
73. The apparatus of claim 68, wherein the first power mode comprises a low power mode and the second power mode comprises a medium performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the high-power CDC, the current-to-voltage converter, the PLL, and the bias current generator, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the low-power CDC.
74. The apparatus of claim 68, wherein the first power mode comprises a low power mode and the second power mode comprises a high performance mode, the first set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
75. The apparatus of claim 68, wherein the first power mode comprises a medium performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
76. The apparatus of claim 68, wherein the first power mode comprises a medium performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and the medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the low-power CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, and the bias current generator.
77. The apparatus of claim 68, wherein the first power mode comprises a medium performance mode and the second power mode comprises a high performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the medium-power input receiver.
78. The apparatus of claim 68, wherein the first power mode comprises a high performance mode and the second power mode comprises an ultra-low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a low-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
79. The apparatus of claim 68, wherein the first power mode comprises a high performance mode and the second power mode comprises a low power mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises a low-power CDC of the plurality of CDCs and a medium-power input receiver of the plurality of input receivers, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the second set of modules, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the first set of modules.
80. The apparatus of claim 68, wherein the first power mode comprises a high performance mode and the second power mode comprises a medium performance mode, the first set of modules comprises a high-power CDC of the plurality of CDCs, the current-to-voltage converter, the PLL, the LDO regulator, the bias current generator, the reference voltage generator, and a high-power input receiver of the plurality of input receivers, the second set of modules comprises the high-power CDC, the current-to-voltage converter, the PLL, the bias current generator, and a medium-power input receiver, wherein the modules that are enabled in the second set of modules that are unassociated with the first power mode comprise the medium-power input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode comprise the LDO regulator, the reference voltage generator, and the high-power input receiver.
81. The apparatus of claim 55, wherein the modules are enabled in a particular sequence.
82. The apparatus of claim 55, wherein the frequency power manager comprises one or more finite state machines (FSMs).
FREQUENCY POWER MANAGER

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims the benefit of U.S. Provisional Application Serial No. 61/817,130, entitled "FREQUENCY POWER MANAGER" and filed on April 29, 2013, and U.S. non-Provisional Application Serial No. 13/901,511, entitled "FREQUENCY POWER MANAGER" and filed on May 23, 2013, which are expressly incorporated by reference herein in their entirety.

BACKGROUND

Field

[0002] The present disclosure relates to a frequency power manager.

Background

[0003] For some hardware applications, various power modes are needed. The power modes may be supported by different sets of modules (components) within an interface and may correspond to clock frequencies at which an external module and the different sets of modules interfacing the external module operate. For example, an interface may communicate with an external module, and the interface may include various sets of modules, including a first set of higher-power modules that operates with a higher performance and a second set of lower-power modules that operates with a lower performance. There is a current need for a frequency power manager to manage the power utilization of the first and second sets of modules in order to optimize the power consumed by the first and second sets of modules within the interface.

SUMMARY

[0004] In an aspect of the disclosure, a method and an apparatus are provided. The apparatus may be a frequency power manager. The apparatus is a hardware module that controls a power mode of a plurality of modules. The apparatus receives an indication of a desired operational frequency. The apparatus determines to switch from a first power mode to a second power mode based on the received indication of the desired operational frequency. The first power mode is associated with a first set of modules of the plurality of modules. The second power mode is associated with a second set of modules of the plurality of modules. The second power mode corresponds to the desired operational frequency. The apparatus enables modules in the second set of modules that are unassociated with the first power mode. The apparatus stops traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode. The apparatus routes traffic through the second set of modules. The apparatus disables modules in the first set of modules that are unassociated with the second power mode.

[0005] The apparatus may enable the modules by turning on the modules and disable the modules by turning off the modules. The apparatus may enable the modules by changing a state of the modules from a lower-power standby state to a higher-power operational state, and may disable the modules by changing a state of the modules from a higher-power operational state to a lower-power standby state. The apparatus may stop the traffic for approximately 10 ns to 20 ns. However, the amount of time the traffic is stopped may be programmable. The plurality of modules may be within a double data rate (DDR) physical (PHY) interface and may be associated with and used to send control/data to and to receive data from a DDR dynamic random access memory (DRAM).

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 is a diagram illustrating a use of an exemplary frequency power manager for controlling a power mode.

[0007] FIG. 2 is a diagram illustrating an exemplary set of modules controlled by the frequency power manager.
[0008] FIG. 3 is a diagram illustrating modules that may be utilized in a first power mode.

[0009] FIG. 4 is a diagram illustrating modules that may be utilized in a second power mode.

[0010] FIG. 5 is a diagram illustrating modules that may be utilized in a third power mode.

[0011] FIG. 6 is a diagram illustrating modules that may be utilized in a fourth power mode.

[0012] FIG. 7 is a flow chart of a method of a hardware module for controlling a power mode of a plurality of modules.

[0013] FIG. 8 is a diagram illustrating finite state machines within the frequency power manager.

[0014] FIG. 9 is a conceptual data flow diagram illustrating the data flow between different modules/means/components in an exemplary apparatus.

DETAILED DESCRIPTION

[0015] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0016] Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

[0017] By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a "processing system" that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

[0018] Accordingly, in one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes CD, laser disc, optical disc, digital versatile disc (DVD), and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0019] FIG. 1 is a diagram 100 illustrating use of an exemplary frequency power manager 106 for controlling a power mode. A system on a chip (SoC) 102 includes processor(s) 160, a memory controller 170, and an interface 180. The interface 180 includes the frequency power manager 106 and a module component 128. The module component 128 includes a plurality of modules 130-150. The processor(s) 160 determines to transition to a particular desired operational frequency and informs the memory controller 170. The memory controller 170 provides the desired operational frequency to the frequency power manager 106. Based on the received desired operational frequency, the frequency power manager 106 controls a power mode of the module component 128 including the modules 130-150 and a power mode of communication between the interface 180 and the external module(s) 190. The selected power mode controls the operational frequency of some of the modules 130-150 and the communication between the interface 180 and the external module(s) 190. The modules 130-150 interface with the external module(s) 190. In one example, the interface may be a PHY interface 180, and specifically may be a DDR PHY interface, and the external module 190 may be a DDR DRAM. However, the exemplary methods and apparatuses are not limited to applications involving a DDR DRAM. As such, the interface 180 may be any mixed signal design for interfacing with any external module(s) 190 in order to control a power mode of the interface 180 and the communication with the external module(s) 190.

[0020] Upon receiving a desired operational frequency from the memory controller 170, the frequency power manager 106 determines whether to switch power modes. If the desired operational frequency is obtainable in a current power mode, the frequency power manager 106 maintains the current power mode. If the frequency power manager 106 determines to switch power modes, the frequency power manager 106 transitions the module component 128 from a first power mode corresponding to a prior power mode to a second power mode corresponding to a subsequent power mode. The second power mode is requisite for providing the desired operational frequency received from the memory controller 170. For example, if the frequency power manager 106 receives an operational frequency f, where f < 200 MHz, the frequency power manager 106 may transition the module component 128 to an ultra-low power mode for operating at the desired operational frequency. For another example, if the frequency power manager 106 receives an operational frequency f, where 200 MHz < f < 250 MHz, the frequency power manager 106 may transition the module component 128 to a low power mode for operating at the desired operational frequency. For another example, if the frequency power manager 106 receives an operational frequency f, where 250 MHz < f < 533 MHz, the frequency power manager 106 may transition the module component 128 to a medium performance mode for operating at the desired operational frequency. For another example, if the frequency power manager 106 receives an operational frequency f, where f > 533 MHz, the frequency power manager 106 may transition the module component 128 to a high performance mode for operating at the desired operational frequency. The aforementioned frequencies and frequency ranges may be programmable.
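For concreteness, the frequency-to-mode selection of paragraph [0020] can be written as a simple threshold lookup. The following is a minimal sketch, assuming illustrative mode and function names (not from the source); the thresholds are the example values above, which the disclosure states are programmable.

```python
# Minimal sketch of the power-mode selection in paragraph [0020]. The threshold
# values come from the examples above and are programmable per the disclosure;
# the function and constant names are illustrative assumptions.
THRESHOLDS_MHZ = (200, 250, 533)  # programmable mode boundaries

def select_power_mode(freq_mhz: float) -> str:
    """Map a desired operational frequency to the power mode that supports it."""
    ulp_limit, lp_limit, mp_limit = THRESHOLDS_MHZ
    if freq_mhz < ulp_limit:
        return "ultra_low_power"        # f < 200 MHz
    if freq_mhz < lp_limit:
        return "low_power"              # 200 MHz < f < 250 MHz
    if freq_mhz < mp_limit:
        return "medium_performance"     # 250 MHz < f < 533 MHz
    return "high_performance"           # f > 533 MHz

# If the desired frequency is obtainable in the current mode, no switch occurs.
assert select_power_mode(150) == "ultra_low_power"
assert select_power_mode(600) == "high_performance"
```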
[0021] The first power mode (or prior power mode) may be associated with a first set of modules of the module component 128 and the second power mode (or subsequent power mode) may be associated with a second set of modules of the module component 128 different than the first set of modules. For example, the first power mode may be associated with any one of the sets of modules 108, 110, 112, or 114, and the second power mode may be associated with any other of the sets of modules 108, 110, 112, or 114. Each of the sets of modules 108, 110, 112, and 114 is different, but may include some of the same modules. For example, the set of modules 108 may include the modules 130, 132, 134, 136, 140, 142, and 144; the set of modules 110 may include the modules 134, 136, 142, 144, and 146; the set of modules 112 may include the modules 146 and 148; and the set of modules 114 may include the modules 138 and 148. Some modules may be associated with all power modes, such as, for example, the module(s) 150.

[0022] The frequency power manager 106 includes a plurality of finite state machines (FSMs) and other circuitry. The frequency power manager 106 may therefore include only hardware components for optimizing the power consumed by the module component 128 through hardware-driven dynamic voltage/frequency switching and for providing a fast and efficient transition from the first power mode to the second power mode. In one example, the hardware components of the FSMs are produced using a 28 nm process technology. Other process technologies may be used, such as 20 nm, 16 nm fin field effect transistor (FinFET), or other process technologies. The frequency power manager 106 enables (e.g., turns on or changes from a lower-power standby state to a higher-power operational state) modules in the second set of modules that are unassociated with the first power mode. For example, if the first power mode is associated with the set of modules 108 and the second power mode is associated with the set of modules 110, the frequency power manager 106 enables the module 146. The module 146 is the only module in the set of modules 110 that is unassociated with the first power mode, which is associated with the set of modules 108. The frequency power manager 106 waits a time period (or startup time period) until the module 146 reaches a steady state after enabling the module 146. For example, the frequency power manager 106 may wait until the module 146 is providing a particular and expected output after enabling the module 146. In one configuration, the time period may be predetermined based on a known or tested time for the module 146 to provide the particular or expected output. In another configuration, the time period may be programmable. In another configuration, the time period may be based on receiving a "ready" signal from the module 146. Accordingly, the frequency power manager 106 may wait a time period until the module 146 reaches a steady state, and the time period may be predetermined, programmable, and/or based on when a "ready" signal is received from the module 146. The length of the time period may depend on the particular modules that the frequency power manager 106 enables and the order of enabling the particular modules. Upon expiration of the time period after enabling the module 146, the frequency power manager 106 briefly stops traffic through the modules 130-150. Stopping the flow of traffic through the modules 130-150 stops the flow of traffic between the SoC 102 and the external module(s) 190. With a 28 nm process technology utilized within the FSMs of the frequency power manager 106, the traffic is stopped for about 10-20 ns. However, the amount of time the traffic is stopped may be programmable. The frequency power manager 106 then routes traffic through the set of modules 110. After the traffic is routed through the set of modules 110, the frequency power manager 106 disables (e.g., turns off or changes from a higher-power operational state to a lower-power standby state) the modules in the set of modules 108 that are unassociated with the second power mode. Specifically, the frequency power manager 106 disables the modules 130, 132, and 140.
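The enable/wait/stop/route/disable sequence of paragraph [0022] can be summarized in code. Below is a minimal sketch under stated assumptions: the module objects, the traffic controller, and all method names are hypothetical stand-ins for the hardware that the frequency power manager drives, and the busy-wait stands in for the hardware startup time period.

```python
# Sketch of the mode-switch sequence in paragraph [0022]. Module objects, the
# traffic controller, and their method names are hypothetical stand-ins.
def switch_power_mode(first_set, second_set, traffic):
    # Enable only the new-mode modules that the old mode did not use
    # (e.g., turn on, or move from lower-power standby to operational state).
    newly_enabled = second_set - first_set
    for module in newly_enabled:
        module.enable()

    # Startup time period: predetermined, programmable, or "ready"-signal based.
    while not all(m.is_ready() for m in newly_enabled):
        pass

    traffic.stop()                      # stopped for roughly 10-20 ns in hardware
    traffic.route_through(second_set)   # traffic now flows through the new set
    traffic.resume()

    # Disable only the old-mode modules that the new mode does not use
    # (e.g., turn off, or move from operational to lower-power standby state).
    for module in first_set - second_set:
        module.disable()
```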
[0023] FIG. 2 is a diagram 200 illustrating an exemplary set of modules controlled by the frequency power manager 106. The module component 128 may include the modules 202-226. As shown in FIG. 2, a multiplexer 202 receives inputs from a high-power (HP) input receiver 206 and a multiplexer 204. The multiplexer 202 selects one of the inputs to output based on a select signal. The multiplexer 204 receives inputs from a medium-power (MP) input receiver 208 and a low-power (LP) input receiver 210. The HP input receiver 206, the MP input receiver 208, and the LP input receiver 210 may be connected in parallel. The multiplexer 204 selects one of the inputs to output based on a select signal. The HP input receiver 206 receives a reference voltage from a reference voltage generator 212 and a bias current from a bias current generator 214. The bias current generator 214 also provides a bias current to a phase lock loop (PLL) 216. The PLL 216 outputs a current to a current-to-voltage converter 218. The current-to-voltage converter 218 converts the received current to a voltage, and provides the voltage to an HP calibrated delay circuit (CDC) 220. The PLL 216, the current-to-voltage converter 218, and the HP CDC 220 receive a supply voltage from a low-dropout (LDO) regulator 224. The LDO regulator 224 may be supplied with one or more supply voltages Vdd1, Vdd2 (e.g., Vdd1 = 1.05 V, Vdd2 = 1.8 V). A multiplexer 226, which may include one or more multiplexers, receives inputs from the HP CDC 220 and an LP CDC 222. The HP CDC 220 and the LP CDC 222 may be connected in parallel. The multiplexer 226 selects one of the inputs to output based on a select signal. The output of the multiplexer 226 may be a delayed clock signal. For example, the delayed clock signal may be delayed by one fourth of a cycle and may be used to transmit data to the external module(s) 190 and/or used by the input receivers 206, 208, 210 in receiving data from the external module(s) 190.

[0024] Each of the modules 202-226 may be associated with one or more power modes. For example, the multiplexers 202, 204, 226 may be associated with all power modes. The multiplexers 202, 204, 226 may correspond to the module(s) 150. For another example, the LP CDC 222 and the LP input receiver 210 may be associated with a first power mode (e.g., an ultra-low power mode); the LP CDC 222 and the MP input receiver 208 may be associated with a second power mode (e.g., a low power mode); the MP input receiver 208, the bias current generator 214, the PLL 216, the current-to-voltage converter 218, and the HP CDC 220 may be associated with a third power mode (e.g., a medium performance mode); and the HP input receiver 206, the reference voltage generator 212, the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the LDO regulator 224, and the HP CDC 220 may be associated with a fourth power mode (e.g., a high performance mode). Some of the modules corresponding to the first, second, third, and fourth power modes may operate at different clock frequencies based on the power mode. For example, some modules enabled in the first power mode may operate at a frequency f1, some modules enabled in the second power mode may operate at a frequency f2, some modules enabled in the third power mode may operate at a frequency f3, and some modules enabled in the fourth power mode may operate at a frequency f4. In one example, f1 < 200 MHz, 200 MHz < f2 < 250 MHz, 250 MHz < f3 < 533 MHz, and f4 > 533 MHz. The aforementioned frequencies and frequency ranges may be programmable.
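The module-to-mode associations of paragraph [0024] can be captured as data. The dictionary below is one illustrative encoding of the FIG. 2 sets; the key and member names are assumptions, and the multiplexers 202, 204, 226 (shared by all modes) are omitted.

```python
# Illustrative encoding of the FIG. 2 module sets per power mode ([0024]).
# Names are assumptions; the shared multiplexers 202/204/226 are omitted.
MODE_MODULES = {
    "ultra_low_power":    {"LP_CDC_222", "LP_RX_210"},
    "low_power":          {"LP_CDC_222", "MP_RX_208"},
    "medium_performance": {"HP_CDC_220", "I2V_218", "PLL_216",
                           "BIAS_GEN_214", "MP_RX_208"},
    "high_performance":   {"HP_CDC_220", "I2V_218", "PLL_216", "LDO_224",
                           "BIAS_GEN_214", "REF_GEN_212", "HP_RX_206"},
}
```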
[0025] Referring to FIGs. 1 and 2, when the modules 202-226 are configured in the first power mode (e.g., an ultra-low power mode), some of the modules 202-226 may have an operational frequency of f1 and the PHY interface 180 including the modules 202-226 may communicate with an external module (e.g., the external module(s) 190, a DDR DRAM) at the operational frequency of f1. When the modules 202-226 are configured in the second power mode (e.g., a low power mode), some of the modules 202-226 may have an operational frequency of f2 and the PHY interface 180 including the modules 202-226 may communicate with an external module (e.g., the external module(s) 190, a DDR DRAM) at the operational frequency of f2. When the modules 202-226 are configured in the third power mode (e.g., a medium performance mode), some of the modules 202-226 may have an operational frequency of f3 and the PHY interface 180 including the modules 202-226 may communicate with an external module (e.g., the external module(s) 190, a DDR DRAM) at the operational frequency of f3. When the modules 202-226 are configured in the fourth power mode (e.g., a high performance mode), some of the modules 202-226 may have an operational frequency of f4 and the PHY interface 180 including the modules 202-226 may communicate with an external module (e.g., the external module(s) 190, a DDR DRAM) at the operational frequency of f4.

[0026] FIG. 3 is a diagram 300 illustrating modules that may be utilized in a first power mode. When transitioning from a prior mode to the first power mode, the frequency power manager 106 enables any of the shaded modules, including the LP CDC 222 and the LP input receiver 210, that were disabled in the prior mode. If any of the shaded modules, including the LP CDC 222 and the LP input receiver 210, were already enabled in the prior mode, the frequency power manager 106 maintains the enabled state. The frequency power manager 106 may also provide appropriate select signals to the multiplexers 202, 204, 226 so that the multiplexers 202, 204, 226 output the correct signals for the first power mode.
Subsequently, the frequency power manager 106 may configure the modules 202-226 so that communication between the SoC 102 and the external module 190 is suspended for a brief period of time (which may be programmable), such as, for example, 10-20 ns. Thereafter, the frequency power manager 106 may configure the modules 202-226 to resume communication between the SoC 102 and the external module 190 using the LP CDC 222 and the LP input receiver 210. The frequency power manager 106 may then disable any modules that are unassociated with the first power mode.

[0027] FIG. 4 is a diagram 400 illustrating modules that may be utilized in a second power mode. When transitioning from a prior mode to the second power mode, the frequency power manager 106 enables any of the shaded modules, including the LP CDC 222 and the MP input receiver 208, that were disabled in the prior mode. If any of the shaded modules, including the LP CDC 222 and the MP input receiver 208, were already enabled in the prior mode, the frequency power manager 106 maintains the enabled state. The frequency power manager 106 may also provide appropriate select signals to the multiplexers 202, 204, 226 so that the multiplexers 202, 204, 226 output the correct signals for the second power mode. Subsequently, the frequency power manager 106 may configure the modules 202-226 so that communication between the SoC 102 and the external module 190 is suspended for a brief period of time (which may be programmable), such as, for example, 10-20 ns. Thereafter, the frequency power manager 106 may configure the modules 202-226 to resume communication between the SoC 102 and the external module 190 using the LP CDC 222 and the MP input receiver 208. The frequency power manager 106 may then disable any modules that are unassociated with the second power mode.

[0028] FIG. 5 is a diagram 500 illustrating modules that may be utilized in a third power mode. When transitioning from a prior mode to the third power mode, the frequency power manager 106 enables any of the shaded modules, including the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, and the MP input receiver 208, that were disabled in the prior mode. If any of the shaded modules, including the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, and the MP input receiver 208, were already enabled in the prior mode, the frequency power manager 106 maintains the enabled state. The frequency power manager 106 may also provide appropriate select signals to the multiplexers 202, 204, 226 so that the multiplexers 202, 204, 226 output the correct signals for the third power mode. Subsequently, the frequency power manager 106 may configure the modules 202-226 so that communication between the SoC 102 and the external module 190 is suspended for a brief period of time (which may be programmable), such as, for example, 10-20 ns. Thereafter, the frequency power manager 106 may configure the modules 202-226 to resume communication between the SoC 102 and the external module 190 using the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, and the MP input receiver 208. The frequency power manager 106 may then disable any modules that are unassociated with the third power mode.

[0029] FIG. 6 is a diagram 600 illustrating modules that may be utilized in a fourth power mode.
When transitioning from a prior mode to the fourth power mode, the frequency power manager 106 enables any of the shaded modules, including the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, the LDO regulator 224, the reference voltage generator 212, and the HP input receiver 206, that were disabled in the prior mode. If any of the shaded modules, including the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, the LDO regulator 224, the reference voltage generator 212, and the HP input receiver 206, were already enabled in the prior mode, the frequency power manager 106 maintains the enabled state. The frequency power manager 106 may also provide appropriate select signals to the multiplexers 202, 204, 226 so that the multiplexers 202, 204, 226 output the correct signals for the fourth power mode. Subsequently, the frequency power manager 106 may configure the modules 202-226 so that communication between the SoC 102 and the external module 190 is suspended for a brief period of time (which may be programmable), such as, for example, 10-20 ns. Thereafter, the frequency power manager 106 may configure the modules 202-226 to resume communication between the SoC 102 and the external module 190 using the bias current generator 214, the PLL 216, the current-to-voltage converter 218, the HP CDC 220, the LDO regulator 224, the reference voltage generator 212, and the HP input receiver 206. The frequency power manager 106 may then disable any modules that are unassociated with the fourth power mode.
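Each of the four transitions in FIGs. 3-6 also reconfigures the multiplexers of FIG. 2. The disclosure states only that "appropriate select signals" are provided, so the table below is a plausible sketch implied by the FIG. 2 wiring and the per-mode module sets; the select encodings are assumptions, not from the source.

```python
# Plausible multiplexer select settings implied by FIG. 2 and FIGs. 3-6.
# Mux 202 picks the HP receiver or mux 204's output; mux 204 picks the MP or
# LP receiver; mux 226 picks the HP or LP CDC. Encodings are assumptions.
MUX_SELECTS = {
    "ultra_low_power":    {"mux_202": "from_mux_204", "mux_204": "LP_RX", "mux_226": "LP_CDC"},
    "low_power":          {"mux_202": "from_mux_204", "mux_204": "MP_RX", "mux_226": "LP_CDC"},
    "medium_performance": {"mux_202": "from_mux_204", "mux_204": "MP_RX", "mux_226": "HP_CDC"},
    "high_performance":   {"mux_202": "HP_RX",        "mux_204": "unused", "mux_226": "HP_CDC"},
}
```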
[0030] FIG. 7 is a flow chart 700 of a method of a hardware module for controlling a power mode of a plurality of modules. The hardware module may be a frequency power manager (e.g., the frequency power manager 106 of FIG. 1). The frequency power manager may include a plurality of FSMs for optimizing the power consumed by the plurality of modules through hardware-driven dynamic voltage/frequency switching and for providing a fast and efficient transition from a first power mode to a second power mode. The FSMs may be constructed based on a 28 nm, 20 nm, 16 nm FinFET, or other process technologies. In step 702, the frequency power manager may receive an indication of a desired operational frequency. If the desired operational frequency is within a frequency range supported by the first power mode (or a current power mode), the frequency power manager maintains the first power mode. However, if the desired operational frequency is within a frequency range unsupported by the first power mode, but supported by a second power mode, the frequency power manager determines to switch from the first power mode to the second power mode. In step 704, based on the received indication of the desired operational frequency, the frequency power manager determines to switch from a first power mode to a second power mode. The second power mode corresponds to the desired operational frequency. The first power mode is associated with a first set of modules of the plurality of modules, and the second power mode is associated with a second set of modules of the plurality of modules. In one configuration, the hardware module and the first and second sets of modules are within a DDR PHY hardware module (e.g., the DDR PHY hardware module 180 of FIG. 1). In step 706, the frequency power manager begins to transition the plurality of modules from the first power mode to the second power mode and enables modules in the second set of modules that are unassociated with the first power mode. The frequency power manager may enable the modules by turning on the modules and/or by changing a power state of the modules from a lower-power standby state to a higher-power operational state. The frequency power manager may enable the modules in a particular order or sequence. For example, referring to FIG. 2, the frequency power manager may enable the bias current generator 214 before enabling the LDO regulator 224 or the HP input receiver 206, and may enable the bias current generator 214 and the LDO regulator 224 before enabling the HP CDC 220. The frequency power manager enables the modules at different times based on the length of time that each module takes to be ready for operation (e.g., the amount of time each module needs to get to a steady state). The frequency power manager enables the modules at different times and in a particular order so that all of the modules are ready for operation in the least amount of time. In step 708, the frequency power manager waits for a time period (or startup time period) until the second set of modules reaches a steady state. In step 710, the frequency power manager stops traffic through the plurality of modules upon expiration of the time period after enabling the modules in the second set of modules that are unassociated with the first power mode. The frequency power manager also stops traffic between the plurality of modules and any external module(s) with which the plurality of modules are communicating. The frequency power manager may stop traffic for about 10-20 ns, assuming the frequency power manager utilizes a 28 nm process technology. However, other process technologies may be used, as discussed supra. In step 712, the frequency power manager routes traffic through the second set of modules. In step 714, the frequency power manager disables modules in the first set of modules that are unassociated with the second power mode. The frequency power manager may disable the modules by turning off the modules and/or by changing a power state of the modules from a higher-power operational state to a lower-power standby state.

[0031] In one configuration, the plurality of modules includes a first CDC and a second CDC in parallel with the first CDC. The first set of modules includes the first CDC, and the second set of modules includes the second CDC. The modules that are enabled in the second set of modules that are unassociated with the first power mode include the second CDC, and the modules that are disabled in the first set of modules that are unassociated with the second power mode include the first CDC. The second CDC may support a higher power mode or a lower power mode than the first CDC. For example, referring to FIG. 2, the modules 202-226 include an HP CDC 220 and an LP CDC 222. If a prior power mode utilizes the HP CDC 220 and a subsequent power mode utilizes the LP CDC 222, the LP CDC 222 is enabled. After traffic is routed through the LP CDC 222, the HP CDC 220 is disabled.

[0032] In one configuration, the plurality of modules includes a first input receiver and a second input receiver in parallel with the first input receiver. The first set of modules includes the first input receiver, and the second set of modules includes the second input receiver.
The modules that are enabled in the second set of modules that are unassociated with the first power mode include the second input receiver, and the modules that are disabled in the first set of modules that are unassociated with the second power mode include the first input receiver. The second input receiver may support a higher power mode or a lower power mode than the first input receiver. For example, referring to FIG. 2, the modules 202-226 include an MP input receiver 208 and an LP input receiver 210. If a prior power mode utilizes the MP input receiver 208 and a subsequent power mode utilizes the LP input receiver 210, the LP input receiver 210 is enabled. After traffic is routed through the LP input receiver 210, the MP input receiver 208 is disabled.

[0033] When the plurality of modules interfaces with a DDR DRAM (i.e., the external module(s) 190 is a DDR DRAM), the plurality of modules may include at least one of a plurality of CDCs, a plurality of input receivers, an LDO regulator, a current-to-voltage converter, a PLL, a bias current generator, or a reference voltage generator. As discussed supra, the frequency power manager may manage the transition from a first power mode to a second power mode. The first power mode may be any one of N power modes and the second power mode may be any other of the N power modes. In general, N > 2. In the examples provided with respect to FIGs. 2-6, N = 4. For the following examples, assume N = 4 and that the power modes include an ultra-low power mode, a low power mode, a medium performance mode, and a high performance mode.
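The pairwise transitions enumerated in the examples that follow all reduce to the same set arithmetic: enable the second set minus the first set, disable the first set minus the second set. Below is a worked instance for the ultra-low-power to low-power case of the next paragraph; the set encoding and member names are illustrative, not from the source.

```python
# Worked instance of the enable/disable bookkeeping for the ultra-low power to
# low power transition described in the next paragraph. Module names follow
# the description; the set encoding itself is illustrative.
first_set = {"LP_CDC_222", "LP_RX_210"}    # ultra-low power mode (FIG. 3)
second_set = {"LP_CDC_222", "MP_RX_208"}   # low power mode (FIG. 4)

to_enable = second_set - first_set         # {"MP_RX_208"}
to_disable = first_set - second_set        # {"LP_RX_210"}
# LP_CDC_222 is in both sets, so it is neither enabled nor disabled.
assert to_enable == {"MP_RX_208"} and to_disable == {"LP_RX_210"}
```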
[0034] In one example, the frequency power manager transitions from the ultra-low power mode to the low power mode. Accordingly, the first power mode is the ultra-low power mode and the second power mode is the low power mode. Referring to FIGs. 3, 4, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The second set of modules may include the low-power CDC 222 and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the medium-power input receiver 208. The frequency power manager refrains from enabling the low-power CDC 222, as the low-power CDC 222 was already enabled in the first power mode. The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the low-power input receiver 210. The frequency power manager refrains from disabling the low-power CDC 222, as the low-power CDC 222 is utilized for the second power mode.

[0035] In one example, the frequency power manager transitions from the ultra-low power mode to the medium performance mode. Accordingly, the first power mode is the ultra-low power mode and the second power mode is the medium performance mode. Referring to FIGs. 3, 5, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The second set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 220, 218, 216, 214, 208, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 222, 210.

[0036] In one example, the frequency power manager transitions from the ultra-low power mode to the high performance mode. Accordingly, the first power mode is the ultra-low power mode and the second power mode is the high performance mode. Referring to FIGs. 3, 6, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The second set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 220, 218, 216, 224, 214, 212, 206, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 222, 210.

[0037] In one example, the frequency power manager transitions from the low power mode to the ultra-low power mode. Accordingly, the first power mode is the low power mode and the second power mode is the ultra-low power mode. Referring to FIGs. 3, 4, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include the low-power CDC 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the low-power input receiver 210. The frequency power manager refrains from enabling the low-power CDC 222, as the low-power CDC 222 was already enabled in the first power mode. The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the medium-power input receiver 208. The frequency power manager refrains from disabling the low-power CDC 222, as the low-power CDC 222 is utilized for the second power mode.

[0038] In one example, the frequency power manager transitions from the low power mode to the medium performance mode. Accordingly, the first power mode is the low power mode and the second power mode is the medium performance mode. Referring to FIGs. 4, 5, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and the medium-power input receiver 208.
The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214. The frequency power manager refrains from enabling the medium-power input receiver 208, as the medium-power input receiver 208 was already enabled in the first power mode. The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the low-power CDC 222. The frequency power manager refrains from disabling the medium-power input receiver 208, as the medium-power input receiver 208 is utilized for the second power mode.

[0039] In one example, the frequency power manager transitions from the low power mode to the high performance mode. Accordingly, the first power mode is a low power mode and the second power mode is a high performance mode. Referring to FIGs. 4, 6, the first set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 220, 218, 216, 224, 214, 212, and 206, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 222, 208.

[0040] In one example, the frequency power manager transitions from the medium performance mode to the ultra-low power mode. Accordingly, the first power mode is the medium performance mode and the second power mode is the ultra-low power mode. Referring to FIGs. 3, 5, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 222, 210, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 220, 218, 216, 214, 208.

[0041] In one example, the frequency power manager transitions from the medium performance mode to the low power mode. Accordingly, the first power mode is the medium performance mode and the second power mode is the low power mode.
Referring to FIGs. 4, 5, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and the medium-power input receiver 208. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the low-power CDC 222. The frequency power manager refrains from enabling the medium-power input receiver 208, as the medium-power input receiver 208 was already enabled in the first power mode. The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214. The frequency power manager refrains from disabling the medium-power input receiver 208, as the medium-power input receiver 208 is utilized for the second power mode.

[0042] In one example, the frequency power manager transitions from the medium performance mode to the high performance mode. Accordingly, the first power mode is the medium performance mode and the second power mode is the high performance mode. Referring to FIGs. 5, 6, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The second set of modules may include the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the LDO regulator 224, the reference voltage generator 212, and the high-power input receiver 206. The frequency power manager refrains from enabling the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214, as the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214 were already enabled in the first power mode. The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the medium-power input receiver 208. The frequency power manager refrains from disabling the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214, as the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214 are utilized for the second power mode.

[0043] In one example, the frequency power manager transitions from the high performance mode to the ultra-low power mode. Accordingly, the first power mode is the high performance mode and the second power mode is the ultra-low power mode.
3, 6, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The second set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a low-power input receiver 210 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 222, 210, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 220, 218, 216, 224, 214, 212, 206.[0044] In one example, the frequency power manager transitions from the high performance mode to the low power mode. Accordingly, the first power mode is the high performance mode and the second power mode is the low power mode. Referring to FIGs. 4, 6, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The second set of modules may include a low-power CDC 222 of the plurality of CDCs 220, 222 and a medium-power input receiver 208 of the plurality of input receivers 206, 208, 210. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include all of the second set of modules 222, 208, and the modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include all of the first set of modules 220, 218, 216, 224, 214, 212, 206. [0045] In one example, the frequency power manager transitions from the high performance mode to the medium performance mode. Accordingly, the first power mode is the high performance mode and the second power mode is the medium performance mode. Referring to FIGs. 5, 6, the first set of modules may include a high-power CDC 220 of the plurality of CDCs 220, 222, the current-to-voltage converter 218, the PLL 216, the LDO regulator 224, the bias current generator 214, the reference voltage generator 212, and a high-power input receiver 206 of the plurality of input receivers 206, 208, 210. The second set of modules may include the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, the bias current generator 214, and a medium-power input receiver 208. The modules that the frequency power manager enables (step 706) in the second set of modules that are unassociated with the first power mode include the medium-power input receiver 208. The frequency power manager refrains from enabling the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214, as the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214 were already enabled in the first power mode. 
The modules that the frequency power manager disables (step 714) in the first set of modules that are unassociated with the second power mode include the LDO regulator 224, the reference voltage generator 212, and the high-power input receiver 206. The frequency power manager refrains from disabling the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214, as the high-power CDC 220, the current-to-voltage converter 218, the PLL 216, and the bias current generator 214 are utilized for the second power mode.[0046] FIG. 8 is a diagram 800 illustrating FSM modules within the frequency power manager. The arrows in FIG. 8 illustrate an enabling sequence. The frequency power manager may include a bias generator FSM 802, an LDO FSM 804, a CDC FSM 806, an input receiver FSM 808, and an input receiver calibration FSM 810. The bias generator FSM 802 enables the bias current generator 214, the reference voltage generator 212, the PLL 216, and the current-to-voltage converter 218. The LDO FSM 804 enables the LDO regulator 224. The CDC FSM 806 enables the HP CDC 220 and the LP CDC 222. The input receiver FSM 808 enables the input receivers 206, 208, 210. The input receiver calibration FSM 810 calibrates drivers within the input receivers 206, 208, 210. [0047] If both the bias current generator 214 and the LDO regulator 224 are enabled (e.g., in a high performance mode), the frequency power manager initially starts the bias generator FSM 802. When the bias generator FSM 802 reaches a final state, the frequency power manager starts in parallel the LDO FSM 804, the input receiver FSM 808, and the input receiver calibration FSM 810. When the LDO FSM 804 reaches a final state, the frequency power manager starts the CDC FSM 806. If the bias current generator 214 is enabled, but the LDO regulator 224 is not enabled (e.g., in a medium performance mode), the frequency power manager initially starts the bias generator FSM 802. When the bias generator FSM 802 reaches a final state, the frequency power manager starts in parallel the input receiver FSM 808, the input receiver calibration FSM 810, and the CDC FSM 806. If the bias current generator 214 is not enabled (e.g., in an ultra-low power mode or a low power mode), the frequency power manager starts in parallel the input receiver FSM 808, the input receiver calibration FSM 810, and the CDC FSM 806.[0048] FIG. 9 is a conceptual data flow diagram illustrating the data flow between different modules/means/components in an exemplary apparatus 902. The apparatus is a frequency power manager hardware module that controls a power mode of a plurality of modules and an external module with which the plurality of modules interface. The apparatus may include a receiving module 904 that is configured to receive an indication of a desired operational frequency. The apparatus may include a power mode switch determination module 906 that is configured to determine to switch from a first power mode to a second power mode corresponding to the desired operational frequency. The first power mode is associated with a first set of modules of the plurality of modules. The second power mode is associated with a second set of modules of the plurality of modules. The apparatus may include an enabling module 908 that is configured to enable modules in the second set of modules that are unassociated with the first power mode. 
The apparatus may include a waiting module 910 that is configured to wait for the time period until the second set of modules reaches a steady state. The apparatus may include a traffic stopping module 912 that is configured to stop traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode. The apparatus may include a traffic routing module 914 that is configured to route traffic through the second set of modules. The apparatus may include a disabling module 916 that is configured to disable modules in the first set of modules that are unassociated with the second power mode.[0049] The modules 904-916 may be included in one or more FSMs. For example, the module 906 may be implemented with a first FSM module, the module 908 may be implemented with a second FSM module, the module 912 may be implemented with a third FSM module, the module 914 may be implemented with a fourth FSM module, the module 916 may be implemented with a fifth FSM module, the module 910 may be implemented with a sixth FSM module, and the module 904 may be implemented with a seventh FSM module. The aforementioned FSM modules may be implemented in one or more FSMs. The apparatus may include additional modules (e.g., FSM modules) that perform each of the steps of the algorithm in the aforementioned flow chart of FIG. 7. As such, each step in the aforementioned flow chart of FIG. 7 may be performed by a module and the apparatus may include one or more of those modules. The modules may be one or more hardware components such as FSMs specifically configured to carry out the stated processes/algorithm. In particular, the FSMs may be implemented using a set of combinational logic gates (e.g., AND, OR, XOR, etc.) in order to achieve the precise timing required for enabling the modules with the least downtime (e.g., 10-20 ns). By implementing the modules 904-916 with special purpose hardware, rather than software, the modules 904-916 optimize power through hardware-driven dynamic voltage/frequency switching and provide for a fast and efficient transition between the power modes.[0050] In one configuration, the frequency power manager apparatus is a hardware module that controls a power mode of a plurality of modules. The apparatus includes means for determining to switch from a first power mode to a second power mode. The first power mode is associated with a first set of modules of the plurality of modules. The second power mode is associated with a second set of modules of the plurality of modules. The apparatus further includes means for enabling modules in the second set of modules that are unassociated with the first power mode. The apparatus further includes means for stopping traffic through the plurality of modules upon expiration of a time period after enabling the modules in the second set of modules that are unassociated with the first power mode. The apparatus further includes means for routing traffic through the second set of modules. The apparatus further includes means for disabling modules in the first set of modules that are unassociated with the second power mode. The apparatus may further include means for waiting for the time period until the second set of modules reaches a steady state. The apparatus may further include means for receiving an indication of a desired operational frequency. The second power mode may correspond to the desired operational frequency. 
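Purely as an illustration of the transition flow performed by the modules 904-916, the following is a hedged C sketch. The mode table, module bitmasks, and hal_* helpers are assumptions introduced here for readability; the actual apparatus performs this sequencing in FSM hardware, not software.

#include <stdint.h>

enum power_mode { MODE_ULTRA_LOW, MODE_LOW, MODE_MEDIUM, MODE_HIGH };

/* One bit per module; names mirror reference numerals 206-224. */
enum {
    MOD_HP_CDC = 1 << 0, MOD_LP_CDC = 1 << 1, MOD_I2V   = 1 << 2,
    MOD_PLL    = 1 << 3, MOD_LDO    = 1 << 4, MOD_BIAS  = 1 << 5,
    MOD_VREF   = 1 << 6, MOD_HP_RX  = 1 << 7, MOD_MP_RX = 1 << 8,
    MOD_LP_RX  = 1 << 9,
};

/* Module sets per mode, following paragraphs [0039]-[0045]. */
static const uint32_t mode_set[] = {
    [MODE_ULTRA_LOW] = MOD_LP_CDC | MOD_LP_RX,
    [MODE_LOW]       = MOD_LP_CDC | MOD_MP_RX,
    [MODE_MEDIUM]    = MOD_HP_CDC | MOD_I2V | MOD_PLL | MOD_BIAS | MOD_MP_RX,
    [MODE_HIGH]      = MOD_HP_CDC | MOD_I2V | MOD_PLL | MOD_LDO | MOD_BIAS
                     | MOD_VREF | MOD_HP_RX,
};

/* Hypothetical hardware-abstraction hooks. */
extern void hal_enable(uint32_t mods);
extern void hal_disable(uint32_t mods);
extern void hal_wait_steady_state(void);
extern void hal_stop_traffic(void);
extern void hal_route_traffic(uint32_t mods);

void switch_power_mode(enum power_mode from, enum power_mode to)
{
    uint32_t old_set = mode_set[from], new_set = mode_set[to];

    /* Step 706: enable only modules unassociated with the first mode;
     * shared modules are skipped (the "refrains from enabling" cases). */
    hal_enable(new_set & ~old_set);

    /* Wait for the time period until the second set reaches steady state,
     * then stop and re-route traffic through the second set. */
    hal_wait_steady_state();
    hal_stop_traffic();
    hal_route_traffic(new_set);

    /* Step 714: disable only modules unassociated with the second mode. */
    hal_disable(old_set & ~new_set);
}

The set differences new_set & ~old_set and old_set & ~new_set reproduce the per-transition enable/disable lists worked through in paragraphs [0039]-[0045]; the conditional FSM start ordering of paragraph [0047] would live inside hal_enable.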
The aforementioned means may be one or more of the aforementioned FSM modules 802-810 and/or the modules 904-916 configured to perform the functions recited by the aforementioned means, within the frequency power manager apparatus 106, 902.[0051] It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.[0052] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "at least one of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as "at least one of A, B, or C," "at least one of A, B, and C," and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for." |
Battery management can be advantageously integrated into a programmable logic device (PLD). Specifically, by using a programmable battery controller provided on the PLD, the user can make a decision regarding battery choice much later in the design process, reduce the inventory of batteries associated with the system/product, increase the life of the batteries, and upgrade to the newest technology battery at the user's discretion. The battery controller can be implemented on any type of PLD, e.g., an FPGA, potentially requiring battery management for critical circuits. |
What is claimed is: 1. A programmable logic device (PLD) comprising: a battery voltage pin; a battery controller connected to the battery voltage pin; at least one critical circuit implemented in programmable logic of the PLD and selectively connected to the battery voltage pin; a memory for storing a charging algorithm and a charging methodology associated with a battery connectable to the battery voltage pin; a voltage source pin connected to the at least one critical circuit; and a voltage detector connected to the voltage source pin, the voltage detector selectively connecting the battery voltage pin to the at least one critical circuit in response to a voltage drop at the voltage source pin. 2. The PLD of claim 1, wherein the battery controller includes: a battery charger controlled by the battery controller and operatively coupled to the battery voltage pin, the battery charger for charging the battery using the charging algorithm and the charging methodology. 3. The PLD of claim 1, wherein the battery controller further includes end of life circuitry operatively coupled to the battery voltage pin. 4. The PLD of claim 1, wherein the battery controller includes programmable logic resources. 5. A programmable logic device (PLD) comprising: a first battery voltage pin; a second battery voltage pin; a battery controller selectively connected to one of the first battery voltage pin and the second battery voltage pin; at least one critical circuit implemented in programmable logic of the PLD and selectively connected to one of the first battery voltage pin and the second battery voltage pin; a voltage source pin connected to the at least one critical circuit; a selector arrangement coupled to the first and second battery voltage pins and adapted to couple the at least one critical circuit to one of the first and second battery voltage pins; an analog demultiplexer including an input terminal connected to the battery controller, a first output terminal selectively connected to the first battery voltage pin, and a second output terminal selectively connected to the second battery voltage pin; and an analog multiplexer including a first input terminal connected to the first battery voltage pin, a second input terminal connected to the second battery voltage pin, and an output terminal selectively connected to the at least one critical circuit. 6. The PLD of claim 5, wherein the battery controller includes: a memory for storing a plurality of charging algorithms and a plurality of charging methodologies, wherein a first charging algorithm and a first charging methodology are associated with a first battery external to the PLD and connectable to the first battery voltage pin, and wherein a second charging algorithm and a second charging methodology are associated with a second battery external to the PLD and connectable to the second battery voltage pin; and a battery charger controlled by the battery controller and operatively coupled to the input terminal of the analog demultiplexer, the battery charger for charging one of the first battery using the first charging algorithm and the first charging methodology and the second battery using the second charging algorithm and the second charging methodology. 7. The PLD of claim 5, wherein the selector arrangement includes a voltage detector connected to the voltage source pin, the voltage detector selectively connecting the output terminal of the analog multiplexer to the at least one critical circuit. 8. 
The PLD of claim 5, wherein the battery controller further includes end of life circuitry operatively coupled to at least one of the first battery voltage pin and the second battery voltage pin. 9. The PLD of claim 5, wherein the battery controller includes programmable logic resources. 10. A method of fabricating a programmable logic device (PLD), the method comprising: providing a first battery voltage pin; providing a second battery voltage pin; providing a selective connection between a battery controller and one of the first battery voltage pin and the second battery voltage pin; providing a selective connection between programmable logic of the PLD and one of the first battery voltage pin and the second battery voltage pin; providing a volatile memory for storing a plurality of charging algorithms and a plurality of charging methodologies, wherein a first charging algorithm and a first charging methodology are associated with a first battery connectable to the first battery voltage pin, and a second charging algorithm and a second charging methodology are associated with a second battery connectable to the second battery voltage pin; connecting an input terminal of a demultiplexer to the battery controller; providing a selective connection between a first output terminal of the demultiplexer and the first battery voltage pin; providing a selective connection between a second output terminal of the demultiplexer and the second battery voltage pin; connecting a first input terminal of a multiplexer to the first battery voltage pin; connecting a second input terminal of the multiplexer to the second battery voltage pin; and providing a selective connection between an output terminal of the multiplexer and the at least one critical circuit. 11. The method of claim 10, further including: coupling a battery charger, controlled by the battery controller, to the input terminal of the demultiplexer, the battery charger for charging one of the first battery using the first charging algorithm and the first charging methodology and the second battery using the second charging algorithm and the second charging methodology. 12. The method of claim 11, further including: connecting a voltage source pin to the at least one critical circuit; and connecting a voltage detector to the voltage source pin, the voltage detector for selectively connecting the output terminal of the multiplexer to the at least one critical circuit. 13. The method of claim 12, further including providing a selective connection between end of life circuitry and at least one of the first battery voltage pin and the second battery voltage pin. 14. The method of claim 10, further including implementing at least one of the selective connections with programmable logic resources. |
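As a hedged illustration of the selector arrangement recited in claims 5-7 (every helper name and the selection polarity below are assumptions; the claimed arrangement is realized with analog switches, not software):

#include <stdbool.h>

enum batt_sel { BATT_1, BATT_2 };   /* first/second battery voltage pins */

/* Hypothetical controls for the analog multiplexer, demultiplexer,
 * voltage detector, and charger of claims 5-7. */
extern void analog_mux_select(enum batt_sel b);        /* load path      */
extern void analog_demux_select(enum batt_sel b);      /* charger path   */
extern void mux_output_to_critical_circuit(bool on);   /* claim 7 switch */
extern void charger_run(enum batt_sel b);              /* claim 6        */
extern bool vcc_below_threshold(void);

void battery_service(enum batt_sel powering)
{
    /* Claim 7: the voltage detector couples the multiplexer output to the
     * critical circuit only while the main supply droops. */
    mux_output_to_critical_circuit(vcc_below_threshold());

    /* One battery carries the load; the demultiplexer steers the charger
     * to the other, using that battery's stored algorithm/methodology. */
    analog_mux_select(powering);
    enum batt_sel idle = (powering == BATT_1) ? BATT_2 : BATT_1;
    analog_demux_select(idle);
    charger_run(idle);
}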
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a programmable logic device (PLD) and particularly to a programmable battery controller in the PLD. 2. Description of the Related Art The use of batteries for systems and circuits in today's increasingly mobile, broadband society is ubiquitous. Many of these system/circuit applications require significantly longer battery lives as well as smaller battery area than previous generations of applications. To meet these specifications, new chemistries, charging algorithms, and charging methodologies are constantly being developed for batteries. Currently, application specific integrated circuits (ASICs) can provide the algorithms and implement the methodologies for charging this new generation of batteries. Exemplary battery charging ASICs include the MAX712 sold by Maxim, the BQ2063 sold by Texas Instruments, and the S-8243 sold by Seiko. Unfortunately, although a wide range of algorithms and methodologies is provided across these devices, each ASIC is limited to only one battery chemistry. Moreover, these battery charging ASICs include predetermined pins for setting the number of batteries in series. Thus, once this ASIC is installed, the user is locked into the size of the battery system, the battery chemistry, and the end of life voltages. Therefore, a need arises for a method and circuit for allowing an integrated circuit to provide multiple algorithms and implement multiple methodologies. Moreover, a need arises for a method and a circuit readily adaptable to new algorithms and methodologies, thereby allowing a user to take advantage of new technology. SUMMARY OF THE INVENTION In accordance with one feature of the invention, battery management can be advantageously integrated into a programmable logic device (PLD). Specifically, a battery controller provided on the PLD can ensure that power demands for any application can be met. By using this programmable solution, the user can make the decision regarding battery choice much later in the design process, reduce the inventory of batteries associated with the system/product, increase the life of the batteries, and upgrade to the newest technology battery at the user's discretion. The battery controller can be implemented on any type of PLD, e.g., an FPGA, potentially requiring battery management. The PLD can include a battery voltage pin, a battery controller connected to the battery voltage pin, and at least one critical circuit selectively connected to the battery voltage pin. The battery controller can include a memory for storing a charging algorithm and a charging methodology associated with a battery external to the PLD and connectable to the battery voltage pin. The battery controller can also include a battery charger controlled by the battery controller and operatively coupled to the battery voltage pin, wherein the battery charger charges the battery using the charging algorithm and the charging methodology. The PLD can further include a voltage source pin connected to the at least one critical circuit and a voltage detector connected to the voltage source pin. The voltage detector can selectively connect the battery voltage pin to the at least one critical circuit. 
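As a rough sketch of the detector behavior just summarized (the trip point and helper names are illustrative assumptions; in the PLD this is a hardware voltage detector, not software):

#include <stdbool.h>
#include <stdint.h>

#define VCC_TRIP_MV 4000u   /* assumed trip point; cf. the 4.0V example below */

extern uint32_t read_vcc_mv(void);                 /* voltage source pin  */
extern void connect_battery_to_critical(bool on);  /* battery voltage pin */

/* Called periodically: on a VCC droop the battery pin is switched onto the
 * critical circuit; otherwise the battery stays free (e.g., for charging). */
void voltage_detector_tick(void)
{
    connect_battery_to_critical(read_vcc_mv() < VCC_TRIP_MV);
}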
In one embodiment, the battery controller can further include end of life circuitry operatively coupled to the battery voltage pin. In another embodiment, the PLD can include first and second battery voltage pins and a battery controller selectively connected to one of the first battery voltage pin and the second battery voltage pin. The PLD includes at least one critical circuit selectively connected to either the first battery voltage pin or the second battery voltage pin. The PLD can include an analog demultiplexer having an input terminal connected to the battery controller, a first output terminal selectively connected to the first battery voltage pin, and a second output terminal selectively connected to the second battery voltage pin. The PLD can also include an analog multiplexer having a first input terminal connected to the first battery voltage pin, a second input terminal connected to the second battery voltage pin, and an output terminal selectively connected to the at least one critical circuit. The battery controller can include a memory for storing a plurality of charging algorithms and a plurality of charging methodologies. A first charging algorithm and a first charging methodology are associated with a first battery external to the PLD and connectable to the first battery voltage pin. A second charging algorithm and a second charging methodology are associated with a second battery external to the PLD and connectable to the second battery voltage pin. A battery charger, controlled by the battery controller and operatively coupled to the input terminal of the analog demultiplexer, can charge either the first battery using the first charging algorithm and the first charging methodology or the second battery using the second charging algorithm and the second charging methodology. The PLD can further include a voltage source pin connected to the at least one critical circuit and a voltage detector connected to the voltage source pin. The voltage detector selectively connects the output terminal of the analog multiplexer to the at least one critical circuit. In one embodiment, the battery controller can further include end of life circuitry operatively coupled to at least one of the first battery voltage pin and the second battery voltage pin. A method of fabricating a PLD is also provided. The method includes providing a battery voltage pin, connecting a battery controller to the battery voltage pin, and providing a selective connection between the at least one critical circuit and the battery voltage pin. The method can further include providing a volatile memory in the battery controller for storing a charging algorithm and a charging methodology associated with a battery external to the PLD and connectable to the battery voltage pin. The method can also include coupling a battery charger, controlled by the battery controller, to the battery voltage pin. The battery charger can charge the battery using the charging algorithm and the charging methodology. The method can further include connecting a voltage source pin to the at least one critical circuit and connecting a voltage detector to the voltage source pin. The voltage detector can selectively connect the battery voltage pin to the at least one critical circuit. End of life circuitry can also be coupled to the battery voltage pin. In another embodiment, a method of fabricating a PLD connectable to multiple batteries is provided. 
The method includes providing first and second battery voltage pins, providing a selective connection between a battery controller and one of the first battery voltage pin and the second battery voltage pin, and providing a selective connection between the at least one critical circuit and one of the first battery voltage pin and the second battery voltage pin. The method can include connecting an input terminal of an analog demultiplexer to the battery controller, providing a selective connection between a first output terminal of the analog demultiplexer and the first battery voltage pin, and providing a selective connection between a second output terminal of the analog demultiplexer and the second battery voltage pin. The method can also include connecting a first input terminal of an analog multiplexer to the first battery voltage pin, connecting a second input terminal of the analog multiplexer to the second battery voltage pin, and providing a selective connection between an output terminal of the analog multiplexer and the at least one critical circuit. Finally, the method can include providing a non-volatile memory for storing a plurality of charging algorithms and a plurality of charging methodologies. For example, a first charging algorithm and a first charging methodology can be associated with a first battery external to the PLD (connectable via the first battery voltage pin) and a second charging algorithm and a second charging methodology can be associated with a second battery external to the PLD (connectable via the second battery voltage pin). A battery charger, controlled by the battery controller, can be coupled to the input terminal of the analog demultiplexer. In this manner, the battery charger can charge either the first battery using the first charging algorithm and the first charging methodology or the second battery using the second charging algorithm and the second charging methodology. BRIEF DESCRIPTION OF THE FIGURES FIG. 1A illustrates a known battery back-up and power-down configuration. FIG. 1B illustrates a known battery charging algorithm and methodology for a nickel metal hydride battery. FIG. 2 illustrates an FPGA including a battery controller on-chip for controlling an off-chip battery. FIG. 3 illustrates an FPGA including a battery controller on-chip for controlling multiple off-chip batteries. In these figures, similar reference numerals refer to similar elements. DETAILED DESCRIPTION OF THE FIGURES Programmable logic devices (PLDs) are well known in the art of integrated circuits (ICs). A PLD can be user-programmed in the field to implement logic designs. One type of PLD is the field programmable gate array (FPGA). In a typical architecture, an FPGA includes an array of configurable logic blocks (CLBs) surrounded by programmable input/output blocks (IOBs). The IOBs provide the interface between the package pins and the CLBs, whereas the CLBs provide the functional elements for constructing logic on the FPGA. The CLBs and IOBs are interconnected by a hierarchy of programmable routing resources. These CLBs, IOBs, and programmable routing resources are customized by loading a configuration bitstream into the FPGA. FPGAs are typically implemented with volatile memory, such as static random access memory (SRAM), thereby allowing the IC to be reconfigured at the user's discretion. Unfortunately, this design flexibility has the attendant disadvantage of requiring reconfiguration in the event of a power outage. 
To facilitate this reconfiguration, the current configuration bitstream can be stored in a non-volatile memory IC coupled to the FPGA, which is programmed to download the configuration bitstream if a power outage occurs. Alternatively, to eliminate the need for reconfiguration, a battery back-up of the FPGA can be provided. Specifically, the FPGA can be forced into a low-power, non-operational state while supplying the minimal current requirement from a battery, thereby allowing the FPGA to retain its configuration prior to entering the low-power state. FIG. 1A illustrates a battery back-up and power-down circuit for an FPGA 100 that operates at 5.0V+/-5%. In this configuration, a power monitor circuit 101 monitors power supply VCC and pulls a power-down terminal PWRDWN on FPGA 100 to a predetermined voltage whenever VCC falls below 4.0V. In one embodiment, power monitor circuit 101 could be implemented by the Seiko S8054 power device, which has a minimum detect voltage of 3.995V, a maximum detect voltage of 4.305V, a hysteresis of 208 mV, a temperature coefficient of 0.52 mV/° C., and a current ICC at 6V of 2.6 uA. Two Schottky diodes 102 and 103 can power FPGA 100 from either the 5.0V power supply VCC or a 3V lithium battery 104. In another embodiment, an FPGA can include an on-chip voltage detector, wherein the FPGA is then coupled to a battery using a dedicated terminal VCCBAT. For example, in a Virtex(TM) II FPGA, sold by Xilinx, Inc., a battery-supported RAM (BRAM) is provided to store a decryption key set for a triple data encryption standard (DES) encryption code. Specifically, up to six 56-bit DES algorithm keys can be stored in the BRAM, and any series of three can be used for the triple key decryption. In this FPGA, an encrypted configuration bitstream (encrypted by the bitstream generation software by specifying the order of the three keys) can be received and decrypted on-chip using the decryption key set stored in the BRAM. In the event of a power outage, the on-chip voltage detector switches to the battery power supply, thereby allowing the BRAM to retain the decryption key. In one embodiment, the BRAM requires approximately 0.1 uA at 1.0V, minimum. Note that the Virtex II FPGA advantageously prevents a readout of the BRAM, thereby ensuring the security of the decryption key. As soon as power is restored (either from VCC or an auxiliary power source), the on-chip voltage detector switches back to the standard power supply. In accordance with one feature of the present invention, battery management can be provided on an FPGA, thereby allowing the charging algorithms and methodologies to be changed at the discretion of the user. In this manner, the user can easily upgrade to new chemistries, algorithms, and methodologies as they are developed. Because of this technology flexibility, the user can advantageously reduce battery inventory and ensure the application is implemented with the most advanced power source. To understand the complexity of batteries, a brief summary of battery types, chemistries, and care as well as an illustrative algorithm/methodology are provided herein. Battery Types and Chemistries In a system including a battery, the designer typically considers whether a primary battery providing a single discharge or a secondary battery with recharging capability is more appropriate. Primary batteries simplify the system as they cannot be recharged and therefore require no extra circuitry. 
Secondary batteries require a method of charging and therefore entail additional circuitry to provide this function. Moreover, secondary batteries can be damaged if charging currents are not controlled, i.e. if the charging methodology is not followed. Note that primary and secondary batteries are considered "types" of batteries, wherein each type of battery has its own voltage capability, temperature tolerance, and life, depending on its chemistry. The following chemistries are the most common for primary batteries: alkaline, silver-oxide, and lithium. These chemistries can provide cell voltages of 1.5V, 1.55V, and 3.0V, respectively, at the beginning of service. Alkaline batteries are a good choice for operation below 54° C. and above -20° C. However, alkaline batteries have a maximum 1- or 2-year life without a load. Silver oxide batteries are designed to operate from +60° C. to -10° C. and have a less than 5% per year self-discharge rate at 21° C., thereby providing a +10-year life. Finally, lithium batteries can operate in the most adverse temperatures, i.e. up to 150° C. and down to -40° C. Moreover, lithium batteries have a 15-year life without a load and, if sized correctly, may exhibit the same life in an operating system. Thus, lithium batteries have the longest life of these three primary batteries. The following chemistries are most common for secondary batteries: nickel cadmium, nickel metal hydride, lithium, and lead-acid batteries. These chemistries can provide cell voltages of 1.2V, 1.2V, 3.6V, and 2.0V, respectively, after charge under nominal load. The self-discharge rates of all of these batteries are at best 1% per month. Therefore, recharging of secondary batteries is typically recommended within 30 days. In fact, nickel metal hydride batteries, which are used in cell phones because of their high energy density, are particularly prone to high self-discharge rates. Thus, if a user misses even one day of recharging the battery after use, then the cell phone may be inoperable. The number of deep (>80%) charge/recharge cycles is usually less than 200 for most secondary batteries. Battery Care In addition to the considerations described above regarding battery voltage, temperature tolerance, and life, system design can also include an analysis of use restrictions. Specifically, certain batteries, due to their chemistries, may have attendant disposal, availability, and/or use limitations that could adversely affect product distribution. For example, nickel cadmium, lead acid, and silver oxide batteries have chemical compositions that are considered hazardous waste and therefore have corresponding disposal limitations. Moreover, the availability of batteries can vary. For example, although nickel cadmium and alkaline batteries are generally available world-wide, other batteries, such as lithium and silver oxide, are less commonly available. Most batteries contain highly corrosive base or acid electrolytes, and will seriously damage or destroy electrical components if they leak. Any battery will leak if it is overcharged, if a primary battery is charged, or if the battery suffers physical damage (e.g. 
is dented or punctured). Additionally, any battery may pose a risk of explosion or fire if it is shorted (wherein a small 1/8-Watt surface-mount resistor used as a fuse can reduce this risk). Battery Algorithm and Methodology The identification of the appropriate charging algorithm and methodology is necessary to charge a secondary battery. For example, in one charging methodology, the current is turned on and off at predetermined intervals and then the battery temperature is checked. In another example charging methodology, the current is turned on and off at a second predetermined interval and then the voltage is checked at the battery terminals. Unfortunately, in many instances, charging algorithms and methodologies are ignored or simplified because of their complexity. However, if a battery is stressed by an inappropriate recharge operation, then that battery will undesirably fail before its designed end of life. For example, a nickel cadmium battery has between 200 and 500 deep discharge cycles and many thousands more if it does not deeply discharge. If a user allows the nickel cadmium battery to discharge past a certain point, then its life can be severely limited to only 100-200 discharge cycles. In fact, in general, any deviation from the algorithm or methodology recommended by a battery manufacturer can result in reduced life or even unsafe operation of that battery. In accordance with one feature of the invention, the FPGA can include battery management, thereby allowing the FPGA to automatically recharge a secondary battery using the appropriate algorithm and methodology and thus minimize stress on the battery relating to recharging. Many applications using secondary batteries could benefit from the advantages of using an FPGA with battery management provided on-chip. For example, satellites in space typically use secondary batteries. Therefore, to ensure best use of the significant equipment investment in these satellites, the user should implement the manufacturer's required charging algorithm. Although a battery charging ASIC could be used, the battery choice for such a satellite might be made years before the satellite is actually launched. In contrast, using an FPGA with battery management on-chip, the battery choice could be made at the time of installation of the battery itself, thereby allowing the latest (and theoretically the best) battery technology to be used. Alternatively, in this example, different batteries (different secondary batteries, primary batteries, or a combination of secondary and primary batteries) could be provided in the satellite. In this example, in the event of one battery failure, the succeeding battery could be seamlessly incorporated into the operating system with the use of a single IC, i.e. the FPGA. As mentioned previously, a battery charging ASIC is tailored for a specific battery type/chemistry. It logically follows that multiple battery charging ASICs would need to be included in such a satellite. Therefore, compared to known battery charging ASICs, an FPGA including on-chip battery management can also significantly reduce the number of ICs required for battery operation. An illustrative charging algorithm and methodology for a nickel metal hydride battery from Panasonic is provided herein to emphasize the advantages of automating the recharging process. Referring to FIG. 
1B, a rapid charge current 120 of between 0.5 CmA and 1 CmA is provided, wherein C is defined as the current-time capacity of the battery in ampere-hours or milliampere-hours. For example, 1 CmA for a 150 mAhr battery would be 150 mA. Charging the metal hydride battery with a current greater than 1 CmA can create an undesirable electrolyte leakage. If the temperature of the metal hydride battery is under 0° C. or over 40° C. at the beginning of the charge, then a low-level charge current between 0.033-0.05 CmA is used instead of the rapid charge current 120. In the case that the metal hydride battery is excessively discharged or deep-discharged, a medium-level current 133 can be provided initially followed by the rapid charge current 120 after a battery voltage 123 has risen to a predetermined level. Specifically, the voltage begins at approximately 0.8V/cell and transitions (see arrow 126) at a current of 0.2-0.3 CmA. The maximum battery voltage 132 is approximately 1.8V/cell. Note that the rapid charge current can be switched to the low-level charge current if the battery voltage 123 reaches approximately 1.8V/cell due to any malfunction. The delta voltage drop 131 is typically 5 to 10 mV/cell. The rise in the battery temperature per unit of time is approximately 1 to 2° C./min. When a predetermined rise 124 is detected during the rapid charge period 128, the rapid charge current 120 is switched to the low-level charge current 121. Note that the voltage drop also corresponds to the completed recharge operation. If the battery temperature reaches an upper limit 133, then the rapid charge current 120 should be decreased to the low-level charge current 121 to ensure the metal hydride battery is not damaged. An initial delay 125 of up to 10 minutes can be provided to prevent the delta voltage detection circuit from being activated by a pseudo voltage change, wherein such a pseudo voltage change can occur if the metal hydride battery has been non-operational for a predetermined period of time or excessively discharged. However, the dT/dt detection circuit can be activated during this delay. In the Panasonic nickel metal hydride battery, the rapid charge transfer time 127, the rapid charge time 128, and the total charge time 129 are 60 minutes, 90 minutes, and 10-20 hours, respectively. Because the overcharging of nickel metal hydride batteries, even by low-level charging, can adversely affect the characteristics of the batteries, close adherence to these times is highly recommended. As noted by Panasonic, the temperature and voltage of these batteries vary depending on various factors including the shape of the battery pack, the number of cells, and the arrangement of the cells. Additional details regarding the Panasonic nickel metal hydride battery are provided at the following address on the Panasonic Web site: http://www.panasonic.com/industrial/battery/oem/images/pdf/nimhchar.pdf FPGA Implementation In accordance with one feature of the invention, a battery can be provided external to an FPGA integrated circuit. Note that this battery can be integrated into the package of the FPGA or can be a separate component in a system including the FPGA. 
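The charge control just described can be summarized in a hedged C sketch. The numeric thresholds follow the figures quoted above where given; the 1.0 V/cell recovery level and the function signature are assumptions, and a real controller would follow the manufacturer's full specification:

enum charge_phase { CHG_LOW_LEVEL, CHG_MEDIUM_LEVEL, CHG_RAPID, CHG_DONE };

/* One control-loop step; inputs are per-cell measurements. */
enum charge_phase nimh_charge_step(float temp_c, float dT_dt_c_per_min,
                                   float v_cell, float delta_v_drop_mv,
                                   float minutes_in_rapid,
                                   enum charge_phase phase)
{
    if (phase == CHG_DONE)
        return CHG_DONE;

    /* Outside 0-40 C only the 0.033-0.05 CmA low-level current is applied. */
    if (temp_c < 0.0f || temp_c > 40.0f)
        return CHG_LOW_LEVEL;

    /* A deep-discharged cell (~0.8 V/cell) first gets the 0.2-0.3 CmA
     * medium-level current until the voltage recovers (assumed 1.0 V). */
    if (v_cell < 1.0f)
        return CHG_MEDIUM_LEVEL;

    if (phase == CHG_RAPID) {
        /* Ignore -dV during the initial ~10 minute delay 125 to mask pseudo
         * voltage changes; dT/dt detection stays active during the delay. */
        if (minutes_in_rapid > 10.0f && delta_v_drop_mv >= 5.0f)
            return CHG_DONE;            /* -dV of 5-10 mV/cell: charged  */
        if (dT_dt_c_per_min >= 1.0f)
            return CHG_LOW_LEVEL;       /* temperature rise limit        */
        if (v_cell >= 1.8f)
            return CHG_LOW_LEVEL;       /* 1.8 V/cell malfunction guard  */
    }
    return CHG_RAPID;                   /* 0.5-1 CmA rapid charge        */
}

A supervising timer would additionally enforce the 60-minute transfer, 90-minute rapid, and 10-20 hour total charge limits noted above.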
In one embodiment providing optimal user flexibility, the battery is provided as a separate component, thereby allowing the user to implement the power source with either a primary battery or a secondary battery. In many applications, it would be highly advantageous to be able to provide the FPGA with information regarding the coupled battery, thereby allowing the FPGA to make different use, better use, or optimal use of that battery. Logically, if the battery chemistry is unknown, then the operating voltage and the specification of what is fully charged or half-charged cannot be determined. Moreover, if the charging algorithm and methodology are unknown, then recharging (of a secondary battery) cannot be done accurately. In accordance with one feature of the invention, the configuration bitstream can include information regarding one or more batteries, i.e. their type, chemistry, charging algorithm, and charging methodology. This information can be stored in the BRAM for use during subsequent FPGA and/or battery operations. For example, a user could query the FPGA on the type of battery (assuming the battery is embedded in the FPGA package and therefore is inaccessible for user identification). In another example, the FPGA could access the information to automatically recharge a secondary battery when the charging level drops below a threshold voltage. In one embodiment, the BRAM can include look-up tables (LUTs) that are provided in certain CLBs of the FPGA. In another embodiment, the BRAM can include block RAM or other memory arrays provided on the FPGA. To effectively use this battery information, an FPGA can include a controller. FIG. 2 illustrates a simplified FPGA 200 including a battery controller 201 in accordance with the invention. FPGA 200 further includes a standard power supply VCC pin 204 and a dedicated battery voltage VBATT pin 205. Battery controller 201 can control a battery (primary or secondary) external to FPGA 200 via VBATT pin 205 (note that FPGA 200 could refer to an FPGA IC or a packaged FPGA IC). Critical circuits 207, e.g. a BRAM for providing information regarding one or more batteries, decrypting an encrypted bitstream, or any other circuits providing critical functions, require a continuous power supply. In this embodiment, the power can be supplied either from VCC pin 204 or VBATT pin 205. Note that VCC pin 204 is directly connected to critical circuits 207, whereas VBATT pin 205 is selectively connected to critical circuits 207 via a switch 209. A standard on-chip detector 208, which is coupled to VCC pin 204, can detect whether the power supplied by VCC pin 204 is above a threshold voltage. If not, detector 208 can activate switch 209, thereby coupling critical circuits 207 to VBATT pin 205. In this manner, until the threshold voltage is again maintained, critical circuits 207 are powered by battery 206. In accordance with one feature of the invention, battery controller 201 can include a charger 202, thereby allowing battery 206 to be implemented with a secondary battery. In a preferred embodiment, memory 203 (e.g. volatile RAM, such as SRAM or DRAM) can provide the appropriate charging algorithm and methodology for battery 206. Battery charger 202 could be implemented with dedicated logic on FPGA 200, or implemented, at least in part, using the programmable fabric of FPGA 200. Specifically, to build battery charger 202, digital values will need to be converted into analog voltages or analog currents. 
In one embodiment, a standard digital-to-analog converter (DAC) known by those skilled in the art can be used. Because of the desired analog result, the DAC (or any functional equivalent) is typically implemented with dedicated logic. In another embodiment, battery charger 202 can include a transistor that is turned on and off at predetermined intervals, thereby providing a pulse-width-modulated (PWM) signal. Note that providing variable current on the output pins of FPGA 200 is known in the art, typically by turning on more or fewer output transistors. Therefore, by including battery controller 201 (comprising charger 202 and memory 203), FPGA 200 is fully capable of recharging battery 206. Note that each battery chemistry has its own definition of what the "end of life" is. To address this issue, battery controller 201 can include end of life circuitry 210. In one embodiment, end of life circuitry 210 can comprise an analog-to-digital (A/D) converter 212 that senses the voltage (e.g. a voltmeter implemented in hard logic) as well as additional circuitry to compare this voltage to a table of end of life voltages (wherein the table could be built using the configuration bitstream). Alternatively, the additional circuitry could include a comparator (implemented using programmable resources) that compares the sensed voltage to a reference voltage. In one embodiment, this reference voltage could be a band-gap reference voltage generated by programmable resources on FPGA 200. In one embodiment, end of life (EOL) circuitry 210 could also include circuitry for measuring battery temperature, which also can determine the level of charging of the battery. For example, during a recharge of a nickel cadmium battery, the temperature at the terminals of the battery continues to rise until a chemical reaction stops. At this point, the temperature drops dramatically, thereby signaling the completion of the recharge. In fact, in this battery chemistry, a temperature drop is a more accurate recharge indicator than the terminal voltage on the battery. To measure temperature in one embodiment, end of life circuitry 210 could be coupled to a thermistor 211 (implemented as hard logic external to FPGA 200) via A/D converter 212. Advantageously, the charging of battery 206 can be done while FPGA 200 is connected to the primary power source VCC. Currently, for example, referring back to the prior art described in FIG. 1A, battery 104 would have to be disconnected from FPGA 100 before recharging could be performed. Otherwise, the recharging of battery 104 would adversely affect the voltage provided to the VCC terminal of FPGA 100. In contrast, in FPGA 200 (FIG. 2), battery 206 can be selectively isolated from critical circuits 207 via switch 209, thereby allowing charger 202 to perform a recharge operation on battery 206 during any time that battery 206 is not coupled to critical circuits 207. In one embodiment, any portion of battery controller 201 not implemented as hard logic could be implemented using a programmable logic resource, e.g., a "core". Specifically, some FPGAs, like the Virtex II FPGA, can be programmed to incorporate blocks with a pre-designed functionality (programmable logic resources) called a "core". In one embodiment, a core can include a predetermined set of configuration bits that program the FPGA to perform one or more functions. In another embodiment, a core can include source code or schematics, which describe the logic and connectivity of a design. 
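A hedged sketch of the end-of-life comparison performed by circuitry 210 follows; the table values, chemistry list, and ADC helper are illustrative assumptions, and in the FPGA the table would be built from the configuration bitstream as described above:

#include <stdbool.h>
#include <stdint.h>

struct eol_entry { const char *chemistry; uint16_t eol_mv; };

/* Hypothetical per-chemistry end-of-life voltages, loaded via bitstream;
 * real values would come from the battery manufacturer. */
static const struct eol_entry eol_table[] = {
    { "nickel cadmium",       1000 },
    { "nickel metal hydride", 1000 },
    { "lithium",              2500 },
};

extern uint16_t adc_read_battery_mv(void);   /* A/D converter 212 */

bool battery_at_end_of_life(unsigned chem)
{
    /* Compare the sensed terminal voltage against the stored threshold. */
    return adc_read_battery_mv() <= eol_table[chem].eol_mv;
}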
Cores can be provided with an optimally floorplanned layout for specific FPGAs. Moreover, cores can also be parameterizable, i.e. allowing the user to enter parameters to activate or change certain core functionality. For example, a parameterizable battery controller core could allow the user to enter the number of batteries being coupled to the FPGA (described in further detail in reference to FIG. 3) or to enter a new battery type, charging algorithm, and charging methodology. FIG. 3 illustrates a simplified FPGA 300 similar to FPGA 200, but capable of selectively coupling one of a plurality of batteries to critical circuits 207. Specifically, in one embodiment, an analog multiplexer 306 can be provided to allow a parameterizable battery controller 201' to selectively couple one of N batteries to critical circuits 207, wherein N is a parameter that can be set by the user. One skilled in the art will appreciate that other well-known techniques and circuits can also be used to perform the functions of the analog multiplexer 306. In FIG. 3, parameterizable battery controller 201' can selectively couple one of battery 302 (via VBATT pin 301) and battery 304 (via VBATT pin 303) to critical circuits 207. An analog demultiplexer 307 allows parameterizable battery controller 201' to selectively charge one of batteries 302 and 304. Logically, if battery 302 is coupled to critical circuits 207, then battery 304 can be recharged and vice versa. In light of the flexibility provided by parameterizable battery controller 201', batteries 302 and 304 could be any type or chemistry. Note that in some embodiments, the primary voltage source might not be present at all (thus leaving VCC pin 204 unused). In such embodiments, multiple batteries could be advantageously used as primary voltage sources. Many new battery chemistries, including aluminum air, carbon zinc, and zinc air batteries, are being developed. The battery controller of the invention can advantageously update systems to leverage the attendant benefits of these new chemistries. In one embodiment, this updating can include a partial configuration bitstream loaded in the FPGA to store the new algorithm and methodology in memory 203. SUMMARY Only a few years ago, designers had a limited number of primary batteries and secondary batteries from which to choose. Now, a plethora of battery chemistries exist. In fact, many applications spawn the development of their own custom battery chemistries. However, once chosen (typically early in the design process), the battery type and chemistry are effectively locked into the design. The associated ASIC chargers for these batteries are expensive and waste increasingly valuable system space. Fortunately, FPGAs are becoming more prevalent in many types of equipment and networks, including, for example, set-top boxes, personal communication systems, MP3 players, Cisco servers, and Lucent transmission systems. An FPGA including a battery controller in accordance with the invention can ensure that power demands for any application can be met. By using this programmable solution, the user can make the decision regarding battery choice much later in the design process, reduce the inventory of batteries associated with the system/product, increase the life of the batteries, and upgrade to the newest technology battery at the user's discretion. The descriptions of the invention provided herein are illustrative only and not limiting. Specifically, various embodiments of the invention have been described in detail above. 
Modifications to those embodiments will be apparent to those skilled in the art. For example, although at most two batteries are shown in FIGS. 2 and 3, any number of batteries could be connected to an FPGA using the appropriate analog multiplexers and demultiplexers. Therefore, the scope of the present invention can be defined only by the appended claims. |
Apparatus and method supporting deprecated instructions. One embodiment comprises: a plurality of cores, each core comprising a current microarchitecture to execute instructions and process data, the current microarchitecture including virtual execution environment support for a hypervisor running at a first privilege level and one or more virtual machines running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture. At least one core comprises: a decoder to specify one or more microoperations corresponding to each of the instructions; and execution circuitry to execute the corresponding microoperations; wherein either a first type or a second type of virtual machine exit is supported. Responsive to the first type of virtual machine exit, the hypervisor performs a first emulation without the partial hardware support. Responsive to the second type of virtual machine exit, the hypervisor performs a second emulation using the partial hardware support. |
CLAIMS What is claimed is: 1. A processor comprising: a plurality of cores, each core comprising a current microarchitecture to execute instructions and process data, the current microarchitecture including hardware support for a virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; at least one core of the plurality of cores comprising: a decoder to decode the instructions, the decoder to specify one or more microoperations corresponding to each of the instructions; execution circuitry to execute the corresponding microoperations; wherein either a first type or a second type of virtual machine exit is to be performed responsive to detecting a deprecated instruction in a first virtual machine, wherein responsive to the first type of virtual machine exit, the hypervisor is to perform a first emulation of the prior microarchitecture without reliance on the partial hardware support, and wherein responsive to the second type of virtual machine exit, the hypervisor is to perform a second emulation of the prior microarchitecture relying on the partial hardware support. 2. The processor of claim 1 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry. 3. The processor of claim 1 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception. 4. The processor of claim 2 wherein the partial hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations. 5. The processor of claim 4 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations. 6. The processor of claim 5 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine. 7. The processor of claim 3 wherein the first type of exception comprises an invalid or undefined opcode exception. 8. The processor of claim 7 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction. 9. 
9. A method comprising: executing instructions on at least one core of a plurality of cores, each having a current microarchitecture including hardware support for a virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; performing either a first type or a second type of virtual machine exit responsive to detecting a deprecated instruction in a first virtual machine; performing by the hypervisor, responsive to the first type of virtual machine exit, a first emulation of the prior microarchitecture without reliance on the partial hardware support; and performing by the hypervisor, responsive to the second type of virtual machine exit, a second emulation of the prior microarchitecture relying on the partial hardware support.
10. The method of claim 9 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry.
11. The method of claim 9 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception.
12. The method of claim 10 wherein the hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations.
13. The method of claim 12 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations.
14. The method of claim 13 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine.
15. The method of claim 11 wherein the first type of exception comprises an invalid or undefined opcode exception.
16. The method of claim 15 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction.
17. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of: executing instructions on at least one core of a plurality of cores, each having a current microarchitecture including hardware support for a virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture;
performing either a first type or a second type of virtual machine exit responsive to detecting a deprecated instruction in a first virtual machine; performing by the hypervisor, responsive to the first type of virtual machine exit, a first emulation of the prior microarchitecture without reliance on the partial hardware support; and performing by the hypervisor, responsive to the second type of virtual machine exit, a second emulation of the prior microarchitecture relying on the partial hardware support.
18. The machine-readable medium of claim 17 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry.
19. The machine-readable medium of claim 17 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception.
20. The machine-readable medium of claim 18 wherein the hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations.
21. The machine-readable medium of claim 20 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations.
22. The machine-readable medium of claim 21 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine.
23. The machine-readable medium of claim 19 wherein the first type of exception comprises an invalid or undefined opcode exception.
24. The machine-readable medium of claim 23 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction. |
APPARATUS AND METHOD FOR MANAGING UNSUPPORTED INSTRUCTION SET ARCHITECTURE (ISA) FEATURES IN A VIRTUALIZED ENVIRONMENT
BACKGROUND
Field of the Invention
[0001] The embodiments of the invention relate generally to the field of computer processors. More particularly, the embodiments relate to an apparatus and method for managing unsupported/deprecated features of an instruction set architecture (ISA) in a virtualized environment.
Description of the Related Art
[0002] Deprecating ISA features is desirable for a variety of reasons including, but not limited to, reducing attack surfaces, simplifying the validation space, and reducing implementation efforts. However, existing software will no longer work on a new instruction set architecture (ISA) if the software uses these deprecated features. Thus, deprecating ISA features can be a challenging task.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
[0005] FIG. 1 illustrates an example computer system architecture;
[0006] FIG. 2 illustrates a processor comprising a plurality of cores;
[0007] FIG. 3A illustrates a plurality of stages of a processing pipeline;
[0008] FIG. 3B illustrates details of one embodiment of a core;
[0009] FIG. 4 illustrates execution circuitry in accordance with one embodiment;
[0010] FIG. 5 illustrates one embodiment of a register architecture;
[0011] FIG. 6 illustrates one example of an instruction format;
[0012] FIG. 7 illustrates addressing techniques in accordance with one embodiment;
[0013] FIG. 8 illustrates one embodiment of an instruction prefix;
[0014] FIGS. 9A-D illustrate embodiments of how the R, X, and B fields of the prefix are used;
[0015] FIGS. 10A-B illustrate examples of a second instruction prefix;
[0016] FIG. 11 illustrates payload bytes of one embodiment of an instruction prefix;
[0017] FIG. 12 illustrates instruction conversion and binary translation implementations;
[0018] FIG. 13 illustrates one example of a virtualization environment on which embodiments of the invention may be implemented;
[0019] FIG. 14 illustrates one embodiment of the present disclosure including a deprecated instruction processor;
[0020] FIG. 15 illustrates timing data related to the execution of certain instructions within an instruction set architecture;
[0021] FIG. 16A illustrates operations associated with transitioning between a lower privilege level and a higher privilege level;
[0022] FIG. 16B illustrates operations associated with transitioning between a higher privilege level and a lower privilege level; and
[0023] FIG. 17 illustrates deprecated state structures managed in accordance with one embodiment of the invention.
[0024] FIGS. 18A-I illustrate program code in accordance with one embodiment of the invention.
DETAILED DESCRIPTION
[0025] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.
Exemplary Computer Architectures
[0026] Described below are exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.
[0027] FIG. 1 illustrates embodiments of an exemplary system. Multiprocessor system 100 is a point-to-point interconnect system and includes a plurality of processors including a first processor 170 and a second processor 180 coupled via a point-to-point interconnect 150. In some embodiments, the first processor 170 and the second processor 180 are homogeneous. In some embodiments, the first processor 170 and the second processor 180 are heterogeneous.
[0028] Processors 170 and 180 are shown including integrated memory controller (IMC) unit circuitry 172 and 182, respectively. Processor 170 also includes, as part of its interconnect controller units, point-to-point (P-P) interfaces 176 and 178; similarly, second processor 180 includes P-P interfaces 186 and 188. Processors 170, 180 may exchange information via the point-to-point (P-P) interconnect 150 using P-P interface circuits 178, 188. IMCs 172 and 182 couple the processors 170, 180 to respective memories, namely a memory 132 and a memory 134, which may be portions of main memory locally attached to the respective processors.
[0029] Processors 170, 180 may each exchange information with a chipset 190 via individual P-P interconnects 152, 154 using point-to-point interface circuits 176, 194, 186, 198. Chipset 190 may optionally exchange information with a coprocessor 138 via a high-performance interface 192. In some embodiments, the coprocessor 138 is a special-purpose processor, such as, for example, a high-throughput
MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
[0030] A shared cache (not shown) may be included in either processor 170 or 180, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.
[0031] Chipset 190 may be coupled to a first interconnect 116 via an interface 196. In some embodiments, first interconnect 116 may be a Peripheral Component Interconnect (PCI) interconnect, or an interconnect such as a PCI Express interconnect or another I/O interconnect. In some embodiments, one of the interconnects couples to a power control unit (PCU) 117, which may include circuitry, software, and/or firmware to perform power management operations with regard to the processors 170, 180 and/or co-processor 138. PCU 117 provides control information to a voltage regulator to cause the voltage regulator to generate the appropriate regulated voltage, and also provides control information to control the operating voltage generated. In various embodiments, PCU 117 may include a variety of power management logic units (circuitry) to perform hardware-based power management. Such power management may be wholly processor controlled (e.g., by various processor hardware, and which may be triggered by workload and/or power, thermal, or other processor constraints) and/or the power management may be performed responsive to external sources (such as a platform or power management source or system software).
[0032] PCU 117 is illustrated as being present as logic separate from the processor 170 and/or processor 180. In other cases, PCU 117 may execute on a given one or more of cores (not shown) of processor 170 or 180. In some cases, PCU 117 may be implemented as a microcontroller (dedicated or general-purpose) or other control logic configured to execute its own dedicated power management code, sometimes referred to as P-code. In yet other embodiments, power management operations to be performed by PCU 117 may be implemented externally to a processor, such as by way of a separate power management integrated circuit (PMIC) or another component external to the processor. In yet other embodiments, power management operations to be performed by PCU 117 may be implemented within BIOS or other system software.
[0033] Various I/O devices 114 may be coupled to first interconnect 116, along with an interconnect (bus) bridge 118 which couples first interconnect 116 to a second interconnect 120. In some embodiments, one or more additional processor(s) 115, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays (FPGAs), or any other processor, are coupled to first interconnect 116. In some embodiments, second interconnect 120 may be a low pin count (LPC) interconnect. Various devices may be coupled to second interconnect 120 including, for example, a keyboard and/or mouse 122, communication devices 127, and storage unit circuitry 128. Storage unit circuitry 128 may be a disk drive or other mass storage device which may include instructions/code and data 130, in some embodiments. Further, an audio I/O 124 may be coupled to second interconnect 120. Note that other architectures than the point-to-point architecture described above are possible.
For example, instead of the point-to-point architecture, a system such as multiprocessor system 100 may implement a multi-drop interconnect or other such architecture.
Exemplary Core Architectures, Processors, and Computer Architectures
[0034] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include, on the same die as the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.
[0035] FIG. 2 illustrates a block diagram of embodiments of a processor 200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics. The solid lined boxes illustrate a processor 200 with a single core 202A, a system agent 210, and a set of one or more interconnect controller units circuitry 216, while the optional addition of the dashed lined boxes illustrates an alternative processor 200 with multiple cores 202(A)-(N), a set of one or more integrated memory controller unit(s) circuitry 214 in the system agent unit circuitry 210, and special purpose logic 208, as well as a set of one or more interconnect controller units circuitry 216. Note that the processor 200 may be one of the processors 170 or 180, or co-processor 138 or 115 of FIG. 1.
[0036] Thus, different implementations of the processor 200 may include: 1) a CPU with the special purpose logic 208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores, not shown), and the cores 202(A)-(N) being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 202(A)-(N) being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 202(A)-(N) being a large number of general purpose in-order cores.
Thus, the processor 200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit circuitry), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded
processor, or the like. The processor may be implemented on one or more chips. The processor 200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
[0037] A memory hierarchy includes one or more levels of cache unit(s) circuitry 204(A)-(N) within the cores 202(A)-(N), a set of one or more shared cache units circuitry 206, and external memory (not shown) coupled to the set of integrated memory controller units circuitry 214. The set of one or more shared cache units circuitry 206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, such as a last level cache (LLC), and/or combinations thereof. While in some embodiments ring-based interconnect network circuitry 212 interconnects the special purpose logic 208 (e.g., integrated graphics logic), the set of shared cache units circuitry 206, and the system agent unit circuitry 210, alternative embodiments use any number of well-known techniques for interconnecting such units. In some embodiments, coherency is maintained between one or more of the shared cache units circuitry 206 and cores 202(A)-(N).
[0038] In some embodiments, one or more of the cores 202(A)-(N) are capable of multi-threading. The system agent unit circuitry 210 includes those components coordinating and operating cores 202(A)-(N). The system agent unit circuitry 210 may include, for example, power control unit (PCU) circuitry and/or display unit circuitry (not shown). The PCU may be or may include logic and components needed for regulating the power state of the cores 202(A)-(N) and/or the special purpose logic 208 (e.g., integrated graphics logic). The display unit circuitry is for driving one or more externally connected displays.
[0039] The cores 202(A)-(N) may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 202(A)-(N) may be capable of executing the same instruction set, while other cores may be capable of executing only a subset of that instruction set or a different instruction set.
Exemplary Core Architectures
In-order and out-of-order core block diagram
[0040] FIG. 3(A) is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 3(B) is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGS. 3(A)-(B) illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.
[0041] In FIG. 3(A), a processor pipeline 300 includes a fetch stage 302, an optional length decode stage 304, a decode stage 306, an optional allocation stage 308, an optional renaming stage 310, a scheduling (also known as a dispatch or issue) stage 312, an optional register read/memory read stage
314, an execute stage 316, a write back/memory write stage 318, an optional exception handling stage 322, and an optional commit stage 324. One or more operations can be performed in each of these processor pipeline stages. For example, during the fetch stage 302, one or more instructions are fetched from instruction memory; during the decode stage 306, the one or more fetched instructions may be decoded, addresses (e.g., load store unit (LSU) addresses) using forwarded register ports may be generated, and branch forwarding (e.g., immediate offset or a link register (LR)) may be performed. In one embodiment, the decode stage 306 and the register read/memory read stage 314 may be combined into one pipeline stage. In one embodiment, during the execute stage 316, the decoded instructions may be executed, LSU address/data pipelining to an Advanced Microcontroller Bus (AHB) interface may be performed, multiply and add operations may be performed, arithmetic operations with branch results may be performed, etc.
[0042] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 300 as follows: 1) the instruction fetch 338 performs the fetch and length decoding stages 302 and 304; 2) the decode unit circuitry 340 performs the decode stage 306; 3) the rename/allocator unit circuitry 352 performs the allocation stage 308 and renaming stage 310; 4) the scheduler unit(s) circuitry 356 performs the schedule stage 312; 5) the physical register file(s) unit(s) circuitry 358 and the memory unit circuitry 370 perform the register read/memory read stage 314; 6) the execution cluster 360 performs the execute stage 316; 7) the memory unit circuitry 370 and the physical register file(s) unit(s) circuitry 358 perform the write back/memory write stage 318; 8) various units (unit circuitry) may be involved in the exception handling stage 322; and 9) the retirement unit circuitry 354 and the physical register file(s) unit(s) circuitry 358 perform the commit stage 324.
[0043] FIG. 3(B) shows processor core 390 including front-end unit circuitry 330 coupled to execution engine unit circuitry 350, and both are coupled to memory unit circuitry 370. The core 390 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 390 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
[0044] The front end unit circuitry 330 may include branch prediction unit circuitry 332 coupled to instruction cache unit circuitry 334, which is coupled to an instruction translation lookaside buffer (TLB) 336, which is coupled to instruction fetch unit circuitry 338, which is coupled to decode unit circuitry 340. In one embodiment, the instruction cache unit circuitry 334 is included in the memory unit circuitry 370 rather than the front-end unit circuitry 330. The decode unit circuitry 340 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions.
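By way of a purely illustrative, non-limiting aid (and not a description of any claimed hardware), the stage ordering of pipeline 300 can be modeled in C. The enum below is hypothetical; its values simply reuse the reference numerals from the text to make the stage sequence concrete.

#include <stdio.h>

/* Hypothetical software model of the stage order of processor pipeline 300.
 * Enum values mirror the reference numerals used in the description above. */
enum pipeline_stage {
    STAGE_FETCH         = 302,
    STAGE_LENGTH_DECODE = 304,
    STAGE_DECODE        = 306,
    STAGE_ALLOCATE      = 308,
    STAGE_RENAME        = 310,
    STAGE_SCHEDULE      = 312,
    STAGE_REG_MEM_READ  = 314,
    STAGE_EXECUTE       = 316,
    STAGE_WRITEBACK     = 318,
    STAGE_EXCEPTION     = 322,
    STAGE_COMMIT        = 324
};

int main(void) {
    const enum pipeline_stage order[] = {
        STAGE_FETCH, STAGE_LENGTH_DECODE, STAGE_DECODE, STAGE_ALLOCATE,
        STAGE_RENAME, STAGE_SCHEDULE, STAGE_REG_MEM_READ, STAGE_EXECUTE,
        STAGE_WRITEBACK, STAGE_EXCEPTION, STAGE_COMMIT
    };
    for (unsigned i = 0; i < sizeof(order) / sizeof(order[0]); i++)
        printf("stage %d\n", order[i]);  /* walks the stages in program order */
    return 0;
}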
The decode unit circuitry 340 may further include address generation unit (AGU) circuitry (not shown). In one embodiment, the AGU generates an LSU address using forwarded register ports, and may further perform branch forwarding (e.g.,
immediate offset branch forwarding, LR register branch forwarding, etc.). The decode unit circuitry 340 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 390 includes a microcode ROM (not shown) or other medium that stores microcode for certain macroinstructions (e.g., in decode unit circuitry 340 or otherwise within the front end unit circuitry 330). In one embodiment, the decode unit circuitry 340 includes a micro-operation (micro-op) or operation cache (not shown) to hold/cache decoded operations, micro-tags, or micro-operations generated during the decode or other stages of the processor pipeline 300. The decode unit circuitry 340 may be coupled to rename/allocator unit circuitry 352 in the execution engine unit circuitry 350.
[0045] The execution engine circuitry 350 includes the rename/allocator unit circuitry 352 coupled to retirement unit circuitry 354 and a set of one or more scheduler(s) circuitry 356. The scheduler(s) circuitry 356 represents any number of different schedulers, including reservation stations, central instruction window, etc. In some embodiments, the scheduler(s) circuitry 356 can include arithmetic logic unit (ALU) scheduler/scheduling circuitry, ALU queues, address generation unit (AGU) scheduler/scheduling circuitry, AGU queues, etc. The scheduler(s) circuitry 356 is coupled to the physical register file(s) circuitry 358. Each of the physical register file(s) circuitry 358 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit circuitry 358 includes vector registers unit circuitry, writemask registers unit circuitry, and scalar register unit circuitry. These register units may provide architectural vector registers, vector mask registers, general-purpose registers, etc. The physical register file(s) unit(s) circuitry 358 is overlapped by the retirement unit circuitry 354 (also known as a retire queue or a retirement queue) to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) (ROB(s)) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using a register map and a pool of registers; etc.). The retirement unit circuitry 354 and the physical register file(s) circuitry 358 are coupled to the execution cluster(s) 360. The execution cluster(s) 360 includes a set of one or more execution units circuitry 362 and a set of one or more memory access circuitry 364. The execution units circuitry 362 may perform various arithmetic, logic, floating-point or other types of operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point).
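As a non-limiting illustration of one of the renaming options noted above (a register map plus a pool of registers), the C sketch below models allocation of a physical register for an architectural destination. The structure layout, sizes, and function names are hypothetical and are not drawn from any actual implementation.

#include <stdint.h>

#define NUM_ARCH_REGS 16
#define NUM_PHYS_REGS 64

/* Hypothetical model of register renaming with a register map and a
 * pool (free list) of physical registers. */
struct rename_state {
    uint8_t map[NUM_ARCH_REGS];        /* architectural reg -> physical reg */
    uint8_t free_list[NUM_PHYS_REGS];  /* pool of free physical registers   */
    int     free_top;                  /* number of entries in free_list    */
};

/* Allocate a fresh physical register for an architectural destination and
 * return the previous mapping so it can be reclaimed at retirement;
 * returns -1 when no physical register is free (a rename stall). */
static int rename_dest(struct rename_state *s, unsigned arch_reg) {
    if (s->free_top == 0)
        return -1;
    uint8_t fresh = s->free_list[--s->free_top];
    int old = s->map[arch_reg];
    s->map[arch_reg] = fresh;
    return old;
}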
While some embodiments may include a number of execution units or execution unit circuitry dedicated to specific functions or sets of functions, other embodiments may include only one execution unit circuitry or multiple execution units/execution unit circuitry that all perform all functions. The scheduler(s) circuitry 356, physical register file(s) unit(s) circuitry 358, and execution cluster(s) 360 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline,
a scalar floating-point/packed integer/packed floating-point/vector integer/vector floating-point pipeline, and/or a memory access pipeline that each have their own scheduler circuitry, physical register file(s) unit circuitry, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) circuitry 364). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.
[0046] In some embodiments, the execution engine unit circuitry 350 may perform load store unit (LSU) address/data pipelining to an Advanced Microcontroller Bus (AHB) interface (not shown), as well as address phase and writeback, data phase load, store, and branches.
[0047] The set of memory access circuitry 364 is coupled to the memory unit circuitry 370, which includes data TLB unit circuitry 372 coupled to data cache circuitry 374 coupled to level 2 (L2) cache circuitry 376. In one exemplary embodiment, the memory access units circuitry 364 may include load unit circuitry, store address unit circuitry, and store data unit circuitry, each of which is coupled to the data TLB circuitry 372 in the memory unit circuitry 370. The instruction cache circuitry 334 is further coupled to level 2 (L2) cache unit circuitry 376 in the memory unit circuitry 370. In one embodiment, the instruction cache 334 and the data cache 374 are combined into a single instruction and data cache (not shown) in L2 cache unit circuitry 376, level 3 (L3) cache unit circuitry (not shown), and/or main memory. The L2 cache unit circuitry 376 is coupled to one or more other levels of cache and eventually to a main memory.
[0048] The core 390 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set; the ARM instruction set (with optional additional extensions such as NEON)), including the instruction(s) described herein. In one embodiment, the core 390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.
Exemplary Execution Unit(s) Circuitry
[0049] FIG. 4 illustrates embodiments of execution unit(s) circuitry, such as execution unit(s) circuitry 362 of FIG. 3(B). As illustrated, execution unit(s) circuitry 362 may include one or more ALU circuits 401, vector/SIMD unit circuits 403, load/store unit circuits 405, and/or branch/jump unit circuits 407. ALU circuits 401 perform integer arithmetic and/or Boolean operations. Vector/SIMD unit circuits 403 perform vector/SIMD operations on packed data (such as SIMD/vector registers). Load/store unit circuits 405 execute load and store instructions to load data from memory into registers or store from registers to memory. Load/store unit circuits 405 may also generate addresses. Branch/jump unit circuits 407 cause a branch or jump to a memory address depending on the instruction. Floating-point unit (FPU) circuits 409 perform floating-point arithmetic. The width of the execution unit(s) circuitry 362 varies depending upon the embodiment and can range from 16-bit to 1,024-bit. In some embodiments, two or
more smaller execution units are logically combined to form a larger execution unit (e.g., two 128-bit execution units are logically combined to form a 256-bit execution unit).
Exemplary Register Architecture
[0050] FIG. 5 is a block diagram of a register architecture 500 according to some embodiments. As illustrated, there are vector/SIMD registers 510 that vary from 128 bits to 1,024 bits in width. In some embodiments, the vector/SIMD registers 510 are physically 512-bits and, depending upon the mapping, only some of the lower bits are used. For example, in some embodiments, the vector/SIMD registers 510 are ZMM registers which are 512 bits: the lower 256 bits are used for YMM registers and the lower 128 bits are used for XMM registers. As such, there is an overlay of registers. In some embodiments, a vector length field selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length. Scalar operations are operations performed on the lowest order data element position in a ZMM/YMM/XMM register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.
[0051] In some embodiments, the register architecture 500 includes writemask/predicate registers 515. For example, in some embodiments, there are 8 writemask/predicate registers (sometimes called k0 through k7) that are each 16-bit, 32-bit, 64-bit, or 128-bit in size. Writemask/predicate registers 515 may allow for merging (e.g., allowing any set of elements in the destination to be protected from updates during the execution of any operation) and/or zeroing (e.g., zeroing vector masks allow any set of elements in the destination to be zeroed during the execution of any operation). In some embodiments, each data element position in a given writemask/predicate register 515 corresponds to a data element position of the destination. In other embodiments, the writemask/predicate registers 515 are scalable and consist of a set number of enable bits for a given vector element (e.g., 8 enable bits per 64-bit vector element).
[0052] The register architecture 500 includes a plurality of general-purpose registers 525. These registers may be 16-bit, 32-bit, 64-bit, etc. and can be used for scalar operations. In some embodiments, these registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.
[0053] In some embodiments, the register architecture 500 includes scalar floating-point register 545 which is used for scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension or as MMX registers to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.
[0054] One or more flag registers 540 (e.g., EFLAGS, RFLAGS, etc.) store status and control information for arithmetic, compare, and system operations. For example, the one or more flag registers 540 may store condition code information such as carry, parity, auxiliary carry, zero, sign, and overflow. In some embodiments, the one or more flag registers 540 are called program status and control registers.
[0055] Segment registers 520 contain segment pointers for use in accessing memory. In some embodiments, these registers are referenced by the names CS, DS, SS, ES, FS, and GS.
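As a non-limiting illustration of the register overlay described above (512-bit ZMM registers whose lower 256 and 128 bits serve as the YMM and XMM views), the C union below models the aliasing. The type and function names are hypothetical.

#include <stdint.h>
#include <string.h>

/* Hypothetical model of the ZMM/YMM/XMM overlay: all three views share
 * the same low-order bytes of one 512-bit register. */
typedef union {
    uint8_t zmm[64];   /* full 512-bit ZMM view      */
    uint8_t ymm[32];   /* lower 256 bits = YMM view  */
    uint8_t xmm[16];   /* lower 128 bits = XMM view  */
} vector_reg;

/* Writing through the XMM view touches only the low 128 bits; whether a
 * real instruction preserves or zeroes the untouched upper bits depends
 * on the embodiment, as noted above. */
static void write_xmm(vector_reg *r, const uint8_t src[16]) {
    memcpy(r->xmm, src, 16);
}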
[0056] Model-specific registers (MSRs) 535 control and report on processor performance. Most MSRs 535 handle system-related functions and are not accessible to an application program. Machine check registers 560 consist of control, status, and error reporting MSRs that are used to detect and report on hardware errors.
[0057] One or more instruction pointer register(s) 530 store an instruction pointer value. Control register(s) 555 (e.g., CR0-CR4) determine the operating mode of a processor (e.g., processor 170, 180, 138, 115, and/or 200) and the characteristics of a currently executing task. Debug registers 550 control and allow for the monitoring of a processor or core's debugging operations.
[0058] Memory management registers 565 specify the locations of data structures used in protected mode memory management. These registers may include a GDTR, IDTR, task register, and an LDTR register.
[0059] Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.
Instruction Sets
[0060] An instruction set architecture (ISA) may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.
Exemplary Instruction Formats
[0061] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.
[0062] FIG. 6 illustrates embodiments of an instruction format. As illustrated, an instruction may include multiple components including, but not limited to, one or more fields for: one or more prefixes 601, an opcode 603, addressing information 605 (e.g., register identifiers, memory addressing
information, etc.), a displacement value 607, and/or an immediate 609. Note that some instructions utilize some or all of the fields of the format whereas others may only use the field for the opcode 603. In some embodiments, the order illustrated is the order in which these fields are to be encoded; however, it should be appreciated that in other embodiments these fields may be encoded in a different order, combined, etc.
[0063] The prefix(es) field(s) 601, when used, modifies an instruction. In some embodiments, one or more prefixes are used to repeat string instructions (e.g., 0xF0, 0xF2, 0xF3, etc.), to provide segment overrides (e.g., 0x2E, 0x36, 0x3E, 0x26, 0x64, 0x65, etc.), to perform bus lock operations, and/or to change operand (e.g., 0x66) and address sizes (e.g., 0x67). Certain instructions require a mandatory prefix (e.g., 0x66, 0xF2, 0xF3, etc.). Certain of these prefixes may be considered "legacy" prefixes. Other prefixes, one or more examples of which are detailed herein, indicate and/or provide further capability, such as specifying particular registers, etc. The other prefixes typically follow the "legacy" prefixes.
[0064] The opcode field 603 is used to at least partially define the operation to be performed upon a decoding of the instruction. In some embodiments, a primary opcode encoded in the opcode field 603 is 1, 2, or 3 bytes in length. In other embodiments, a primary opcode can be a different length. An additional 3-bit opcode field is sometimes encoded in another field.
[0065] The addressing field 605 is used to address one or more operands of the instruction, such as a location in memory or one or more registers. FIG. 7 illustrates embodiments of the addressing field 605. In this illustration, an optional ModR/M byte 702 and an optional Scale, Index, Base (SIB) byte 704 are shown. The ModR/M byte 702 and the SIB byte 704 are used to encode up to two operands of an instruction, each of which is a direct register or effective memory address. Note that each of these fields is optional in that not all instructions include one or more of these fields. The MOD R/M byte 702 includes a MOD field 742, a register field 744, and an R/M field 746.
[0066] The content of the MOD field 742 distinguishes between memory access and non-memory access modes. In some embodiments, when the MOD field 742 has a value of b11, a register-direct addressing mode is utilized, and otherwise register-indirect addressing is used.
[0067] The register field 744 may encode either the destination register operand or a source register operand, or may encode an opcode extension and not be used to encode any instruction operand. The content of register index field 744, directly or through address generation, specifies the locations of a source or destination operand (either in a register or in memory). In some embodiments, the register field 744 is supplemented with an additional bit from a prefix (e.g., prefix 601) to allow for greater addressing.
[0068] The R/M field 746 may be used to encode an instruction operand that references a memory address, or may be used to encode either the destination register operand or a source register operand. Note the R/M field 746 may be combined with the MOD field 742 to dictate an addressing mode in some embodiments.
[0069] The SIB byte 704 includes a scale field 752, an index field 754, and a base field 756 to be used in the generation of an address. The scale field 752 indicates a scaling factor. The index field 754
specifies an index register to use. In some embodiments, the index field 754 is supplemented with an additional bit from a prefix (e.g., prefix 601) to allow for greater addressing. The base field 756 specifies a base register to use. In some embodiments, the base field 756 is supplemented with an additional bit from a prefix (e.g., prefix 601) to allow for greater addressing. In practice, the content of the scale field 752 allows for the scaling of the content of the index field 754 for memory address generation (e.g., for address generation that uses 2^scale * index + base).
[0070] Some addressing forms utilize a displacement value to generate a memory address. For example, a memory address may be generated according to 2^scale * index + base + displacement, index*scale + displacement, r/m + displacement, instruction pointer (RIP/EIP) + displacement, register + displacement, etc. The displacement may be a 1-byte, 2-byte, 4-byte, etc. value. In some embodiments, a displacement field 607 provides this value. Additionally, in some embodiments, a displacement factor usage is encoded in the MOD field of the addressing field 605 that indicates a compressed displacement scheme for which a displacement value is calculated by multiplying disp8 in conjunction with a scaling factor N that is determined based on the vector length, the value of a b bit, and the input element size of the instruction. The displacement value is stored in the displacement field 607.
[0071] In some embodiments, an immediate field 609 specifies an immediate for the instruction. An immediate may be encoded as a 1-byte value, a 2-byte value, a 4-byte value, etc.
[0072] FIG. 8 illustrates embodiments of a first prefix 601(A). In some embodiments, the first prefix 601(A) is an embodiment of a REX prefix. Instructions that use this prefix may specify general purpose registers, 64-bit packed data registers (e.g., single instruction, multiple data (SIMD) registers or vector registers), and/or control registers and debug registers (e.g., CR8-CR15 and DR8-DR15).
[0073] Instructions using the first prefix 601(A) may specify up to three registers using 3-bit fields depending on the format: 1) using the reg field 744 and the R/M field 746 of the Mod R/M byte 702; 2) using the Mod R/M byte 702 with the SIB byte 704, including using the reg field 744 and the base field 756 and index field 754; or 3) using the register field of an opcode.
[0074] In the first prefix 601(A), bit positions 7:4 are set as 0100. Bit position 3 (W) can be used to determine the operand size, but may not solely determine operand width. As such, when W = 0, the operand size is determined by a code segment descriptor (CS.D), and when W = 1, the operand size is 64-bit.
[0075] Note that the addition of another bit allows for 16 (2^4) registers to be addressed, whereas the MOD R/M reg field 744 and MOD R/M R/M field 746 alone can each only address 8 registers.
[0076] In the first prefix 601(A), bit position 2 (R) may be an extension of the MOD R/M reg field 744 and may be used to modify the ModR/M reg field 744 when that field encodes a general purpose register, a 64-bit packed data register (e.g., an SSE register), or a control or debug register. R is ignored when Mod R/M byte 702 specifies other registers or defines an extended opcode.
[0077] Bit position 1 (X) may modify the SIB byte index field 754.
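As a non-limiting illustration of the address-generation formula given above (2^scale * index + base + displacement), the C helper below computes an effective address from a SIB byte's scale field and already-fetched register values. The function name and parameter layout are hypothetical; extraction of the index and base register numbers from the SIB byte is omitted for brevity.

#include <stdint.h>

/* Hypothetical effective-address computation for SIB-based addressing.
 * The scale field (752) occupies bits 7:6 of the SIB byte; shifting the
 * index register left by the scale implements 2^scale * index. */
static uint64_t sib_effective_address(uint8_t sib_byte,
                                      uint64_t index_reg,
                                      uint64_t base_reg,
                                      int64_t displacement) {
    unsigned scale = (sib_byte >> 6) & 0x3;
    return (index_reg << scale) + base_reg + (uint64_t)displacement;
}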
[0078] Bit position 0 (B) may modify the base in the Mod R/M R/M field 746 or the SIB byte base field 756; or it may modify the opcode register field used for accessing general purpose registers (e.g., general purpose registers 525).
[0079] FIGS. 9(A)-(D) illustrate embodiments of how the R, X, and B fields of the first prefix 601(A) are used. FIG. 9(A) illustrates R and B from the first prefix 601(A) being used to extend the reg field 744 and R/M field 746 of the MOD R/M byte 702 when the SIB byte 704 is not used for memory addressing. FIG. 9(B) illustrates R and B from the first prefix 601(A) being used to extend the reg field 744 and R/M field 746 of the MOD R/M byte 702 when the SIB byte 704 is not used (register-register addressing). FIG. 9(C) illustrates R, X, and B from the first prefix 601(A) being used to extend the reg field 744 of the MOD R/M byte 702 and the index field 754 and base field 756 when the SIB byte 704 is used for memory addressing. FIG. 9(D) illustrates B from the first prefix 601(A) being used to extend the reg field 744 of the MOD R/M byte 702 when a register is encoded in the opcode 603.
[0080] FIGS. 10(A)-(B) illustrate embodiments of a second prefix 601(B). In some embodiments, the second prefix 601(B) is an embodiment of a VEX prefix. The second prefix 601(B) encoding allows instructions to have more than two operands, and allows SIMD vector registers (e.g., vector/SIMD registers 510) to be longer than 64 bits (e.g., 128-bit and 256-bit). The use of the second prefix 601(B) provides for three-operand (or more) syntax. For example, previous two-operand instructions performed operations such as A = A + B, which overwrites a source operand. The use of the second prefix 601(B) enables operands to perform nondestructive operations such as A = B + C.
[0081] In some embodiments, the second prefix 601(B) comes in two forms - a two-byte form and a three-byte form. The two-byte second prefix 601(B) is used mainly for 128-bit, scalar, and some 256-bit instructions, while the three-byte second prefix 601(B) provides a compact replacement of the first prefix 601(A) and 3-byte opcode instructions.
[0082] FIG. 10(A) illustrates embodiments of a two-byte form of the second prefix 601(B). In one example, a format field 1001 (byte 0 1003) contains the value C5H. In one example, byte 1 1005 includes an "R" value in bit[7]. This value is the complement of the same value of the first prefix 601(A). Bit[2] is used to dictate the length (L) of the vector (where a value of 0 is a scalar or 128-bit vector and a value of 1 is a 256-bit vector). Bits[1:0] provide opcode extensionality equivalent to some legacy prefixes (e.g., 00 = no prefix, 01 = 66H, 10 = F3H, and 11 = F2H). Bits[6:3], shown as vvvv, may be used to: 1) encode the first source register operand, specified in inverted (1s complement) form and valid for instructions with 2 or more source operands; 2) encode the destination register operand, specified in 1s complement form for certain vector shifts; or 3) not encode any operand, in which case the field is reserved and should contain a certain value, such as 1111b.
[0083] Instructions that use this prefix may use the Mod R/M R/M field 746 to encode the instruction operand that references a memory address or encode either the destination register operand or a source register operand.
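As a non-limiting illustration of the two-byte form just described (format byte C5H, inverted R in bit 7, 1s-complement vvvv in bits 6:3, L in bit 2, and pp in bits 1:0), the C sketch below unpacks the payload byte. The structure and function names are hypothetical, and the sketch ignores everything past the prefix itself.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical decode of the two-byte VEX form of the second prefix. */
struct vex2_fields {
    bool    r;     /* extension bit, un-inverted here          */
    uint8_t vvvv;  /* source register specifier, un-inverted   */
    bool    l256;  /* 0 = scalar/128-bit vector, 1 = 256-bit   */
    uint8_t pp;    /* 00 = none, 01 = 66H, 10 = F3H, 11 = F2H  */
};

static bool decode_vex2(const uint8_t insn[2], struct vex2_fields *out) {
    if (insn[0] != 0xC5)
        return false;                 /* not a two-byte VEX prefix */
    uint8_t p = insn[1];
    out->r    = !((p >> 7) & 1);      /* stored as a complement    */
    out->vvvv = (~(p >> 3)) & 0xF;    /* stored in 1s complement   */
    out->l256 = (p >> 2) & 1;
    out->pp   = p & 0x3;
    return true;
}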
[0084] Instructions that use this prefix may use the Mod R/M reg field 744 to encode either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand.
[0085] For instruction syntax that supports four operands, vvvv, the Mod R/M R/M field 746, and the Mod R/M reg field 744 encode three of the four operands. Bits[7:4] of the immediate 609 are then used to encode the third source register operand.
[0086] FIG. 10(B) illustrates embodiments of a three-byte form of the second prefix 601(B). In one example, a format field 1011 (byte 0 1013) contains the value C4H. Byte 1 1015 includes in bits[7:5] "R," "X," and "B," which are the complements of the same values of the first prefix 601(A). Bits[4:0] of byte 1 1015 (shown as mmmmm) include content to encode, as needed, one or more implied leading opcode bytes. For example, 00001 implies a 0FH leading opcode, 00010 implies a 0F38H leading opcode, 00011 implies a leading 0F3AH opcode, etc.
[0087] Bit[7] of byte 2 1017 is used similarly to W of the first prefix 601(A), including helping to determine promotable operand sizes. Bit[2] is used to dictate the length (L) of the vector (where a value of 0 is a scalar or 128-bit vector and a value of 1 is a 256-bit vector). Bits[1:0] provide opcode extensionality equivalent to some legacy prefixes (e.g., 00 = no prefix, 01 = 66H, 10 = F3H, and 11 = F2H). Bits[6:3], shown as vvvv, may be used to: 1) encode the first source register operand, specified in inverted (1s complement) form and valid for instructions with 2 or more source operands; 2) encode the destination register operand, specified in 1s complement form for certain vector shifts; or 3) not encode any operand, in which case the field is reserved and should contain a certain value, such as 1111b.
[0088] Instructions that use this prefix may use the Mod R/M R/M field 746 to encode the instruction operand that references a memory address or encode either the destination register operand or a source register operand.
[0089] Instructions that use this prefix may use the Mod R/M reg field 744 to encode either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand.
[0090] For instruction syntax that supports four operands, vvvv, the Mod R/M R/M field 746, and the Mod R/M reg field 744 encode three of the four operands. Bits[7:4] of the immediate 609 are then used to encode the third source register operand.
[0091] FIG. 11 illustrates embodiments of a third prefix 601(C). In some embodiments, the third prefix 601(C) is an embodiment of an EVEX prefix. The third prefix 601(C) is a four-byte prefix.
[0092] The third prefix 601(C) can encode 32 vector registers (e.g., 128-bit, 256-bit, and 512-bit registers) in 64-bit mode. In some embodiments, instructions that utilize a writemask/opmask (see discussion of registers in a previous figure, such as FIG. 5) or predication utilize this prefix. Opmask registers allow for conditional processing or selection control. Opmask instructions, whose source/destination operands are opmask registers and treat the content of an opmask register as a single value, are encoded using the second prefix 601(B).
[0093] The third prefix 601(C) may encode functionality that is specific to instruction classes (e.g., a packed instruction with "load+op" semantic can support embedded broadcast functionality, a floating-point instruction with rounding semantic can support static rounding functionality, a floating-point instruction with non-rounding arithmetic semantic can support "suppress all exceptions" functionality, etc.).
[0094] The first byte of the third prefix 601(C) is a format field 1111 that has a value, in one example, of 62H. Subsequent bytes are referred to as payload bytes 1115-1119 and collectively form a 24-bit value of P[23:0], providing specific capability in the form of one or more fields (detailed herein).
[0095] In some embodiments, P[1:0] of payload byte 1119 are identical to the low two mmmmm bits. P[3:2] are reserved in some embodiments. Bit P[4] (R') allows access to the high 16 vector register set when combined with P[7] and the ModR/M reg field 744. P[6] can also provide access to a high 16 vector register when SIB-type addressing is not needed. P[7:5] consist of R, X, and B, which are operand specifier modifier bits for vector register, general purpose register, and memory addressing, and allow access to the next set of 8 registers beyond the low 8 registers when combined with the ModR/M register field 744 and ModR/M R/M field 746. P[9:8] provide opcode extensionality equivalent to some legacy prefixes (e.g., 00 = no prefix, 01 = 66H, 10 = F3H, and 11 = F2H). P[10] in some embodiments is a fixed value of 1. P[14:11], shown as vvvv, may be used to: 1) encode the first source register operand, specified in inverted (1s complement) form and valid for instructions with 2 or more source operands; 2) encode the destination register operand, specified in 1s complement form for certain vector shifts; or 3) not encode any operand, in which case the field is reserved and should contain a certain value, such as 1111b.
[0096] P[15] is similar to W of the first prefix 601(A) and second prefix 601(B) and may serve as an opcode extension bit or operand size promotion.
[0097] P[18:16] specify the index of a register in the opmask (writemask) registers (e.g., writemask/predicate registers 515). In one embodiment of the invention, the specific value aaa = 000 has a special behavior implying no opmask is used for the particular instruction (this may be implemented in a variety of ways including the use of an opmask hardwired to all ones or hardware that bypasses the masking hardware). When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the opmask field allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
While embodiments of the invention are described in which the opmask field's content selects one of a number of opmask registers that contains the opmask to be used (and thus the opmask field's content indirectly
identifies the masking to be performed), alternative embodiments instead or additionally allow the mask write field's content to directly specify the masking to be performed.
[0098] P[19] can be combined with P[14:11] to encode a second source vector register in a non-destructive source syntax which can access an upper 16 vector registers using P[19]. P[20] encodes multiple functionalities, which differ across different classes of instructions and can affect the meaning of the vector length/rounding control specifier field (P[22:21]). P[23] indicates support for merging-writemasking (e.g., when set to 0) or support for zeroing and merging-writemasking (e.g., when set to 1).
[0099] Exemplary embodiments of encoding of registers in instructions using the third prefix 601(C) are detailed in the following tables.
Table 1: 32-Register Support in 64-bit Mode
Table 2: Encoding Register Specifiers in 32-bit Mode
Table 3: Opmask Register Specifier Encoding
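As a non-limiting illustration of the merging and zeroing writemask semantics described above, the scalar C model below applies an opmask to a vector of 32-bit elements. The function name is hypothetical; real hardware applies the mask element-wise within a single instruction.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical scalar model of opmask application. For each element i:
 * if mask bit i is 1, the operation's result is written; otherwise the
 * destination element is preserved (merging) or cleared (zeroing). */
static void apply_opmask(uint32_t *dst, const uint32_t *result,
                         uint64_t mask, unsigned n, bool zeroing) {
    for (unsigned i = 0; i < n; i++) {
        if ((mask >> i) & 1)
            dst[i] = result[i];   /* element updated by the operation   */
        else if (zeroing)
            dst[i] = 0;           /* zeroing-masking clears the element */
        /* merging-masking: dst[i] is left unchanged */
    }
}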
[0100] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[0101] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[0102] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[0103] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[0104] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read only memories (CD-ROMs), compact disk rewritable’s (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0105] Accordingly, embodiments of the invention also include non-transitory, tangible machine- readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.Emulation (including binary translation, code morphing, etc.)[0106] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using
static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.[0107] FIG. 12 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 12 shows a program in a high level language 1202 may be compiled using a first ISA compiler 1204 to generate first ISA binary code 1206 that may be natively executed by a processor with at least one first instruction set core 1216. The processor with at least one first ISA instruction set core 1216 represents any processor that can perform substantially the same functions as an Intel® processor with at least one first ISA instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the first ISA instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one first ISA instruction set core, in order to achieve substantially the same result as a processor with at least one first ISA instruction set core. The first ISA compiler 1204 represents a compiler that is operable to generate first ISA binary code 1206 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one first ISA instruction set core 1216.[0108] Similarly, FIG. 12 shows the program in the high level language 1202 may be compiled using an alternative instruction set compiler 1208 to generate alternative instruction set binary code 1210 that may be natively executed by a processor without a first ISA instruction set core 1214. The instruction converter 1212 is used to convert the first ISA binary code 1206 into code that may be natively executed by the processor without a first ISA instruction set core 1214. This converted code is not likely to be the same as the alternative instruction set binary code 1210 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1212 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have a first ISA instruction set processor or core to execute the first ISA binary code 1206.APPARATUS AND METHOD FOR MANAGING UNSUPPORTED INSTRUCTION SET ARCHITECTURE (ISA) FEATURES IN A VIRTUALIZED ENVIRONMENT [0109] Deprecating ISA features is desirable for a variety of reasons including, but not limited to, reducing attack surfaces, simplifying the validation space, and reducing implementation efforts. However, removal of ISA features can be challenging given the fact that existing software which relies on these features may no longer work.
[0110] To mitigate this problem, embodiments of the invention provide virtualized support for a variety of deprecated ISA features. By way of example, and not limitation, in an x86 implementation, the deprecated features include the legacy interrupt descriptor table (IDT), the global descriptor tables (GDT), the local descriptor table (LDT), and the task-state segment (TSS). In one embodiment, a virtual machine (VM) is conFIG.d with operational modes in which certain deprecated instructions and instructions utilizing these deprecated features trigger a virtual machine exit (VMEXIT) and are emulated by the hypervisor. A small amount of legacy state may be maintained in the virtual machine control structure (VMCS) including, for example, the LDT, IDT, GDT, task register (TR), and the code segment (CS)/segment selector (SS).1. Example Virtualization Architectures[0111] Processor virtualization has been used to reduce the attack surface. However, using processor virtualization to reduce the attack surface often requires the execution of virtual machine (VM) operations including VM exit, VM context switch, and VM resume. These VM operations may be associated with expensive overheads. As the granularity of code becomes finer, frequent VM exit operations associated with context switches become the bottleneck to high performance computing.[0112] The kernel of an operating system (either an operating system of a host machine or a guest operating system of a virtual machine) may include one or more components that provide services such as, memory management, task scheduling, process management, EO management, drivers (e.g., file system and volume drivers, mass storage drivers, and bus drivers), and code integrity management services to software applications. The memory management service may use one or more page tables to provide memory address mappings between the guest virtual address space and the guest physical address space. The kernel may include components that are vulnerable to unauthorized modifications of the page tables themselves. Embodiments of the present disclosure add extensions to the virtual machine control structure (VMCS) that may be used to prevent the guest page table attack. A VMCS is a data structure (stored in the host physical address (HP A) space) containing operational states of the guest VM and the host machine. The operational states may include states of control registers, instruction pointers, and stack pointers. Data stored in VMCS may be organized into different groups including a guest-state area, a host state area and other fields relating to VM-execution control, VM-exit control, VM-entry control, and VM- exit information. Processor state (such as content stored in control registers, instruction pointer registers, and stack pointer registers of the processor) may be loaded from the guest-state area upon entering the VM and saved into the guest-state area upon exiting the VM, whereas the processor state may be loaded from the host-state area upon VM exits. Thus, the VM is associated with a current VMCS. The extensions may help secure the guest page table, thus securing the mapping between the guest virtual address space and the guest physical address space, and may allow fast switching (i.e., changing the corresponding page table) of guest memory address mappings without triggering VM exit operations, where the switching of guest memory address mappings includes updating the page tables for storing guest memory address mappings. 
A VM exit is a hardware-forced transition from the guest execution mode to the VMM
execution mode in response to detecting one of the triggering events (such as an attempt to execute a certain privileged instruction or to access a certain memory address).[0113] In some processor implementations, the base address (referred to as the root) of the page stable is stored in a control register (e.g., CR3) associated with the processor. For example, the CR3 may be used to store the physical address of a head entry in the page table. To secure the mapping between the guest virtual address space and the guest physical address space using the hardware-assisted virtualization features, the processor may:1). set, by VMM, write protection in enhanced page tables (EPT) setup (e.g., by setting the write protection flag of the pages in the page tables) on the portion of the guest physical address space used by the current context and setting a VMEXIT control flag in the VMCS. This step ensures non-root page tables in page table hierarchy are not subject to modification from any inadvertent modifications.2). set CR3 load VMEXIT control flag in VMCS. This step ensures that any inadvertent execution of a register instruction (e.g., mov cr3, <register>) by the guest cannot happen.[0114] Both of above steps ensure that the guest virtual to guest physical addressing mapping cannot be modified without VMM's intervention. Both of these steps, however, trigger the VMEXIT operation and thus may introduce performance.[0115] Setting of the CR3 load VMEXIT control flag forces the initiation of a VM exit operation (VMEXIT) prior to loading the root of the guest page table from the CR3 control register. After loading the root of the page table from the CR3 control register, the processor may execute VM entry operation (VMENTRY or VMRESUME) to resume execution of the virtual machine. This approach, however, increases the overall latency for switching between different memory address mappings by adding the round-trip time of VMEXIT and VMENTRY for each CR3 control register load by the guest operating system.[0116] Embodiments of the present disclosure provide a virtual machine (VM) guest control mode (indicated by a VMX_GUEST_CR3_LOAD_CONTROL_BIT in the VMCS). Under the VM guest control mode (e.g., when VMX_GUEST_CR3_LOAD_CONTROL_ BIT is set), a guest operating system may request a switch between memory address mappings without triggering the VM exit operations, if the guest operating system can provide an index value and a root value that match the corresponding root value retrieved by the VMM. Without the VM guest control mode, a request by the guest operating system to switch the memory address mappings would trigger a VM exit operation. Further, the VMCS may be expanded to include a control field to store a reference (e.g., an address pointer) linked to a host physical memory page in the physical address space. In one embodiment, the host physical memory page may be aligned by a page boundary in the physical address space. The host memory page may contain an array data structure (VMX_CR3_TARGET_ARRAY, referred to as the CR3 target array). The CR3 target array may contain entries, where each entry may be identified by an index value and include a certain number of bits (e.g., 64 bits). The virtual machine monitor may use an entry of the CR3 target array to store the root of a page table associated with a context (or a process) of the virtual machine. A context is a set of data used by a task (e.g., a process or a thread) saved in registers (or memory) that
allow the task to resume after an interruption. The context of a VM is the set of data that allow the VM to resume from an interruption. Each time a guest operating system needs to switch the memory mapping between the guest virtual address space and the guest physical address space (e.g., due to a context switch), the guest operating system may provide both the index value and the root of the page table to the virtual machine monitor. The virtual machine monitor may retrieve the root value of the page table stored in the CR3 target array entry identified by the index value and compare the retrieved root value with the root value provided by the guest operating system. If the two root values do not match, the virtual machine monitor may trigger the VMEXIT operation with exit reason being 'control-register access exit (Oxlc)' and report usual exit qualification of access to the CR3 register (as currently defined in existing architectures). Because this feature is mutually exclusive with existing VMEXIT control setting of CR3 exiting, the existing exit reason and exit qualification can be used without modifications.[0117] FIG. 13 illustrates a system 1300 for efficient switches of memory address mapping according to an embodiment of the present disclosure. A processor may change from executing a first task (a first process) to a second task (a second process). The change of tasks causes a switch of the corresponding contexts. The system 1300 may include a host 1302 such as, for example, a server computer or any suitable computing devices that support virtualization. Host 1302 may further include a processor 1304 and a memory 1306. In one embodiment, processor 1304 and memory 1306 may be implemented on a system-on-a-chip (SoC) 1307.[0118] The processor 1304 may be a hardware processor such as a central processing unit (CPU) that includes one or more processing cores (not shown) that may be conFIG.d to execute system software and user application software. The memory 1306 may be a suitable type of storage device to store instructions of software applications and the data associated with the software applications. Memory 1306 may be addressed according to memory addresses defined in a host physical address (HP A) space 1318. [0119] Processor 1304 may further include an execution unit 1308 to execute instructions and a register 1310 to store data. In one embodiment, execution unit 1308 of processor 1304 may include a logic circuit 1309 implemented to support execution of a set of virtualization instructions (virtual- machine extension (VMX)) to provide support for one or more virtualization environments ported on host 1302. The VMX may provide processor- level support for virtual machines. In one embodiment, the VMX may refer to hardware features corresponding to instructions to generate a virtual machine monitor (VMM) 1320 that is a host program that allows one or more execution environments (or virtual machines (VMs)) to run on the host 1302. Referring to FIG. 13, VMM 1320 may create and support the operations of virtual machines (VMs) 1322. Alternatively, execution unit 1308 may execute VMX instructions to directly generate VMs 1322 without the need for VMM 1320.[0120] VMs 1322 may behave like a regular computing device including a virtual CPU (vCPU) 1329. The vCPU 1329 associated with VMs 1322 may execute a respective guest operating system (guest OS) 1324. Guest applications 1328 may run within the environments of guest operating systems 1324. 
Guest operating systems 1328 (including a kernel) may include a number of guest-OS components (or
kernel components) to provide a number of services to guest applications 1328 including memory address management.[0121] VMs 1322 may access memory 1306 through a series of memory space mappings. Each VM 1322 may construct a guest virtual address (GVA) space 1326 that may be mapped to a corresponding guest physical address (GPA) space 1331, for the VM 1322. A control register (e.g., CR3) 1330 associated with the processor 1304 may contain the base address of the page directory that may be used to calculate a mapping between the GVA space 1326 and the corresponding GPA space 1331 for the VM 1322. In one implementation, control register 1330 can be a virtualized control register that corresponds to a physical control register associated with host processor 1304. The GPA space 1331 of the VM 1322 may be further mapped to the host physical address (HP A) space 1381 of the host system 1302. The mapping from the GPA space 1331 of a VM 1322 to the HPA space of the host may be translated via the extended page table (EPT) associated with the current VMCS running on the processor 1304. In some implementations, the GPA space 1331 and the HPA space 1318 may be the same, thus GVA space 1326 may be directly mapped to HPA space 1318.[0122] VMs can be created and removed from host 1302 by executing appropriate VMX instructions. Execution unit 1308 of processor 1304 via logic circuit 1309 may execute VMX instructions to implement life cycles of VMM software and associated VMs. FIG. 2 illustrates a life cycle of VMM 1320 and the associated VMs 1322 according to an embodiment of the present disclosure. As shown in FIG. 2, a host software application executing by execution unit 1308 on processor 1304 may enter VMX operations by executing a VMX start instruction (e.g., VMXON) to start VMM 1320. Under the VMX operations, VMM 1320 can then enter VMs 1322 by executing VM entry instructions (e.g.,VMLAUNCH or VMRESUME). End users may use created VMs to run guest applications. A guest application may be associated with a first context (CO) that may be switched to a second context (Cl) through a context switch process. After the use of VMs, VMM 1320 can regain control using VM exit instructions that would stop the VMs.[0123] Thus, VMX operations are divided into root operations under which VMM runs and non root operations under which the guest software (e.g., VMs and guest OS) runs. Therefore, there are two kinds of VMX transitions: transitions into VMX non-root operation (VM entries) from root operations and transitions from VMX non-root operation to VMX root operation (VM exits).[0124] Processor 1304 of the host 1302 may control non-root operation and VMX transitions using virtual machine control structures (VMCSs). A VMCS is a data structure (stored in the HPA space) containing operational states of the guest VM and the host machine. The operational states may include states of control registers (e.g., CR3), instruction pointers, and stack pointers. VMM 1320 may manage access to the VMCSs using a VMCS pointer (one per virtual processor or logic processor) stored in register 1310. VMM 1320 may conFIG. a VMCS using VMX operations (e.g., VMREAD, VMWRITE, and VMCLEAR). A VMCS is a data structure that includes data fields to store parameters associated with a VM context (CO, Cl) for VMs supported by host 1302. Thus, VM 1322 may run under the first VM context (CO) as the active context based on a first set of parameters stored in VMCS, and then switch to
the second VM context (Cl) as the active context based on a second set of parameters stored in the VMCS. VMM 1320 may have access via the HPA to a number of active VMCSs stored in memory 1306 as shown in FIG. 13. At a given time, one VMCS is current and is used to specify the VM context for a currently -running VM with respect to one virtual processor.[0125] In one embodiment, as shown in FIG. 13, memory 1306 may include one or more regions (referred to as VMCS regions) to store active VMCSs 1312. For example, each VMCS region may contain parameters associated with one VMCS that can be used to specify a VM context. In response to receiving a request for VM entry, VMM 1320 may determine a current VMCS based on the request and use the current VMCS to specify the VM context. Processor 1304 may include or be associated with a register 1310 to store the VMCS pointer to the current VMCS (e.g., as shown in FIG. 13, VMCS 1312). Register 1310 may store a reference (e.g., a memory address in the HPA space 1318) to the location where the current VMCS 1312 is stored.[0126] Parameter values stored in VMCS 1312 may be organized into different groups including a guest-state area, a host state area and other fields relating to VM-execution control, VM-exit control, VM- entry control, and VM-exit information. Processor state (such as content stored in control registers, instruction pointer registers, and stack pointer registers of the processor) may be loaded from the guest- state area upon entering the VM and saved into the guest-state area upon exiting the VM, whereas the processor state may be loaded from the host-state area upon VM exits. Thus, the VM is associated with a current VMCS.[0127] In one embodiment, the guest-state area of VMCSs 1312 may further include fields to store processor state that is loaded from these fields on every VM entry of the corresponding VM and saved into these fields on every VM exit. These fields may store, but not limited to, content of control registers (e.g., CR3) that may be used to calculate a mapping from the guest virtual address (GVA) to the guest physical address (GPA) of the VM, content of instruction pointer registers (RIP), and content of stack pointer registers (RSP). These fields may optionally include a field to store a pointer to the extended page table (EPTP) that may be used to calculate a mapping from the guest physical address (GPA) space to host physical address (HPA) space of the VM. The host-state area may include similar fields to store processor state upon VM exits.[0128] Guest operating systems (including kernels) 1324 may provide different services to guest applications 1328 and manage different processes associated with these applications 1328. Each process may be associated with a corresponding context (CO, Cl etc.) specified in the GVA space 1326. In some implementations, vCPU 1329 may execute one process associated with a current context (in an active state) while other contexts are in an idle state. One or more pages in a page table may contain the memory address mapping to translate the addresses associated with a current context in the GVA space 1326 to the GPA space 1331. The guest OS 1324 may use a base address (or root) referencing to the one or more pages in the page table used to determine the current memory address mapping. In some implementations, the guest OS 1324 may store the root in one of the CR3 control registers 1330. 
When guest OS 1324 switches from the current process to another process, guest OS 1324 may need to update pages in the
page table used to provide the current memory address mapping. For example, guest OS 1324 may need to load, from one of the CR3 control registers, a new root for the pages in the page table to provide the memory address mapping for the newly activated process.[0129] As discussed above, to prevent malicious memory address attack by a guest application, the guest OS 1324 may write-protect memory pages that store the guest page tables. The write-protect may be achieved by setting the write prevention bits associated with these pages. In some implementations, to ensure the security of the root stored in the CR3 control register, processor 1304 may further execute a VM exit operation (VMEXIT) prior to loading the root from the CR3 control register and execute a VM entry instruction (VMENTRY) after loading the root from the CR3 control register. Therefore, current approaches to reducing the attack surface require frequent switches of the entire VMCS (i.e., VM exits) which can be computationally expensive.[0130] To reduce the overhead associated with executing the VMEXIT and VMENTRY associated with loading a CR3 control register, embodiments of the present disclosure provide a CR3 load control mode under which the VMM 1320 may determine whether the content of the CR3 control registers can be trusted. If VMM 1320 determines that the CR3 control registers can be trusted (e.g., it has not been tampered with by the guest application), VMM 1320 may allow the guest OS 1324 load the root value associated with the pages in the page table without triggering the VM exit instruction, where the root value may reference the next memory address mapping associated with a new context.[0131] In one embodiment, VMCS 1312 may include a CR3 load control bit (a bit flag) to indicate whether the VM guest control mode is enabled. When the CR3 load control bit is set "1", VMM 1320 enters into the VM guest control mode. VMCS 1312 may further contain a CR3 control field 1314 to store a reference to a CR3 target array 1316. CR3 target array 1316 may be stored in the host memory that can be referenced by a host physical address in the HPA space 1318. Since CR3 target array 1316 is stored and accessed in the HPA space 1318, it is not directly accessible by the guest OS 1324. Instead, the guest OS 1324 needs to employ VMM 1320 and/or host operating system to access the HPA space 1318. Thus, VMM 1320 may store trusted values in CR3 target array 1316. In one embodiment, VMM 1320 may store CR3 target array 1316 in a host physical memory page with the reference to the CR3 target array 1316 aligned with a page boundary. Thus, CR3 target array 1316 can be referenced according to a page number in HPA space 1318.[0132] In one embodiment, entries of the CRE target array 1316 may be referenced by the respective index values. Each entry, identified by a unique index value, may include a certain number of bits (e.g., 64 bits) to store flags and a CR3 root. FIG. 3 illustrates a CR3 target array according to an embodiment of the present disclosure. As shown in FIG. 3, the host physical space 300 of a memory may contain a virtual machine control structure (VMCS) 302 and a page-aligned CR3 target array 304. VMCS 302 may contain a control field 306 to store a reference to the CR3 target array 304. CR3 target array may further include entries that are identified by index numbers.[0133] For example, as shown in FIG. 3, CR3 target array 304 may include 64-bit entries 308A, 308B, . . . , 308N that are identified by Index_l, Index_2, . . . 
, Index_N. Each entry may include a first
bit (V, at bit position 63) to indicate whether the entry is a valid entry. For example, if V is set to "1," the entry is valid; otherwise, the entry is invalid. The entry may include a second bit (A, at bit position 62) that is set to "1" by processor 1302 whenever processor 1302 switches the memory address mappings using the index value and the root value stored in an entry 308A, 308B, . . . , 308N without triggering the VM exit operation. The A bit is set after it is determined that the root values match. For example, A set to " 1 " means that the requested CR3 value and the CR3 value stored in the entry match, and the CR3 load has proceeded. The entry may further include reserved bits (bits 52-61) that should be clear. The entry may further include a CR3 field (bits 0-51) to store a CR3 value. The CR3 value stored in the CR3 field is trusted by the VMM and used to match with the CR3 value provided by the guest OS. In one embodiment, the CR3 target array may include 512 entries.[0134] In one embodiment, the VMM may use the index values (Index_l, Index_2, . . . , Index_N) and the CR3 values stored in the entries 308 A, 308B, . . . , 308N to verify the integrity of guest application. Referring to FIG. 13, when a guest OS 1324 creates a new GVA space (e.g., in conjunction with creating a new process), guest OS 1324 may issue a hypercall to VMM 1320 to request VMM 1320 to store the root of the page table that stores the memory address mapping between the GVA space to the GPA space. The hypercall is a software trap issued by the guest OS 1324 to VMM 1320 to request privileged operations such as, updating the page table. The root value may be stored in a CR3 control register 1330 associated with the VM 1322. Responsive receiving the hypercall including the status indicating that VMM 1320 has successfully stored the new value in the CR3 target array and returned an index value to the guest OS, the guest OS may make the mov CR3 <value> instruction without triggering the VM exit operation. Prior to receiving the hypercall, the mov CR3 <value> issued by the guest OS triggers the VM exit operation. Responsive to determining that the CR3 control bit is set to "1," VMM 1320 may store the received root value in an entry in the CR3 target array 1316, where the entry is identified by an index value. Responsive to storing the CR3 value in the entry (and setting the V bit to valid), VMM 1320 may return the index value to guest OS 1324. Guest OS 1324 may store the index value in a data structure private to the VM.[0135] When guest OS 1324 needs to switch the GVA space (by switching the CR3 control register that stores the root for the mapping between the GVA space and GPA space), guest OS 1324 may need to provide the root value stored in CR3 control register and the index value to the VMM 1320 for verification. VMM 1320 may compare the root value received from the guest OS 1324 with the root value stored in the entry identified by the received index value. If they match, VMM 1320 may allow the GVA space switch (by switching the CR3 control register) without triggering the VM exit operation, thus allowing a secure, fast switch. 
In one embodiment, processor 1304 may set the A bit (referred to as the access status bit) to "1" to indicate that processor 1304 has performed CR3 switch without the VM exit operation by making sure that the root value stored in the entry is matched to a root value provided by the guest OS 1324.[0136] When guest OS 1324 deletes a GVA space (or a corresponding process), guest OS 1324 may destruct pages that store the memory address mapping between the GVA space and the GPA space. Guest
OS 1324 may further make another hypercall (as defined above) to VMM 1320 to inform VMM 1320 of the destruction of the GVA space associated with an index value. VMM 1320 may remove the entry identified by the index value. In one embodiment, VMM 1320 may set the V bit to "0."[0137] In one embodiment, the access status bit (A bit) of each entry in CR3 target array 1316 may be used to indicate the time that the entry has been in CR3 target array 1316.[0138] Thus, the A bit is set whenever processor 1304 determines that the root value in the request matches the root value stored in CR3 target array 1316. In one embodiment, VMM 1320 may be associated with a private data structure to store an age count ("AgeCount") associated with a corresponding entry in CR3 target array 1316. VMM 1320 may periodically scan all entries in CR3 target array. If VMM 1320 determines that the A bit of an entry is set (meaning that processor 1304 recently switched to the memory address space), VMM 1320 may increment the AgeCount associated with the corresponding entry. If VMM 1320 determines that the A bit of an entry is cleared (meaning that processor 1304 recently switch off the memory address space), VMM 1320 may decrement the AgeCount associated with the corresponding entry. After each scan of the CR3 target array 1316, VMM 1320 may clear all A bits so that VMM 1320 may determine if the A bit has been set since the last scan. Thus, the access status bit may be used to implement a Least Recently Used (LRU) algorithm. In the event that all 512 entries in the CR3 target array have been used up, the LRU algorithm may select the least recently used entry to evict and make space for a new entry.[0139] In another embodiment, an existing instruction may be modified to achieve the VM exit free guest memory address space switching. For example, certain bits (e.g., bit 52-62) of the operand of the register mov CR3 <register operand> instruction may be used to store the index value that identifies a corresponding entry in the target array. Thus, responsive to executing mov CR3 <register operand>, the processor may first determine if the CR3 load control bit stored in VMCS is set. Responsive to determining that the CR3 load control bit is not set, the processor may initiate the VM exit operation. Responsive to determining that the CR3 load control bit is set, the processor may retrieve the index value from the operand (e.g., bits 52-62), and retrieve, based on the index value, the root value stored in a corresponding entry of the target array. The retrieved target value may be compared to a root value encoded in the operand to determine whether the guest memory address mapping can be switched without initiating the VM exit operation. In one embodiment, the modified mov CR3 <register operand> instruction may be executed independent of whether the VM guest control mode is set or not. In another embodiment, the modified mov CR3 <register operand> instruction may be executed only when the VM guest control mode is set.[0140] In another embodiment, a new virtualization support instruction may be added to VMX to the VM exit free guest memory address space switching. The new virtualization instruction may include a first reference to a register for storing the index value and a second reference to the CR3 control register. The new virtualization instruction may be enabled when the CR3 load control bit is set; the new virtualization instruction may be disabled when the CR3 load control bit is not set. The guest OS may
trigger the new virtualization instruction to initiate the VM exit free guest memory address space switching.2. Managing Unsupported ISA Features With Virtualization [0141] In one embodiment, for all legacy instructions that are not supported by the modes of the new architecture (e.g., virtual machine extensions (VMX)), microcode is executed to handle the common- case legacy behavior. If the legacy behavior requires complex system interaction, such as the examples provided herein, a VMEXIT is performed and the hypervisor emulates the complex behavior. Extremely infrequent cases, such as real and protected mode execution that is typically required for boot, can be interpreted by the hypervisor at an acceptable overhead.[0142] One embodiment is illustrated in FIG. 14 in which virtualization techniques are used to emulate legacy behavior. In particular, in response to detecting legacy instruction, the virtual machine 1422 executes a VMEXIT 1426, 1427 in accordance with the following options:Option 1: This option is implemented with no modifications to existing microarchitectures, but provides lower performance. In response to detecting a deprecated instruction or access to a deprecated state, an Invalid/Undefined Opcode exception (#UD) triggers a first type of VMEXIT 1426. A deprecated instruction processor 1425 detects the first type of VMEXIT 1426, which may require complex system interactions, and an emulator 1435 emulates the complex behavior. While this approach is limited in performance, it comes at no cost since no architectural changes are required to the SoC 1407 microarchitecture.Option 2: In one embodiment, a second type of the VMEXIT instruction 1427 is executed for certain legacy instructions which provides additional information for instructions combined with partial hardware support for the legacy architecture state. In this embodiment, the deprecated instruction processor 1425 of the hypervisor 1420 relies on the microarchitectural components 1435 provided by the SoC 1407 to efficiently process these types of legacy instructions. In one implementation, the deprecated instruction processor 1425 executes one or more privileged instructions which access the microarchitectural components 1435 using parameters indicated by the VMEXIT 1427, and return results to the virtual machine 1422 (which may then return the results to the guest OS 1324 which updates the relevant execution context (e.g., CO, Cl)). Alternatively, or in addition, the hypervisor 1420 validates the VMEXIT 1427, which is executed directly by the microarchitectural components 1425 and returns the results directly to the virtual machine 1422.[0143] In both types of VMEXIT 1426, 1427, the deprecated instruction processor 1425 of the hypervisor 1420 emulates deprecated instructions and operations related to deprecated state and returns execution to the VM 1422. If instructions requiring these VMEXITs 1426, 1427 are infrequent, then they will not result in poor performance of the legacy VM 1422. Non-deprecated instructions and those not interacting with deprecated state will operate at native performance regardless of their frequency.[0144] In order to reduce complexity and increase performance of the hypervisor 1420, in one embodiment, a new type of exception is delivered to the hypervisor 1420 when a legacy instruction is executed. Instead of delivering a generic “invalid opcode” exception, a more specific exception is
delivered which provides the deprecated instruction processor 1425 a “fast-path” for handling legacy instructions, instead of considering all possibilities that could generate the #UD exception.[0145] To demonstrate the viability of this approach, several legacy 64b workloads were profiled, running on different operating systems. Using a full-system simulator, occurrences of instructions that are not supported in the new microarchitecture were counted in non-virtualized mode (e.g., legacy OS instructions).[0146] FIG. 15 illustrates the results, showing the frequency of relevant instructions in 64-bit long mode (per-thousand instructions (“PKI”)). A dash means the instruction was never observed (in some cases these are illegal in long mode). These results indicate that most of the depreciated instructions are rarely executed, further validating the trap-and-emulate approach described herein. These profiling experiments show that the majority of the legacy state interactions and instructions occur during ring transitions.[0147] The two tables 1601-1602 in FIG.s 16A-B below enumerate the state modified on the ringO- to-ring3 transition and ring3-to-ring0 transitions, respectively. These tables cover 64-bit long-mode operation. The first column 1610 of table 1601 indicates the type of transition operation for ring 0-ring 3 transition and the first column 1611 of table 1602 indicates the type of return operation for the ring 0-to- ring 3 transition.[0148] The additional columns in tables 1601-1602 denote architectural state that is modified on each respective transition operation, with the cell values denoting where the modified value comes from. While the SYSCALL and SYSRET interact with legacy state, the behavior interacts with far less legacy state than a call-gate, an exception, an interrupt, or the IRET instruction. SYSCALL and SYSRET load segmentation state values from MSRs are straightforward to handle. Call-gates occur infrequently and can be handled with a trap-and-emulation approach. IRETQ instructions stand-out as they are executed much more frequently. IRETQ presents a much more challenging case as it potentially interacts with a large amount of segmentation state and is currently implemented with a complicated microcode flow on existing machines.[0149] As IRET and legacy interrupt delivery are commonly executed in legacy workloads, an implementation of both needs to be performant. In one embodiment, it is processed with “enlightened microcode” in VMX mode. In this approach, common-case behavior executes in microcode, using a minimal amount of legacy state as required. Uncommon case behavior that requires extensive interaction with segmentation-state, microcode causes a VMEXIT. Embodiments of the invention may include new mechanisms for system architecture, maintaining a relatively small amount of legacy register support. [0150] FIG. 17 illustrates examples of this legacy state support included in one embodiment. The illustrated state includes the interrupt descriptor table register (IDTR) 1705 which stores a pointer to the interrupt descriptor table 1740, a global descriptor table (GDT) register 1735 for storing a pointer to a GDT 1720, a segment selector 1730 storing a pointer to a segment descriptor in the GDT 1720, and a task register 1725 for storing a pointer to a task state segment (TSS) entry in the GDT 1720. In addition, a local descriptor table register (LDTR) 1710 stores a pointer to a local descriptor table (LDT) 1715 and a
call-gate segment selector 1750 includes a pointer to a call gate entry in the LDT 1715. The IDT 1740, GDT 1720, and LDT 1715 point to various data structures including code, data, or stack segment 1757, the Task State Segment (TSS) 1756, interrupt handlers 1760A-B, exception handlers 1760C, and protected procedures 1760D.[0151] Based on this state, the situations where a VMEXIT would be required to support legacy behavior to perform legacy interrupt delivery and to execute the IRETQ instruction have been evaluated. The program code shown in FIGS. 18A-I reflects one embodiment which emulates these operations while relying on a small number of legacy registers only in virtualization mode. These registers may be implemented as MSRs, loaded from a fixed offset in the VMCS, or directly supported in logic. The other operations in the illustrated program code flow rely on conventional computation microoperations (e.g. loads, adds, etc) that are executed directly by the SoC micro architecture 1407, in one embodiment.[0152] One implementation also includes counters to the virtualization implementation (e.g., QEMU in one embodiment), to profile existing legacy operating systems. This provides information on the frequency of exits due to complex legacy behavior in event delivery and IRETQ. In the example code sequence shown below, within the function “do_interrupt64” for the interrupt flow and helper_iret_protected/ helper_ret_protected for IRETQ, the counters “int_exit_cases” and “iret_exit_cases” record specific exit conditions. These counters correspond to situations where the SoC microarchitecture needs to perform a VMEXIT to emulate complex behavior.[0153] Today, booting a processor (e.g., an x86 CPU) generally requires the use of real and protected modes, which make heavy use of features that are targets for deprecation, including features that increase the attack surface exposed by the ISA, require complex validation, and generally make it challenging to introduce new features while at the same time providing little value. In one embodiment of the invention, during these states (e.g., real and protected mode code executed during boot) the deprecated instruction processor 1425 in the hypervisor 1420 emulates/interprets this small number of instructions as needed (e.g., using an instruction interpreter or similar technology).[0154] In the foregoing specification, the embodiments of invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.[0155] EXAMPLES[0156] The following are example implementations of different embodiments of the invention. [0157] Example 1. A processor comprising: a plurality of cores, each core comprising a current microarchitecture to execute instructions and process data, the current micro architecture including hardware support for virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; at least one core of the plurality of cores comprising: a decoder to decode the
instructions, the decoder to specify one or more microoperations corresponding to each of the instructions; execution circuitry to execute the corresponding microoperations; wherein either a first type or a second type of virtual machine exit is to be performed responsive to detecting a deprecated instruction in a first virtual machine, wherein responsive to the first type of virtual machine exit, the hypervisor is to perform a first emulation of the prior micro architecture without reliance on the partial hardware support, and wherein responsive to the second type of virtual machine exit, the hypervisor is to perform a second emulation of the prior micro architecture relying on the partial hardware support.[0158] Example 2. The processor of example 1 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry.[0159] Example 3. The processor of example 1 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception.[0160] Example 4. The processor of example 2 wherein the hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations.[0161] Example 5. The processor of example 4 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations.[0162] Example 6. The processor of example 5 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine. [0163] Example 7. The processor of example 3 wherein the first type of exception comprises an invalid or undefined opcode exception.[0164] Example 8. The processor of example 7 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction.[0165] Example 9. A method comprising: executing instructions on at least one core of a plurality of cores, each having a current microarchitecture including hardware support for virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; performing either a first type or a second type of virtual machine exit responsive to detecting a deprecated instruction in a first virtual machine, performing by the hypervisor, responsive to the first type of virtual machine exit, a first emulation of the prior micro architecture without reliance on the partial hardware support, and performing by the hypervisor, responsive to the first type of virtual machine exit, a second emulation of the prior microarchitecture relying on the partial hardware support.[0166] Example 10. The method of example 9 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry.
[0167] Example 11. The method of example 9 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception.[0168] Example 12. The method of example 10 wherein the hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations.[0169] Example 13. The method of example 12 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations.[0170] Example 14. The method of example 13 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine. [0171] Example 15. The method of example 11 wherein the first type of exception comprises an invalid or undefined opcode exception.[0172] Example 16. The method of example 15 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction.[0173] Example 17. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of: executing instructions on at least one core of a plurality of cores, each having a current microarchitecture including hardware support for virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the micro architecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; performing either a first type or a second type of virtual machine exit responsive to detecting a deprecated instruction in a first virtual machine, performing by the hypervisor, responsive to the first type of virtual machine exit, a first emulation of the prior micro architecture without reliance on the partial hardware support, and performing by the hypervisor, responsive to the first type of virtual machine exit, a second emulation of the prior micro architecture relying on the partial hardware support.[0174] Example 18. The machine-readable medium of example 17 wherein the partial hardware support comprises microcode including one or more emulation microoperations to execute the deprecated instruction on the execution circuitry.[0175] Example 19. The machine-readable medium of example 17 wherein the first type of virtual machine exit comprises or is to be triggered by a first type of exception.[0176] Example 20. The machine-readable medium of example 18 wherein the hardware support comprises one or more microarchitectural components including one or more registers for storing state values and/or execution circuits for executing the emulation microoperations.[0177] Example 21. The machine-readable medium of example 20 wherein the second type of virtual machine exit is to specify parameters associated with the deprecated instruction, the parameters to be used to execute the emulation microoperations.
[0178] Example 22. The machine-readable medium of example 21 wherein upon completion of the first emulation or the second emulation, the hypervisor is to provide results and return control to the first virtual machine.[0179] Example 23. The machine-readable medium of example 19 wherein the first type of exception comprises an invalid or undefined opcode exception.[0180] Example 24. The machine-readable medium of example 23 wherein the second type of virtual machine exit comprises or is to be triggered by a second type of exception, the second type of exception to specify parameters associated with the deprecated instruction.[0181] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general- purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.[0182] As described herein, instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) conFIG.d to perform certain operations or having a predetermined functionality or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the FIG.s can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine -readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine- readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the
subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow. |
Reducing SIMD fragmentation for SIMD execution widths of 32 or even 64 channels in a single hardware thread leads to better EU utilization. Increasing SIMD execution widths to 32 or 64 channels per thread, enables handling more vertices, patches, primitives and triangles per EU hardware thread. Modified 3D pipeline shader payloads can handle multiple patches in case of domain shaders or multiple primitives when primitive object instance count is greater than one in the case of geometry shaders and multiple triangles in case of pixel shaders. |
What is claimed is:1 . A method comprising:packing one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread. 2. The method of claim 1 including modifying the pipeline domain shader payload to handle multiple patches. 3. The method of claim 2 including:packing domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane; andstoring an attribute for each domain point in its own partition in a register space addressable by a programmed thread. 4. The method of claim 1 including modifying the pipeline geometry shader payload to handle multiple primitives when primitive objects instance count is greater than one. 5. The method of claim 4 including replicating primitive unified return buffer handles into lanes containing an instance-ID of the primitive. 6. The method of claim 1 including modifying the pipeline pixel shader payload to handle multiple triangles. 7. The method of claim 6 including using barycentric parameters for attribute interpolation. 8. The method of claim 7 including delivering a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute.9. The method of claim 1 including enabling attribute deltas from multiple triangles to be included in the same pixel shader payload. 10. The method of claim 1 including packing for an SIMD width of 32 channels per thread or higher. 1 1 . One or more non-transitory computer readable media storing instructions to perform a sequence comprising:packing one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread. 12. The media of claim 1 1 , further storing instructions to perform a sequence including modifying the pipeline domain shader payload to handle multiple patches. 13. The media of claim 12, further storing instructions to perform a sequence including:packing domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane; andstoring an attribute for each domain point in its own partition in a register space addressable by a programmed thread. 14. The media of claim 1 1 , further storing instructions to perform a sequence including modifying the pipeline geometry shader payload to handle multiple primitives when primitive objects instance count is greater than one. 15. The media of claim 14, further storing instructions to perform a sequence including replicating primitive unified return buffer handles into lanes containing an instance-ID of the primitive.16. The media of claim 1 1 , further storing instructions to perform a sequence including modifying the pipeline pixel shader payload to handle multiple triangles. 17. The media of claim 16, further storing instructions to perform a sequence including using barycentric parameters for attribute interpolation. 18. The media of claim 17, further storing instructions to perform a sequence including delivering a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute. 19. The media of claim 1 1 , further storing instructions to perform a sequence including enabling attribute deltas from multiple triangles to be included in the same pixel shader payload. 20. 
20. The media of claim 11, further storing instructions to perform a sequence including packing for an SIMD width of 32 channels per thread or higher.
21. An apparatus comprising: a processor to pack one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread; and a memory coupled to said processor.
22. The apparatus of claim 21, said processor to modify the pipeline domain shader payload to handle multiple patches.
23. The apparatus of claim 22, said processor to pack domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane, and to store an attribute for each domain point in its own partition in a register space addressable by a programmed thread.
24. The apparatus of claim 21, said processor to modify the pipeline geometry shader payload to handle multiple primitives when primitive object instance count is greater than one.
25. The apparatus of claim 24, said processor to replicate primitive unified return buffer handles into lanes containing an instance-ID of the primitive.
26. The apparatus of claim 21, said processor to modify the pipeline pixel shader payload to handle multiple triangles.
27. The apparatus of claim 26, said processor to use barycentric parameters for attribute interpolation.
28. The apparatus of claim 27, said processor to deliver a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute.
29. The apparatus of claim 21, said processor to enable attribute deltas from multiple triangles to be included in the same pixel shader payload.
30. The apparatus of claim 21, said processor to pack for an SIMD width of 32 channels per thread or higher. |
INCREASING THREAD PAYLOAD FOR 3D PIPELINE WITH WIDER SIMD EXECUTION WIDTH
Background
[0001] Within the limit of register space, a compiler tries to map as many channels (i.e. pixels) (up to 32) as possible to one execution unit (EU) hardware thread. Every EU has its own thread control whose functionality starts when a thread dispatcher (TDL) loads a thread into the EU. The thread control helps execute threads independently without synchronization with other EUs. Thread control takes a large portion of EU gate area.
Brief Description Of The Drawings
[0002] Some embodiments are described with respect to the following figures:
Figure 1 is a schematic depiction of a graphics pipeline in accordance with one embodiment;
Figure 2A is a depiction of a triangle with 3 vertices v0, v1 and v2 and a point P at (x, y) in the triangle;
Figure 2B is a depiction of the triangle's barycentric (α, β, γ) coordinates at point P, where the barycentric coordinates at vertices v0, v1 and v2 are (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively;
Figure 2C is a depiction of attribute Ap at pixel P and attributes A0, A1, A2 at input vertex locations of the triangle;
Figure 3 is a flow chart for one embodiment;
Figure 4 is a block diagram of a processing system according to one embodiment;
Figure 5 is a block diagram of a processor according to one embodiment;
Figure 6 is a block diagram of a graphics processor according to one embodiment;
Figure 7 is a block diagram of a graphics processing engine according to one embodiment;
Figure 8 is a block diagram of another embodiment of a graphics processor;
Figure 9 is a depiction of thread execution logic according to one embodiment;
Figure 10 is a block diagram of a graphics processor instruction format according to some embodiments;
Figure 11 is a block diagram of another embodiment of a graphics processor;
Figure 12A is a block diagram of a graphics processor command format according to some embodiments;
Figure 12B is a block diagram illustrating a graphics processor command sequence according to some embodiments;
Figure 13 is a depiction of an exemplary graphics software architecture according to some embodiments;
Figure 14 is a block diagram illustrating an IP core development system according to some embodiments; and
Figure 15 is a block diagram showing an exemplary system on chip integrated circuit according to some embodiments.
Detailed Description
[0003] SIMD width per thread control is advantageously increased to increase performance. For instance, each thread control can control execution of SIMD64 instead of an execution width of 16 (i.e. a 4x thread control area reduction).
[0004] One EU thread execution model is that all channels (e.g. pixels) come from the same primitive. With triangles getting smaller in workloads, it is common that there are not enough pixels in smaller triangles to fill an SIMD64 EU. This leads to SIMD fragmentation, causing EU underutilization.
[0005] Thread payload changes for a 3D pipeline can mitigate the SIMD fragmentation issues that arise with wider SIMD EUs. A payload layout may improve flexibility to pack multiple vertices, patches, primitives and triangles in vertex, hull, domain, geometry and pixel shader stages into one EU hardware thread.
[0006] Reducing SIMD fragmentation for SIMD execution widths of 32 or even 64 channels in a single hardware thread leads to better EU utilization. Increasing SIMD execution widths to 32 or 64 channels per thread enables handling more vertices, patches, primitives and triangles per EU hardware thread.
Otherwise, simply having threads with larger execution widths that process fewer patches, triangles or primitives than they can potentially handle leads to EU underutilization. The existing 3D pipeline shader payloads cannot handle multiple patches in the case of domain shaders, multiple primitives when the primitive object instance count is greater than one in the case of geometry shaders, or multiple triangles in the case of pixel shaders.
[0007] The graphics pipeline 10 shown in Figure 1 may be implemented in a graphics processor as a stand-alone, dedicated integrated circuit, in software through software-implemented general purpose processors, or by combinations of software and hardware.
[0008] The graphics pipeline 10 shown in Figure 1 may be implemented, for example, in a wireless telephone, a mobile hand-held computing device that incorporates a wired or wireless communication device, or any computer. The graphics pipeline may provide images or video for display to a display device. Various techniques can be used to process images provided to the display.
[0009] For simplicity and brevity, SIMD32 is used to explain one embodiment. But other SIMD widths, including SIMD64, are contemplated.
[0010] The command streamer stage 12 is responsible for managing the pipeline and passing commands down the pipeline. In addition, the command streamer reads constant data from the memory buffers and places it in the unified return buffer (URB) 32. The URB is on-chip memory shared by fixed functions in order for a thread to return data that will be consumed by a fixed function or other threads. A fixed function is a pipeline function performed by dedicated (not programmable) hardware.
[0011] The vertex fetch 14, in response to primitive processing commands, is responsible for reading vertex data from memory, reformatting it and writing the results into the vertex URB entries.
[0012] The vertex shader stage 16 processes vertices, typically performing operations such as skinning, lighting, and transformations. A vertex shader (VS) takes a single input vertex and produces a single output vertex. The primary function of the VS stage is to pass vertices that miss in the VS Cache to VS threads, and then pass the VS thread-generated vertices down the pipeline. Vertices that hit in the VS Cache have already been shaded and are therefore passed down the pipeline unmodified.
[0013] A typical SIMD8 VS execution mode processes eight vertices in a SIMD8 thread. Each lane of the SIMD8 thread contains all the vertex attribute data to process the vertex in its own partition of the General Register File (GRF) space. The GRF is a large read/write register shared by execution units for operand sources and destinations. With a wider SIMD execution size, the SIMD8 vertex shader payload may be widened. Thus, SIMD16 execution mode processes 16 vertices and SIMD32 execution mode processes 32 vertices in a single hardware thread as shown in Table-1.
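For illustration only, the following C++ sketch models the kind of SIMD32 vertex shader payload described above: one vertex per SIMD lane, with each attribute channel stored as a 32-wide vector so that every lane owns its own partition of register space. The type and field names are assumptions made for this sketch, not the actual hardware layout.

#include <array>
#include <cstdint>

constexpr int kSimdWidth = 32;  // channels per hardware thread (hypothetical)

// One attribute channel laid out across all 32 lanes, mirroring a register
// that holds that channel for every vertex in the thread.
using LaneVecF = std::array<float, kSimdWidth>;
using LaneVecU = std::array<uint32_t, kSimdWidth>;

struct VsPayloadSimd32 {
    LaneVecU urbHandle;                       // per-lane URB return handle
    LaneVecF posX, posY, posZ, posW;          // vertex position channels
    LaneVecF attr0X, attr0Y, attr0Z, attr0W;  // first vertex attribute
};

// Pack one input vertex into its SIMD lane of the payload.
void packVertex(VsPayloadSimd32& p, int lane, uint32_t handle,
                const float pos[4], const float attr0[4]) {
    p.urbHandle[lane] = handle;
    p.posX[lane] = pos[0];  p.posY[lane] = pos[1];
    p.posZ[lane] = pos[2];  p.posW[lane] = pos[3];
    p.attr0X[lane] = attr0[0];  p.attr0Y[lane] = attr0[1];
    p.attr0Z[lane] = attr0[2];  p.attr0W[lane] = attr0[3];
}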
[0014] A Hull Shader (HS) (also called Tessellation Control Shader in OpenGL) 18 is the first tessellation stage, which is invoked once per output control point of a patch and transforms input control points that define a low-order surface into control points that make up a patch. In addition, the HS also performs some per-patch calculations to provide tessellation factors and patch constant data to the tessellator and domain shader stages.
[0015] A typical SIMD8 8-Patch tessellation execution mode operates on 8 tessellation patches in a SIMD8 thread. Each SIMD lane contains all the attributes for the input control point data and the input control point Unified Return Buffer 32 (URB) handles of the patch in its own partition of the GRF space. With a wider SIMD execution size, the existing SIMD8 8-Patch tessellation execution mode payload is widened. Thus, the SIMD16 execution mode processes 16 patches and a SIMD32 execution mode processes 32 patches in a single hardware thread as shown in Table-2.
Table-2: Shows the HS payload layout for SIMD32 execution mode, which processes 32 patches in a single hardware thread. Lane N holds Patch N (N = 0..31), with one register row per quantity:
R1: Patch N Handle ID in lane N
R2: Patch N Primitive ID in lane N
Rp: Patch N ICP0 Vertex ID in lane N
Rp+1: Patch N ICP1 Vertex ID in lane N
... (continuing through)
Rp+31: Patch N ICP31 Vertex ID in lane N
Rp+x: Patch N ICP0-Attr0.x in lane N
Rp+x+1: Patch N ICP0-Attr0.y in lane N
Rp+x+2: Patch N ICP0-Attr0.z in lane N
Rp+x+3: Patch N ICP0-Attr0.w in lane N
[0016] A Domain Shader (DS) (also called Tessellation Evaluation Shader in OpenGL) 20 calculates the vertex position of a subdivided point in the output patch. A domain shader is run once per tessellator stage domain point and has read-only access to the UV coordinates for the domain point. After the DS completes, tessellation is complete and pipeline data continues to the next pipeline stage (geometry shader, pixel shader).
[0017] There exist two DS SIMD8 execution modes in one current implementation. Single patch execution mode processes all domain points that belong to a single tessellation patch. However, many times a tessellation patch is minimally tessellated, resulting in four or fewer domain points. In that case, the Dual patch execution mode processes two patches, each containing four or fewer domain points, in a single SIMD8 thread (see Table-3). However, even with the Dual patch execution mode there are unused lanes, because a patch may not have as many domain points as the size of the execution mode. To use SIMD lanes efficiently, domain point data from different DS patches may be packed into a single SIMD thread. To generate an efficient code sequence, each domain point occupies one SIMD lane and all attributes for the domain point reside in its own partition of the GRF space (see Table-4).
Table-3: A Dual Patch SIMD8 execution mode thread payload. Some SIMD lanes may be unutilized if fewer than four domain points are present per patch. Lanes 0-3 hold Patch 0 and lanes 4-7 hold Patch 1:
R0: Patch 0 Handle ID, Patch 1 Handle ID
R1: Patch 0 Primitive ID, Patch 1 Primitive ID
R2: DP0-U, DP1-U, DP2-U, DP3-U for each patch
R3: DP0-V, DP1-V, DP2-V, DP3-V for each patch
R4: DP0-W, DP1-W, DP2-W, DP3-W for each patch
R5: DP0-URBH, DP1-URBH, DP2-URBH, DP3-URBH for each patch
Rp: ICP0-Attr0.x/.y/.z/.w for each patch
Rp+1: ICP1-Attr0.x/.y/.z/.w for each patch
Rp+2: ICP2-Attr0.x/.y/.z/.w for each patch
Table-4: Shows the thread payload for the many-DS-patches execution mode in a single SIMD32 DS thread. In the thread payload shown, patch 0 generates only 3 domain points, patch 1 generates 3 domain points, and so on. Domain point data from different DS patches is packed into a single SIMD thread. To generate an efficient code sequence, each domain point occupies one SIMD lane and all attributes for the domain point reside in its own partition of GRF space. Each lane holds one domain point (lanes 0-2: Patch 0; lanes 3-5: Patch 1; ...; up to Patch X in lane 31):
R0: per-lane patch Handle ID
R1: per-lane patch Primitive ID
R2: per-lane domain point U
R3: per-lane domain point V
R4: per-lane domain point W
R5: per-lane domain point URB handle (URBH)
Rp: per-lane ICP0-Attr0.x
Rp+1: per-lane ICP0-Attr0.y
Rp+2: per-lane ICP0-Attr0.z
Rp+3: per-lane ICP0-Attr0.w
Rp+4: per-lane ICP1-Attr0.x
Rp+5: per-lane ICP1-Attr0.y
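A minimal C++ sketch of the Table-4 style packing follows: domain points from several minimally tessellated patches are placed into consecutive lanes of one SIMD32 thread, with the patch handle replicated into each lane so every lane is self-contained. The names and container choices are assumptions for illustration, not the hardware interface.

#include <cstdint>
#include <vector>

struct DomainPoint { float u, v, w; };

struct Patch {
    uint32_t handleId;                // URB handle of the patch
    std::vector<DomainPoint> points;  // domain points produced by the tessellator
};

struct DsLane {
    uint32_t patchHandle;  // replicated per lane (row R0 of Table-4)
    DomainPoint dp;        // the UVW coordinates this lane evaluates (rows R2-R4)
};

// Pack domain points from many patches into one SIMD32 DS thread.
// Stops when all 32 lanes are filled; remaining points start the next thread.
std::vector<DsLane> packDomainPoints(const std::vector<Patch>& patches) {
    std::vector<DsLane> lanes;
    for (const Patch& p : patches) {
        for (const DomainPoint& dp : p.points) {
            if (lanes.size() == 32) return lanes;  // thread full
            lanes.push_back({p.handleId, dp});
        }
    }
    return lanes;  // may be partially filled for the last thread
}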
[0018] The Geometry Shader (GS) (when present) 22 receives as input an entire primitive assembled in the previous stage and passes the primitive object vertices to the graphics subsystem to be processed by a GS thread. Thus, the GS has full knowledge of the primitive it is working on, including all its vertices and any adjacency information, if specified. Since the GS supports limited amplification or de-amplification of primitives, the output of a geometry shader can be zero or more primitives.
[0019] There are two different GS thread payloads that exist at present, based on whether primitive object instancing is enabled or not. When instancing is not enabled (see Table-5), the rendered mesh is used exactly once for that primitive. Instancing allows multiple copies of the same mesh to be rendered at different locations, and each instance is identified by a unique instance identifier (see Table-6).
Table-5: Shows the current #instance=1 case of the GS SIMD8 thread payload, with each lane of the thread processing a single primitive. Payload vertex handles for a triangle primitive with three vertices are shown. For larger primitives, additional registers would be needed to hold the additional vertex handles.
Table-6: Shows the current #instance > 1 case of the GS SIMD8 thread payload, where a single triangle primitive with three vertices is processed for 5 instances. Each instance is associated with a unique object instance ID.
[0020] As shown in Table-6, the SIMD lanes are not fully utilized in the instance-greater-than-1 case when fewer than eight instances need to be processed for a primitive. With a wider SIMD execution size, one can utilize all the lanes of the payload to process primitive objects, ensuring efficient SIMD lane and execution unit utilization. Instead of having one copy of the primitive URB input handles, one can replicate the primitive unified return buffer (URB) handles into lanes containing the instance-ID of the primitive, as shown in Table-7(a). This allows the unused lanes to process additional primitive instances. Alternatively, one instance per hardware thread for multiple primitives (depending on the execution mode chosen) can be processed, as shown in Table-7(b).
[0021] The single instance case as shown in Table-5 uses the SIMD lanes efficiently, and hence the existing SIMD8 thread payload is widened for the SIMD16/SIMD32 case.
Table-7(a): Shows how all the un-utilized lanes of the GS thread payload can be used to process additional primitive object instances in SIMD32 execution modes when #instance > 1.
Table-7(b): Shows an alternative approach to how all the un-utilized lanes of the GS thread payload can be used to process additional primitive objects in SIMD32 execution modes when #instance > 1. In the alternative approach when #instance > 1, each hardware thread handles a single instance of as many primitives as the execution mode size.
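As a sketch of the Table-7(a) approach, the following C++ fragment packs (primitive, instance) pairs into SIMD lanes, replicating each primitive's vertex URB handles into every lane that carries one of its instance IDs. The structures and names are illustrative assumptions only.

#include <array>
#include <cstdint>
#include <vector>

struct GsLane {
    std::array<uint32_t, 3> vertexUrbHandles;  // triangle: three handles, replicated per lane
    uint32_t instanceId;                       // object instance this lane processes
};

// Fill a GS thread of simdWidth lanes with primitive instances. Handles are
// copied into every lane so otherwise-unused lanes can take additional
// instances instead of sitting idle when #instance < SIMD width.
std::vector<GsLane> packPrimitiveInstances(
        const std::vector<std::array<uint32_t, 3>>& primitiveHandles,
        uint32_t instanceCount, int simdWidth) {
    std::vector<GsLane> lanes;
    for (const auto& handles : primitiveHandles) {
        for (uint32_t id = 0; id < instanceCount; ++id) {
            if (static_cast<int>(lanes.size()) == simdWidth) return lanes;
            lanes.push_back({handles, id});
        }
    }
    return lanes;
}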
[0022] A pixel shader (PS) 24 is a program that combines constant variables, texture data, interpolated per-vertex values, and other data to produce per-pixel outputs. The rasterizer stage invokes a PS once for each pixel (fragment) covered by a primitive. In addition to executing an Application Program Interface (API)-supplied PS program for each fragment, the PS unit calculates the values of the various vertex attributes that are to be interpolated across the object using the barycentric algorithm.
[0023] A triangle with vertices v0, v1, v2 (Figure 2) can be used to set up a non-orthogonal coordinate system with origin v0 and basis vectors (v1 - v0) and (v2 - v0) (Figure 2A). A point P inside the triangle is then represented by P(α, β, γ) = α*v0 + β*v1 + γ*v2, where (α, β, γ) are the barycentric coordinates of the point P (Figure 2B).
[0024] (α, β, γ) have the barycentric characteristic of α + β + γ = 1 for a point P inside the triangle. Thus attribute Ap for pixel P can be computed as Ap = A0 + β*(A1 - A0) + γ*(A2 - A0) using only two barycentric coordinates β and γ and a single plane ISA instruction. Here A0, A1, A2 are the input vertex attributes at triangle vertices v0, v1 and v2 respectively (Figure 2C). The attribute Ap calculation at pixel P described above applies when linear interpolation is used for the PS attributes. The interpolation attribute deltas (A1 - A0) and (A2 - A0) calculated above vary based on the type of interpolation mode used. In general, A0, A1 and A2 represent the set of attribute deltas used irrespective of the interpolation mode.
[0025] The hardware thus uses barycentric parameters to aid in attribute interpolation, and these parameters are computed in hardware per pixel (or per sample) and delivered in the thread payload to the PS. Also delivered in the payload are a set of vertex attribute deltas (a0, a1, and a2) per channel of each attribute.
[0026] In the pixel shader kernel, the following computation is done for each attribute channel of each pixel/sample, given the corresponding attribute channel deltas a0/a1/a2 and the pixel/sample's β/γ barycentric parameters, where V is the value of the attribute channel at that pixel/sample:
V = a0 + (a1 * β) + (a2 * γ).
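The per-channel computation above, together with the multi-triangle payload idea discussed below in connection with Tables 9 and 10, can be sketched in C++ as follows. Each subspan of four pixels carries an index selecting which triangle's deltas to use, so deltas from multiple triangles can share one SIMD32 payload; all names are assumptions for this sketch.

#include <array>

// Per-channel attribute deltas for one triangle: a0 = A0, a1 = A1 - A0, a2 = A2 - A0.
struct AttrDeltas { float a0, a1, a2; };

// Linear barycentric interpolation for one attribute channel at one pixel or
// sample: V = a0 + a1*beta + a2*gamma (alpha is implied by alpha + beta + gamma = 1).
inline float interpolate(const AttrDeltas& d, float beta, float gamma) {
    return d.a0 + d.a1 * beta + d.a2 * gamma;
}

// SIMD32 pixel shader payload sketch: 8 subspans of 4 pixels each. Every
// subspan may reference a different triangle's deltas, as in the Table-9 layout.
struct PsPayloadSimd32 {
    std::array<float, 32> beta;       // per-pixel barycentric beta
    std::array<float, 32> gamma;      // per-pixel barycentric gamma
    std::array<int, 8> triOfSubspan;  // which triangle's deltas each subspan uses
};

void shadeChannel(const PsPayloadSimd32& p, const AttrDeltas* deltas,
                  std::array<float, 32>& out) {
    for (int px = 0; px < 32; ++px) {
        const AttrDeltas& d = deltas[p.triOfSubspan[px / 4]];  // 4 pixels per subspan
        out[px] = interpolate(d, p.beta[px], p.gamma[px]);
    }
}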
[0027] The clipper (clip) 26 performs clip tests on incoming objects and, if required, clips objects using fixed function hardware.
[0028] The strip/fan (SF) 28 performs object setup by use of fixed function hardware. The thread dispatcher 34 arbitrates thread initiation requests from fixed function units and initiates the threads on the execution units 36. The execution unit is a multi-threaded processor. Each execution unit is a fully capable processor containing instruction fetch and decode, register files, source operand swizzle and SIMD arithmetic logic units.
[0029] The Windower Mask unit (WM) 30 can pass a grouping of 2 subspans (8 pixels), 4 subspans (16 pixels), or 8 subspans (32 pixels) to a PS thread payload (Table-8). The groupings of subspans that the WM unit is allowed to include in a PS thread payload are controlled by the 32, 16, 8 Pixel Dispatch Enable state variables programmed in WM_STATE. Using these state variables, the WM unit attempts to dispatch the largest allowed grouping of subspans. However, the present thread payload of the PS only supports attribute deltas belonging to the same triangle. This means that no matter what execution mode is chosen, the subspans all need to belong to the same triangle. This often results in the hardware (WM) picking the smaller SIMD execution modes (namely SIMD8) when fewer subspans are needed to cover a triangle.
Table-8: Shows the thread payload attribute deltas (a0, a1 and a2) for the existing SIMD8/SIMD16/SIMD32 thread payload with three attributes and two components per attribute. There are 2 subspans, 4 subspans or 8 subspans of pixels, depending on the grouping of subspans the WM is allowed to include in the thread payload and the SIMD execution mode picked by the hardware. The limitation, however, is that all the subspans need to belong to the same triangle. Thus all the attribute deltas belong to a single triangle.
[0030] A thread payload layout for SIMD16 (Table-9) and SIMD32 (Table-10) execution modes allows attribute deltas from multiple triangles to be included in the same payload. This makes it easier for the hardware to always choose the highest possible execution mode, because subspans from multiple triangles can be grouped together in a single PS thread payload. Not only does this improve thread efficiency, because there is some amount of overhead involved with PS thread dispatch and launching larger execution threads is better in general, but it also improves execution unit efficiency, which now pumps 2 SIMD8 instructions instead of 2 SIMD4 instructions.
Table-9: Shows the thread payload attribute deltas (a0, a1 and a2) for a SIMD32 payload with 8 subspans. Each subspan can belong to a different triangle. Each triangle has three attributes and three components per attribute (partial attribute data shown).
Table-10: Shows the thread payload attribute deltas (a0, a1 and a2) for a SIMD16 payload with 4 subspans. Each subspan can belong to a different triangle, with each triangle having three attributes and three components per attribute (partial attribute data shown).
[0031] Referring now to Figure 3, a sequence 40 may be implemented in software, firmware and/or hardware. In software and firmware embodiments it may be executed using computer-executed instructions stored in one or more non-transitory computer readable media, such as magnetic, optical, or semiconductor storages. Generally these storages may be part of, or coupled to, a graphics processor.
[0032] The sequence 40 begins by modifying the domain shader payload to handle multiple patches, as indicated in block 42. This may be done, for example, by packing domain point data from different domain shader patches into one SIMD thread, with each domain point occupying one SIMD lane, and storing an attribute for each domain point in its own partition in a register space addressable by programmed threads. Then, as shown in block 44, the geometry shader payload may be modified to handle multiple primitives when the primitive object instance count is greater than one. This may be done by replicating primitive unified return buffer handles into lanes containing an instance-ID of the primitive. Further, barycentric parameters may be used for attribute interpolation, and a payload may be delivered to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel of each attribute. Attribute deltas from multiple triangles may be included in the same pixel shader payload in some embodiments, as indicated in block 46.
[0033] Figure 4 is a block diagram of a processing system 100, according to an embodiment. In various embodiments the system 100 includes one or more processors 102 and one or more graphics processors 108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, the system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.
[0034] An embodiment of system 100 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console.
In some embodiments system 100 is a mobile phone, smart phone, tablet computing device or mobile Internet device. Data processing system 100 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 100 is a television or set top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.
[0035] In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a Digital Signal Processor (DSP).
[0036] In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 is additionally included in processor 102, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.
[0037] In some embodiments, processor 102 is coupled to a processor bus 110 to transmit communication signals such as address, data, or control signals between processor 102 and other components in system 100. In one embodiment the system 100 uses an exemplary 'hub' system architecture, including a memory controller hub 116 and an Input Output (I/O) controller hub 130. A memory controller hub 116 facilitates communication between a memory device and other components of system 100, while an I/O Controller Hub (ICH) 130 provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub 116 is integrated within the processor.
[0038] Memory device 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 executes an application or process.
Memory controller hub 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in processors 102 to perform graphics and media operations.
[0039] In some embodiments, ICH 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a firmware interface 128, a wireless transceiver 126 (e.g., Wi-Fi, Bluetooth), a data storage device 124 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 142 connect input devices, such as keyboard and mouse 144 combinations. A network controller 134 may also couple to ICH 130. In some embodiments, a high-performance network controller (not shown) couples to processor bus 110. It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 130 may be integrated within the one or more processors 102, or the memory controller hub 116 and I/O controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 112.
[0040] Figure 5 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of Figure 5 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 200 can include additional cores up to and including additional core 202N, represented by the dashed lined boxes. Each of processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments each processor core also has access to one or more shared cache units 206.
[0041] The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.
[0042] In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).
[0043] In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing.
System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 202A-202N and graphics processor 208.
[0044] In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206, and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, a display controller 211 is coupled with the graphics processor 208 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208 or system agent core 210.
[0045] In some embodiments, a ring based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.
[0046] The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and graphics processor 208 use embedded memory modules 218 as a shared Last Level Cache.
[0047] In some embodiments, processor cores 202A-202N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.
[0048] Figure 6 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 300 includes a memory interface 314 to access memory. Memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
[0049] In some embodiments, graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. Display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements.
In some embodiments, graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).
[0050] In some embodiments, graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 310. In some embodiments, graphics processing engine 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
[0051] In some embodiments, GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 315. While 3D pipeline 312 can be used to perform media operations, an embodiment of GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.
[0052] In some embodiments, media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 306. In some embodiments, media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 315.
[0053] In some embodiments, 3D/Media subsystem 315 includes logic for executing threads spawned by 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.
[0054] Figure 7 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the GPE 410 is a version of the GPE 310 shown in Figure 6.
Elements of Figure 7 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
[0055] In some embodiments, GPE 410 couples with a command streamer 403, which provides a command stream to the GPE 3D and media pipelines 412, 416. In some embodiments, command streamer 403 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 412 and/or media pipeline 416. The commands are directives fetched from a ring buffer, which stores commands for the 3D and media pipelines 412, 416. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The 3D and media pipelines 412, 416 process the commands by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to an execution unit array 414. In some embodiments, execution unit array 414 is scalable, such that the array includes a variable number of execution units based on the target power and performance level of GPE 410.
[0056] In some embodiments, a sampling engine 430 couples with memory (e.g., cache memory or system memory) and execution unit array 414. In some embodiments, sampling engine 430 provides a memory access mechanism for execution unit array 414 that allows execution array 414 to read graphics and media data from memory. In some embodiments, sampling engine 430 includes logic to perform specialized image sampling operations for media.
[0057] In some embodiments, the specialized media sampling logic in sampling engine 430 includes a de-noise/de-interlace module 432, a motion estimation module 434, and an image scaling and filtering module 436. In some embodiments, de-noise/de-interlace module 432 includes logic to perform one or more of a de-noise or a de-interlace algorithm on decoded video data. The de-interlace logic combines alternating fields of interlaced video content into a single frame of video. The de-noise logic reduces or removes data noise from video and image data. In some embodiments, the de-noise logic and de-interlace logic are motion adaptive and use spatial or temporal filtering based on the amount of motion detected in the video data. In some embodiments, the de-noise/de-interlace module 432 includes dedicated motion detection logic (e.g., within the motion estimation engine 434).
[0058] In some embodiments, motion estimation engine 434 provides hardware acceleration for video operations by performing video acceleration functions such as motion vector estimation and prediction on video data. The motion estimation engine determines motion vectors that describe the transformation of image data between successive video frames. In some embodiments, a graphics processor media codec uses video motion estimation engine 434 to perform operations on video at the macro-block level that may otherwise be too computationally intensive to perform with a general-purpose processor.
In some embodiments, motion estimation engine 434 is generally available to graphics processor components to assist with video decode and processing functions that are sensitive or adaptive to the direction or magnitude of the motion within video data.
[0059] In some embodiments, image scaling and filtering module 436 performs image-processing operations to enhance the visual quality of generated images and video. In some embodiments, scaling and filtering module 436 processes image and video data during the sampling operation before providing the data to execution unit array 414.
[0060] In some embodiments, the GPE 410 includes a data port 444, which provides an additional mechanism for graphics subsystems to access memory. In some embodiments, data port 444 facilitates memory access for operations including render target writes, constant buffer reads, scratch memory space reads/writes, and media surface accesses. In some embodiments, data port 444 includes cache memory space to cache accesses to memory. The cache memory can be a single data cache or separated into multiple caches for the multiple subsystems that access memory via the data port (e.g., a render buffer cache, a constant buffer cache, etc.). In some embodiments, threads executing on an execution unit in execution unit array 414 communicate with the data port by exchanging messages via a data distribution interconnect that couples each of the sub-systems of GPE 410.
[0061] Figure 8 is a block diagram of another embodiment of a graphics processor 500. Elements of Figure 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
[0062] In some embodiments, graphics processor 500 includes a ring interconnect 502, a pipeline front-end 504, a media engine 537, and graphics cores 580A-580N. In some embodiments, ring interconnect 502 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.
[0063] In some embodiments, graphics processor 500 receives batches of commands via ring interconnect 502. The incoming commands are interpreted by a command streamer 503 in the pipeline front-end 504. In some embodiments, graphics processor 500 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 580A-580N. For 3D geometry processing commands, command streamer 503 supplies commands to geometry pipeline 536. For at least some media processing commands, command streamer 503 supplies the commands to a video front end 534, which couples with a media engine 537. In some embodiments, media engine 537 includes a Video Quality Engine (VQE) 530 for video and image post-processing and a multi-format encode/decode (MFX) engine 533 to provide hardware-accelerated media data encode and decode. In some embodiments, geometry pipeline 536 and media engine 537 each generate execution threads for the thread execution resources provided by at least one graphics core 580A.
[0064] In some embodiments, graphics processor 500 includes scalable thread execution resources featuring modular cores 580A-580N (sometimes referred to as core slices), each having multiple sub-cores 550A-550N, 560A-560N (sometimes referred to as core sub-slices).
In some embodiments, graphics processor 500 can have any number of graphics cores 580A through 580N. In some embodiments, graphics processor 500 includes a graphics core 580A having at least a first sub-core 550A and a second sub-core 560A. In other embodiments, the graphics processor is a low power processor with a single sub-core (e.g., 550A). In some embodiments, graphics processor 500 includes multiple graphics cores 580A-580N, each including a set of first sub-cores 550A-550N and a set of second sub-cores 560A-560N. Each sub-core in the set of first sub-cores 550A-550N includes at least a first set of execution units 552A-552N and media/texture samplers 554A-554N. Each sub-core in the set of second sub-cores 560A-560N includes at least a second set of execution units 562A-562N and samplers 564A-564N. In some embodiments, each sub-core 550A-550N, 560A-560N shares a set of shared resources 570A-570N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.
[0065] Figure 9 illustrates thread execution logic 600 including an array of processing elements employed in some embodiments of a GPE. Elements of Figure 9 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
[0066] In some embodiments, thread execution logic 600 includes a pixel shader 602, a thread dispatcher 604, instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 600 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 606, data port 614, sampler 610, and execution unit array 608A-608N. In some embodiments, each execution unit (e.g. 608A) is an individual vector processor capable of executing multiple simultaneous threads and processing multiple data elements in parallel for each thread. In some embodiments, execution unit array 608A-608N includes any number of individual execution units.
[0067] In some embodiments, execution unit array 608A-608N is primarily used to execute "shader" programs. In some embodiments, the execution units in array 608A-608N execute an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders).
[0068] Each execution unit in execution unit array 608A-608N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor.
In some embodiments, execution units 608A-608N support integer and floating-point data types.
[0069] The execution unit instruction set includes single instruction multiple data (SIMD) instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
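To make the packed-data description concrete, here is a small C++ sketch (an illustrative model, not the actual execution unit implementation) that treats one 256-bit register as eight separate 32-bit packed elements and performs a channel-wise add:

#include <cstdint>
#include <cstring>

// A 256-bit register modeled as raw bytes. Depending on the element size the
// hardware views it as 4 QW, 8 DW, 16 W, or 32 B packed data elements.
struct Reg256 { uint8_t bytes[32]; };

// Channel-wise add treating both operands as 8 x 32-bit (DW) elements.
Reg256 addDwords(const Reg256& a, const Reg256& b) {
    Reg256 r{};
    for (int i = 0; i < 8; ++i) {
        uint32_t x, y;
        std::memcpy(&x, a.bytes + 4 * i, sizeof x);
        std::memcpy(&y, b.bytes + 4 * i, sizeof y);
        const uint32_t sum = x + y;
        std::memcpy(r.bytes + 4 * i, &sum, sizeof sum);
    }
    return r;
}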
[0070] One or more internal instruction caches (e.g., 606) are included in the thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.
[0071] During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 600 via thread spawning and dispatch logic. In some embodiments, thread execution logic 600 includes a local thread dispatcher 604 that arbitrates thread initiation requests from the graphics and media pipelines and instantiates the requested threads on one or more execution units 608A-608N. For example, the geometry pipeline (e.g., 536 of Figure 8) dispatches vertex processing, tessellation, or geometry processing threads to thread execution logic 600 (Figure 9). In some embodiments, thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.
[0072] Once a group of geometric objects has been processed and rasterized into pixel data, pixel shader 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, pixel shader 602 calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel shader 602 then executes an application programming interface (API)-supplied pixel shader program. To execute the pixel shader program, pixel shader 602 dispatches threads to an execution unit (e.g., 608A) via thread dispatcher 604. In some embodiments, pixel shader 602 uses texture sampling logic in sampler 610 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
[0073] In some embodiments, the data port 614 provides a memory access mechanism for the thread execution logic 600 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.
[0074] Figure 10 is a block diagram illustrating a graphics processor instruction format 700 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction format 700 described and illustrated covers macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.
[0075] In some embodiments, the graphics processor execution units natively support instructions in a 128-bit format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit format 710.
[0076] For each format, instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For 128-bit instructions 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, the exec-size field 716 is not available for use in the 64-bit compact instruction format 730.
[0077] Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one destination 718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
[0078] In some embodiments, the 128-bit instruction format 710 includes access/address mode information 726 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used.
When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction 710.[0079] In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode defines a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction 710 may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction 710 may use 16-byte-aligned addressing for all source and destination operands.[0080] In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction 710 directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.[0081] In some embodiments, instructions are grouped based on opcode 712 bit fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. The vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.[0082] Figure 11 is a block diagram of another embodiment of a graphics processor 800. Elements of Figure 11 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.[0083] In some embodiments, graphics processor 800 includes a graphics pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores.
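Referring back to the opcode grouping of paragraph [0081], the classification on bits 4, 5, and 6 can be expressed as a simple decode step. The encodings below come directly from the example groupings above (0x20 flow control, 0x30 miscellaneous, 0x40 parallel math, 0x50 vector math); the C function is an illustration of the grouping scheme, not the actual decoder logic.

```c
#include <stdint.h>

enum opcode_group {
    GRP_MOVE_LOGIC,    /* 742: 0000xxxxb and 0001xxxxb            */
    GRP_FLOW_CONTROL,  /* 744: 0010xxxxb, e.g., 0x20 (call, jmp)  */
    GRP_MISC,          /* 746: 0011xxxxb, e.g., 0x30 (wait, send) */
    GRP_PARALLEL_MATH, /* 748: 0100xxxxb, e.g., 0x40 (add, mul)   */
    GRP_VECTOR_MATH,   /* 750: 0101xxxxb, e.g., 0x50 (dp4)        */
    GRP_UNKNOWN
};

/* Classify an 8-bit opcode by bits 4, 5, and 6, per the grouping above. */
static enum opcode_group classify_opcode(uint8_t opcode)
{
    switch ((opcode >> 4) & 0x7) {
    case 0x0:
    case 0x1: return GRP_MOVE_LOGIC;
    case 0x2: return GRP_FLOW_CONTROL;
    case 0x3: return GRP_MISC;
    case 0x4: return GRP_PARALLEL_MATH;
    case 0x5: return GRP_VECTOR_MATH;
    default:  return GRP_UNKNOWN;
    }
}
```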
The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 via a ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of graphics pipeline 820 or media pipeline 830.[0084] In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A, 852B via a thread dispatcher 831.[0085] In some embodiments, execution units 852A, 852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 852A, 852B have an attached L1 cache 851 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.[0086] In some embodiments, graphics pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 820. In some embodiments, if tessellation is not used, tessellation components 811, 813, 817 can be bypassed.[0087] In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to execution units 852A, 852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.[0088] Before rasterization, a clipper 829 processes vertex data. The clipper 829 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer/depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into their per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850.
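To make the tessellation bypass of paragraphs [0086] and [0087] concrete, the sketch below models the relevant stages of graphics pipeline 820 as a chain in C. The stage functions and the use_tessellation flag are hypothetical stand-ins for the hardware behavior described above; in hardware these stages are fixed-function or programmable units, not C functions.

```c
#include <stdbool.h>

typedef struct geometry geometry_t; /* opaque stand-in for vertex/patch data */

/* Hypothetical per-stage hooks; stubs pass data through unchanged. */
static geometry_t *vertex_shader_807(geometry_t *g)   { return g; }
static geometry_t *hull_shader_811(geometry_t *g)     { return g; }
static geometry_t *tessellator_813(geometry_t *g)     { return g; }
static geometry_t *domain_shader_817(geometry_t *g)   { return g; }
static geometry_t *geometry_shader_819(geometry_t *g) { return g; }

/* Front end of graphics pipeline 820: tessellation components 811, 813,
 * and 817 are bypassed when tessellation is not used, so geometry shader
 * 819 receives its input directly from vertex shader 807. */
geometry_t *run_geometry_front_end(geometry_t *g, bool use_tessellation)
{
    g = vertex_shader_807(g);
    if (use_tessellation) {
        g = hull_shader_811(g);
        g = tessellator_813(g);
        g = domain_shader_817(g);
    }
    return geometry_shader_819(g);
}
```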
In some embodiments, an application can bypass the rasterizer 873 and access un-rasterized vertex data via a stream out unit 823.[0089] The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 852A, 852B and associated cache(s) 851, texture and media sampler 854, and texture/sampler cache 858 interconnect via a data port 856 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 854, caches 851, 858 and execution units 852A, 852B each have separate memory access paths.[0090] In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.[0091] In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front end 834. In some embodiments, video front end 834 receives pipeline commands from the command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front-end 834 processes media commands before sending the command to the media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.[0092] In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.[0093] In some embodiments, graphics pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor.
In some embodiments, support is provided for the Open Graphics Library (OpenGL) and Open Computing Language (OpenCL) from the Khronos Group, the Direct3D library from the Microsoft Corporation, or support may be provided to both OpenGL and D3D. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.[0094] Figure 12A is a block diagram illustrating a graphics processor command format 900 according to some embodiments. Figure 12B is a block diagram illustrating a graphics processor command sequence 910 according to an embodiment. The solid lined boxes in Figure 12A illustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 900 of Figure 12A includes data fields to identify a target client 902 of the command, a command operation code (opcode) 904, and the relevant data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.[0095] In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in data field 906. For some commands an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.[0096] The flow diagram in Figure 12B shows an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.[0097] In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands.
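The command format 900 of paragraphs [0094] and [0095] maps naturally onto a small parsing routine. The C sketch below assumes a plausible field layout (client 902, opcode 904, sub-opcode 905, command size 908, then data 906); the layout is an illustrative assumption rather than the actual command encoding.

```c
#include <stdint.h>

/* Hypothetical header layout for a command in format 900. */
typedef struct {
    uint8_t  client;      /* target client unit 902 (render, 2D, 3D, media) */
    uint8_t  opcode;      /* command operation code 904                      */
    uint8_t  sub_opcode;  /* sub-opcode 905, present for some commands       */
    uint8_t  size_dw;     /* explicit command size 908, in double words      */
    uint32_t data[];      /* relevant command data 906                       */
} gfx_cmd_t;

/* The command parser examines the client field to route the command, and
 * the client unit reads opcode/sub-opcode to determine the operation.
 * Commands are aligned via multiples of a double word, so the parser
 * advances by the command size in double words. */
const uint32_t *parse_one_command(const uint32_t *stream)
{
    const gfx_cmd_t *cmd = (const gfx_cmd_t *)stream;
    /* ...route to client unit cmd->client, dispatch cmd->opcode... */
    return stream + cmd->size_dw; /* next double-word-aligned command */
}
```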
In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low power state.[0098] In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.[0099] In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.[0100] In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, configuring the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.[0101] The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930, or the media pipeline 924 beginning at the media pipeline state 940.[0102] The commands for the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.[0103] In some embodiments, the 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders.
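The ordering constraints described in paragraphs [0097] through [0103] (flush 912, select 913, control 914, return buffer state 916, then pipeline-specific state, primitives, and execute) can be summarized as a queue-building helper. The emit helper and the command encodings below are hypothetical placeholders chosen only to show the sequence, not a real driver API.

```c
#include <stdint.h>

/* Hypothetical command queue and emit helper; opcodes are placeholders
 * derived from the reference numerals, not real encodings. */
typedef struct { uint32_t buf[256]; int n; } cmd_queue_t;

static void emit(cmd_queue_t *q, uint32_t word) { q->buf[q->n++] = word; }

enum { CMD_PIPELINE_FLUSH = 0x912, CMD_PIPELINE_SELECT = 0x913,
       CMD_PIPELINE_CONTROL = 0x914, CMD_RETURN_BUFFER_STATE = 0x916,
       CMD_3D_STATE = 0x930, CMD_3D_PRIMITIVE = 0x932, CMD_EXECUTE = 0x934 };

enum pipeline { PIPE_3D = 0, PIPE_MEDIA = 1 };

/* Build a minimal 3D command sequence following the order described above:
 * flush before switching pipelines, select the pipeline once per context,
 * configure control and return buffers, then issue state, primitives, and
 * an execute command. */
void build_3d_sequence(cmd_queue_t *q)
{
    emit(q, CMD_PIPELINE_FLUSH);      /* complete pending commands, 912  */
    emit(q, CMD_PIPELINE_SELECT);     /* explicit pipeline switch, 913   */
    emit(q, PIPE_3D);
    emit(q, CMD_PIPELINE_CONTROL);    /* configure pipeline state, 914   */
    emit(q, CMD_RETURN_BUFFER_STATE); /* allocate return buffers, 916    */
    emit(q, CMD_3D_STATE);            /* 3D pipeline state, 930          */
    emit(q, CMD_3D_PRIMITIVE);        /* submit 3D primitives, 932       */
    emit(q, CMD_EXECUTE);             /* trigger execution, 934          */
}
```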
To process vertex shaders, 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.[0104] In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.[0105] In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.[0106] In some embodiments, media pipeline 924 is configured in a similar manner as the 3D pipeline 922. A set of media pipeline state commands 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, media pipeline state commands 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, media pipeline state commands 940 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.[0107] In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., register write). Output from media pipeline 924 may then be post-processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.[0108] Figure 13 illustrates an exemplary graphics software architecture for a data processing system 1000 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general-purpose processor core(s) 1034.
The graphics application 1010 and operating system 1020 each execute in the system memory 1050 of the data processing system.[0109] In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.[0110] In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010.[0111] In some embodiments, user mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to a user mode graphics driver 1026 for compilation. In some embodiments, user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with a kernel mode graphics driver 1029. In some embodiments, kernel mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.[0112] One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.[0113] Figure 14 is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 1100 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high level programming language (e.g., C/C++).
The software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. The simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design can then be created or synthesized from the simulation model 1112. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.[0114] The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 1165 using non-volatile memory 1140 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or wireless connection 1160. The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.[0115] Figure 15 is a block diagram illustrating an exemplary system on a chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit includes one or more application processors 1205 (e.g., CPUs), at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. The integrated circuit includes peripheral or bus logic including a USB controller 1225, UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.[0116] Additionally, other logic and circuits may be included in the processor of integrated circuit 1200, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.[0117] The following clauses and/or examples pertain to further embodiments: One example embodiment may be a method comprising packing one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread. The method may also include modifying the pipeline domain shader payload to handle multiple patches.
The method may also include packing domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane, and storing an attribute for each domain point in its own partition in a register space addressable by a programmed thread. The method may also include modifying the pipeline geometry shader payload to handle multiple primitives when the primitive object instance count is greater than one. The method may also include replicating primitive unified return buffer handles into lanes containing an instance-ID of the primitive. The method may also include modifying the pipeline pixel shader payload to handle multiple triangles. The method may also include using barycentric parameters for attribute interpolation. The method may also include delivering a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute. The method may also include enabling attribute deltas from multiple triangles to be included in the same pixel shader payload. The method may also include packing for an SIMD width of 32 channels per thread or higher.[0118] Another example embodiment may be one or more non-transitory computer readable media storing instructions to perform a sequence comprising packing one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread. The media may include further storing instructions to perform a sequence including modifying the pipeline domain shader payload to handle multiple patches. The media may include further storing instructions to perform a sequence including packing domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane, and storing an attribute for each domain point in its own partition in a register space addressable by a programmed thread. The media may include further storing instructions to perform a sequence including modifying the pipeline geometry shader payload to handle multiple primitives when the primitive object instance count is greater than one. The media may include further storing instructions to perform a sequence including replicating primitive unified return buffer handles into lanes containing an instance-ID of the primitive. The media may include further storing instructions to perform a sequence including modifying the pipeline pixel shader payload to handle multiple triangles. The media may include further storing instructions to perform a sequence including using barycentric parameters for attribute interpolation. The media may include further storing instructions to perform a sequence including delivering a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute. The media may include further storing instructions to perform a sequence including enabling attribute deltas from multiple triangles to be included in the same pixel shader payload.
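A minimal sketch of the SIMD packing idea from these clauses, assuming a 32-lane thread and a hypothetical per-lane attribute layout: domain points from different patches are placed in separate lanes, and each attribute lives in its own register-space partition indexed by lane. The type and helper below are illustrative, not the hardware payload format. The remaining clauses continue after the sketch.

```c
#include <stdint.h>

#define SIMD_LANES 32  /* SIMD width of 32 channels per thread */
#define MAX_ATTRS  4   /* illustrative attribute count          */

/* One attribute value per SIMD lane; each attribute occupies its own
 * partition of the addressable register space. */
typedef struct {
    float   attr[MAX_ATTRS][SIMD_LANES];
    uint8_t patch_id[SIMD_LANES]; /* lanes may come from different patches */
} simd_thread_payload_t;

/* Pack one domain point (with its attributes) into the next free SIMD
 * lane. Returns the next free lane, so domain points from multiple domain
 * shader patches can share a single SIMD thread. */
int pack_domain_point(simd_thread_payload_t *t, int next_free_lane,
                      uint8_t patch, const float *attrs, int n_attrs)
{
    int lane = next_free_lane;
    t->patch_id[lane] = patch;
    for (int a = 0; a < n_attrs && a < MAX_ATTRS; a++)
        t->attr[a][lane] = attrs[a]; /* attribute a, partition a, this lane */
    return lane + 1;
}
```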
The media may include further storing instructions to perform a sequence including packing for an SIMD width of 32 channels per thread or higher.[0119] Another example embodiment may include an apparatus comprising a processor to pack one of multiple vertices, patches, primitives or triangles in one graphics pipeline stage into one execution unit hardware thread, and a memory coupled to said processor. The apparatus may include said processor to modify the pipeline domain shader payload to handle multiple patches. The apparatus may include said processor to pack domain point data from different domain shader patches into one single instruction multiple data (SIMD) thread with each domain point occupying one SIMD lane, and to store an attribute for each domain point in its own partition in a register space addressable by a programmed thread. The apparatus may include said processor to modify the pipeline geometry shader payload to handle multiple primitives when the primitive object instance count is greater than one. The apparatus may include said processor to replicate primitive unified return buffer handles into lanes containing an instance-ID of the primitive. The apparatus may include said processor to modify the pipeline pixel shader payload to handle multiple triangles. The apparatus may include said processor to use barycentric parameters for attribute interpolation. The apparatus may include said processor to deliver a payload to a pixel shader including barycentric parameters per pixel or per sample with a set of vertex attribute deltas per channel for each attribute. The apparatus may include said processor to enable attribute deltas from multiple triangles to be included in the same pixel shader payload. The apparatus may include said processor to pack for an SIMD width of 32 channels per thread or higher.[0120] The graphics processing techniques described herein may be implemented in various hardware architectures. For example, graphics functionality may be integrated within a chipset. Alternatively, a discrete graphics processor may be used. As still another embodiment, the graphics functions may be implemented by a general purpose processor, including a multicore processor.[0121] References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present disclosure. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms other than the particular embodiment illustrated and all such forms may be encompassed within the claims of the present application.[0122] While a limited number of embodiments have been described, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this disclosure.
This disclosure provides apparatus and methods for mitigating electrostatic discharge (ESD) in display devices. In one aspect, a display device (100) includes an encapsulation substrate (102) having an anti-static coating (120) on one or more surfaces (112, 114, 116) of the encapsulation substrate (102). The display devices (100) can include transparent substrates (104) having display elements (106) thereon, with the encapsulation substrate (102) covering the display elements (106). The anti-static coating (120) on one or more surfaces (112, 114, 116) of the encapsulation substrate (102) can dissipate charge that may build up during fabrication or operation of the display device (100). The anti-static coating (120) can be conductive and transparent, with examples of such coatings (120) including transparent conducting oxides (TCOs), thin metal films, thin carbon films, and networks of conductive nanostructures.
1.A display device comprising:
An encapsulation substrate;
A conductive antistatic coating on at least a portion of the encapsulation substrate;
A transparent substrate sealed to the encapsulation substrate by a seal; and
One or more display elements sealed between the transparent substrate and the encapsulation substrate, the one or more display elements being configured to produce an image that can be viewed via the transparent substrate, wherein the encapsulation substrate has a first side and a second side, said first side facing said display elements.
2.A display device according to claim 1, wherein said conductive antistatic coating is disposed between said seal and said encapsulation substrate.
3.The display device according to claim 1, wherein said seal is an epoxy resin seal.
4.The display device according to claim 1, wherein the conductive antistatic coating is translucent or transparent.
5.The display device according to claim 1, wherein the conductive antistatic coating comprises at least one of a transparent conductive oxide, a conductive nanostructured web, a metal thin film, and a carbon-based thin film.
6.The display device according to any one of claims 1 to 5, wherein the display elements and the encapsulation substrate are spaced apart by a gas or a vacuum gap.
7.The display device according to any one of claims 1 to 5, wherein the display element is an electromechanical system (EMS) display element.
8.A display device according to any one of claims 1 to 5, wherein the display element is an interferometric modulator (IMOD) display element.
9.A display device according to any one of claims 1 to 5, wherein said first side comprises a recessed portion for receiving said display elements and a peripheral portion sealed to said transparent substrate.
10.The display device according to claim 9, wherein the conductive antistatic coating is located on the peripheral portion of the first side of the encapsulation substrate.
11.The display device according to claim 9, wherein the conductive antistatic coating is located on the recessed portion of the first side of the encapsulation substrate.
12.The display device according to claim 9, wherein said conductive antistatic coating is continuous across said peripheral portion and said recessed portion of said first side of said encapsulation substrate.
13.A display device according to any one of claims 1 to 5, wherein the conductive antistatic coating comprises a conductive configuration feature having a height of at least 5 nm.
14.The display device according to claim 1, further comprising a processor configured to communicate with the display elements, the processor configured to process image data; and a memory device configured to communicate with the processor.
15.The display device according to claim 14, further comprising a driver circuit configured to send at least one signal to the display elements; and a controller configured to send at least a portion of the image data to the driver circuit.
16.The apparatus of claim 14, further comprising an image source module configured to send the image data to the processor, wherein the image source module comprises at least one of a receiver, a transceiver, and a transmitter.
17.The apparatus of claim 14, further comprising an input device configured to receive input data and communicate said input data to said processor.
18.A display device comprising:
An encapsulation substrate;
A transparent substrate sealed to the encapsulation substrate by a seal;
One or more display elements sealed between
the transparent substrate and the encapsulation substrate, the one or more display elements being configured to produce an image that can be viewed via the transparent substrate; and
A conductive antistatic coating on the encapsulation substrate, wherein the conductive antistatic coating faces the display elements and comprises a plurality of conductive configuration features.
19.The display device according to claim 18, wherein the conductive configuration features have a height of at least 5 nm.
20.The display device according to claim 18, wherein the conductive configuration features have a height of at least 20 nm.
21.A display device according to any one of claims 18 to 20, wherein said conductive antistatic coating comprises at least one of a transparent conductive oxide and a conductive nanostructured web.
22.A method of manufacturing a display device, the method comprising:
Coating one or more surfaces of an encapsulation substrate with a conductive antistatic coating; and
Forming a sealant material on the encapsulation substrate, including forming the sealant material on the conductive antistatic coating.
23.The method according to claim 22, wherein a first side of the encapsulation substrate comprises a recessed portion and a peripheral portion surrounding the recessed portion, and wherein coating the one or more surfaces of the encapsulation substrate comprises conformally coating the first side of the encapsulation substrate.
24.The method according to claim 22 or 23, further comprising sealing said encapsulation substrate to a transparent substrate on which one or more display elements are disposed so that said one or more display elements are encapsulated by said encapsulation substrate.
25.A display device comprising:
A transparent substrate having a display element thereon;
A sealing substrate sealed to the transparent substrate, thereby encapsulating the display element; and
A means for dissipating electrostatic discharge.
26.A display device according to claim 25, wherein said means for dissipating electrostatic discharge comprises means for reducing static friction in said display device.
Reduction of electrostatic discharge in a display device
Priority claim
This application claims the benefit of U.S. Patent Application No. 14/290,771, filed May 29, 2014, which is hereby incorporated by reference in its entirety.
Technical field
The present invention relates to display devices and, more particularly, to the reduction of electrostatic discharge in display devices.
Background
Electromechanical systems (EMS) include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components (e.g., mirrors and optical films), and electronics. EMS devices or elements can be manufactured at a variety of scales, including but not limited to microscales and nanoscales. For example, a microelectromechanical system (MEMS) device may include structures having sizes ranging from about one micron to several hundred microns or more. A nanoelectromechanical system (NEMS) device may include structures having sizes smaller than one micron, including, for example, sizes smaller than several hundred nanometers. Electromechanical elements may be created using deposition, etching, photolithography, and/or other processes that remove portions of the substrate and/or deposited material layers, or that add layers, to form electrical and electromechanical devices.
One type of EMS device is called an interferometric modulator (IMOD). The term IMOD or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using principles of optical interference. In some embodiments, an IMOD display element may include a pair of conductive plates, one or both of which may be transparent and/or reflective, in whole or in part, and capable of relative motion upon application of an appropriate electrical signal. For example, one plate may include a stationary layer deposited on, over, or supported by the substrate, and the other plate may include a reflective film spaced apart from the stationary layer. The position of one plate relative to the other can change the optical interference of light incident on the IMOD display element. IMOD-based display devices have a wide range of applications and are anticipated to be used in improving existing products and creating new products, especially those with display capabilities.
Summary
The systems, methods, and devices of this disclosure each have several novel aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One novel aspect of the subject matter described in this disclosure can be implemented in a display device that includes an encapsulation substrate, a conductive antistatic coating on the encapsulation substrate, a transparent substrate sealed to the encapsulation substrate, and one or more display elements sealed between the transparent substrate and the encapsulation substrate. The one or more display elements may be configured to produce an image that can be viewed via the transparent substrate. The encapsulation substrate may have a first side and a second side, wherein the first side faces the display elements. In some embodiments, the conductive antistatic coating is disposed between the seal and the encapsulation substrate. In some embodiments, the seal is an epoxy seal.
In some embodiments, the conductive antistatic coating includes at least one of a transparent conductive oxide, a conductive nanostructured web, a metal thin film, and a carbon-based thin film.
In some embodiments, the conductive antistatic coating is translucent or transparent.
In some embodiments, the display elements are spaced apart from the encapsulation substrate by a gas or vacuum gap. In some embodiments, the display elements are electromechanical system (EMS) display elements. For example, the display elements may be interferometric modulator (IMOD) display elements.
In some embodiments, the first side of the encapsulation substrate may include a recessed portion for receiving the display elements and a peripheral portion sealed to the transparent substrate. The conductive antistatic coating may be located on one or more of the peripheral portion and the recessed portion of the first side of the encapsulation substrate. In some embodiments, the conductive antistatic coating is continuous across the peripheral portion and the recessed portion of the first side of the encapsulation substrate. In some embodiments, the conductive antistatic coating includes conductive configuration features.
Another novel aspect of the subject matter described in this disclosure can be implemented in a display device that includes: an encapsulation substrate; a transparent substrate sealed to the encapsulation substrate by a seal; one or more display elements sealed between the transparent substrate and the encapsulation substrate and configured to produce an image that can be viewed via the transparent substrate; and a conductive antistatic coating on the encapsulation substrate, where the conductive antistatic coating faces the display elements and includes a plurality of conductive configuration features. In some embodiments, the conductive configuration features may have a height of at least 5 nm. In some embodiments, the conductive configuration features may have a height of at least 20 nm. In some embodiments, the conductive antistatic coating includes at least one of a transparent conductive oxide and a conductive nanostructured web.
Another innovative aspect of the subject matter described in this disclosure can be implemented in a method of manufacturing a display device. The method may include coating one or more surfaces of an encapsulation substrate with a conductive antistatic coating and forming a sealant material on the encapsulation substrate, including forming the sealant material on the conductive antistatic coating.
In some embodiments, the first side of the encapsulation substrate includes a recessed portion and a peripheral portion surrounding the recessed portion. Coating the one or more surfaces of the encapsulation substrate may include conformally coating the first side of the encapsulation substrate. In some embodiments, the method may include sealing the encapsulation substrate to a transparent substrate on which one or more display elements are disposed such that the one or more display elements are encapsulated by the encapsulation substrate.
Another novel aspect of the subject matter described in this disclosure can be implemented in a display device that includes: a transparent substrate having a display element thereon; an encapsulation substrate sealed to the transparent substrate, thereby encapsulating the display element; and a means for dissipating electrostatic discharge. In some embodiments, the means for dissipating electrostatic discharge includes means for reducing static friction in the display device.
The details of one or more implementations of the subject matter described in this disclosure are set forth in the accompanying drawings and the description below.
While the examples provided in this disclosure are primarily based on EMS and MEMS displays, the concepts provided herein may apply to other types of displays, such as liquid crystal displays, organic light emitting diode ("OLED") displays, and field emission displays. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
Brief description of the drawings
Figure 1 is an isometric view illustrating two adjacent IMOD display elements in a series or array of display elements of an interferometric modulator (IMOD) display device.
Figure 2 is a system block diagram illustrating an electronic device incorporating an IMOD-based display including a three element by three element array of IMOD display elements.
Figures 3A and 3B are schematic exploded perspective views of a portion of an electromechanical systems (EMS) package including an array of EMS elements and a backplate.
Figure 4 shows an example of a cross-sectional schematic diagram illustrating a display device including a conductive antistatic coating.
Figures 5A to 5G show examples of cross-sectional schematic diagrams illustrating arrangements of conductive antistatic coatings on encapsulation substrates.
Figure 6 shows an example of a flow chart illustrating a manufacturing process for an encapsulation substrate with a conductive antistatic coating.
Figure 7 shows an example of a flow chart illustrating a manufacturing process for a display device having an encapsulation substrate that includes a conductive antistatic coating.
Figures 8A and 8B show examples of schematic diagrams illustrating certain stages in the manufacture of a display device having an encapsulation substrate that includes a conductive antistatic coating.
Figures 9A and 9B show examples of schematic diagrams illustrating the response of a display device to mechanical shock.
Figures 9C and 9D show examples of schematic diagrams illustrating conductive antistatic films that include configuration features.
Figures 10A and 10B are system block diagrams illustrating a display device that includes a plurality of IMOD display elements.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed description
The following description is directed to certain implementations for the purposes of describing the novel aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that can be configured to display an image, whether the image is in motion (e.g., video) or stationary (e.g., still image), and whether the image is textual, graphical, or pictorial.
More specifically, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, television receivers, wireless devices, smartphones, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebook computers, smartbooks, tablet computers, printers, copiers, scanners, digital media players (e.g., MP3 players), video recorders, game consoles, wrist watches, clocks, calculators, television monitors, cameras, electronic reading devices (e.g., e-readers), computer monitors, automotive displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (e.g., the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwave ovens, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washing machines, dryers, washer/dryers, parking meters, packaging (e.g., in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (e.g., the display of images on a piece of jewelry or clothing), and a variety of EMS devices. The teachings herein also can be used in non-display applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, transformers, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes, and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.
The implementations described herein relate to display devices that include an antistatic coating. Antistatic coatings reduce damage due to electrostatic discharge (ESD). A display device may include a transparent substrate having display elements thereon and an encapsulation substrate covering the display elements. An antistatic coating on one or more surfaces of the encapsulation substrate may prevent or dissipate charge that may accumulate during manufacture or operation of the display device. The antistatic coating may be conductive and transparent, with examples of such coatings including transparent conductive oxides (TCOs), thin metal films, thin carbon films, and conductive nanostructured webs.
In some embodiments, the conductive antistatic coating may be applied to the peripheral region of the encapsulation substrate on which epoxy or other sealing material is placed. The conductive antistatic coating may be disposed between the encapsulation substrate and the sealing material. In some embodiments, the conductive antistatic coating on the encapsulation substrate faces the display elements disposed on the display glass or other transparent substrate.
In some embodiments, the conductive antistatic coating on the encapsulation substrate may include configuration features that reduce static friction.
Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. An antistatic coating on the encapsulation substrate of a display device may improve yield and lifetime by reducing failures due to ESD events during manufacture or operation of the display device. An antistatic coating on the encapsulation substrate may reduce damage to thin film transistors (TFTs) and other electrical components on the display glass or other transparent substrate during, for example, scribe-and-break processes and other back end of line (BEOL) operations. An antistatic coating on the encapsulation substrate may mitigate ESD damage due to contact between the display elements and the encapsulation substrate that can occur during operation of the display device. An antistatic coating that includes configuration features reduces contact between the display elements and the encapsulation substrate, thereby reducing damage due to such contact.
An example of a suitable EMS or MEMS device or apparatus to which the described implementations may apply is a reflective display device. Reflective display devices can incorporate interferometric modulator (IMOD) display elements that can be implemented to selectively absorb and/or reflect light incident thereon using principles of optical interference. IMOD display elements can include a partial optical absorber, a reflector that is movable with respect to the absorber, and an optical resonant cavity defined between the absorber and the reflector. In some implementations, the reflector can be moved to two or more different positions, which can change the size of the optical resonant cavity and thereby affect the reflectance of the IMOD. The reflectance spectra of IMOD display elements can create fairly broad spectral bands that can be shifted across the visible wavelengths to generate different colors. The position of the spectral band can be adjusted by changing the thickness of the optical resonant cavity. One way of changing the optical resonant cavity is by changing the position of the reflector with respect to the absorber.
Figure 1 is an isometric view illustrating two adjacent IMOD display elements in a series or array of display elements of an interferometric modulator (IMOD) display device. The IMOD display device includes one or more interferometric EMS (e.g., MEMS) display elements. In these devices, the interferometric MEMS display elements can be configured in either a bright or dark state. In the bright ("relaxed," "open," or "on," etc.) state, the display element reflects a large portion of the incident visible light. Conversely, in the dark ("actuated," "closed," or "off," etc.) state, the display element reflects little incident visible light. MEMS display elements can be configured to reflect predominantly at particular wavelengths of light, allowing for a color display in addition to black and white. In some implementations, by using multiple display elements, different intensities of color primaries and shades of gray can be achieved.
The IMOD display device can include an array of IMOD display elements which may be arranged in rows and columns.
Each display element in the array may comprise at least a pair of reflective and semi-reflective layers, such as a movable reflective layer (i.e., a movable layer, also referred to as a mechanical layer) and a fixed partially reflective layer (i.e., a stationary layer), the layers being positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap, cavity, or optical resonant cavity). The movable reflective layer may be moved between at least two positions. For example, in a first position (i.e., a relaxed position), the movable reflective layer may be positioned at a distance from the fixed partially reflective layer. In a second position (i.e., an actuated position), the movable reflective layer may be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers may interfere constructively and/or destructively, depending on the position of the movable reflective layer and the wavelength(s) of the incident light, producing either an overall reflective or non-reflective state for each display element. In some embodiments, the display element may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, absorbing and/or destructively interfering with light within the visible range. In some other embodiments, however, an IMOD display element may be in a dark state when unactuated and in a reflective state when actuated. In some embodiments, the introduction of an applied voltage may drive the display element to change state. In some other embodiments, an applied charge may drive the display element to change state.

The depicted portion of the array in Figure 1 includes two adjacent interferometric MEMS display elements in the form of IMOD display elements 12. In the display element 12 on the right (as illustrated), the movable reflective layer 14 is illustrated in an actuated position near, adjacent to, or touching the optical stack 16. The voltage Vref applied across the display element 12 on the right is sufficient to move and also maintain the movable reflective layer 14 in the actuated position. In the display element 12 on the left (as illustrated), the movable reflective layer 14 is illustrated in a relaxed position at a distance (which may be predetermined based on design parameters) from the optical stack 16, which includes a partially reflective layer. The voltage V0 applied across the display element 12 on the left is insufficient to cause actuation of the movable reflective layer 14 to the actuated position, such as that of the display element 12 on the right.

In Figure 1, the reflective properties of the IMOD display elements 12 are generally illustrated with arrows indicating light 13 incident upon the IMOD display elements 12 and light 15 reflecting from the display element 12 on the left. Most of the light 13 incident upon the display elements 12 may be transmitted through the transparent substrate 20 toward the optical stack 16. A portion of the light incident upon the optical stack 16 may be transmitted through the partially reflective layer of the optical stack 16, and a portion will be reflected back through the transparent substrate 20. The portion of the light 13 that is transmitted through the optical stack 16 may be reflected from the movable reflective layer 14 back toward (and through) the transparent substrate 20. The interference (constructive and/or destructive) between the light reflected from the partially reflective layer of the optical stack 16 and the light reflected from the movable reflective layer 14 determines, in part, the intensity of the light 15 reflected from the display element 12 on the viewing or substrate side of the device.
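As a point of reference, the interference and actuation behavior just described can be summarized with standard textbook idealizations, offered here only for orientation; they are not formulas from this disclosure, and the symbols $d$, $m$, $\lambda$, $A$, and $V$ are introduced purely for illustration. Treating the absorber and the movable reflector as the boundaries of an optical resonant cavity with gap $d$, reflection is strongly enhanced at wavelengths satisfying approximately

$$2d = m\lambda, \qquad m = 1, 2, 3, \ldots,$$

so moving the reflector (changing $d$) shifts the reflected spectral band across the visible range; for instance, a gap of $d \approx 2750\,\text{\AA}$ would place the $m = 1$ band near $\lambda \approx 5500\,\text{\AA}$, in the green. Likewise, modeling the two layers as a parallel-plate capacitor of area $A$ and separation $d$, the attractive electrostatic force under an applied voltage $V$ is approximately

$$F = \frac{\varepsilon_0 A V^2}{2 d^2},$$

which grows as the gap closes; this is why, once $V$ exceeds a threshold, the movable layer snaps to the actuated position and can be held there.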
In some embodiments, the transparent substrate 20 may be a glass substrate (sometimes referred to as a glass plate or panel). The glass substrate may be or may include, for example, a borosilicate glass, a soda lime glass, quartz, Pyrex, or another suitable glass material. In some embodiments, the glass substrate may have a thickness of 0.3 mm, 0.5 mm, or 0.7 mm, although in some embodiments the glass substrate may be thicker (e.g., tens of millimeters) or thinner (e.g., less than 0.3 mm). In some embodiments, a non-glass substrate, such as a polycarbonate, acrylic, polyethylene terephthalate (PET), or polyetheretherketone (PEEK) substrate, may be used. In such an embodiment, the non-glass substrate will likely have a thickness of less than 0.7 mm, although the substrate may be thicker depending on design considerations. In some embodiments, a non-transparent substrate, such as a metal foil or stainless steel-based substrate, may be used. For example, a reverse-IMOD-based display, which includes a fixed reflective layer and a movable layer that is partially transmissive and partially reflective, may be configured to be viewed from the side of the substrate opposite the display element 12 of Figure 1 and may be supported by a non-transparent substrate.

The optical stack 16 may include a single layer or several layers. The layer(s) may include one or more of an electrode layer, a partially reflective and partially transmissive layer, and a transparent dielectric layer. In some embodiments, the optical stack 16 is electrically conductive, partially transparent, and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto the transparent substrate 20. The electrode layer may be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer may be formed from a variety of materials, such as various metals (e.g., chromium and/or molybdenum), semiconductors, and dielectrics. The partially reflective layer may be formed of one or more layers of materials, and each of the layers may be formed of a single material or a combination of materials. In some embodiments, certain portions of the optical stack 16 may include a single semi-transparent thickness of metal or semiconductor that serves as both a partial optical absorber and an electrical conductor, while different, more electrically conductive layers or portions (e.g., of the optical stack 16 or of other structures of the display element) may serve to bus signals between IMOD display elements. The optical stack 16 may also include one or more insulating or dielectric layers covering one or more conductive layers or an electrically conductive/partially absorptive layer.

In some embodiments, at least some of the layer(s) of the optical stack 16 may be patterned into parallel strips and may form row electrodes in a display device, as described further below. One of ordinary skill in the art will understand that the term "patterned" is used herein to refer to masking as well as etching processes.
In some embodiments, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in the display device. The movable reflective layer 14 may be formed as a series of parallel strips of one or more deposited metal layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of supports, such as the illustrated posts 18, and an intervening sacrificial material located between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, may be formed between the movable reflective layer 14 and the optical stack 16. In some embodiments, the spacing between the posts 18 may be approximately 1 to 1000 μm, while the gap 19 may be less than about 10,000 angstroms (Å).

In some embodiments, each IMOD display element, whether in the actuated or relaxed state, may be considered as a capacitor formed by the fixed and moving reflective layers. As illustrated by the display element 12 on the left in Figure 1, when no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state with the gap 19 between the movable reflective layer 14 and the optical stack 16. However, when a potential difference (i.e., a voltage) is applied to at least one of a selected row and column, the capacitor formed at the intersection of the row and column electrodes at the corresponding display element becomes charged, and electrostatic forces pull the electrodes together. If the applied voltage exceeds a threshold, the movable reflective layer 14 may deform and move near or against the optical stack 16. A dielectric layer (not shown) within the optical stack 16 may prevent shorting and control the separation distance between the layers 14 and 16, as illustrated by the actuated display element 12 on the right in Figure 1. The behavior can be the same regardless of the polarity of the applied potential difference. Although a series of display elements in an array may be referred to in some instances as "rows" or "columns," one of ordinary skill in the art will readily understand that referring to one direction as a "row" and the other as a "column" is arbitrary. Restated, in some orientations rows can be considered columns, and columns considered rows. In some embodiments, the rows may be referred to as "common" lines and the columns may be referred to as "segment" lines, or vice versa. Furthermore, the display elements may be evenly arranged in orthogonal rows and columns (an "array"), or arranged in non-linear configurations, for example, having certain positional offsets with respect to one another (a "mosaic"). The terms "array" and "mosaic" may refer to either configuration. Thus, although the display is referred to as including an "array" or "mosaic," the elements themselves need not be arranged orthogonally to one another, or be disposed in an even distribution, and in any instance may include arrangements having asymmetric shapes and unevenly distributed elements.

Figure 2 is a system block diagram illustrating an electronic device incorporating an IMOD-based display that includes a three-element by three-element array of IMOD display elements. The electronic device includes a processor 21 that may be configured to execute one or more software modules.
In addition to executing an operating system, the processor 21 may be configured to execute one or more software applications, including a web browser, a telephone application, an email program, or any other software application.

The processor 21 may be configured to communicate with an array driver 22. The array driver 22 may include a row driver circuit 24 and a column driver circuit 26 that provide signals to, for example, a display array or panel 30. The cross section of the IMOD display device illustrated in Figure 1 is shown by the line 1-1 in Figure 2. Although Figure 2 illustrates a 3x3 array of IMOD display elements for the sake of clarity, the display array 30 may contain a very large number of IMOD display elements and may have a different number of IMOD display elements in rows than in columns, and vice versa.

Figures 3A and 3B are schematic exploded partial perspective views of a portion of an EMS package 91 including an array 36 of EMS elements and a backplate 92. Figure 3A is shown with two corners of the backplate 92 cut away to better illustrate portions of the backplate 92, while Figure 3B is shown without the corners cut away. The EMS array 36 may include a substrate 20, support posts 18, and a movable layer 14. In some embodiments, the EMS array 36 may include an array of IMOD display elements with one or more optical stack portions 16 on a transparent substrate, and the movable layer 14 may be implemented as a movable reflective layer.

The backplate 92 may be essentially planar or may have at least one contoured surface (e.g., the backplate 92 may be formed with recesses and/or protrusions). The backplate 92 may be made of any suitable material, whether transparent or opaque, conductive or insulating. Suitable materials for the backplate 92 include, but are not limited to, glass, plastic, ceramics, polymers, laminates, metals, metal foils, Kovar, and plated Kovar.

As shown in Figures 3A and 3B, the backplate 92 may include one or more backplate components 94a and 94b, which may be partially or wholly embedded in the backplate 92. As can be seen in Figure 3A, the backplate component 94a is embedded in the backplate 92. As can be seen in Figures 3A and 3B, the backplate component 94b is disposed within a recess 93 formed in a surface of the backplate 92. In some embodiments, the backplate components 94a and/or 94b may protrude from a surface of the backplate 92. Although the backplate component 94b is disposed on the side of the backplate 92 facing the substrate 20, in other embodiments the backplate components may be disposed on the opposite side of the backplate 92.

The backplate components 94a and/or 94b may include one or more active or passive electrical components, such as transistors, capacitors, inductors, resistors, diodes, switches, and/or integrated circuits (ICs) such as packaged, standard, or discrete ICs. Other examples of backplate components that may be used in various embodiments include antennas, batteries, and sensors (e.g., electrical, touch, optical, or chemical sensors), as well as thin-film deposited devices.

In some embodiments, the backplate components 94a and/or 94b may be in electrical communication with portions of the EMS array 36.
Such electrical communication with the backplate components 94a and/or 94b may be established using conductive structures, such as traces, bumps, posts, or vias, which may be formed on one or both of the backplate 92 or the substrate 20 and which may contact one another or other conductive components. For example, Figure 3B includes one or more conductive vias 96 on the backplate 92 that can be aligned with electrical contacts 98 extending upward from the movable layers 14 within the EMS array 36. In some embodiments, the backplate 92 also may include one or more insulating layers that electrically insulate the backplate components 94a and/or 94b from other components of the EMS array 36. In some embodiments in which the backplate 92 is formed from vapor-permeable materials, an interior surface of the backplate 92 may be coated with a vapor barrier (not shown).

The backplate components 94a and 94b may include one or more desiccants that act to absorb any moisture that may enter the EMS package 91. In some embodiments, a desiccant (or other moisture-absorbing material, such as a getter) may be provided separately from any other backplate components, for example as a sheet that is mounted to the backplate 92 (or in a recess formed therein) with an adhesive. Alternatively, the desiccant may be integrated into the backplate 92. In some other embodiments, the desiccant may be applied directly or indirectly over other backplate components, for example by spraying, screen printing, or any other suitable method.

In some embodiments, the EMS array 36 and/or the backplate 92 may include mechanical standoffs 97 to maintain a distance between the backplate components and the display elements, and thereby prevent mechanical interference between those components. In the embodiment illustrated in Figures 3A and 3B, the mechanical standoffs 97 are formed as posts protruding from the backplate 92 in alignment with the support posts 18 of the EMS array 36. Alternatively or in addition, mechanical standoffs, such as rails or posts, may be provided along the edges of the EMS package 91.

Although not illustrated in Figures 3A and 3B, a seal may be provided that partially or completely encircles the EMS array 36. Together with the backplate 92 and the substrate 20, the seal may form a protective cavity enclosing the EMS array 36. The seal may be a semi-hermetic seal, such as a conventional epoxy-based adhesive. In some other embodiments, the seal may be a hermetic seal, such as a thin-film metal weld or a glass frit. In some other embodiments, the seal may include polyisobutylene (PIB), polyurethane, liquid spin-on glass, solder, polymers, plastics, or other materials. In some embodiments, a reinforced sealant may be used to form mechanical standoffs.

In alternative embodiments, a seal ring may include an extension of either one or both of the backplate 92 or the substrate 20. For example, the seal ring may include a mechanical extension (not shown) of the backplate 92. In some embodiments, the seal ring may include a separate member, such as an O-ring or other annular member.

In some embodiments, the EMS array 36 and the backplate 92 are separately formed before being attached or coupled together. For example, the edge of the substrate 20 may be attached and sealed to the edge of the backplate 92, as discussed above. Alternatively, the EMS array 36 and the backplate 92 may be formed and joined together as the EMS package 91.
In some other embodiments, the EMS package 91 may be fabricated in any other suitable manner, such as by forming components of the backplate 92 by deposition over the EMS array 36.

Electrostatic discharge (ESD) in a display device can cause the device to fail. For example, ESD during manufacture or operation of a display device may result in failure of IMOD or other display elements. According to one aspect of this disclosure, a display device is provided that includes an encapsulation substrate sealed to a transparent substrate, one or more display elements enclosed between the transparent substrate and the encapsulation substrate, and a conductive antistatic coating on at least a portion of the encapsulation substrate. According to various embodiments, the antistatic coating may do one or both of the following: prevent electrostatic charge buildup, and dissipate static charges that may accumulate during manufacture or operation of the display device.

In some embodiments, the display device includes a gap or cavity between the display elements and the encapsulation substrate. The gap may be filled with air or another gas mixture, or may be evacuated. An IMOD display, for example, may contain an air gap between the IMOD pixels and the back glass. In some embodiments, the display elements may contact the encapsulation substrate, or the region between the display elements and the encapsulation substrate may be filled with a solid or liquid material. For example, a protective glass cover of an organic light emitting diode (OLED) or liquid crystal display (LCD) display device may contact an electrode or other layer of an optical stack. Although the examples given below focus on display devices having an air gap, the encapsulation substrates disclosed herein may be implemented in other display devices, such as OLED and LCD display devices. Moreover, the encapsulation substrates disclosed herein may be implemented in non-display devices. For example, an encapsulation substrate including a conductive antistatic coating as disclosed herein may be implemented in a non-display EMS device.

For a display device, the conductive antistatic coating disclosed herein is located chiefly on the encapsulation substrate, opposite the display glass or other transparent substrate through which the display is viewed. The display device may have an active matrix or passive matrix display. In some embodiments, the encapsulation substrates may be useful for active matrix displays by mitigating ESD damage to the thin film transistors (TFTs) of such display devices.

Figure 4 shows an example of a cross-sectional schematic diagram illustrating a display device including a conductive antistatic coating. The display device 100 includes an encapsulation substrate 102 and a transparent substrate 104. The encapsulation substrate 102 may also be characterized, according to various embodiments, as an encapsulation glass, back glass, recessed glass, or backplate. The transparent substrate 104 may be characterized, according to various embodiments, as a display glass or process glass. Display elements 106 are disposed on the transparent substrate 104. In some embodiments, the display elements 106 may be fabricated on the transparent substrate 104. Furthermore, in some embodiments, the display elements 106 are configured to produce an image that may be viewed through the transparent substrate 104.
In some embodiments, the display elements may be EMS display elements, such as the IMOD display elements 12 depicted in Figure 1. In some embodiments, the display elements may be organic light emitting diode (OLED) display elements or the like. Furthermore, in some embodiments, TFTs may be electrically connected to the display elements for active matrix control of the display.

The transparent substrate 104 may be, for example, a transparent substrate 20 as described above with respect to Figure 1, examples of which include glass substrates and non-glass polymeric substrates. The encapsulation substrate 102 may be, for example, a backplate 92 as described above with respect to Figures 3A and 3B. According to various embodiments, the encapsulation substrate 102 may be transparent or opaque, and electrically conductive or insulating. Suitable materials for the encapsulation substrate 102 include, but are not limited to, glass, plastic, ceramics, polymers, and laminates. In some embodiments, the encapsulation substrate 102 has one or more contoured surfaces; for example, the encapsulation substrate 102 shown in Figure 4 includes a recess 108 that accommodates the display elements 106, with the recess oriented toward the active display area 122 of the display device. In some other embodiments, the encapsulation substrate 102 may be essentially planar.

The encapsulation substrate 102 is sealed to the transparent substrate 104 by a seal 110 that contacts the transparent substrate 104 outside the active display area 122. The seal may be any appropriate seal, including an epoxy seal, a metal seal, or a glass frit. In some embodiments, the seal may include PIB, polyurethane, liquid spin-on glass, solder, polymers, plastics, or other materials.

The encapsulation substrate 102 has a front side 112, a back side 114, and sidewalls 116. The front side 112, which includes the recess 108 and a peripheral region 118 surrounding the recess 108, faces the side of the transparent substrate 104 on which the display elements 106 are disposed and is coated with a conductive antistatic coating 120. In some embodiments, the conductive antistatic coating 120 is transparent to facilitate alignment of the encapsulation substrate 102 with the transparent substrate 104. The conductive antistatic coating 120 may be any appropriate conductive material, including transparent conductive oxides, metal films, conductive carbon nanotube meshes, and the like. Further examples of conductive antistatic coatings are described below. In the example of Figure 4, the conductive antistatic coating conformally coats the front side 112, such that it is continuous across the front side 112, including across the planar portion of the peripheral region 118, the graded sidewalls 124 of the recess 108, and the planar surface of the recess 108. As discussed further below, in some embodiments a conformal coating, including coating across the recess and its graded walls, may facilitate charge dissipation.

Figures 5A to 5G show examples of cross-sectional schematic diagrams illustrating arrangements of conductive antistatic coatings on encapsulation substrates. In Figure 5A, the front side 112 of the encapsulation substrate 102 includes a recess 108 and a peripheral region 118. The conductive antistatic coating 120 is on the peripheral region 118 and not in the recess 108. A similar arrangement is shown in Figure 5B, in which the encapsulation substrate 102 is planar.
The encapsulation substrate 102 does not contain a recess, but has a region 128 configured to cover the display elements on a transparent substrate of a display device. The arrangements in the examples of Figures 5A and 5B may be useful for mitigating ESD damage due to scribe and break processes, while keeping conductive material out of the active display area of a display device that contains the encapsulation substrate 102. This is discussed further below with respect to Figures 8A and 8B.

Figure 5C shows an example of an encapsulation substrate 102 in which the conductive antistatic coating 120 is on the planar surface of the recess 108 and on the peripheral region 118 of the encapsulation substrate 102, but not on the graded sidewalls 124 of the recess 108. The conductive antistatic coating 120 on the planar surface of the recess 108 may face the display elements of a display device, and may be the same or a different material than the conductive antistatic coating 120 on the peripheral region 118. Figure 5D shows an example in which the conductive antistatic coating 120 is within the recess 108 of the encapsulation substrate 102 and not on the peripheral region 118 of the encapsulation substrate 102. Embodiments in which the conductive antistatic coating faces the display elements can be useful for mitigating ESD damage due to contact of the display elements with the encapsulation substrate from vibration, shock, or user interaction. This is discussed further below with respect to Figures 9A and 9B.

In some embodiments, one or both of the back side and the sidewalls of the encapsulation substrate of a display device are coated with a conductive antistatic coating. Figure 5E shows an example in which the front side 112, back side 114, and sidewalls 116 of the encapsulation substrate 102 are coated with a conductive antistatic coating 120. In Figure 5F, the sidewalls 116 of the encapsulation substrate 102 are coated with a conductive antistatic coating 120. Figure 5G shows an example in which the back side 114 of the encapsulation substrate 102 is coated with a conductive antistatic coating 120. Embodiments in which the sidewalls are coated may reduce ESD damage during handling. In some embodiments, a sidewall coating may provide a conductive pathway away from the front side of the encapsulation substrate to facilitate charge dissipation.

According to various embodiments, the conductive antistatic coating may or may not be grounded. In some embodiments, the conductive antistatic coating may be electrically connected to other conductive components of the display device. For example, the conductive antistatic coating may be electrically connected to a metal foil on a surface of the encapsulation substrate, to conductive vias extending through the encapsulation substrate (e.g., the conductive vias 96 in Figure 3B), or to metal routing on a surface of the transparent substrate. In some embodiments, the conductive antistatic coating may be connected to a ground plane. In some embodiments, the conductive antistatic coating may be electrically connected to devices, circuits, or other electrically active components on the transparent substrate via a metal seal that seals the encapsulation substrate to the transparent substrate.

While Figures 4 and 5A to 5G provide examples of various arrangements of conductive antistatic coatings on encapsulation substrates, other arrangements are possible.
For example, the conductive antistatic coating may be on the back side and sidewalls, but not on the front side, of an encapsulation substrate.

The conductive antistatic coating may be formed of any appropriate conductive material that is sufficiently conductive to dissipate accumulated charge. The antistatic coating may be characterized in terms of its sheet resistance. The sheet resistance appropriate for the material may depend on the amount of charge to be dissipated; an antistatic coating configured to dissipate charge accumulated from larger surfaces rubbing against each other may have a very low sheet resistance, while charge accumulated over smaller surface areas may be dissipated by more resistive materials.

In general, antistatic coating materials have sheet resistances of less than 10^6 ohms per square (Ω/sq). In some embodiments, the conductive antistatic material may have a sheet resistance between about 1 Ω/sq and 200 Ω/sq, or between about 40 Ω/sq and 200 Ω/sq. For example, the conductive antistatic coating may be an ITO layer having a sheet resistance of about 50 Ω/sq. In some embodiments, relatively conductive materials having sheet resistances of less than 1 Ω/sq, such as thin carbon films or metal films, may be used. Moreover, in some embodiments, antistatic coatings characterized as dissipative rather than conductive may be used. A dissipative material has a sheet resistance between 10^6 Ω/sq and 10^9 Ω/sq.

As indicated above, according to various embodiments, the antistatic coating may be transparent or opaque. In some embodiments, transparency is relevant not to the display characteristics of the display device, but to facilitating alignment of the display glass or other transparent substrate with the encapsulation substrate. In some embodiments, a transparent conductive antistatic coating may include a transparent conductive oxide (TCO). For example, the conductive antistatic coating may include indium tin oxide (ITO) or a doped zinc oxide (e.g., aluminum-doped zinc oxide (AZO)). In some embodiments, a transparent conductive antistatic coating may include a transparent conductive polymer. For example, the conductive antistatic coating may include at least one of polyaniline, polypyrrole, a polythiophene such as poly(3,4-ethylenedioxythiophene), or any other inherently conductive or semiconductive polymer. In some embodiments, a transparent conductive antistatic coating may include a transparent conductive ink. In some embodiments, conductive nanowire or nanotube meshes may be used. Examples of conductive nanostructures include silver nanowires and carbon nanotubes. An example of a usable transparent conductive ink containing silver nanowires is ClearOhm™ from Cambrios Technologies.

The thickness of the TCO or other transparent conductive material employed may depend on its conductivity and transparency. The conductivity and transparency of ITO and other transparent conductive materials are inversely related, with increased oxide content in ITO resulting in a more transparent, less conductive material. For a particular thickness, a TCO material may have a range of sheet resistances and transparencies depending on the relative amounts of its constituent components. Example thicknesses of TCO and other transparent conductive materials are between about … and …; thicknesses in these ranges may be used depending on the sheet resistance of the material.
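For orientation, sheet resistance relates a film's bulk resistivity $\rho$ to its thickness $t$ through the standard relation (a general materials formula, not one specific to this disclosure; the numerical values below are assumed, representative figures):

$$R_s = \frac{\rho}{t}$$

Taking a typical ITO resistivity of $\rho \approx 2 \times 10^{-4}\,\Omega\cdot\text{cm}$, a film of thickness $t = 40\ \text{nm}$ ($4 \times 10^{-6}\ \text{cm}$) gives $R_s = (2 \times 10^{-4}) / (4 \times 10^{-6}) = 50\ \Omega/\text{sq}$, consistent with the 50 Ω/sq ITO example above. Halving the thickness doubles the sheet resistance, which is one side of the thickness/transparency trade-off discussed next.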
In some embodiments, because the transparency is not relied upon for display, a less transparent TCO film, thinner than a film that would be used on the transparent substrate serving as the display glass, may be used. For example, a TCO film that is less transparent and relatively more conductive compared to a typical TCO film, with a thickness between about … and …, may be used. In some embodiments, a transparent conductive antistatic coating may include a metal film thin enough to be sufficiently transparent for alignment purposes. For example, a thin metal film may be transparent enough that alignment marks on the encapsulation substrate can be read by an alignment laser or other alignment apparatus. Examples of such metals include aluminum, molybdenum, copper, and the like. For example, in some embodiments, an aluminum film between about … and … may be used to provide a conductive and transparent coating. Carbon-based conductive films, such as graphite or carbon paste films, may also be used; at relatively small thicknesses, a carbon-based film can be sufficiently transparent for alignment.

In embodiments in which optical detection of alignment marks is not a concern, the conductive antistatic film may be opaque or transparent. Moreover, in embodiments in which the front side of the encapsulation substrate is uncoated or only partially coated, the alignment marks may be located in an uncoated region of the encapsulation substrate. In these embodiments as well, the conductive antistatic film may be opaque or transparent.

As discussed further below with respect to Figures 9C and 9D, in some embodiments the conductive antistatic coating includes configuration features or an inherent roughness that provides reduced static friction as well as antistatic properties. These configuration features can be as small as, for example, several nanometers to hundreds of nanometers. In some embodiments, the conductive antistatic coating may include, for example, a conductive nanowire mesh on top of a TCO coating. Conductive nanowires are examples of configuration features that prevent or reduce static friction.

Figure 6 shows an example of a flow diagram illustrating a manufacturing process for an encapsulation substrate having a conductive antistatic coating. Any of the operations of the manufacturing process may be performed at the wafer or panel level of a batch process prior to singulation, or on an individual encapsulation substrate after singulation, at any appropriate point.

The process 200 begins at block 210 with forming one or more recesses, as desired, in the encapsulation substrate. According to various embodiments, block 210 may be performed at the panel or wafer level, in which recesses for the encapsulation substrates of multiple display devices are formed. Forming the recesses may involve any appropriate process, including, but not limited to, wet etching or sandblasting, or a combination of these techniques. For example, a glass encapsulation substrate may be etched using a hydrogen fluoride-based solution. In embodiments in which a planar encapsulation substrate is used, block 210 is not performed. In some embodiments, the recesses are formed so as to facilitate conformal coating of the conductive antistatic film in the recess. The recess may have non-vertical walls, such as the graded sidewalls 124 in the example of Figure 4. According to various embodiments, the walls may be sloped linear walls or curved walls.
In embodiments in which the coating is not formed on the recess sidewalls (e.g., in the example of Figure 5C), the sidewalls may be vertical or nearly vertical to facilitate selective coating of the planar portions of the encapsulation substrate.

The process 200 continues at block 220 with optionally cleaning the surface on which the conductive antistatic film is to be coated. Whether cleaning is performed can depend on the method by which the one or more recesses are formed; for example, a sandblasted surface may be cleaned to remove particles prior to coating.

The process 200 continues at block 230 with coating one or more surfaces of the encapsulation substrate with a conductive antistatic coating. As discussed above with respect to Figures 5A to 5G, one or more of the front side, back side, and sidewalls may be coated. When a surface is coated, all or a portion of that surface may be coated. For example, a ring of conductive antistatic material may be patterned on the front side of an encapsulation substrate. In some embodiments, block 230 may be performed before block 210. For example, to form a conductive antistatic coating on the peripheral region but not in the recess on the front side of an encapsulation substrate, the coating may be formed prior to forming the recess.

Any appropriate coating technique may be used, including one or more of the following: electron beam coating processes, sputter deposition or other physical vapor deposition (PVD) processes, vacuum coating processes, chemical vapor deposition (CVD) processes, atomic layer deposition (ALD) processes, solution-based coating processes, evaporation processes, casting processes, dispensing processes, squeegee processes, or spin coating processes. The coating process may depend on the material to be coated and on whether the coating is patterned or conformal. Conformal deposition of ITO or other TCO materials may involve, for example, vacuum deposition processes, electron beam coating, or evaporation processes. Forming a patterned coating may involve screen printing, deposition over a lift-off mask, or the use of photoresist. In some embodiments, a coating may be formed by a maskless direct-write process (e.g., dispensing or inkjet printing). Conformal deposition of thin metal films may involve, for example, PVD, ALD, or CVD processes.

The coating technique may be determined in part by the amount of roughness desired in the conductive antistatic coating. Vapor deposition techniques tend to produce highly uniform films having, for example, less than 1 nm root mean square (RMS) surface roughness. Wet coating techniques, such as spraying of particle dispersions, provide films having higher roughness. For example, a sprayed coating of 10 nm TCO particles may have a roughness of about 10 nm. Thus, a desired roughness can be produced by using conductive nanoparticles of an appropriate size. As discussed further below with respect to Figures 9C and 9D, in some embodiments a conductive antistatic coating having nanoscale or greater roughness may act as a film that reduces static friction.

The process 200 continues at block 240 with forming a sealant for one or more display devices on the encapsulation substrate. This may involve, for example, dispensing epoxy resin in one or more sealing areas on the encapsulation substrate. For example, epoxy resin may be dispensed around each recess on the encapsulation substrate.
In some embodiments, a glass frit, metal seal ring, or solder material may be formed instead.

As discussed above with respect to Figure 4, block 240 may involve forming the sealant on the conductive antistatic coating. For example, epoxy resin may be dispensed onto an ITO layer coating the front side of the encapsulation substrate. In some embodiments, the conductive antistatic coating provides a more uniform surface, allowing epoxy or other types of seals to adhere more readily than to the bare surface of the encapsulation substrate.

As indicated above, any of the operations of the manufacturing process may be performed at the wafer or panel level. Forming a coating on the front or back side of the encapsulation substrates may be performed in a single operation (or two operations for double-sided coating) for the encapsulation substrates of multiple display devices. Forming a coating on the sidewalls of the encapsulation substrates, however, generally involves first singulating the wafer- or panel-level encapsulation substrate into individual units so that the sidewalls are accessible.

Figure 7 shows an example of a flow diagram illustrating a manufacturing process for a display device having an encapsulation substrate that includes a conductive antistatic coating. The process 300 begins at block 310 with providing an encapsulation substrate for one or more display devices, where the encapsulation substrate includes a conductive antistatic coating. Block 310 may involve providing, for example, an encapsulation substrate as described above with respect to Figure 6.

The process 300 continues at block 320 with providing a transparent substrate having thereon display elements and contact pads for one or more display devices. The transparent substrate may additionally have thereon TFTs, metal routing, and other components for the display elements or otherwise associated with the display device. For example, a black mask for each display device may be on the transparent substrate.

The process 300 continues at block 330 with aligning the encapsulation substrate with the transparent substrate. As indicated above, in some embodiments this may involve the use of alignment marks on the encapsulation substrate, the reading of which a transparent conductive antistatic coating may facilitate.

The process continues at block 340 with sealing the encapsulation substrate to the transparent substrate such that the display elements for the one or more display devices are encapsulated. Block 340 may involve one or more of applying pressure and exposing the epoxy or other sealant material to heat or UV radiation to cure the material. The process 300 continues at block 350 with scribing and breaking the encapsulation substrate to expose the contact pads on the transparent substrate. Standard scribe and break processes can be used. Further processing may be performed, for example, singulating the joined encapsulation substrate and transparent substrate to form individual display devices. As discussed further below, in some embodiments the conductive antistatic coating mitigates ESD events that may occur during such processing, for example during the scribe and break operations.

Figures 8A and 8B show examples of schematic diagrams illustrating certain stages in the manufacture of a display device having an encapsulation substrate that includes a conductive antistatic coating. Figure 8A shows an example of a display device 100 that includes an encapsulation substrate 102 sealed to a transparent substrate 104 by a seal 110.
Display elements 106 are disposed on the transparent substrate 104. Metal routing on the transparent substrate 104 and contact pads 130 provide electrical connection to the display elements 106. The encapsulation substrate 102 includes a conductive antistatic coating 120. A scribe line 132 indicates the position at which the encapsulation substrate 102 is to be scribed. Figure 8B shows the display device 100 after the encapsulation substrate 102 has been broken along the scribe line 132 of Figure 8A. This exposes the contact pads 130 on the transparent substrate 104 so that they are available for electrical connection. In some embodiments, the conductive antistatic coating 120 mitigates ESD events that occur during one or both of the scribe and break operations. This can be useful for active matrix displays, in which the TFTs could otherwise be damaged by an unmitigated ESD event. In some embodiments, the encapsulation substrate or display device may be exposed to an ion beam at various stages of the manufacturing processes illustrated in Figures 8A and 8B to facilitate charge dissipation.

In the example of Figures 8A and 8B, the conductive antistatic coating 120 does not extend into the recess of the encapsulation substrate 102. In some embodiments, however, it may be useful to have a conformal and contiguous conductive antistatic coating that extends into the recess to facilitate charge dissipation. Examples of such conductive antistatic coatings are described above with respect to Figures 4 and 5E.

In some embodiments, the conductive antistatic film may mitigate damage from ESD events attributable to contact of the display elements with the encapsulation substrate. Such events may occur, for example, as a result of mechanical shock to the display device due to dropping, point contact loads, and the like. The possibility of the display elements contacting the encapsulation substrate increases with the size of the display device. By way of example, the transparent substrate 104 may be 5 to 10 inches on the diagonal, with the distance between the display elements 106 and the encapsulation substrate 102 on the order of a few hundred microns.

Figures 9A and 9B show examples of schematic diagrams illustrating the response of a display device to mechanical shock. In Figure 9A, the display device 100 includes an encapsulation substrate 102 and a transparent substrate 104. Display elements 106 are disposed on the transparent substrate 104. A conductive antistatic coating 120 is on the encapsulation substrate 102, which includes a recess 108 accommodating the display elements 106 on the transparent substrate 104. If the display device 100 is large enough, a load on the transparent substrate 104 may cause the transparent substrate 104 to bow, as illustrated in Figure 9B. Point contact, dropping, or other loads may result in a reduction of the distance between the display elements 106 and the encapsulation substrate 102. This reduction in distance can result in static discharge. The conductive antistatic coating 120 mitigates damage due to such discharge. In the example of Figure 9B, the conductive antistatic coating is not continuous from the recess of the encapsulation substrate 102 to the peripheral region. In alternative embodiments, the conductive antistatic coating may be contiguous and conformal, as described above.
This can help facilitate charge dissipation.

In some embodiments, the conductive antistatic coating 120 has properties that reduce static friction, reducing adhesion of the display elements to the encapsulation substrate 102 and mitigating damage due to contact and static friction. The configuration features may have a height at least an order of magnitude smaller than the display element size, and in some cases at least two orders of magnitude smaller than the display element size. For example, if the IMOD pixel size is several tens of microns, the configuration features may have a height of no more than 1 micron, or no more than 100 nanometers.

Figures 9C and 9D show examples of schematic diagrams illustrating conductive antistatic films that include configuration features. In Figure 9C, a top view of a portion of a conductive antistatic coating 120 on an encapsulation substrate 102 is depicted. The conductive antistatic coating 120 is patterned such that it forms configuration features 126 that protrude from the surface of the encapsulation substrate 102. In Figure 9D, a cross-sectional view of a portion of a conductive antistatic coating 120 is depicted. The conductive antistatic coating is not patterned, but includes configuration features 126. The configuration features 126 may be formed, for example, by patterning a deposited film, by depositing a conformal conductive film on a layer (e.g., an insulating layer) that includes configuration features, or by using a deposition technique and material that includes nanoscale roughness. In the examples of Figures 9C and 9D, the configuration features 126 are conductive. In alternative embodiments, the configuration features may include conductive or insulating features on a continuous conductive antistatic coating.

According to various embodiments, the configuration features 126 may have heights of at least 5 nm, at least 20 nm, or at least 100 nm. As discussed above, in some embodiments the configuration features 126 may be introduced by using a conductive antistatic coating having a nanoscale RMS roughness. Examples include wet coating solutions of TCO particles having diameters between 5 and 20 nanometers, and of nanowire meshes having wire diameters between 5 and 100 nanometers. In some embodiments, the configuration features may be introduced by patterning the conductive antistatic material on the encapsulation substrate. For example, a patterned graphite layer may be screen printed on the encapsulation substrate to form the conductive antistatic coating. The graphite or other conductive material is spatially patterned so that electrical connectivity is maintained to dissipate static charge, while the potential contact area between the display elements and the conductive antistatic film in the event of mechanical shock is reduced. In another example, an insulating material may be patterned to form protrusions, with a continuous conductive antistatic film applied above or below the protrusions.

Figures 10A and 10B are system block diagrams illustrating a display device 40 that includes a plurality of IMOD display elements. The display device 40 may be, for example, a smartphone, or a cellular or mobile telephone. However, the same components of the display device 40, or slight variations thereof, are also illustrative of various types of display devices, such as televisions, computers, tablets, e-readers, hand-held devices, and portable media devices.

The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46.
The housing 41 may be formed from any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to, plastic, metal, glass, rubber, and ceramic, or a combination thereof. The housing 41 may include removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.

The display 30 may be any of a variety of displays as described herein, including a bistable or analog display. The display 30 also may be configured to include a flat-panel display, such as a plasma, EL, OLED, STN LCD, or TFT LCD display, or a non-flat-panel display, such as a CRT or other tube device. In addition, the display 30 may include an IMOD-based display, as described herein.

The components of the display device 40 are schematically illustrated in Figure 10B. The display device 40 includes a housing 41 and may include additional components at least partially enclosed therein. For example, the display device 40 includes a network interface 27 that includes an antenna 43, which may be coupled to a transceiver 47. The network interface 27 may be a source of image data that may be displayed on the display device 40. Accordingly, the network interface 27 is one example of an image source module, but the processor 21 and the input device 48 also may serve as image source modules. The transceiver 47 is connected to a processor 21, which is connected to conditioning hardware 52. The conditioning hardware 52 may be configured to condition a signal (e.g., to filter or otherwise manipulate a signal). The conditioning hardware 52 may be connected to the speaker 45 and the microphone 46. The processor 21 also may be connected to the input device 48 and a driver controller 29. The driver controller 29 may be coupled to a frame buffer 28 and to an array driver 22, which in turn may be coupled to the display array 30. One or more elements of the display device 40, including elements not specifically depicted in Figure 10B, may be configured to function as a memory device and be configured to communicate with the processor 21. In some embodiments, a power supply 50 may provide power to substantially all of the components in the particular display device 40 design.

The network interface 27 includes the antenna 43 and the transceiver 47 so that the display device 40 may communicate with one or more devices over a network. The network interface 27 also may have some processing capabilities to relieve, for example, the data processing requirements of the processor 21. The antenna 43 may transmit and receive signals. In some embodiments, the antenna 43 transmits and receives RF signals according to an IEEE 16.11 standard (including IEEE 16.11(a), (b), or (g)) or an IEEE 802.11 standard (including IEEE 802.11a, b, g, or n), and further implementations thereof. In some other embodiments, the antenna 43 transmits and receives RF signals according to the Bluetooth standard.
In the case of a cellular telephone, the antenna 43 may be designed to receive Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband CDMA (W-CDMA), Evolution-Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G, 4G, or 5G technology. The transceiver 47 may pre-process the signals received from the antenna 43 so that they may be received by, and further manipulated by, the processor 21. The transceiver 47 also may process signals received from the processor 21 so that they may be transmitted from the display device 40 via the antenna 43.

In some embodiments, the transceiver 47 may be replaced by a receiver. In addition, in some embodiments, the network interface 27 may be replaced by an image source, which may store or generate image data to be sent to the processor 21. The processor 21 may control the overall operation of the display device 40. The processor 21 receives data, such as compressed image data, from the network interface 27 or an image source, and processes the data into raw image data or into a format that can readily be processed into raw image data. The processor 21 may send the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data generally refers to information that identifies the image characteristics at each location within an image. For example, such image characteristics may include color, saturation, and gray-scale level.

The processor 21 may include a microcontroller, CPU, or logic unit to control operation of the display device 40. The conditioning hardware 52 may include amplifiers and filters for transmitting signals to the speaker 45 and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the display device 40, or may be incorporated within the processor 21 or other components.

The driver controller 29 may take the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and may re-format the raw image data appropriately for high-speed transmission to the array driver 22. In some embodiments, the driver controller 29 may re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone integrated circuit (IC), such controllers may be implemented in many ways.
For example, controllers may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.

The array driver 22 may receive the formatted information from the driver controller 29 and may re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of display elements.
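To make the data path just described concrete, the following is a minimal illustrative sketch, not code from this disclosure: the voltage levels, threshold, and function names are hypothetical, and a real driver controller would be implemented in hardware. It shows one way raw image data could be re-formatted into a raster-ordered stream of row-select and column levels for a bistable, IMOD-like x-y matrix:

```python
# Illustrative sketch only: a toy model of a driver controller re-formatting
# raw image data into raster order for an array driver. V_SELECT, V_HOLD,
# and THRESHOLD are hypothetical values, not taken from this disclosure.
from typing import Iterator, List, Tuple

V_SELECT = 10.0   # hypothetical row-select voltage
V_HOLD = 5.0      # hypothetical column bias leaving a pixel below the actuation threshold
THRESHOLD = 128   # hypothetical gray-scale cutoff for a bistable element

def raster_stream(frame: List[List[int]]) -> Iterator[Tuple[int, float, List[float]]]:
    """Yield (row_index, row_voltage, column_voltages), one entry per row.

    Each pixel of a bistable element is quantized to an actuated (dark) or
    relaxed (reflective) state. A bright pixel gets V_HOLD on its column, so
    the potential difference across it stays small and it remains relaxed;
    a dark pixel gets 0 V, so the full select voltage appears across it and
    it actuates.
    """
    for r, row in enumerate(frame):
        columns = [V_HOLD if pixel >= THRESHOLD else 0.0 for pixel in row]
        yield r, V_SELECT, columns

if __name__ == "__main__":
    # A 3x3 frame, echoing the 3x3 array of display elements in Figure 2.
    frame = [[200, 10, 10],
             [10, 200, 10],
             [10, 10, 200]]
    for row_index, v_row, v_columns in raster_stream(frame):
        print(f"row {row_index}: select at {v_row} V, columns -> {v_columns}")
```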
The interchangeability of hardware and software has been described generally in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system. The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular steps and methods may be performed by circuitry that is specific to a given function. In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their structural equivalents), or in any combination thereof. Implementations of the subject matter described in this specification also may be implemented as one or more computer programs (i.e., one or more modules of computer program instructions) encoded on a computer storage medium for execution by, or to control the operation of, a data processing apparatus. Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Accordingly, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure and the principles and novel features disclosed herein. Additionally, a person having ordinary skill in the art will readily appreciate that the terms "upper" and "lower" are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of, for example, IMOD display elements as implemented. Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, a person having ordinary skill in the art will readily recognize that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the illustrative example processes. For example, one or more additional operations can be performed simultaneously, before, or after any of the illustrated operations. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. |
Systems, devices and methods are described including using a Human Interface Device (HID) source device to configure an HID sink device to provide interface data such as multi-touch data. The HID source device may enable a data module in the HID sink device to generate the interface data. After receiving the interface data, the HID source device may generate output data and provide the output data to the HID sink device. |
WHAT IS CLAIMED:
1. A method, comprising: at a Human Interface Device (HID) source device: configuring, over an auxiliary (AUX) channel, an HID sink device to provide interface data, the HID sink device including a data module to generate the interface data; enabling the data module over the AUX channel; and receiving the interface data over the AUX channel.
2. The method of claim 1, wherein the interface data comprises multi-touch data.
3. The method of claim 1, wherein the AUX channel comprises a fast auxiliary (F)AUX channel.
4. The method of claim 1, further comprising, at the HID source device: generating output data in response to the interface data; and providing the output data to the HID sink device.
5. The method of claim 1, wherein configuring the HID sink device to provide interface data comprises configuring a data access method.
6. The method of claim 5, wherein the data access method comprises at least one of an interrupt based data access method, a polled data access method, or an interleaved data access method.
7. The method of claim 1, wherein configuring the HID sink device to provide interface data and enabling the data module comprises a single write operation.
8. The method of claim 1, wherein receiving the interface data comprises: at the HID source device: reading an interrupt reason; reading the interface data in response to reading the interrupt reason; and clearing the interrupt reason.
9. The method of claim 1, wherein receiving the interface data comprises: at the HID source device: requesting a report; polling for a report availability flag; reading the interface data in response to detecting the report availability flag; and clearing the report availability flag.
10. The method of claim 1, wherein receiving the interface data comprises obtaining the interface data from at least one register of the HID sink device.
11. An article comprising a computer program product having stored therein instructions that, if executed, result in: at a Human Interface Device (HID) source device: configuring, over an auxiliary (AUX) channel, an HID sink device to provide interface data, the HID sink device including a data module to generate the interface data; enabling the data module over the AUX channel; and receiving the interface data over the AUX channel.
12. The article of claim 11, wherein the interface data comprises multi-touch data.
13. The article of claim 11, wherein the AUX channel comprises a fast auxiliary (F)AUX channel.
14. The article of claim 11, further comprising, at the HID source device: generating output data in response to the interface data; and providing the output data to the HID sink device.
15. The article of claim 11, wherein configuring the HID sink device to provide interface data comprises configuring a data access method.
16. The article of claim 15, wherein the data access method comprises at least one of an interrupt based data access method, a polled data access method, or an interleaved data access method.
17. The article of claim 11, wherein configuring the HID sink device to provide interface data and enabling the data module comprises a single write operation.
18. The article of claim 11, wherein receiving the interface data comprises: at the HID source device: reading an interrupt reason; reading the interface data in response to reading the interrupt reason; and clearing the interrupt reason.
19.
The article of claim 11, wherein receiving the interface data comprises: at the HID source device: requesting a report; polling for a report availability flag; reading the interface data in response to detecting the report availability flag; and clearing the report availability flag.
20. The article of claim 11, wherein receiving the interface data comprises obtaining the interface data from at least one register of the HID sink device.
21. An apparatus, comprising: a Human Interface Device (HID) sink device including a data module to capture interface data; and logic in the HID sink device to provide the interface data and to signal the availability of the interface data over an auxiliary (AUX) channel using one of an interrupt or a flag.
22. The apparatus of claim 21, wherein the HID sink device further comprises a plurality of register regions.
23. The apparatus of claim 22, wherein the logic is to provide the interface data by storing the interface data in at least one of the plurality of register regions.
24. The apparatus of claim 22, wherein the logic is to signal the availability of the interface data using the flag by setting a bit in at least one of the plurality of register regions.
25. The apparatus of claim 21, wherein the logic is to provide the interface data in response to being configured by an HID source device.
26. A system comprising: a Human Interface Device (HID) sink device including a data module to capture interface data; and an HID source device communicatively coupled to the HID sink device by an auxiliary (AUX) channel, wherein the HID source device is to use the AUX channel to: configure the HID sink device to provide the interface data; enable the data module; and receive the interface data.
27. The system of claim 26, the HID source device further to: generate output data in response to the interface data; and provide the output data to the HID sink device over the AUX channel.
28. The system of claim 26, wherein to configure the HID sink device to provide interface data the HID source device is to configure a data access method comprising at least one of an interrupt based data access method, a polled data access method, or an interleaved data access method.
29. The system of claim 28, wherein to receive the interface data using the interrupt based data access method the HID source device is to: read an interrupt reason; read the interface data in response to reading the interrupt reason; and clear the interrupt reason.
30. The system of claim 28, wherein to receive the interface data using the polled data access method the HID source device is to: request a report; poll for a report availability flag; read the interface data in response to detecting the report availability flag; and clear the report availability flag. |
MULTI-TOUCH INTERFACE SCHEMES

RELATED APPLICATIONS

This application claims priority to and benefit of U.S. Provisional Patent Application No. 61/551,712, filed on October 26, 2011.

BACKGROUND

Recently developed display interface schemes, such as the DisplayPort® (DP) display interface protocol or standard (see, e.g., DisplayPort® Version 1.2 (December 2009)), are designed to replace older standards such as Video Graphics Array (VGA) and Digital Video Interface (DVI), and rely on packetized data transmission similar to other data communication protocols such as Ethernet, USB, and PCI Express. For example, DP supports both external (e.g., box-to-box) and internal (e.g., laptop display panel) display connections, and unlike the DVI and Low-Voltage Differential Signaling (LVDS) standards, where differential pairs transmit pixel data and a clock signal, the DP protocol is based on the transmission of small data packets with an embedded clock. The use of data packets also allows interface standards such as DP to be extensible by permitting additional features to be added without significant changes to the interface itself. Embedded DisplayPort® (eDP) is a companion standard (see, e.g., Embedded DisplayPort® Version 1.3 (February 2011)) to the DP standard that provides a standardized display panel interface for internal connections (e.g., between a graphics processor and a notebook display panel) and is designed to replace the LVDS standard.

BRIEF DESCRIPTION OF THE DRAWINGS

The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. In the figures: FIGS. 1 and 2 are illustrative diagrams of example display interface systems; FIGS. 3-5 illustrate example register layouts; FIGS. 6-12 illustrate example display interface processes; FIG. 13 is an illustrative diagram of an example data scheme; FIG. 14 is an illustrative diagram of an example system; and FIG. 15 is an illustrative diagram of an example device, all arranged in accordance with at least some implementations of the present disclosure.

DETAILED DESCRIPTION

One or more embodiments or implementations are now described with reference to the enclosed figures. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be employed without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that techniques and/or arrangements described herein may also be employed in a variety of other systems and applications other than what is described herein. While the following description sets forth various implementations that may be manifested in architectures such as notebook or desktop computers for example, implementation of the techniques and/or arrangements described herein is not restricted to particular architectures and/or computing systems and may be implemented by any architecture and/or computing system for similar purposes.
For instance, various architectures employing, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronic (CE) devices such as set top boxes, smart phones, etc., may implement the techniques and/or arrangements described herein. Further, while the following description may set forth numerous specific details such as logic implementations, types and interrelationships of system components, logic partitioning/integration choices, etc., claimed subject matter may be practiced without such specific details. In other instances, some material such as, for example, control structures and full software instruction sequences, may not be shown in detail in order not to obscure the material disclosed herein.

The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The material disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others.

References in the specification to "one implementation", "an implementation", "an example implementation", etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, or characteristic is described in connection with an implementation, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations whether or not explicitly described herein.

In accordance with the present disclosure, the phrase "Human Interface Device" (HID) as used herein may describe devices used to control the operation of computer systems (e.g., any device consistent with the HID class definition as provided by the Universal Serial Bus Implementers Forum (USB-IF); see "Device Class Definition for Human Interface Devices (HID)", Firmware Specification, Version 1.11, published June 27, 2001; hereinafter the "HID 1.11" specification). Further, the phrase "touch data" may be used synonymously with the phrase "multi-touch data" and, as used herein, may refer to data that describes one or more locations where contact with a touch sensitive surface (e.g., a touch screen display) has occurred. In addition, the phrases "touch sink" and "touch sink device" may be used synonymously with the term "sink" and the phrase "sink device" and may be used herein to refer to a sink device that is configured to support the DisplayPort® (DP) and/or Embedded DisplayPort® (eDP) standards (see Embedded DisplayPort® Version 1.3, published February 2011) and that is capable of reporting touch data.
Further, the phrases "touch source" and "touch source device" may be used synonymously with the term "source" and the phrase "source device" and may be used herein to refer to a source device as defined in the DP 1.2 standard (see DisplayPort® Version 1.2, published December 2009; hereinafter the "DP 1.2" standard) that is configured to process touch data received, for example, from a touch sink device. In the interest of clarity, the various devices, systems and processes will be described herein in the context of the DP and/or eDP standards, although the present disclosure is not limited to any particular touch and/or display interface standards and/or specifications.

While the various systems, devices, schemes and/or processes may be described herein in the context of touch data, the present disclosure is not limited to touch data and/or touch devices. Hence, because information passed between a touch sink and a touch source may be HID compliant, a touch sink may present other types of HID devices, including, but not limited to, a keypad, an alphanumeric display, and so forth. Thus, the various interface schemes and/or processes described herein may be applied to other types of HID devices that may be presented by a touch sink, and may apply to any combination of one or more other types of HID devices with HID touch devices. Further, while the term "interface data" may be used herein to refer largely to touch data, in various implementations, interface data in accordance with the present disclosure may refer to data generated by any type of HID device such as a keypad, an alphanumeric display, and the like.

In various implementations, a touch sink may include touch sensors to generate raw touch data and a formatter to convert raw touch data into formats as described herein. Further, a touch sink may be configured to transfer touch data to a touch source as described herein. In various implementations, a touch source may have the ability to receive or obtain touch data from a touch sink in addition to being configured to parse and/or interpret the touch data as described herein. For example, FIG. 1 illustrates a multi-touch display interface system 100 in accordance with the present disclosure including a touch source 102 communicatively coupled to a touch sink 104. An example format in which touch data may be transmitted between touch sink 104 and touch source 102, and various mechanisms by which the transfer of such data may be effected, will be described in greater detail below.

In various embodiments, touch source 102 may be configured to implement the DP 1.2 standard and may include a microprocessor such as a graphics processing unit (GPU), Digital Signal Processor (DSP), or the like, configured to parse and interpret interface data such as touch data using a parser module 106 and an interpreter module 108, respectively. Touch source 102 may also include interface logic 109 to implement various schemes or processes to be described herein. For example, interface logic 109 may include logic configured to implement data transfer processes or portions thereof to be described in greater detail below. In various embodiments, touch sink 104 may include a multi-touch capable display (not shown) configured to capture interface data in the form of multi-touch data and to format the touch data using touch sensors 110 and a formatter module 112, respectively.
In various implementations, touch sensors 110 may be any type of known touch sensors, such as capacitive touch sensors, that enable touch sink 104 to capture multi-touch data. Touch sink 104 may also include interface logic 123 to implement various schemes or processes to be described herein. For example, interface logic 123 may include logic configured to implement data transfer processes or portions thereof to be described in greater detail below. In various implementations, touch sink 104 may be incorporated into a mobile communications device (e.g., a smart phone), mobile computer, tablet computer, or the like. In various embodiments, the components of system 100 may be implemented within a single device such as a mobile communications device, tablet computer, or the like, where touch sink 104 may correspond to one or more logic and/or hardware module(s) associated with a touch screen display, and touch source 102 may include a microprocessor communicatively coupled to the logic and/or hardware module(s) of touch sink 104.

As shown in FIG. 1, system 100 includes a DP link 114 communicatively coupling touch source 102 to touch sink 104 and including a main link 116, an auxiliary or fast-auxiliary channel (F)AUX 118, and a Hot Plug Detect (HPD) signal line 120. In various implementations, main link 116 may be a uni-directional, high-bandwidth, low-latency channel used for the transport of isochronous streams such as uncompressed video and/or audio data from touch source 102 to touch sink 104. In various implementations, (F)AUX channel 118 may be a half-duplex bidirectional channel used for link management and device control using schemes in accordance with the present disclosure. For example, in various implementations, touch source 102 and touch sink 104 may be designated as master and slave, respectively. In general, transactions over (F)AUX channel 118 are initiated by touch source 102. However, in various implementations, touch sink 104 may prompt the initiation of (F)AUX channel transactions by, for example, sending an interrupt (IRQ) to touch source 102 over signal line 120.

In various implementations, multi-touch display interface systems and/or schemes in accordance with the present disclosure may include one or more intervening devices and/or systems communicatively coupling a touch sink with a touch source. For example, FIG. 2 illustrates a multi-touch display interface system 200 in accordance with the present disclosure including an intervening branch device 202 communicatively coupling touch source 102 to touch sink 104. In various implementations, branch device 202 may be a device configured in accordance with the DP 1.2 standard and may implement sideband messages as described in greater detail below. Further, in various implementations, a branch device may process downstream signals (such as interrupts appearing on signal line 120) within specific time limits based on the rate at which touch sink 104 generates touch data. For example, branch device 202 may need to process interrupts within 20msec (the sampling period corresponding to 50Hz, since 1/50Hz = 20msec) to support a minimum touch data sample rate of 50Hz. Returning to the discussion of FIG.
1, in various implementations, and as will be described in greater detail below, touch source 102 may obtain interface data such as touch data from touch sink 104 over (F)AUX channel 118, where the touch data may be in a Human Interface Device (HID) format as defined by an HID Report Descriptor (see Device Class Definition for Human Interface Devices (HID), Firmware Specification, Version 1.11, published June 27, 2001; hereinafter, the "HID 1.11 specification"). Further, in accordance with the present disclosure, touch sink 104 may conform to touch-related format requirements as set forth in the multi-touch extensions for digitizers (see "Addition of usages related to multi-touch digitizers", Request #HUTRR34, published February 27, 2009; hereinafter the "HUTRR34" document) that modify the HID Usage Tables (see HID Usage Tables, Version 1.12, published October 28, 2004). In addition, touch sink 104 may expose touch capability as a digitizer device in accordance with the HID 1.11 specification. In accordance with the present disclosure, HID information including HID Report Descriptors may be conveyed between touch sink 104 and touch source 102 over (F)AUX channel 118.

In various implementations, and as will be explained in greater detail below, touch sink 104 may include registers 122 that touch sink 104 may use to temporarily store data such as configuration data, touch data and the like, and that may be accessed by touch source 102 via (F)AUX channel 118. In various implementations, registers 122 may include DP Configuration Data (DPCD) registers, and touch source 102 may have read and write access to those DPCD registers for various purposes as will be explained in greater detail below. Further, in various implementations, as will be explained in greater detail below, a data module 124 (including sensors 110 and formatter module 112) may be configured in response to one or more DPCD registers of registers 122.

In various implementations, touch sink 104 may support a sample rate of at least 50Hz for touch data. In various implementations, touch sink 104 may convey touch data related interrupts to touch source 102, and touch source 102 may process those interrupts at a rate commensurate with the sample rate of touch sink 104. For example, touch sink 104 may convey IRQ HPD interrupts to touch source 102 via signal line 120. While the DP 1.2 standard requires source devices to process an IRQ HPD within 100msec (corresponding to a sample rate of 10Hz), in various implementations in accordance with the present disclosure, touch source 102 may process an IRQ HPD within 20msec to support processing of touch data sampled at a rate of 50Hz by touch sink 104.

In various implementations, system 100 may not support separate device descriptors specifying touch capability. However, in other implementations, touch sink 104 may provide Extended Display Identification Data (EDID), in accordance with the DP 1.2 standard, to indicate that touch sink 104 has touch capability. In addition, touch sink 104 may include vendor information for Plug-and-Play (PnP) purposes. Such EDID and/or vendor information may be stored in memory (not shown) internal to touch sink 104. In addition, in various implementations, touch source 102 may include system software and/or firmware that supports HID Boot Mode operation and HID Descriptor parsing as described herein.
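Since the deadline for servicing a touch-related IRQ HPD is simply the sampling period, the timing requirement above can be checked with a few lines of arithmetic. The short C sketch below makes that computation explicit; the helper name deadline_ms_for_sample_rate is purely illustrative and is not defined by the DP, eDP, or HID specifications.

#include <assert.h>

/* Service deadline in milliseconds for a given sample rate: the source
 * must consume one report before the next one is produced. */
static unsigned deadline_ms_for_sample_rate(unsigned sample_rate_hz)
{
    return 1000u / sample_rate_hz;  /* e.g., 50Hz -> 20msec */
}

int main(void)
{
    assert(deadline_ms_for_sample_rate(50) == 20);   /* minimum touch sample rate */
    assert(deadline_ms_for_sample_rate(10) == 100);  /* DP 1.2 baseline IRQ HPD   */
    return 0;
}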
Capability Discovery

In various implementations, touch sink 104 may announce its touch capability through a TOUCH_SUPPORTED bit in a TOUCH_CAPABILITY DPCD register portion of registers 122. Touch source 102 may then read the TOUCH_SUPPORTED bit as part of sink capability discovery. In various implementations, touch sink 104 may provide an HID Descriptor and an HID Report Descriptor formatted as described in the HID specifications. These two descriptors (and optionally other HID class descriptors, as allowed in the HID 1.11 specification) may be obtained by touch source 102 from respective HID_CLASS_DESCRIPTORS DPCD register portions of registers 122 after ascertaining touch capability in touch sink 104. In various implementations, class descriptors declared in the HID Descriptor may immediately follow the HID Descriptor in the HID_CLASS_DESCRIPTORS DPCD registers, in the order in which they were declared in the HID Descriptor.

Touch Sink Configuration

In accordance with the present disclosure, touch source 102 may configure touch sink 104 in various ways. For example, Table 1 lists various example commands that touch source 102 may issue to touch sink 104:

Command               Description
ENABLE_TOUCH_FEATURE  Enable or disable touch feature in a touch sink
DATA_ACCESS_METHOD    Configure touch sink for interrupt generation
RESET                 Reset touch functionality (hardware/firmware) in a touch sink
GET_FEATURE_REPORT    Get the feature report for the specified Report ID
SET_FEATURE_REPORT    Drive a Feature Report out to a touch sink
SET_OUTPUT_REPORT     Drive an Output Report to a touch sink
SET_IDLE              Configure rate at which Input Reports are generated in a touch sink
SET_LOW_POWER         Put the touch feature in the sink to low power state

Table 1: Example Commands

In various implementations, touch sink 104 may acknowledge success or failure in response to the commands of Table 1 via standard DPCD AUX transaction mechanisms as defined in the DP 1.2 standard. In various implementations, touch source 102 may enable or disable touch features of touch sink 104 by enabling or disabling data module 124. For example, as set forth in Table 1, touch source 102 may enable the data module 124 (and associated data reporting) in touch sink 104 by setting an ENABLE_TOUCH_FEATURE bit in a CONFIGURE_TOUCH DPCD register, and may disable data module 124 by clearing that bit. For example, touch source 102 may disable data module 124 by writing a zero to the ENABLE_TOUCH_FEATURE bit, and may enable data module 124 by writing a one to that bit.

In various implementations, touch source 102 may configure a method for accessing touch data from touch sink 104. In various implementations, a data access method may be configured to be an interrupt based data access method or a poll based data access method. For example, touch source 102 may configure a poll based data access method by setting a DATA_ACCESS_METHOD bit in a CONFIGURE_TOUCH DPCD register to Polled so that touch related IRQ HPD interrupt generation by touch sink 104 is disabled. Conversely, touch source 102 may enable an interrupt based data access method by setting the DATA_ACCESS_METHOD bit to Interrupt so that IRQ HPD interrupt generation by touch sink 104 is enabled.

Continuing the discussion of Table 1, touch source 102 may reset touch functionality in touch sink 104 by setting a RESET bit in a CONFIGURE_TOUCH DPCD register. Touch sink 104 may then bring data module 124 to a reset condition in response to this command.
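A minimal C sketch of issuing the configuration commands just described may make the register-level mechanics concrete. The dpcd_write() helper below is a hypothetical stand-in for a single AUX write transaction, and the address and bit positions follow the CONFIGURE_TOUCH entry of Table 3 later in this disclosure; none of the helper names are defined by the DP or eDP standards.

#include <stdint.h>
#include <stdio.h>

#define DPCD_CONFIGURE_TOUCH  0x60006u   /* CONFIGURE_TOUCH (see Table 3)  */
#define ENABLE_TOUCH_FEATURE  (1u << 0)  /* 1: enable, 0: disable          */
#define DATA_ACCESS_METHOD    (1u << 1)  /* 1: interrupt based, 0: polled  */
#define RESET_TOUCH           (1u << 2)  /* 1: reset touch functionality   */

/* Hypothetical stand-in for a native AUX/(F)AUX write transaction. */
static void dpcd_write(uint32_t addr, uint8_t value)
{
    printf("AUX write: DPCD %05Xh <- %02Xh\n", (unsigned)addr, (unsigned)value);
}

int main(void)
{
    /* Enable the data module and select interrupt based access with a
     * single write to CONFIGURE_TOUCH. */
    dpcd_write(DPCD_CONFIGURE_TOUCH, ENABLE_TOUCH_FEATURE | DATA_ACCESS_METHOD);

    /* Issue the RESET command from Table 1. */
    dpcd_write(DPCD_CONFIGURE_TOUCH, RESET_TOUCH);
    return 0;
}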
In particular, in various implementations, setting a reset condition may disable data module 124, may flush an Input Report queue (to be described later), and may reset a TOUCH_STATUS register in registers 122 to indicate that no report data is available.

In various implementations, touch source 102 may issue a read request for a feature report by writing the Report ID of interest to the TOUCH_PARAMETERS[0] DPCD register and by setting the GET_FEATURE_REPORT bit in the CONFIGURE_TOUCH DPCD register to one. In various implementations, touch sink 104 may indicate availability of a feature report to be read in the REPORT_DATA DPCD region at offset 0. In various implementations, as will be explained in greater detail below, touch source 102 may issue a feature report to touch sink 104. For example, touch source 102 may do so by writing a feature report in a REPORT_DATA DPCD region of registers 122 at a particular offset, and by setting a SET_FEATURE_REPORT bit in a CONFIGURE_TOUCH DPCD register of registers 122. In addition, in various implementations, as will be explained in greater detail below, touch source 102 may issue an output report to touch sink 104. For example, touch source 102 may do so by writing an output report to an OUTPUT_REPORT DPCD region of registers 122 at a particular offset, and by setting a SET_OUTPUT_REPORT bit in a CONFIGURE_TOUCH DPCD register of registers 122.

Concluding the discussion of Table 1, in various implementations, touch source 102 may set a reporting rate for input reports generated by touch sink 104. For example, touch source 102 may do so by writing the parameters for this command in an IDLE_RATES DPCD region of registers 122, and by setting a SET_IDLE_RATE bit in a CONFIGURE_TOUCH DPCD register of registers 122. In various implementations, the idle rate for each Report ID may be specified in milliseconds according to the following rules: (1) a value of zero indicates the duration is indefinite, as defined in the HID specification (for this value, no interrupts will be generated by touch sink 104 even if touch source 102 programs the DATA_ACCESS_METHOD to Interrupt); (2) values of one through four are not supported; (3) the minimum valid value of the idle rate is 5 milliseconds, implying a maximum sample rate of 200Hz; and (4) given a minimum sample rate of 50Hz, the maximum valid value of the idle rate is 20 milliseconds.

In accordance with the present disclosure, touch source 102 may set data module 124 of touch sink 104 to a sleep (low power) state. For example, touch source 102 may do so by setting a SET_LOW_POWER bit in the CONFIGURE_TOUCH DPCD register. In various implementations, when SET_LOW_POWER has a value of zero, data module 124 may be in an ON state. In various implementations, the SET_LOW_POWER bit may be valid only when touch sink 104 is in an ACTIVE state (such as STATE 1 in Figure 5-2 of the DP 1.2 standard).

Register Layout and Access Rules

In accordance with the present disclosure, HID_CLASS_DESCRIPTORS DPCD register portions of registers 122 may include an array of HID class descriptors, where the first descriptor may be an HID Descriptor. In various implementations, the layout of an HID Descriptor may conform with section 6.2.1 of the HID 1.11 specification. The HID Descriptor may identify the revision of the HID specification that it supports, and other information specific to the HID device. In addition, the bNumDescriptors field of the HID Descriptor may define the number of additional HID class descriptors that are available.
The bNumDescriptors field may be followed by an array of three-byte entries, where the first byte of an entry (bDescriptorType) defines the type of the HID class descriptor and the remaining two bytes of an entry (wDescriptorLength) define the size of the HID class descriptor. In various implementations, the assignment of HID class type values may conform to section 7.1 of the HID 1.11 specification. In various implementations, as shown in FIG. 3, HID class descriptors may be packed on byte boundaries within an HID_CLASS_DESCRIPTORS DPCD region layout 300 of registers 122. For instance, an HID descriptor field 302 may start at offset 0, and a first byte (bLength) of HID descriptor 302 may identify the HID descriptor's size in bytes. One or more HID class descriptors may be associated with the HID descriptor. For instance, two example HID class descriptors 304 and 306 are depicted in FIG. 3. In various implementations, HID descriptor 302 may identify a size, number and/or type of HID class descriptors appearing in layout 300. In various implementations, HID class descriptor 304 may start at offset HID_Descriptor.bLength, HID class descriptor 306 may start at offset HID_Descriptor.bLength + wDescriptorLength[0], a third HID class descriptor (not shown) may start at HID_Descriptor.bLength + wDescriptorLength[0] + wDescriptorLength[1], and so on. In various implementations, an HID device may define only one additional HID class descriptor, a Report Descriptor.

Reports

In accordance with the present disclosure, input reports, output reports and feature reports, as will be described in greater detail below, may be provided. In various implementations, touch source 102 may originate output reports and feature reports, and touch sink 104 may originate input reports and feature reports. In various implementations, an input report generated by touch sink 104 may contain interface data such as touch data generated by data module 124. In various implementations, an output report generated by touch source 102 may contain output data such as one or more user interface commands generated by touch source 102 in response to touch data provided by an input report. In various implementations, a feature report may identify the mode that a digitizer (e.g., touch sink 104) is operating in and a maximum number of simultaneous contacts/touches that are supported by the interface. For example, FIG. 13, described in greater detail below, depicts an example layout of a feature report in combination with an input report. In various implementations, it may be mandatory for touch sink 104 to support input reports, and touch sink 104 may support feature reports only if they are declared in a corresponding HID Descriptor. In various implementations, it may be mandatory for touch source 102 to be able to parse input reports and feature reports received from touch sink 104, and to generate output reports and feature reports in response. In various implementations, one or more software applications of a touch source 102 may support HID usages as set forth in the HUTRR34 document.

Touch Sink Generated Reports

In accordance with the present disclosure, touch sink 104 may store an input report and (when applicable) a feature report that may be accessed by touch source 102. For example, as depicted in FIG. 4, touch sink 104 may do so by storing an input report 402 and a corresponding feature report 404 in a REPORT_DATA DPCD region layout 400 of registers 122.
In various implementations, if feature report 404 is declared in a corresponding Report Descriptor, then each instance of its Report ID may have a fixed size. In various implementations, the size of a feature report field of REPORT_DATA DPCD region 400 may be defined by the corresponding HID descriptor and may be the size of the largest Report ID for feature report 404. If the Report Descriptor does not contain a feature report, then feature report 404 may not be present in layout 400. In various implementations, the size of input report 402 may vary based on a number of touch contacts captured by data module 124. The HID Descriptor may identify the size of input report 402 in an Input Report Size area 406 of layout 400. For instance, Input Report Size 406 may identify the number of valid bytes in input report 402. In various implementations, Input Report Size field 406 may have a size of two bytes. Further, the HID Descriptor may identify a maximum size of input report 402.

Touch Source Generated Reports

In accordance with the present disclosure, touch source 102 may store (when applicable) an output report in registers 122 of touch sink 104. For example, touch source 102 may store an output report 502 in an OUTPUT_REPORT DPCD region that may be structured as shown in layout 500 of FIG. 5. In various implementations, a Report Descriptor may define the maximum size of the Output Report sub-region. In various implementations, the size of output report 502 may vary based on the content that a touch source provides, and an Output Report Size field 504 may specify the size of output report 502. In various implementations, Output Report Size field 504 may have a size of two bytes. Further, the HID Descriptor may identify a maximum size of output report 502.

In various implementations, touch source 102 and touch sink 104 may share the Feature Report registers in REPORT_DATA DPCD region 400. In such implementations, touch sources and touch sinks may synchronize access using a FEATURE_REPORT_AVAILABLE bit in the TOUCH_STATUS DPCD register as described in greater detail below. In various implementations, if a Report Descriptor uses Report IDs to define multiple reports of a specific type (e.g., feature, input, or output), then the size of the report sub-region may be the union of the sizes of all reports defined of the same type. In accordance with the present disclosure, software associated with touch source 102 may determine the maximum size of each report by parsing the Report Descriptor, and the number of valid bytes in the current report by examining the Size field preceding the report sub-region.

Data Transfer

In accordance with the present disclosure, touch sink 104 may transfer touch data to touch source 102. For example, touch source 102 may obtain touch data from touch sink 104 using (F)AUX channel 118. In accordance with the present disclosure, touch source 102 may use various methods to access HID reports such as input reports containing touch data generated by touch sink 104. For instance, as discussed above, touch source 102 may configure touch sink 104 to enable an interrupt based data access method or a poll based data access method for touch data access by touch source 102. FIG. 6 illustrates a flow diagram of an example process 600 for implementing interrupt based data access by a touch source according to various implementations of the present disclosure. FIG. 7 illustrates an example sequence chart 700 corresponding to process 600.
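Before turning to process 600, the report-reading rule just described (a two-byte Size field giving the number of valid bytes, followed by the report itself) can be sketched in C. The dpcd_read() helper is a hypothetical stand-in for AUX read transactions, and both the little-endian interpretation of the Size field and the placement of the size/report pair at the REPORT_DATA base address are illustrative assumptions rather than offsets fixed by this disclosure.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DPCD_REPORT_DATA  0x61040u  /* REPORT_DATA region base (see Table 3) */

/* Hypothetical stand-in for native AUX read transactions. */
static void dpcd_read(uint32_t addr, uint8_t *buf, size_t len)
{
    (void)addr;
    memset(buf, 0, len);  /* placeholder for actual bus traffic */
}

int main(void)
{
    uint8_t size_field[2];
    uint8_t report[512];  /* in practice, sized from the Report Descriptor */

    /* Read the two-byte Size field that precedes the report sub-region;
     * little-endian byte order is assumed here for illustration. */
    dpcd_read(DPCD_REPORT_DATA, size_field, sizeof size_field);
    uint16_t valid = (uint16_t)(size_field[0] | ((uint16_t)size_field[1] << 8));

    if (valid > sizeof report)
        valid = sizeof report;  /* clamp to the descriptor-defined maximum */

    /* Read only the valid bytes of the report that follows the Size field. */
    dpcd_read(DPCD_REPORT_DATA + sizeof size_field, report, valid);
    printf("read %u valid report bytes\n", (unsigned)valid);
    return 0;
}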
In various implementations, process 600 may be used to access interface data such as touch data stored in DPCD registers of a touch sink. For example, touch source 102 may implement process 600 to read an input report and/or a feature report from registers 122 of touch sink 104. Process 600 may include one or more operations, functions or actions as illustrated by one or more of blocks 602, 604, 606, 608, 610 and 612 of FIG. 6. By way of non-limiting example, process 600 will be described herein with reference to example system 100 of FIG. 1.

Process 600 may begin at block 602 where a touch source may configure an interrupt based access method. For example, touch source 102 may configure touch sink 104 for interrupt based data access by commanding touch sink 104 to generate interrupts using the DATA_ACCESS_METHOD bit in the CONFIGURE_TOUCH DPCD register. At block 604, the touch source may enable a data module in the touch sink. For example, touch source 102 may enable data module 124 by setting the ENABLE_TOUCH bit in the CONFIGURE_TOUCH DPCD register. In various implementations, blocks 602 and 604 may be combined into a single initialization operation 605 that configures a touch sink to provide touch data. For example, touch source 102 may undertake a single write operation over (F)AUX channel 118 to implement blocks 602 and 604 as operation 605. In various implementations, a touch source may not perform initialization operation 605 during each instance of reading a report via process 600; rather, in such implementations, a touch source may execute the initialization operation only in response to a change in touch sink configuration (e.g., a change in the configuration of system 100).

At block 606, the touch source may request a report. For example, touch source 102 may request a feature report for a particular Report ID by setting TOUCH_PARAMETERS[0] to the desired Report ID and by setting the GET_FEATURE_REPORT bit in the CONFIGURE_TOUCH DPCD register. In response to a report request at block 606, the touch sink may provide data and set a corresponding interrupt (block 608). For example, on each instance of the availability of multi-touch data, touch sink 104 may, if the INPUT_REPORT_AVAILABLE bit in the TOUCH_STATUS DPCD register and the TOUCH_INTERRUPT bit are clear, populate the Input Report section of the REPORT_DATA DPCD region with multi-touch data. Touch sink 104 may then set an interrupt to signal the availability of the input report containing the multi-touch data. For example, touch sink 104 may set a reason for the interrupt by setting the INPUT_REPORT_AVAILABLE bit and the TOUCH_INTERRUPT bit in DEVICE_SERVICE_IRQ_VECTOR, and may then assert IRQ HPD to provide the interrupt to touch source 102. Further, in various implementations, upon detection of the GET_FEATURE_REPORT bit being set, the touch sink may, at block 608, read the Report ID from TOUCH_PARAMETERS[0] and, if the FEATURE_REPORT_AVAILABLE bit in the TOUCH_STATUS DPCD register and the TOUCH_INTERRUPT bit are clear, may populate the Feature Report for the desired Report ID at REPORT_DATA[0], set the reason for the interrupt by setting the FEATURE_REPORT_AVAILABLE bit and the TOUCH_INTERRUPT bit in DEVICE_SERVICE_IRQ_VECTOR, and assert IRQ HPD. Process 600 may then continue at block 610, where the touch source may read the interrupt reason. Process 600 may then conclude at block 612, where the touch source may read the data and clear the interrupt reason.
For instance, upon detection of IRQ HPD, touch source 102 may read the DEVICE_SERVICE_IRQ_VECTOR to check if the TOUCH_INTERRUPT bit is set. If the TOUCH_INTERRUPT bit is set, then touch source 102 may read the TOUCH_STATUS DPCD register to determine if either the INPUT_REPORT_AVAILABLE or FEATURE_REPORT_AVAILABLE bit is set, may read the data (e.g., multi-touch data in the input report) corresponding to the availability indication (e.g., in some cases both input and feature reports may be available and a touch source may read both reports), and may clear the interrupt reason by clearing the availability bit(s) that were processed and clearing the TOUCH_INTERRUPT bit. In various implementations, process 600 may end at block 610 rather than block 612 if touch source 102 determines that neither the INPUT_REPORT_AVAILABLE nor the FEATURE_REPORT_AVAILABLE bit is set.

FIG. 8 illustrates a flow diagram of an example process 800 for implementing polled data access by a touch source according to various implementations of the present disclosure. FIG. 9 illustrates an example sequence chart 900 corresponding to process 800. In various implementations, process 800 may be used to access data stored in DPCD registers of a touch sink. For example, touch source 102 may implement process 800 to read an input report and/or a feature report from registers 122 of touch sink 104. Process 800 may include one or more operations, functions or actions as illustrated by one or more of blocks 802, 804, 806, 808, 810 and 812 of FIG. 8. By way of non-limiting example, process 800 will be described herein with reference to example system 100 of FIG. 1.

Process 800 may begin at block 802 where a touch source may configure a touch sink for a polled access method. For example, touch source 102 may configure touch sink 104 for polled access by setting the DATA_ACCESS_METHOD bit to Polled in the CONFIGURE_TOUCH DPCD register. At block 804, the touch source may enable a data module in the touch sink. For example, touch source 102 may enable data module 124 by setting the ENABLE_TOUCH bit in the CONFIGURE_TOUCH DPCD register. In various implementations, blocks 802 and 804 may be combined into a single initialization operation 805 that configures a touch sink to provide touch data. For example, touch source 102 may undertake a single write operation over (F)AUX channel 118 to implement operation 805. In various implementations, a touch source may not perform initialization operation 805 on each instance of reading a report via process 800; rather, in such implementations, a touch source may execute operation 805 only in response to a change in current touch sink configuration (e.g., a change in the configuration of system 100).

At block 806, the touch source may request a report. For example, touch source 102 may request a feature report for a particular Report ID by setting TOUCH_PARAMETERS[0] to the desired Report ID and by setting the GET_FEATURE_REPORT bit in the CONFIGURE_TOUCH DPCD register. In response to a report request at block 806, the touch sink may provide data and flag the data as being available (block 808). For example, on each instance of the availability of touch data, touch sink 104 may, if the INPUT_REPORT_AVAILABLE bit in the TOUCH_STATUS DPCD register and the TOUCH_INTERRUPT bit are clear, populate the Input Report section of the REPORT_DATA DPCD region with touch data.
Touch sink 104 may then set a report availability flag (e.g., the INPUT_REPORT_AVAILABLE bit for an Input Report, and the FEATURE_REPORT_AVAILABLE bit for a Feature Report). In various implementations, when undertaking block 808, touch sink 104 may set the corresponding availability bits in TOUCH_STATUS but, unlike process 600, may not set the TOUCH_INTERRUPT bit and may not assert an IRQ HPD signal. At block 810, the touch source may poll for report availability. For example, touch source 102 may poll for the presence of report availability flags (e.g., by checking the INPUT_REPORT_AVAILABLE bit for an input report, and the FEATURE_REPORT_AVAILABLE bit for a feature report) until one or both are set. At block 812, process 800 may conclude when a touch source reads the data and clears the corresponding flag. For example, once touch source 102 determines that one or both report availability flags is/are set at block 810, block 812 may involve the touch source reading the corresponding report(s) from the REPORT_DATA DPCD registers and clearing the report availability flag(s) that were found to be set.

In various implementations, a touch source may interleave interrupt based and polled data access methods to minimize latency associated with interrupt notifications by alternately configuring the touch sink as stated above in the descriptions of FIGS. 6 and 8. If interleaved data access is undertaken, a touch source should ensure that it does not lose data during the switch between the access methods.

FIG. 10 illustrates a flow diagram of an example process 1000 for output report communication to a touch sink according to various implementations of the present disclosure. FIG. 11 illustrates an example sequence chart 1100 corresponding to process 1000. In various implementations, process 1000 may be used to write output data to the DPCD registers of a touch sink. For example, touch source 102 may implement process 1000 to write an output report to registers 122 of touch sink 104. Process 1000 may include one or more operations, functions or actions as illustrated by one or more of blocks 1002, 1004, 1006, 1008, 1010 and 1012 of FIG. 10. By way of non-limiting example, process 1000 will be described herein with reference to example system 100 of FIG. 1.
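Before walking through the blocks of process 1000, the polled loop of process 800 described above can be summarized in a short C sketch. The dpcd_read8()/dpcd_write8() helpers are hypothetical stand-ins for single-byte AUX transactions, the address and bit positions follow the TOUCH_STATUS entry of Table 3 below, and write-one-to-clear semantics for the clearable status bits are an assumption made for illustration.

#include <stdint.h>
#include <stdio.h>

#define DPCD_TOUCH_STATUS         0x60001u   /* TOUCH_STATUS (see Table 3) */
#define INPUT_REPORT_AVAILABLE    (1u << 0)
#define FEATURE_REPORT_AVAILABLE  (1u << 1)

/* Hypothetical stand-ins for single-byte AUX transactions. */
static uint8_t dpcd_read8(uint32_t addr)
{
    (void)addr;
    return INPUT_REPORT_AVAILABLE;  /* pretend an input report is ready */
}

static void dpcd_write8(uint32_t addr, uint8_t value)
{
    printf("AUX write: DPCD %05Xh <- %02Xh\n", (unsigned)addr, (unsigned)value);
}

static void read_report_data(void)
{
    /* Read the report bytes from the REPORT_DATA region, e.g., using the
     * Size-field-then-payload layout sketched earlier. */
}

int main(void)
{
    const uint8_t flags = INPUT_REPORT_AVAILABLE | FEATURE_REPORT_AVAILABLE;

    /* Blocks 810-812: poll until a report availability flag is set, read
     * the corresponding report(s), then clear only the processed flag(s). */
    for (;;) {
        uint8_t status = dpcd_read8(DPCD_TOUCH_STATUS);
        if ((status & flags) == 0)
            continue;  /* nothing available yet; keep polling */
        read_report_data();
        dpcd_write8(DPCD_TOUCH_STATUS, status & flags);  /* assumed write-1-to-clear */
        break;
    }
    return 0;
}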
Process 1000 may begin at block 1002 where a touch source may configure a touch sink for data access. For example, touch source 102 may configure touch sink 104 for interrupt based data access as described above for block 602 of FIG. 6, or may configure touch sink 104 for polled data access as described above for block 802 of FIG. 8. At block 1004, the touch source may enable a data module in the touch sink as described above with respect to block 604 of FIG. 6 and block 804 of FIG. 8. In various implementations, blocks 1002 and 1004 may be combined into a single initialization operation 1005 that configures a touch sink to provide touch data. For example, touch source 102 may undertake a single write operation over (F)AUX channel 118 to implement operation 1005. In various implementations, a touch source may not perform initialization operation 1005 on each instance of reading a report via process 1000; rather, in such implementations, a touch source may execute the initialization operation only in response to a change in current touch sink configuration (e.g., a change in the configuration of system 100).

Upon availability of output data to be communicated to the touch sink, the touch source may check for any output report read requests at block 1006. For example, touch source 102 may determine whether a previous output report was completed by checking for the SET_OUTPUT_REPORT bit to be set to 0 by, in the case of polled access, polling for the OUTPUT_REPORT_READ bit to be set in the TOUCH_STATUS DPCD register, or, in the case of interrupt based access, fielding an IRQ HPD with the TOUCH_INTERRUPT and OUTPUT_REPORT_READ bits set. Once a touch source determines that a previous output report, if any, was completed, the touch source may write a next output report and clear the output report read request at block 1008. For example, touch source 102 may write the output report to its corresponding location in the OUTPUT_REPORT DPCD region, clear the OUTPUT_REPORT_READ bit, and, in the case of interrupt based access, clear the TOUCH_INTERRUPT bit. At block 1010, the touch source may issue a set output report event. For example, touch source 102 may set the SET_OUTPUT_REPORT bit in the CONFIGURE_TOUCH DPCD register. Process 1000 may continue at block 1012 where the touch sink may read the output report and set an output report read request. For example, when touch sink 104 detects (e.g., via touch sink firmware) that the SET_OUTPUT_REPORT bit has a value of one (and, in the case of interrupt based access, that TOUCH_INTERRUPT is clear), touch sink 104 may read the output report from the OUTPUT_REPORT DPCD region, set the OUTPUT_REPORT_READ bit, and, in the case of interrupt based access, set the TOUCH_INTERRUPT bit, and clear the SET_OUTPUT_REPORT bit. In various implementations, the SET_OUTPUT_REPORT bit may trigger an interrupt in a touch sink, or touch sink firmware may poll for this bit occasionally.

In various implementations, a touch source may configure a touch sink for feature report communication in a manner similar to that depicted in FIG. 10 for output report communication, except that a touch source may set (and a touch sink may clear) the SET_FEATURE_REPORT bit rather than the SET_OUTPUT_REPORT bit in the CONFIGURE_TOUCH DPCD register, read and write rates may be controlled using the FEATURE_REPORT_READ bit rather than the OUTPUT_REPORT_READ bit, the Feature Report may be made available in the REPORT_DATA DPCD region at offset 0, and the Report ID for which the Feature Report is being set may be communicated in the TOUCH_PARAMETERS[0] DPCD register. In various implementations, because the TOUCH_PARAMETERS[0] DPCD register may also be used by the GET_FEATURE_REPORT command, a touch source may need to ensure that any previous command has completed using the indications previously described.

FIG. 12 illustrates a flow diagram of an example process 1200 according to various implementations of the present disclosure. Process 1200 may include one or more operations, functions or actions as illustrated by one or more of blocks 1202, 1204, 1206, 1208 and 1210 of FIG. 12. By way of non-limiting example, process 1200 will be described herein with reference to example system 100 and example processes 600, 800 and 1000 as described previously above. Process 1200 may begin at block 1202 where an HID sink device may be configured, over an auxiliary (AUX) channel, to provide interface data, where the HID sink device includes a data module to generate the interface data.
In various implementations, referring to system 100, block 1202 may involve touch source 102 configuring touch sink 104 over (F)AUX channel 118 to provide interface data in the form of multi-touch data. For example, touch source 102 may undertake block 1202 in a similar manner to that described above at block 602 of process 600. Process 1200 may continue at block 1204 where the data module may be enabled over the AUX channel. In various implementations, block 1204 may involve touch source 102 enabling data module 124 of touch sink 104 over (F)AUX channel 118. For example, touch source 102 may undertake block 1204 in a similar manner to that described above at block 604 of process 600. At block 1206, the interface data may be received over the AUX channel. In various implementations, block 1206 may involve touch source 102 receiving multi-touch data from touch sink 104 over (F)AUX channel 118. For example, touch source 102 may undertake block 1206 in a similar manner to that described above at blocks 606-612 of process 600 or blocks 806-812 of process 800. Process 1200 may then conclude at blocks 1208 and 1210 where output data may be generated in response to the interface data (block 1208) and the output data may be provided to the HID sink device over the AUX channel (block 1210). In various implementations, blocks 1208 and 1210 may involve touch source 102 using the multi-touch data received at block 1206 to generate output data and then providing the output data to touch sink 104 over (F)AUX channel 118. For example, touch source 102 may undertake blocks 1208 and 1210 in a similar manner to that described above at blocks 1006-1012 of process 1000.

Report Freshness

In accordance with the present disclosure, interaction between touch source and touch sink devices for feature reports and/or output reports may be throttled in both devices using the report availability flags. Input reports may be generated at the configured idle rate. In cases where the read rate of a touch source is slower than the rate at which a touch sink generates input reports, the input reports may be queued and made available to a touch source in order as described previously. In some circumstances, a touch sink's buffer may overflow. If this occurs, a touch sink may, for example, set a BUFFER_OVERFLOW_ERROR bit in the TOUCH_STATUS register and assert IRQ HPD. In response, a touch source may take implementation specific action based on this indication including, for example, lowering the idle rate in the touch sink. In various implementations, touch sinks may be required to provide a queue depth of at least four maximum-sized (as described in the Report Descriptor) input reports. Further, in various implementations, a touch source may optionally choose to make use of the buffering mechanisms described herein to improve processing efficiency. For example, in various implementations, a sink device may, using the buffering and interrupt mechanisms described herein, have data available and ready for processing by a source device, but the source device may optionally process the data at a slower rate by, for example, batch processing the data. For instance, a touch sink may collect touch data at a rate of 50 hertz while a touch source may batch process the touch data in two-report batches at a rate of 25 hertz, or in four-report batches at a rate of 12.5 hertz, and so forth.

Sideband Messages

Branch devices, such as branch device 202 of FIG.
Branch devices, such as branch device 202 of FIG. 2, that are in the path between a touch source and a touch sink should be configured in accordance with the DP 1.2 standard. In systems such as system 200 of FIG. 2, touch source 102 may access DPCD registers in touch sink 104 using REMOTE DPCD READ and REMOTE DPCD WRITE sideband messages. Further, touch sink 104 may indicate touch interrupts using SINK_EVENT_NOTIFY, which may be an upward-going Broadcast Message Transaction having Sink_Event as a parameter in the message, and using the TOUCH_INTERRUPT as Bit 7 of the Sink_Event.

Changes to DPCD Address Space

In accordance with the present disclosure, various changes to DPCD address space may be implemented as set forth in Table 3:

60000h ... in milliseconds. Shall be less than or equal to 20 milliseconds.
    Bits 7:6 = RESERVED

60001h TOUCH_STATUS (Clearable; Read Only; value at reset = 0Ch)
    Bit 0 = INPUT REPORT AVAILABLE
        1: Input Report is available in REPORT DATA DPCD registers
        0: Input Report is not yet available
    Bit 1 = FEATURE REPORT AVAILABLE
        1: Feature Report is available in REPORT DATA DPCD registers
        0: Feature Report is not yet available
    Bit 2 = OUTPUT REPORT READ
        1: Output Report has been read by the Touch Sink
        0: Output Report has not been read yet
    Bit 3 = FEATURE REPORT READ
        1: Feature Report has been read by the Touch Sink
        0: Feature Report has not been read yet
    Bit 4 = BUFFER OVERFLOW ERROR
        1: Buffer for Input Reports has overflowed in the Touch Sink
        0: No error
    Bits 7:5 = RESERVED

60002h - 60005h COMMAND_PARAMETERS (Write Only)
    These four bytes may be interpreted in a command-specific manner. The commands may be issued in the CONFIGURE TOUCH DPCD register.

60006h CONFIGURE TOUCH (Clearable; Write Only; default = 0xxxxxxx0b, i.e., touch functionality is disabled at reset and on unplug)
    Bit 0 = ENABLE TOUCH
        1: Enable touch feature on the Sink
        0: Disable touch feature on the Sink. If disabled, Source treats data in TOUCH DATA DPCD registers as invalid
    Bit 1 = DATA ACCESS METHOD
        1: Interrupt based
        0: Polled (i.e., interrupts disabled)
    Bit 2 = RESET
        1: Reset touch device/re-initialize firmware
        0: No device action needed
    Bit 3 = GET FEATURE REPORT
        1: Feature Report requested by Touch Source. Report ID for this request is in the TOUCH_PARAMETERS[0] DPCD register
        0: No device action needed
    Bit 4 = SET FEATURE REPORT
        1: Feature Report available at offset 0 in REPORT DATA DPCD region. Size of the Feature Report is determined by the Report ID (in TOUCH_PARAMETERS[0]) and information parsed from the HID Descriptor
        0: No device action needed
    Bit 5 = SET OUTPUT REPORT
        1: Output Report available in OUTPUT REPORT DPCD region
        0: No device action needed
    Bit 6 = SET IDLE RATE
        1: Touch Sink is to originate Input Reports for the Report IDs as per the rate specified in the IDLE RATES DPCD region
        0: No device action needed
    Bit 7 = SET LOW POWER
        This command is used to put the touch feature in a low power state.
        1: Sleep
        0: ON

60007h RESERVED (Read all 0s)

60008h - 60017h IDLE RATES (Read Write)
    This DPCD region (15 bytes) contains the idle rates for a maximum of 15 Input Report IDs. The rate for Report ID 1 starts at offset 0, is specified in milliseconds, and has a maximum value of 20. The rate for Report ID 2 starts at offset 2, immediately following Report ID 1. The value is interpreted similar to that for Report ID 1. Similarly for all other Report IDs. The number of valid entries in this table is 2 * the number of valid Input Report IDs, as parsed from the HID Descriptor.
60018h - 6003Fh Reserved for touch usage (Read all 0s)

60040h - 6103Fh HID_CLASS_DESCRIPTORS (Read Only)
    (HID and Report) and optional (Physical and vendor specific) descriptors from the Sink device

61040h - 6143Fh REPORT DATA (Read Only)
    This 1K DPCD region contains the Input Report and (if declared in the HID Descriptor) the Feature Report.

61440h - 6183Fh OUTPUT REPORT (Write Only)
    This 1K DPCD region contains (if declared in the HID Descriptor) the Output Report.

61840h - 61CFFh Reserved for future touch/HID usage (Read all 0s)

Table 3: DPCD Address Space Changes

Example DP Touch Implementation

In accordance with the present disclosure, an HID device that supports "multi-input" mode operation (e.g., a touch source or touch sink device) may implement a touch interface that defines two reports, a device configuration feature report and an input report. In various implementations, a feature report may identify the mode that a digitizer is operating in (Digitizer: Device Mode) and a maximum number of simultaneous contacts/touches that are supported by the interface (Digitizer: Contact Count Maximum). In various implementations, software may read a feature report when a device is discovered. In various implementations, a touch report may include a Digitizer: Contact Count field and an array of contact information fields, where the Digitizer: Contact Count Maximum field in the Feature report identifies the size of the contact information array. The Digitizer: Contact Count field may identify the number of entries in the contact information array that are currently valid (e.g., if only four fingers are detected, then Contact Count = four and the first four entries (0-3) in the contact information array are valid).

In various implementations, an HID Descriptor may be nine bytes in size (USBHIDDescriptor.bLength = 9) and may start at offset 0 in the HID CLASS DESCRIPTORS DPCD registers. For example, an HID Descriptor may be formatted as follows:

USB_HID_DESCRIPTOR USBHIDDescriptor[] = {
    0x09,   // bLength
    0x21,   // bDescriptorType
    0x0100, // bcdHID
    0x21,   // bCountryCode
    0x01,   // bNumDescriptors
    0x22,   // cd[0].bDescriptorType
    0x0156  // cd[0].wDescriptorLength
};

In various implementations, a Report Descriptor may be 342 bytes in size (USBHIDDescriptor.cd[0].wDescriptorLength = 0x156) and may start at offset 10 (USBHIDDescriptor.bLength + 1) in the HID DESCRIPTOR area. If another HID class descriptor were defined in the HID Descriptor, it would start at the end of the Report Descriptor (i.e., offset 352, or 10 + 342).
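As a rough illustration of the offset arithmetic above, the following C sketch locates the Report Descriptor inside a buffer holding the HID CLASS DESCRIPTORS region. The helper name, the local-buffer assumption, and the little-endian packing of wDescriptorLength (as in USB) are illustrative assumptions rather than requirements stated by this disclosure.

/* Illustrative only: walk the HID Descriptor fields discussed above to
 * find the Report Descriptor. buf points at offset 0 of a copy of the
 * HID CLASS DESCRIPTORS region; multi-byte fields are assumed to be
 * little-endian, as in USB. */
#include <stdint.h>

typedef struct {
    const uint8_t *data;   /* start of the Report Descriptor          */
    uint16_t length;       /* wDescriptorLength, 0x0156 (342) above   */
} report_descriptor_ref;

static report_descriptor_ref locate_report_descriptor(const uint8_t *buf)
{
    report_descriptor_ref ref;
    uint8_t bLength = buf[0];  /* 9 for the HID Descriptor shown above */

    /* cd[0].wDescriptorLength occupies bytes 7..8 of the HID Descriptor. */
    ref.length = (uint16_t)(buf[7] | ((uint16_t)buf[8] << 8));

    /* Per the text above, the Report Descriptor starts at offset
     * bLength + 1, i.e., offset 10 for the nine-byte descriptor shown. */
    ref.data = buf + bLength + 1;
    return ref;
}

A second HID class descriptor, if declared, would then begin immediately after the Report Descriptor, at offset (bLength + 1) + wDescriptorLength, which is the 352 = 10 + 342 noted above.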
For example, a Report Descriptor may be formatted as follows:

char ReportDescriptor[342] = {
    0x05, 0x0d,                    // USAGE_PAGE (Digitizers)
    0x09, 0x04,                    // USAGE (Touch Screen)
    0xa1, 0x01,                    // COLLECTION (Application)
    0x09, 0x0e,                    // USAGE (Device Configuration)  ; Device Configuration Report
    0xa1, 0x02,                    // COLLECTION (Logical)          ; Report ID = 1
    0x15, 0x00,                    // LOGICAL_MINIMUM (0)
    0x25, 0x0f,                    // LOGICAL_MAXIMUM (15)
    0x75, 0x04,                    // REPORT_SIZE (4)
    0x95, 0x02,                    // REPORT_COUNT (2)
    0x09, 0x52,                    // USAGE (Device Mode)
    0x09, 0x55,                    // USAGE (Contact Count Maximum)
    0xb1, 0x02,                    // FEATURE (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x33,                    // USAGE (Touch)                 ; Touch Report for up to 10 fingers
    0xa1, 0x02,                    // COLLECTION (Logical)          ; Report ID = 1
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x15, 0x01,                    // LOGICAL_MINIMUM (1)
    0x25, 0x0a,                    // LOGICAL_MAXIMUM (10)
    0x09, 0x54,                    // USAGE (Contact Count)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact Count = 1-10
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x15, 0x00,                    // LOGICAL_MINIMUM (0)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x09, 0x51,                    // USAGE (Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact 0 Identifier
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x05, 0x01,                    // USAGE_PAGE (Generic Desktop)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 1
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 2
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 3
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 4
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 5
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 6
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 7
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 8
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x05, 0x01,                    // USAGE_PAGE (Generic Desktop)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0x09, 0x22,                    // USAGE (Finger)
    0xa1, 0x00,                    // COLLECTION (Physical)
    0x26, 0xff, 0x00,              // LOGICAL_MAXIMUM (255)
    0x0b, 0x51, 0x00, 0x0d, 0x00,  // USAGE (Digitizers: Contact Identifier)
    0x75, 0x08,                    // REPORT_SIZE (8)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)          ; Contact ID 9
    0x26, 0xff, 0x0f,              // LOGICAL_MAXIMUM (4095)
    0x75, 0x0c,                    // REPORT_SIZE (12)
    0x09, 0x30,                    // USAGE (X)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0x09, 0x31,                    // USAGE (Y)
    0x81, 0x02,                    // INPUT (Data,Var,Abs)
    0xc0,                          // END_COLLECTION
    0xc0,                          // END_COLLECTION
    0xc0                           // END_COLLECTION
};

Report Data Layout

FIG. 13 illustrates the layout 1300 of the REPORT DATA area as defined by the Report Descriptor described above. In various implementations, the feature report may be allocated before the input report. As shown in FIG. 13, layout 1300 includes a feature report 1302 having a single byte defining two fields: Device Mode and Contact Count Maximum. A touch input report 1304 immediately follows the feature report and the Input Report Size fields. The input report may be thirty-one bytes in size, and may include a one-byte Contact Count field followed by an array of ten three-byte Contact entries. Each Contact entry may consist of an 8-bit Contact ID followed by a 12-bit X and a 12-bit Y field. The Contact Count identifies how many Contact entries are currently valid. For example, if Contact Count = 4, then Contact entries 0-3 are valid.
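For illustration only, a receiver might unpack one Contact entry along the following lines. The 8-bit Contact ID plus 12-bit X and 12-bit Y fields declared in the Report Descriptor span 32 bits, so this sketch assumes four bytes per entry with the 12-bit values packed little-endian; the actual entry size and packing would be dictated by layout 1300, so treat the byte arithmetic here as an assumption.

/* Hypothetical unpacking of one Contact entry: 8-bit ID, then 12-bit X
 * and 12-bit Y packed low-byte-first with a shared middle byte. This
 * packing is one plausible layout, not one fixed by the text above. */
#include <stdint.h>

struct contact {
    uint8_t id;   /* Contact Identifier, 0..255 */
    uint16_t x;   /* 0..4095 */
    uint16_t y;   /* 0..4095 */
};

static struct contact unpack_contact(const uint8_t *p)
{
    struct contact c;
    c.id = p[0];
    c.x = (uint16_t)(p[1] | ((uint16_t)(p[2] & 0x0f) << 8));  /* low 12 bits  */
    c.y = (uint16_t)((p[2] >> 4) | ((uint16_t)p[3] << 4));    /* next 12 bits */
    return c;
}

Only the first Contact Count entries carry live data; with Contact Count = 4, for example, a caller would invoke unpack_contact() on entries 0 through 3 and ignore the rest.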
While the Report Descriptor may define the feature report 1302 before the input report 1304, as shown in layout 1300, the reports may be defined in the reverse order in the Report Descriptor and still be presented in the REPORT DATA area with the feature report 1302 occurring first. While implementation of example processes 600, 800, 1000 and 1200 as illustrated in FIGS. 6, 8, 10 and 12 may include the undertaking of all blocks shown in the order illustrated, the present disclosure is not limited in this regard and, in various examples, implementation of processes 600, 800, 1000 and 1200 may include the undertaking of only a subset of the blocks shown and/or in a different order than illustrated. In addition, any one or more of the blocks of FIGS. 6, 8, 10 and 12 may be undertaken in response to instructions provided by one or more computer program products. Such program products may include signal bearing media providing instructions that, when executed by, for example, a processor, may provide the functionality described herein. The computer program products may be provided in any form of computer readable medium. Thus, for example, a processor including one or more processor core(s) may undertake one or more of the blocks shown in FIGS. 6, 8, 10 and 12 in response to instructions conveyed to the processor by a computer readable medium.

As used in any implementation described herein, the terms "module" and/or "logic" refer to any combination of software, firmware and/or hardware configured to provide the functionality described herein. The software may be embodied as a software package, code and/or instruction set or instructions, and "hardware", as used in any implementation described herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules and/or logic may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), and so forth.

FIG. 14 illustrates an example system 1400 in accordance with the present disclosure. In various implementations, system 1400 may be a media system although system 1400 is not limited to this context. For example, system 1400 may be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth. In various implementations, system 1400 includes a platform 1402 coupled to a display 1420. Platform 1402 may receive content from a content device such as content services device(s) 1430 or content delivery device(s) 1440 or other similar content sources. A navigation controller 1450 including one or more navigation features may be used to interact with, for example, platform 1402 and/or display 1420. Each of these components is described in greater detail below. In various implementations, platform 1402 may include any combination of a chipset 1405, processor 1410, memory 1412, storage 1414, graphics subsystem 1415, applications 1416 and/or radio 1418.
Chipset 1405 may provide intercommunication among processor 1410, memory 1412, storage 1414, graphics subsystem 1415, applications 1416 and/or radio 1418. For example, chipset 1405 may include a storage adapter (not depicted) capable of providing intercommunication with storage 1414.

Processor 1410 may be implemented as Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1410 may be dual-core processor(s), dual-core mobile processor(s), and so forth.

Memory 1412 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM). Storage 1414 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In various implementations, storage 1414 may include technology to provide increased storage performance and enhanced protection for valuable digital media when multiple hard drives are included, for example.

Graphics subsystem 1415 may perform processing of images such as still or video for display. Graphics subsystem 1415 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1415 and display 1420. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1415 may be integrated into processor 1410 or chipset 1405. In some implementations, graphics subsystem 1415 may be a stand-alone card communicatively coupled to chipset 1405. The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another implementation, the graphics and/or video functions may be provided by a general purpose processor, including a multi-core processor. In further embodiments, the functions may be implemented in a consumer electronics device.

Radio 1418 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Example wireless networks include (but are not limited to) wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, radio 1418 may operate in accordance with one or more applicable standards in any version.

In various implementations, display 1420 may include any television type monitor or display. Display 1420 may include, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. Display 1420 may be digital and/or analog. In various implementations, display 1420 may be a holographic display. Also, display 1420 may be a transparent surface that may receive a visual projection.
Such projections may convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 1416, platform 1402 may display user interface 1422 on display 1420.

In various implementations, content services device(s) 1430 may be hosted by any national, international and/or independent service and thus accessible to platform 1402 via the Internet, for example. Content services device(s) 1430 may be coupled to platform 1402 and/or to display 1420. Platform 1402 and/or content services device(s) 1430 may be coupled to a network 1460 to communicate (e.g., send and/or receive) media information to and from network 1460. Content delivery device(s) 1440 also may be coupled to platform 1402 and/or to display 1420. In various implementations, content services device(s) 1430 may include a cable television box, personal computer, network, telephone, Internet enabled devices or appliance capable of delivering digital information and/or content, and any other similar device capable of unidirectionally or bidirectionally communicating content between content providers and platform 1402 and/or display 1420, via network 1460 or directly. It will be appreciated that the content may be communicated unidirectionally and/or bidirectionally to and from any one of the components in system 1400 and a content provider via network 1460. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.

Content services device(s) 1430 may receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers may include any cable or satellite television or radio or Internet content providers. The provided examples are not meant to limit implementations in accordance with the present disclosure in any way.

In various implementations, platform 1402 may receive control signals from navigation controller 1450 having one or more navigation features. The navigation features of controller 1450 may be used to interact with user interface 1422, for example. In embodiments, navigation controller 1450 may be a pointing device, that is, a computer hardware component (specifically, a human interface device) that allows a user to input spatial (e.g., continuous and multi-dimensional) data into a computer. Many systems, such as graphical user interfaces (GUI), televisions, and monitors, allow the user to control and provide data to the computer or television using physical gestures. Movements of the navigation features of controller 1450 may be replicated on a display (e.g., display 1420) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 1416, the navigation features located on navigation controller 1450 may be mapped to virtual navigation features displayed on user interface 1422, for example. In embodiments, controller 1450 may not be a separate component but may be integrated into platform 1402 and/or display 1420. The present disclosure, however, is not limited to the elements or in the context shown or described herein.
In various implementations, drivers (not shown) may include technology to enable users to instantly turn platform 1402 on and off, like a television, with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 1402 to stream content to media adaptors or other content services device(s) 1430 or content delivery device(s) 1440 even when the platform is turned "off." In addition, chipset 1405 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card.

In various implementations, any one or more of the components shown in system 1400 may be integrated. For example, platform 1402 and content services device(s) 1430 may be integrated, or platform 1402 and content delivery device(s) 1440 may be integrated, or platform 1402, content services device(s) 1430, and content delivery device(s) 1440 may be integrated, for example. In various embodiments, platform 1402 and display 1420 may be an integrated unit. Display 1420 and content service device(s) 1430 may be integrated, or display 1420 and content delivery device(s) 1440 may be integrated, for example. These examples are not meant to limit the present disclosure.

In various embodiments, system 1400 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 1400 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. When implemented as a wired system, system 1400 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.

Platform 1402 may establish one or more logical or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail ("email") message, voice mail message, alphanumeric symbols, graphics, image, video, text and so forth. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones and so forth. Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or in the context shown or described in FIG. 14.
As described above, system 1400 may be embodied in varying physical styles or form factors. FIG. 15 illustrates implementations of a small form factor device 1500 in which system 1400 may be embodied. In embodiments, for example, device 1500 may be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.

As described above, examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth. Examples of a mobile computing device also may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computers, clothing computers, and other wearable computers. In various embodiments, for example, a mobile computing device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented as a smart phone by way of example, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context.

As shown in FIG. 15, device 1500 may include a housing 1502, a display 1504, an input/output (I/O) device 1506, and an antenna 1508. Device 1500 also may include navigation features 1512. Display 1504 may include any suitable display unit for displaying information appropriate for a mobile computing device. I/O device 1506 may include any suitable I/O device for entering information into a mobile computing device. Examples for I/O device 1506 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, voice recognition device and software, and so forth. Information also may be entered into device 1500 by way of microphone (not shown). Such information may be digitized by a voice recognition device (not shown). The embodiments are not limited in this context.

Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

While certain features set forth herein have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations, which are apparent to persons skilled in the art to which the present disclosure pertains are deemed to lie within the spirit and scope of the present disclosure. |
Methods, servers, and systems for using signatures/certifications embedded in pre-processed code to enable use or reuse of pre-processed code to obviate the need to perform some operations or execute some scripts within the web page content. One or more operations may be performed within an executable script in web page content and the result of the operation signed in a manner that can be used to verify that the corresponding operation may be skipped by a browser. A browser receiving signed pre-processed code may use a signature verification process to determine whether the browser can bypass executing corresponding scripts in the web page content or perform alternative operations. Operations may be pre-performed and the results signed by off-line tools and included in the web page content. Results of operations may be stored in memory along with a signature so that the results of the operation can be reused in the future. |
CLAIMS What is claimed is: 1. A method for processing content in a browser, comprising: receiving in the browser content including one or more pre-performed operations, each associated with a signature; using a signature verification process to verify the signature associated with at least one of the pre-performed operations; performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation; and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. 2. The method of claim 1, wherein performing a first operation comprises incorporating the pre-performed operation. 3. The method of claim 1, wherein performing a first operation comprises skipping a browser operation associated with the pre-performed operation. 4. The method of claim 1, wherein performing a first operation comprises altering a browser operation associated with the pre-performed operation. 5. The method of claim 1, wherein performing a second operation comprises performing a browser operation associated with the pre-performed operation. 6. The method of claim 1, wherein using a signature verification process to verify a signature associated with the pre-performed operation comprises determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. 7. The method of claim 1, further comprising: performing tool operations on code corresponding to web page content to generate at least one pre-performed operation; and signing the pre-performed operations. 8. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations in an offline tool, the method further comprising: sending the signed pre-performed operations to the browser. 9. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations in a server, the method further comprising: sending the signed pre-performed operations to a computing device on which the browser is executing. 10. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 11. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 12. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 13. The method of claim 7, wherein performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 14. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 15. The method of claim 7, wherein signing the pre-processed code is accomplished by a validator. 16. 
The method of claim 7, wherein signing the pre-processed code comprises providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. 17. The method of claim 7, wherein performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations within the browser. 18. The method of claim 17, wherein receiving content including one or more pre-performed operations each associated with a signature comprises retrieving signed pre-processed code from a memory of a computing device on which the browser is executing. 19. The method of claim 18, further comprising storing a result of the first or second operation in the memory of the computing device. 20. The method of claim 17, wherein performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation comprises pre-processing a portion of the web page content. 21. The method of claim 20, further comprising: including the signed pre-processed portion of the code within web page content; and sending the web page content to a computing device on which the browser is operating. 22. A computing device, comprising: means for receiving content including one or more pre-performed operations, each associated with a signature; means for using a signature verification process to verify the signature associated with at least one of the pre-performed operations; means for performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation; and means for performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. 23. The computing device of claim 22, wherein means for performing a first operation comprises means for incorporating the pre-performed operation. 24. The computing device of claim 22, wherein means for performing a first operation comprises means for skipping a browser operation associated with the pre-performed operation. 25. The computing device of claim 22, wherein means for performing a first operation comprises means for altering a browser operation associated with the pre-performed operation. 26. The computing device of claim 22, wherein means for performing a second operation comprises means for performing a browser operation associated with the pre-performed operation. 27. The computing device of claim 22, wherein means for using a signature verification process to verify a signature associated with the pre-performed operation comprises means for determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. 28. The computing device of claim 22, further comprising: means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation; and means for signing the pre-performed operations. 29. The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for generating JavaScript. 30. The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for generating a cascading style sheet. 31.
The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for performing a source to source transformation. 32. The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content comprises means for marking portions of a cascading style sheet that are not used. 33. The computing device of claim 28, wherein means for signing the pre-processed code comprises means for signing the pre-processed code in a validator application executing on the computing device. 34. The computing device of claim 28, wherein means for signing the pre-processed code comprises means for generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. 35. The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for performing tool operations within a browser application executing on the computing device. 36. The computing device of claim 35, wherein means for receiving content including one or more pre-performed operations each associated with a signature comprises means for retrieving signed pre-processed code from a memory. 37. The computing device of claim 36, further comprising means for storing a result of the first or second operation. 38. The computing device of claim 28, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 39. The computing device of claim 38, wherein means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation comprises means for pre-processing a portion of the web page content. 40. A server, comprising: means for performing tool operations on code corresponding to portions of a web page content to generate at least one pre-performed operation; means for signing the generated pre-performed operations; means for including the signed pre-processed operations within the web page content; and means for sending the web page content to a computing device. 41. The server of claim 40, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for generating JavaScript. 42. The server of claim 40, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for generating a cascading style sheet. 43. The server of claim 40, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for performing a source to source transformation. 44. The server of claim 40, wherein means for performing tool operations on code corresponding to web page content comprises means for marking portions of a cascading style sheet that are not used. 45.
The server of claim 40, wherein means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 46. The server of claim 40, wherein means for signing the pre-processed code comprises means for providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. 47. A computing device, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured with processor-executable instructions to perform operations comprising: receiving content that includes one or more pre-performed operations, each associated with a signature; using a signature verification process to verify the signature associated with at least one of the pre-performed operations; performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation; and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. 48. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises incorporating the pre-performed operation. 49. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises skipping a browser operation associated with the pre-performed operation. 50. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises altering a browser operation associated with the pre-performed operation. 51. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations such that performing a second operation comprises performing a browser operation associated with the pre-performed operation. 52. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation comprises determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. 53. The computing device of claim 47, wherein the processor is configured with processor-executable instructions to perform operations further comprising: performing tool operations on code corresponding to web page content to generate at least one pre-performed operation; and signing the pre-performed operations. 54. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 55.
The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 56. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 57. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 58. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code comprises signing the pre-processed code in a validator executing on the computing device. 59. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code comprises providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. 60. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations within a browser executing on the computing device. 61. The computing device of claim 60, wherein the processor is configured with processor-executable instructions to perform operations such that receiving content including one or more pre-performed operations each associated with a signature comprises retrieving signed pre-processed code from the memory. 62. The computing device of claim 61, wherein the processor is configured with processor-executable instructions to perform operations further comprising storing a result of the first or second operation in the memory. 63. The computing device of claim 53, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 64. The computing device of claim 63, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation comprises pre-processing a portion of the web page content. 65.
A server, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured with processor-executable instructions to perform operations comprising: performing tool operations on code corresponding to portions of a web page content to generate at least one pre-performed operation; signing the generated pre-performed operations; including the signed pre-processed operations within the web page content; and sending the web page content including the signed pre-processed operations to a computing device. 66. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 67. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 68. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 69. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 70. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 71. The server of claim 65, wherein the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code comprises generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. 72. A non-transitory computer readable storage medium having stored thereon processor-executable software instructions configured to cause a processor to perform operations comprising: receiving content including one or more pre-performed operations, each associated with a signature; using a signature verification process to verify the signature associated with at least one of the pre-performed operations; performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation; and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. 73. The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation comprises incorporating the pre-performed operation. 74.
The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation comprises skipping a browser operation associated with the pre-performed operation. 75. The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation comprises altering a browser operation associated with the pre-performed operation. 76. The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a second operation comprises performing a browser operation associated with the pre-performed operation. 77. The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation comprises determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. 78. The non-transitory computer readable storage medium of claim 72, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations further comprising: performing tool operations on code corresponding to web page content to generate at least one pre-performed operation; and signing the pre-performed operations. 79. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 80. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 81. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 82. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 83. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that signing the pre-processed code comprises signing the pre-processed code in a computing device on which a validator is executing. 84.
The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that signing the pre-processed code comprises providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. 85. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations within a browser. 86. The non-transitory computer readable storage medium of claim 85, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that receiving content including one or more pre-performed operations each associated with a signature comprises retrieving signed pre-processed code from a memory of a computing device on which the browser is executing. 87. The non-transitory computer readable storage medium of claim 86, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations further comprising: storing a result of the first or second operation in the memory of the computing device. 88. The non-transitory computer readable storage medium of claim 78, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 89. The non-transitory computer readable storage medium of claim 88, wherein the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to an executable script in a web page content to generate at least one pre-performed operation comprises pre-processing a portion of the web page content. 90. A non-transitory computer readable storage medium having stored thereon server-executable software instructions configured to cause a server to perform operations comprising: performing tool operations on code corresponding to portions of a web page content to generate at least one pre-performed operation; signing the generated pre-performed operations; including the signed pre-processed operations within the web page content; and sending the web page content including the signed pre-processed operations to a computing device. 91. The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 92. The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 93.
The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 94. The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 95. The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 96. The non-transitory computer readable storage medium of claim 90, wherein the stored server-executable software instructions are configured to cause the server to perform operations such that signing the pre-processed code comprises providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. 97. A system, comprising: a client device comprising a client memory and a client processor coupled to the client memory; and a server comprising a server memory and a server processor coupled to the server memory, wherein the client processor is configured with processor-executable instructions to perform operations comprising: receiving content that includes one or more pre-performed operations, each associated with a signature; using a signature verification process to verify the signature associated with at least one of the pre-performed operations; performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation; and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation, and wherein the server processor is configured with server-executable instructions to perform operations comprising: performing tool operations on code corresponding to web page content to generate at least one pre-performed operation; signing the pre-performed operations; including the signed pre-processed operations within the web page content; and sending the web page content including the signed pre-processed operations to the client device. 98. The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that receiving content that includes one or more pre-performed operations comprises receiving web page content including the signed pre-processed operations from the server. 99. The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises incorporating the pre-performed operation. 100. 
The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises skipping a browser operation associated with the pre-performed operation. 101. The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that performing a first operation comprises altering a browser operation associated with the pre-performed operation. 102. The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that performing a second operation comprises performing a browser operation associated with the pre-performed operation. 103. The system of claim 97, wherein the client processor is configured with processor-executable instructions to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation comprises determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. 104. The system of claim 97, wherein the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating JavaScript. 105. The system of claim 97, wherein the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises generating a cascading style sheet. 106. The system of claim 97, wherein the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing a source to source transformation. 107. The system of claim 97, wherein the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content comprises marking portions of a cascading style sheet that are not used. 108. The system of claim 97, wherein the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation comprises performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. 109. The system of claim 97, wherein the server processor is configured with server-executable instructions such that signing the pre-processed code comprises generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. |
REDUCING WEB BROWSING OVERHEADS WITH EXTERNAL CODE CERTIFICATION RELATED APPLICATIONS [0001] This application claims the benefit of priority to U.S. Provisional Application No. 61/591141, entitled "Reducing Web Browsing Overheads with External Code Certification" filed January 26, 2012, which is hereby incorporated by reference in its entirety. BACKGROUND [0002] Despite many recent advances in browser technology, web browsers generally remain lacking in their ability to perform complex, computation-intensive tasks. To address this and other limitations, some web browsers may offload some or all of their tasks/processing to a remote server. For example, some web browsers (e.g., Opera™ Mini) may be configured to request web pages from servers that process and compress the web pages into image files before sending them to the browser. On such systems, the browser simply receives and renders the image, relying on the server to perform nearly all of the processing/tasks associated with displaying the page. [0003] Other web browsers (e.g., Amazon Silk) may use a split architecture in which only some of the tasks/processing is offloaded to a server. However, this split architecture generally requires the use of predefined servers and proprietary browsers. Moreover, web browsers (whether proprietary or not) are not always fully informed of the tasks that have already been performed, or whether the pre-processed results are current (e.g., in view of recent updates/changes to the content, etc.). Without mechanisms for ensuring the validity of the pre-processed code, a browser is unable to determine whether the code has been efficiently encoded and/or can otherwise be trusted to perform as required to render the associated page. SUMMARY [0004] The various aspects include methods of processing content in a browser, including receiving in the browser content including one or more pre-performed operations, each associated with a signature, using a signature verification process to verify the signature associated with at least one of the pre-performed operations, performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation, and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. [0005] In an aspect, performing a first operation includes incorporating the pre-performed operation. In a further aspect, performing a first operation includes skipping a browser operation associated with the pre-performed operation. In a further aspect, performing a first operation includes altering a browser operation associated with the pre-performed operation. In a further aspect, performing a second operation includes performing a browser operation associated with the pre-performed operation. In a further aspect, using a signature verification process to verify a signature associated with the pre-performed operation includes determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. In a further aspect, the method includes performing tool operations on code corresponding to web page content to generate at least one pre-performed operation, and signing the pre-performed operations. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations within the browser. 
In a further aspect, receiving content including one or more pre-performed operations each associated with a signature includes retrieving signed pre-processed code from a memory of a computing device on which the browser is executing. In a further aspect, the method includes storing a result of the first or second operation in the memory of the computing device. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations in an offline tool, the method further including sending the signed pre-performed operations to the browser. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations in a server, the method further including sending the signed pre-performed operations to a computing device on which the browser is executing. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. In a further aspect, performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation includes pre-processing a portion of the web page content. In a further aspect, the method further includes including the signed pre-processed portion of the code within web page content, and sending the content to a computing device on which the browser is operating. In a further aspect, signing the pre-processed code is accomplished by a validator. In a further aspect, signing the pre-processed code includes providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0006] Further aspects include a computing device that includes means for receiving content including one or more pre-performed operations, each associated with a signature, means for using a signature verification process to verify the signature associated with at least one of the pre-performed operations, means for performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation, and means for performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. In an aspect, means for performing a first operation includes means for incorporating the pre-performed operation. In a further aspect, means for performing a first operation includes means for skipping a browser operation associated with the pre-performed operation. 
In a further aspect, means for performing a first operation includes means for altering a browser operation associated with the pre-performed operation. In a further aspect, means for performing a second operation includes means for performing a browser operation associated with the pre-performed operation. In a further aspect, means for using a signature verification process to verify a signature associated with the pre-performed operation includes means for determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. In a further aspect, the device further includes means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation, and means for signing the pre-performed operations. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for performing tool operations within a browser application executing on the computing device. In a further aspect, means for receiving content including one or more pre-performed operations each associated with a signature includes means for retrieving signed pre-processed code from the memory. In a further aspect, the computing device includes means for storing a result of the first or second operation in the memory. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for generating JavaScript. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for generating a cascading style sheet. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for performing a source to source transformation. In a further aspect, means for performing tool operations on code corresponding to web page content includes means for marking portions of a cascading style sheet that are not used. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation includes means for pre-processing a portion of the web page content. In a further aspect, means for signing the pre-processed code includes means for signing the pre-processed code in a validator application executing on the computing device. In a further aspect, means for signing the pre-processed code includes means for generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. 
[0007] Further aspects include a server that includes means for receiving web page content, means for performing tool operations on code corresponding to portions of the web page content to generate at least one pre-performed operation, means for signing the generated pre-performed operations, means for including the signed pre-processed operations within the web page content, and means for sending the web page content to a computing device. In an aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for generating JavaScript. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for generating a cascading style sheet. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for performing a source to source transformation. In a further aspect, means for performing tool operations on code corresponding to web page content includes means for marking portions of a cascading style sheet that are not used. In a further aspect, means for performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes means for performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, means for signing the pre-processed code includes means for providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0008] Further aspects include a computing device that includes a memory, and a processor coupled to the memory, in which the processor is configured with processor-executable instructions to perform operations including receiving content that includes one or more pre-performed operations, each associated with a signature, using a signature verification process to verify the signature associated with at least one of the pre-performed operations, performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation, and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. In an aspect, the processor is configured with processor-executable instructions to perform operations such that performing a first operation includes incorporating the pre-performed operation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing a first operation includes skipping a browser operation associated with the pre-performed operation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing a first operation includes altering a browser operation associated with the pre-performed operation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing a second operation includes performing a browser operation associated with the pre-performed operation. 
In a further aspect, the processor is configured with processor-executable instructions to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation includes determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. In a further aspect, the processor is configured with processor-executable instructions to perform operations further including performing tool operations on code corresponding to web page content to generate at least one pre-performed operation, and signing the pre-performed operations. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations within a browser executing on the computing device. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that receiving content including one or more pre-performed operations each associated with a signature includes retrieving signed pre-processed code from the memory. In a further aspect, the processor is configured with processor-executable instructions to perform operations further including storing a result of the first or second operation in the memory. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation includes pre-processing a portion of the web page content. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code includes signing the pre-processed code in a validator executing on the computing device. 
In a further aspect, the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code includes providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0009] Further aspects include a server that includes a memory, and a processor coupled to the memory, in which the processor is configured with processor-executable instructions to perform operations including receiving web page content, performing tool operations on code corresponding to portions of the web page content to generate at least one pre-performed operation, signing the generated pre-performed operations, including the signed pre-processed operations within the web page content, and sending the web page content including the signed pre-processed operations to a computing device. In an aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, the processor is configured with processor-executable instructions to perform operations such that signing the pre-processed code includes generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0010] Further aspects include a non-transitory computer readable storage medium having stored thereon processor-executable software instructions configured to cause a processor to perform operations for processing content in a browser, the operations including receiving content including one or more pre-performed operations, each associated with a signature, using a signature verification process to verify the signature associated with at least one of the pre-performed operations, performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation, and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. 
In an aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation includes incorporating the pre-performed operation. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation includes skipping a browser operation associated with the pre-performed operation. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a first operation includes altering a browser operation associated with the pre-performed operation. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing a second operation includes performing a browser operation associated with the pre-performed operation. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation includes determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations further including performing tool operations on code corresponding to web page content to generate at least one pre-performed operation, and signing the pre-performed operations. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations within the browser. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that receiving content including one or more pre-performed operations each associated with a signature includes retrieving signed pre-processed code from a memory of a computing device on which the browser is executing. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations further including storing a result of the first or second operation in the memory of the computing device. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. 
In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that performing tool operations on code corresponding to an executable script in a web page content to generate at least one pre-performed operation includes pre-processing a portion of the web page content. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that signing the pre-processed code includes signing the pre-processed code in a computing device on which a validator is executing. In a further aspect, the stored processor-executable software instructions are configured to cause a processor to perform operations such that signing the pre-processed code includes providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0011] Further aspects include a non-transitory computer readable storage medium having stored thereon server-executable software instructions configured to cause a server to perform operations including receiving web page content, performing tool operations on code corresponding to portions of the web page content to generate at least one pre-performed operation, signing the generated pre-performed operations, including the signed pre-processed operations within the web page content, and sending the web page content including the signed pre-processed operations to a computing device, in which the server processor is configured with server-executable instructions to perform operations including performing tool operations on code corresponding to web page content to generate at least one pre-performed operation, and signing the pre-performed operations. [0012] In an aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. In a further aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. 
In a further aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, the stored server-executable software instructions are configured to cause a server to perform operations such that signing the pre-processed code includes providing a signature that certifies that certain rules have been obeyed in the pre-processing operation. [0013] Further aspects include a system that includes a client device including a client memory and a client processor coupled to the client memory, and a server including a server memory and a server processor coupled to the server memory, in which the client processor is configured with processor-executable instructions to perform operations including receiving content that includes one or more pre-performed operations, each associated with a signature, using a signature verification process to verify the signature associated with at least one of the pre-performed operations, performing a first operation when the signature verification process confirms the signature associated with the pre-performed operation, and performing a second operation when the signature verification process does not confirm the signature associated with the pre-performed operation. In an aspect, the client processor is configured with processor-executable instructions to perform operations such that performing a first operation includes incorporating the pre-performed operation. In a further aspect, the client processor is configured with processor-executable instructions to perform operations such that performing a first operation includes skipping a browser operation associated with the pre-performed operation. In a further aspect, the client processor is configured with processor-executable instructions to perform operations such that performing a first operation includes altering a browser operation associated with the pre-performed operation. In a further aspect, the client processor is configured with processor-executable instructions to perform operations such that performing a second operation includes performing a browser operation associated with the pre-performed operation. In a further aspect, the client processor is configured with processor-executable instructions to perform operations such that using a signature verification process to verify a signature associated with the pre-performed operation includes determining whether a browser operation associated with the pre-performed operation may be skipped or performed differently to achieve better results. In a further aspect, the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating JavaScript. 
In a further aspect, the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes generating a cascading style sheet. In a further aspect, the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing a source to source transformation. In a further aspect, the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content includes marking portions of a cascading style sheet that are not used. In a further aspect, the server processor is configured with server-executable instructions such that performing tool operations on code corresponding to web page content to generate at least one pre-performed operation includes performing tool operations on code corresponding to an executable script in the web page content to generate at least one pre-performed operation. In a further aspect, the server processor is configured with server-executable instructions such that signing the pre-processed code includes generating a signature that certifies that certain rules have been obeyed in the pre-processing operation. In a further aspect, the server processor is configured with server-executable instructions to perform operations further including including the signed pre-processed operations within the web page content, and sending the web page content including the signed pre-processed operations to the client device. In a further aspect, the client processor is configured with processor-executable instructions to perform operations such that receiving content that includes one or more pre-performed operations includes receiving web page content including the signed pre-processed operations from the server. BRIEF DESCRIPTION OF THE DRAWINGS [0014] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary aspects of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention. [0015] FIG. 1 is a component block diagram illustrating logical components and flows in an example network suitable for implementing the various aspects. [0016] FIGs. 2A-B are process flow diagrams of aspect methods for reducing web browsing overheads with external code certification. [0017] FIG. 3 is a process flow diagram of another aspect method for reducing web browsing overheads with external code certification. [0018] FIG. 4 is an illustration of an example mobile device suitable for use with the various aspects. [0019] FIG. 5 is an illustration of an example personal computer suitable for use with the various aspects. DETAILED DESCRIPTION [0020] The various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. 
[0021] The term "computing device" is used generically herein to refer to any one or all of servers, personal computers, mobile devices, cellular telephones, personal data assistants (PDA's), palm-top computers, wireless electronic mail receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the Blackberry Storm®), Global Positioning System (GPS) receivers, wireless gaming controllers, personal computers, and similar personal electronic devices which include a programmable processor configured with a web browser type application. While the various aspects are particularly useful in mobile devices, such as cellular telephones, which have limited processing power, the aspects are generally useful in any computing device that executes scripts and applications written in dynamic and/or scripting languages. [0022] The terms "signing" and "certifying" are used generically herein and may refer to any method of encoding or labeling code, scripts, data, or content such that a client (e.g., web browser) can determine that the code/scripts/data/content was pre-processed by an offline tool or validator, and/or otherwise conforms with the client' s requirements.[0023] The term "scripting language" is used generically in this application and may refer to any dynamic language, scripting language, markup language, style sheet, or to any interpreted language used to write programs (herein "code" or "scripts") that are interpreted and/or compiled at runtime. Thus, for the purposes of this application, the term "scripting language" should not be limited to languages that are interpreted from source code or bytecode, or to those that execute along with programs that are traditionally compiled into native machine code. Examples of scripting languages within the scope of this application include, for example, JavaScript, Cascading Style Sheets, HTML, Python, and Ruby, as well as Java and other languages that may be developed in the future. [0024] Various aspects are described herein using JavaScript and related terminology as convenient examples of a scripting language that may be used or addressed by the various embodiments. However, it should be understood that the examples related to JavaScript and other references to the JavaScript language herein are for illustration purposes only, and are not intended to limit the descriptions or the embodiments to that particular type of dynamic scripting language. Therefore the scope of the claims should not be construed as requiring JavaScript unless specifically recited. [0025] It should be understood that the terms "code" and "scripts" are used generically and interchangeably herein, and encompass data and content that may be used or rendered by an application. It should also be understood that the various aspects disclosed herein may be applicable to any part of an application (e.g., browser), including both code and content. [0026] In the various aspects, browser operations may be separated into two portions (an offline tool portion and a browser portion) such that certain browser operations are separated from the other browser operations in both time and space (i.e., performed ahead of time, by a different machine). Signatures/certifications may be used to ensure that the results of these two portions may be safely combined at runtime. 
[0027] As mentioned above, despite many recent advances in browser technology, web browsers generally remain lacking in their ability to perform complex, computation-intensive tasks. The various aspects overcome this and other limitations by enabling some tasks to be performed ahead of time, using external or "offline" tools that are separate from the web browsers themselves. For example, a browser may be configured to allow for certain transformations, optimizations, computations, and/or analysis to be done ahead of time by offline/external tools, the results of which may be received by the browser and used at runtime to improve the browser's performance. The offline/external tool may pre-process the data by, for example, marking portions of cascading style sheets (CSS) that are not used during page load such that the browser can readily identify the portions that are not used. The offline/external tool may also perform source-to-source transformations (e.g., taking in JavaScript and generating optimized JavaScript), and the generated/transformed code (e.g., JavaScript) may be embedded with the content for the browser to process, execute and/or display. [0028] Since both the original code and the generated code may be in the same format (e.g., both may be JavaScript), a web browser may not be fully informed of the tasks that have already been performed by the external/offline tools, whether the pre-processed tasks are still current (e.g., in view of recent updates/changes to the content, etc.), or whether the pre-performed tasks were performed in such a way so as to not cause faults or violations (e.g., incorrect execution, unconstrained faults, etc.) due to, for example, incompatible assumptions. [0029] For these and other reasons, a browser may be required to perform a number of operations to verify the validity of the generated code before execution. This verification process may require passing a substantial amount of supplemental information (e.g., task and version information, browsers supported, pre-processing methodologies, etc.) between the browser and the offline tool, and/or performing bytecode verification on the entire body of received code (e.g., as mandated when loading Java classes). Passing large amounts of supplemental information adds overhead to the browser. The verification overhead may exceed the cost of simply performing all phases and ignoring the pre-processed data, and this extra overhead may cancel the benefits of running the offline tool. [0030] Various aspects provide a framework that utilizes a signature or certification that is associated with one or more pre-processed scripts/code/data/content (herein collectively "script" or "code") such that a web browser can verify, confirm, and/or trust the script and skip further processing of the associated script by relying on code previously generated and stored in memory by the browser or provided by an offline/external tool. [0031] Various aspects verify, encode, and pass pre-processed code to a browser in a manner that enables the browser to determine the tasks (e.g., transformations, optimizations, compilations, computations, analysis, etc.) that have been pre-processed, and such that the browser can trust that the pre-processed code is trustworthy (i.e., that the pre-processed code will execute correctly), without performing additional processing. 
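By way of illustration only, the following JavaScript sketch (for Node.js) shows one way such an offline tool might pre-process a script and record the performed tasks for later verification. The @certified annotation format, the optimize() transformation, and the choice of a SHA-256 digest are hypothetical details assumed for this example, not requirements of the aspects described herein.

    // Hypothetical offline-tool sketch: optimize a script and embed an
    // annotation that an unmodified JavaScript engine will simply ignore.
    const crypto = require('crypto');

    function optimize(source) {
      // Placeholder source-to-source transformation: strip line comments.
      return source.replace(/\/\/[^\n]*/g, '');
    }

    function preprocess(source) {
      const optimized = optimize(source);
      // Hash the original script so a browser can later confirm that this
      // pre-processed result corresponds to the script actually on the page.
      const digest = crypto.createHash('sha256').update(source).digest('hex');
      return `/* @certified tasks=optimize sha256=${digest} */\n` + optimized;
    }

    console.log(preprocess('var x = 1; // removed by the offline tool\n'));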
[0032] In various aspects, code/data generated by an offline/external tool and included with the rest of the web page content may be embedded with a "verified stamp" or "signature." This signature may identify (e.g., via a verification identifier) the tasks which have been accomplished. This signature may also enable the browser to confirm that the code has been efficiently encoded and may be executed without additional processing or browser verification. Thus, in an aspect, instead of passing a cumbersome amount of supplemental information to the browser, the generated code may be signed with the signature embedded in the code (e.g., in comments, annotations, etc.) such that client applications (e.g., browsers) can readily identify which tasks have been accomplished and trust that the code is safe to execute. In an aspect, the signature may be well-defined, structured, and efficiently encoded supplemental information. [0033] By embedding the "verified stamp" or "signature" into the code, the various aspects eliminate the need for the browser to perform any additional operations to verify the pre-processed code, reducing web browsing overhead and improving performance. [0034] In an aspect, the use of signatures to confirm and verify previously processed code may also be used by the web browser when storing results of processing a web page in memory. In this aspect, when the browser processes web page scripts while rendering a web page, the processed script may be stored in memory for reuse the next time the page is rendered. Since web pages change frequently, the browser would conventionally have to substantially process the page scripts in order to determine whether the page is the same as the one previously rendered. The aspects enable the web browser to sign code saved in memory after it has been processed. The browser may then use the signature to determine whether the saved code can be trusted to properly render the page. For example, if the web page content has changed since the last time it was rendered by the web browser, the process of verifying the signature may inform the browser of the change in content, in which case the browser may choose to execute the script instead of reusing previous code retrieved from memory. [0035] The embedding of the stamps/signatures/certifications in pre-processed code may provide an efficient communication protocol between the external/offline tool and the browser, enabling the browser to confirm the safety or trustworthy nature of the received code (i.e., no improper memory accesses, nothing significant has changed since generating the code, the code will not cause malfunctions, etc.). [0036] As mentioned above, browser operations may be separated into two portions (an offline tool portion and a browser portion) such that certain browser operations are separated from the other operations in both time and space (e.g., performed ahead of time, or by a different machine). In an aspect, an offline tool (e.g., a tool that performs static and/or dynamic analysis) may generate pre-processed code, sign the code to certify that the code obeys certain rules, and embed the signed code into the browser. In an aspect, pre-processed code may be validated by an external validator, which may sign the results of the offline tool (i.e., the pre-processed code). The results may be signed with a private key or by including known keywords in the form of tags, attributes, formatted comments, etc. The browser may use cryptographic credentials to determine whether the code was processed by a known external/offline tool, whether the code is current, and/or whether the code is safe or trustworthy. For example, the browser may use a validator public key to validate the embedded signature to determine whether the code was in fact processed by a trusted validator or the expected version of it. If the browser validates the signature in the code, the browser may trust that the code is safe to execute without incurring any additional overhead, requesting additional information, or performing any of the additional processing/analysis typically required for code verification. This process enables the browser to rely on and use the pre-processed code, thereby reducing processing overheads in the client device and improving performance. 
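For illustration, the following sketch uses Node's crypto module to show how a validator might sign pre-processed code with a private key and how a browser might then use the validator's public key to verify it, along the lines described above; the Ed25519 key pair and the payload are illustrative assumptions only.

    const crypto = require('crypto');

    // Validator side: sign the pre-processed code with a private key.
    const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');
    const preprocessedCode = 'doLayout(); /* pre-computed by the offline tool */';
    const signature = crypto.sign(null, Buffer.from(preprocessedCode), privateKey);

    // Browser side: verify with the validator's public key. On success the
    // code may be trusted without further verification; on failure the
    // browser falls back to processing the original scripts itself.
    const trusted = crypto.verify(null, Buffer.from(preprocessedCode), publicKey, signature);
    console.log(trusted ? 'use pre-processed code' : 'fall back to full processing');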
[0037] In aspects in which the web browser stores processed code and uses an embedded signature to verify that the stored code is trustworthy, the process proceeds in a similar manner except that the browser itself serves as the validator. [0038] FIG. 1 illustrates an example network 100 that may be used for reducing web browsing overheads with external code certification in accordance with the various aspects. The network 100 may include a web/content server 102 that delivers content to a client machine 106 via the Internet 104. The client machine 106 may include a network interface module 108, a display module 116, a memory 118, and a web browser 110. The browser 110 may include a JavaScript engine 112 for interpreting and executing JavaScript. [0039] The network 100 may also include offline/external tools 114 configured to perform browser operations. The external/offline tool 114 may be implemented anywhere in the network 100, such as on the web server 102, a separate server, a proxy, or on the client machine 106. The external/offline tool 114 may be implemented as an independent process or as a part of the browser 110. The external/offline tool 114 may be configured to generate code (e.g., may be a preprocessor) or to send static pre-processed code (e.g., code provided by the developer, results of a previous execution session of the browser, etc.) to the browser 110. [0040] The browser 110 may be configured to offload certain browser operations (e.g., transforms, optimizations, etc.) to the offline/external tools 114 such that the offloaded operations are separated from the other operations in time and/or space (i.e., performed ahead of time, by a different machine). The external/offline tool 114 may compile the JavaScript, generate code for one or more platforms (e.g., Android, etc.), and sign the generated code with a signature. The code generated by the offline/external tools 114 may be the same type of code used by the browser (i.e., it performs a source-to-source transformation). For example, the offline tool may take JavaScript code as input and generate optimized (and signed) JavaScript code as its output. The generated code may be compiled executable code (e.g., a series of fully compiled functions). The existence of the signature allows the browser 110 to call the generated code directly and trust that its execution will result in the exact same operation as if the JavaScript code had been generated by the browser 110 itself (e.g., via the JavaScript engine 112). This effectively eliminates virtually all of the cost of JavaScript compilation from the browser/user perspective. [0041] In an aspect, the external/offline tool 114 may be part of the browser 110 and include a preprocessor that pre-processes scripts when the client machine 106 detects that it is connected to a power source and/or is idle. 
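As a non-limiting sketch of this aspect, the following browser-side JavaScript defers pre-processing until the device is on external power and idle, using the standard navigator.getBattery() and requestIdleCallback() APIs; the preprocessAndSign() helper is a hypothetical stand-in for the tool portion.

    // Run the tool portion only when charging and only during idle periods.
    async function schedulePreprocessing(scripts) {
      const battery = await navigator.getBattery();
      if (!battery.charging) return; // wait until connected to a power source
      requestIdleCallback((deadline) => {
        // Pre-process as many scripts as fit into this idle period.
        while (deadline.timeRemaining() > 0 && scripts.length > 0) {
          preprocessAndSign(scripts.shift()); // hypothetical helper
        }
      });
    }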
[0042] In an aspect, the external/offline tool 114 may sign and store the preprocessed code in memory for later use. [0043] In an aspect, signatures may be embedded in the generated code so that they do not impact browsers that do not support the signatures. In an aspect, the offline/external tool may be configured to embed the code such that the embedded code can be ignored by an unmodified JavaScript engine and processed by JavaScript engines modified to understand the embedded code. In an aspect, the offline/external tool may be configured to embed the code in comments or annotations. [0044] FIG. 2A illustrates an aspect method 200 of reducing web browsing overheads by using an external code certification. In block 202, browser operations may be separated into an offline tool portion and a browser portion. In block 204, an offline tool may perform the offline tool portions in advance of the browser's execution of the code and generate pre-processed code. In block 206, the offline tool may sign the pre-processed code by embedding a signature into the code. In an aspect, as part of block 206, the pre-processed code may be validated by a validator, which may sign the results of the offline tool (i.e., the pre-processed code) with a private key or other verifiable key used in a hash-type signature operation. Any of a variety of known signing processes may be used in generating the signature based on the content of the processed code. By signing the processed code, a receiver device is able to verify the signature by performing the same or a parallel process on that code when it is received. Alternatively, the signature may be based on the pre-processed code. In block 208, the signed pre-processed code may be sent to the browser along with the rest of the web content. [0045] In block 210, the browser may receive the signed code along with the rest of the web page content. In block 212, the browser may evaluate the signatures in the received code. For example, as part of block 212, the browser may use a validator public key to validate whether the code was in fact processed by a trusted validator. Additionally or alternatively, the browser may perform a hash function on the script in the web page that has been pre-processed to obtain a hash value characteristic of that code. This verification process can confirm both that the pre-processed code corresponds to the non-processed script in the web page and that the pre-processing was performed by a trustworthy offline tool. [0046] In determination block 214, the browser may determine whether the signatures match. If the browser determines that the generated and embedded signatures match (i.e., determination block 214 = "Yes"), in block 216, the browser may combine the client portions and offline tool portions at runtime and execute the signed code, trusting in the security of the code. If the browser determines that the signatures do not match (i.e., determination block 214 = "No"), in block 218, the browser may render the page by executing scripts as if the pre-processed code had not been provided. Thus, the embedding of signatures allows certain browser operations to be separated from the other operations in both time and space (i.e., performed ahead of time, by a different machine) by ensuring that the results of the two portions can be safely combined at runtime by enabling the browser to trust that the code is safe (e.g., correct operation, no unauthorized memory accesses, etc.). 
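The decision in blocks 212 through 218 might be sketched as follows, reusing the hypothetical @certified annotation from the earlier example; executePreprocessed() and processConventionally() are assumed helpers, and Node's crypto module stands in for whatever hashing facility a browser would use.

    const crypto = require('crypto');

    function render(pageScript, signedCode) {
      // Extract the digest carried in the embedded annotation, if any.
      const match = /@certified tasks=\S+ sha256=([0-9a-f]{64})/.exec(signedCode);
      // Recompute the hash of the script actually present in the page.
      const digest = crypto.createHash('sha256').update(pageScript).digest('hex');
      if (match && match[1] === digest) {
        executePreprocessed(signedCode);   // block 216: trust and combine
      } else {
        processConventionally(pageScript); // block 218: ignore pre-processed code
      }
    }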
[0047] FIG. 2B illustrates another aspect method 250 of reducing web browsing overheads by using an external code certification. In block 252, a browser may receive input from an offline tool portion. In block 254, the browser may receive signed pre-processed code/content. In determination block 256, the browser may determine whether there are any valid signatures in the received signed pre-processed code/content (e.g., via determining that the generated and embedded signatures match). If it is determined that there are no valid signatures (i.e., determination block 256 = "No"), in block 258, the browser may render the page by performing full operations on all the received code (e.g., for each phase) as if the pre-processed code had not been provided. If it is determined that there are valid signatures (i.e., determination block 256 = "Yes"), in block 260, the browser may render the page by executing the pre-processed code, and only performing full verification of the unsigned portions of the code/content. In optional block 262, the browser may optionally generate signatures and results for the next set of executions, and store them locally or remotely for later retrieval. Any of a variety of known signing processes may be used in generating the signature, which may be based on the content of the processed code. By signing the processed code, a receiver device is able to verify the signature by performing the same or a parallel process on that code when it is accessed from memory or received as part of an accessed webpage. [0048] FIG. 3 illustrates another aspect method 300 of reducing web browsing overheads by using the code signing methods to save processed code in a manner that allows the browser to determine at a later time whether it can be reused when rendering the same page. In block 302, the browser may receive a request to visit a particular website. In determination block 304, the browser may determine whether this is the first time that the site has been visited in a given time window. If the browser determines that it is the first visit (i.e., determination block 304 = "Yes"), in block 306, the browser may process the web page content using conventional methods. In block 308, the browser may sign the processed code and include or embed the generated signature. In block 310, the signed processed code may be stored in a memory of the device on which the web browser is running. [0049] If the browser determines that it is not the first visit to the web page (i.e., determination block 304 = "No"), in block 312, the signed code may be retrieved from memory. In block 314, the browser may verify the signature included with or embedded in the code. This process may involve performing the signature process (e.g., a hash function) on the corresponding script within the web page content to generate another signature. In determination block 316, the browser may determine whether the signatures match. If the signatures are generated based on the scripts within the web page that were pre-processed, comparing the signatures will enable the browser to quickly confirm whether the stored previously processed code was generated by executing the same scripts as in the currently downloaded web page. If the browser determines that the signatures match (i.e., determination block 316 = "Yes"), in block 320, the browser may execute the signed code, having verified that the stored previously processed code was generated by processing the same scripts as in the current web page content. Thus, a signature match enables the browser to trust that execution of the previously stored code will properly render the current webpage.
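The cache-reuse check of method 300 can be sketched as follows, under the assumption that the stored entry records a SHA-256 digest of the source script. The CacheEntry type, the codeCache map, and the URL keying are hypothetical details introduced only for illustration.

```typescript
// Hypothetical sketch of blocks 304-320: reuse previously processed code
// only when its recorded digest matches the current page's script.
interface CacheEntry { scriptDigest: string; processedCode: string; }
const codeCache = new Map<string, CacheEntry>(); // keyed by page URL

async function sha256Hex(text: string): Promise<string> {
  const buf = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return Array.from(new Uint8Array(buf), (b) => b.toString(16).padStart(2, "0")).join("");
}

// Returns stored processed code only when its digest matches the script
// in the currently downloaded page; otherwise the caller processes the
// page conventionally, as if nothing were cached.
async function lookupProcessed(url: string, pageScript: string): Promise<string | null> {
  const entry = codeCache.get(url);
  if (!entry) return null; // first visit in the time window
  const digest = await sha256Hex(pageScript);
  return digest === entry.scriptDigest ? entry.processedCode : null;
}
```

The digest comparison is cheap relative to re-parsing and re-compiling the script, which is the overhead saving the method targets.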
If the browser determines that the signatures do not match (i.e., determination block 316 = "No"), in block 318, the browser may perform the operations of executing scripts to render the web page in block 306 as if the previously processed code was not stored in memory. [0050] In a further aspect, the operations of method 300 may be combined with those of method 200 so that the web browser validates and uses pre-processed code supplied by offline tools with the web page content, stores the results of processing web page scripts with a signature, and reuses previously stored code when its associated signature is validated. [0051] Various aspects may be configured such that the non-existence of a signature in the code indicates to the JavaScript engine that the scripts in the web page have not yet been processed and therefore must be processed by the browser. In an aspect, the signatures may indicate to the browser that only a restricted subset of the available language features that are amenable to optimization have been used to generate the code and that the execution of the code will not result in certain features being used. In an aspect, the signatures may indicate to the browser that an augmented set of the available language features (e.g., type checking) have been utilized to generate the code and that the browser can forgo performing similar operations. [0052] In an aspect, the offline/external tool may be a compiler that pre-compiles the code. [0053] It should be understood that the various aspects are not concerned with security, but with ensuring the validity of previous operations or optimizations. The various aspect methods are not focused solely on executable code and JavaScript® code, and may be applied to any part of the browser, both code and content. [0054] Various aspects may partition a tool (e.g., a JavaScript compiler, parser, CSS processor, layout engine, etc.) into off-line and on-line parts. The offline part may perform a set of operations on the code and generate one or multiple signatures that capture the performed operations. The online part may check the signature(s) and decide whether a certain operation can be skipped, may be performed in a simplified or approximate form, may be performed differently to achieve better results, and/or whether the client may otherwise take advantage of the pre-validated code. If not, the online part may perform the same operations (potentially less optimized) on the input (code or content) again. [0055] One of the benefits provided by the various aspects is the use of signatures as an inexpensive way of determining whether certain operations can be skipped or simplified in the on-line part. The offline part may be implemented on a server or on the client. In an aspect, the offline part may be implemented on the client and executed when the computing device is idle. [0056] The various aspects may be implemented on any of a variety of computing devices. An example of a mobile computing device is illustrated in FIG. 4, and an example of a notebook computer is illustrated in FIG. 5. Typical mobile computing devices 400 will have in common the components illustrated in FIG. 4. For example, mobile computing devices 400 may include a processor 401 coupled to internal memory 402 and a touch surface input device/display 403.
The touchscreen display 403 may be, for example, a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared-sensing touchscreen, acoustic/piezoelectric-sensing touchscreen, or the like. The various aspects are not limited to any particular type of touchscreen display 403 or touchpad technology. Additionally, the computing device 400 may have an antenna 404 for sending and receiving electromagnetic radiation that is connected to a wireless data link and/or cellular telephone transceiver 405 coupled to the processor 401. Computing devices 400 may also include physical buttons 408 for receiving user inputs. [0057] While the various aspects may provide significant performance enhancements for mobile computing devices, other forms of computing devices, including personal computers and laptop computers, may also benefit from pre-parsing of the dynamic language scripts. Such computing devices typically include the components illustrated in FIG. 5, which illustrates an example personal laptop computer 500. Such a personal computer 500 generally includes a processor 501 coupled to volatile memory 502 and a large capacity nonvolatile memory, such as a disk drive 503. The computer 500 may also include a compact disc (CD) and/or DVD drive 504 coupled to the processor 501. The computer device 500 may also include a number of connector ports coupled to the processor 501 for establishing data connections or receiving external memory devices, such as a network connection circuit 505 for coupling the processor 501 to a network. The computer 500 may further be coupled to a keyboard 508, a pointing device such as a mouse 510, and a display 509 as is well known in the computer arts. [0058] The various aspects may also be implemented on any of a variety of commercially available server devices, such as the server 600 illustrated in FIG. 6. Such a server 600 typically includes a processor 601, and may include multiple processor systems 611, 621, 631, one or more of which may be or include multi-core processors. The processor 601 may be coupled to volatile memory 602 and a large capacity nonvolatile memory, such as a disk drive 603. The server 600 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 606 coupled to the processor 601. The server 600 may also include network access ports 604 coupled to the processor 601 for establishing data connections with a network 605, such as a local area network coupled to other broadcast system computers and servers. [0059] The processor 401, 501, 601 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various aspects described herein. In some mobile devices, multiple processors 401, 501, 601 may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory 402, 502, 602 before they are accessed and loaded into the processor 401, 501, 601. In some mobile devices, the processor 401, 501, 601 may include internal memory sufficient to store the application software instructions. In some mobile devices, the secure memory may be in a separate memory chip coupled to the processor 401, 501, 601. The internal memory 402, 502, 602 may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both.
For the purposes of this description, a general reference to memory refers to all memory accessible by the processor 401, 501, 601, including internal memory 402, 502, 602, removable memory plugged into the mobile device, and memory within the processor 401, 501, 601 itself. [0060] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of the various aspects must be performed in the order presented. As will be appreciated by one of skill in the art, the order of steps in the foregoing aspects may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular. [0061] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. [0062] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function. [0063] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory processor-readable or computer-readable storage medium.
Non-transitory processor-readable and computer-readable media may be any available storage media that may be accessed by a computer or a processor of a computing device. By way of example, and not limitation, such non-transitory processor-readable or computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor of a computing device. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of non-transitory computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or non-transitory computer-readable medium, which may be incorporated into a computer program product. [0064] The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
A method of fabricating an integrated circuit provides a transistor having less susceptibility to short channel effects. The transistor utilizes a U-shaped gate conductor and a main gate conductor. The U-shaped gate conductor can provide electrically induced source/drain extensions. The transistor can be a PMOS or NMOS transistor. |
What is claimed is: 1. A method of fabricating an integrated circuit on a substrate, the integrated circuit including at least one transistor with electrically induced source/drain extensions, the method comprising: providing a first gate conductor on a substrate; providing a dielectric layer over the first gate conductor; and providing a second gate conductor over the dielectric layer above the first gate conductor, the second gate conductor for forming the electrically induced source/drain extensions. 2. The method of claim 1, wherein the second gate conductor is a U-shaped gate electrode. 3. The method of claim 1, wherein the dielectric layer is a high-K dielectric material. 4. The method of claim 2, further comprising siliciding a top surface of the U-shaped conductor. 5. The method of claim 1, wherein the first gate conductor |
CROSS REFERENCE TO RELATED APPLICATIONS
The present application is related to U.S. patent application Ser. No. 09/372,705, entitled "Transistor with Dynamic Source/Drain Extensions," filed by Yu on Aug. 11, 1999, and assigned to the assignee of the present application.

FIELD OF THE INVENTION
The present invention relates to integrated circuits (ICs) and methods of manufacturing integrated circuits. More particularly, the present invention relates to a transistor and a method of manufacturing the transistor. The transistor is advantageously less susceptible to short channel effects.

BACKGROUND OF THE INVENTION
Integrated circuits (ICs), such as, ultra-large scale integrated (ULSI) circuits, can include as many as one million transistors or more. The ULSI circuit can include complementary metal oxide semiconductor (CMOS) field effect transistors (FETs). The transistors can include conductive gates disposed between drain and source regions. The drain and source regions are typically heavily doped with a P-type dopant (boron) or an N-type dopant (phosphorus). The drain and source regions generally include a thin or shallow extension that is disposed partially underneath the gate to enhance the transistor performance. Shallow source and drain extensions help to achieve immunity to short-channel effects, which degrade transistor performance for both N-channel and P-channel transistors. Short-channel effects are among the most important scaling issues for mainstream CMOS technology and can cause threshold voltage roll-off and drain-induced barrier lowering. Shallow source and drain extensions and, hence, controlling short-channel effects, are particularly important as transistors become smaller. Conventional techniques utilize a double implant process to form shallow source and drain extensions. According to the conventional process, the source and drain extensions are formed by providing a transistor gate structure without sidewall spacers on a top surface of a silicon substrate. The silicon substrate is doped on both sides of the gate structure via a conventional doping process, such as, a diffusion process or an ion-implantation process. Without the sidewall spacers, the doping process introduces dopants into a thin region (i.e., just below the top surface of the substrate) to form the drain and source extensions as well as to partially form the drain and source regions. After the drain and source extensions are formed, silicon dioxide spacers, which abut lateral sides of the gate structure, are provided over the source and drain extensions. The substrate is doped a second time to form the deeper source and drain regions. The source and drain extensions are not further doped due to the blocking capability of the silicon dioxide spacers. As transistors disposed on integrated circuits (ICs) become smaller, transistors with shallow and ultra-shallow source/drain extensions have become more difficult to manufacture. For example, smaller transistors should have ultra-shallow source and drain extensions (less than 30 nanometer (nm) junction depth). Forming source and drain extensions with junction depths of less than 30 nm is very difficult using conventional fabrication techniques. Conventional ion-implantation and diffusion-doping techniques make transistors on the IC susceptible to short-channel effects because they result in a dopant profile with a tail distribution that extends deep into the substrate.
Also, conventional ion-implantation techniques have difficulty maintaining shallow source and drain extensions because point defects generated in the bulk semiconductor substrate during ion implantation can cause the dopant to more easily diffuse (transient enhanced diffusion, TED). The diffusion often extends the source and drain extensions vertically into the bulk semiconductor substrate. Furthermore, as transistors disposed on integrated circuits (ICs) become smaller (e.g., transistors with gate lengths approaching 70 nm or less), source and drain extension depths need to be aggressively reduced to achieve acceptable immunity to the short-channel effect. For example, a transistor having a gate length of less than 70 nm should have an ultra-shallow source/drain extension (e.g., a depth of 10-20 nm). However, the formation of the ultra-shallow source/drain extension is very difficult with conventional ion implantation and thermal annealing techniques. For example, ultra-shallow source/drain extensions are susceptible to significant dopant loss during the low-keV implantation, as well as to increased transient-enhanced diffusion (TED), which makes the junction depth much deeper. These problems can prevent the manufacture of a ULSI integrated circuit having transistors with gate lengths of less than 50 nm. Another important factor associated with reduced transistor size relates to transistor leakage current. As the physical length of the gate is reduced to increase the transistor on-state drive current, the spacing between the source and drain extensions becomes closer. The off-state leakage current dramatically increases as the source/drain extensions become closer. Increased off-state leakage current increases the power consumption and heat generated by an integrated circuit. Thus, there is a need for a transistor that has source and drain extensions that are not formed by conventional processes. Further still, there is a need for a transistor with less susceptibility to short-channel effects. Further still, there is a need for source/drain extensions that do not contribute significantly to off-state leakage current. Even further still, there is a need for a method of making a novel transistor structure that is less susceptible to short channel effects.

SUMMARY OF THE INVENTION
One exemplary embodiment relates to a transistor with electrically induced source/drain extensions. The transistor includes a source, a drain, and a gate structure. The gate structure is disposed between the source and the drain and has a first gate electrode and a second gate electrode. The second gate electrode provides the electrically induced source/drain extensions. The first gate electrode turns the transistor on when a signal or voltage is applied to the first gate electrode. Another exemplary embodiment relates to a circuit comprising a transistor having a gate electrode means for receiving a gate signal. The transistor is in an on state in response to a gate signal having a first level and is in an off state in response to the gate signal having a second level. Source/drain extensions are formed in response to a gate bias. Yet another exemplary embodiment relates to a method of fabricating an integrated circuit on a substrate. The integrated circuit includes at least one transistor with electrically induced source/drain extensions.
The method includes providing a first gate conductor on a substrate, providing a dielectric layer over the first gate conductor, and providing a second gate conductor over the dielectric layer above the first gate conductor. The second gate conductor is capable of forming electrically induced source/drain extensions. Still another exemplary embodiment relates to a transistor. The transistor includes a source region, a drain region, and a gate structure. The gate structure is disposed between the source region and the drain region. The transistor is in an on state in response to a gate signal having a first level and is in an off state in response to the gate signal having a second level. The gate structure includes a first gate electrode for forming the source and drain extensions and a second gate electrode for receiving the gate signal.

BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and: FIG. 1 is a schematic cross-sectional view of a portion of an integrated circuit including a transistor with electrically induced source/drain extensions in accordance with an exemplary embodiment; FIG. 2 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 1, showing a first gate electrode formation step; FIG. 3 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 2, showing a dielectric layer deposition step; FIG. 4 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 3, showing a second gate electrode layer deposition step; FIG. 5 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 4, showing a lithographic step; FIG. 6 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 5, showing a removal step for portions of the dielectric layer and the second gate electrode layer; FIG. 7 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 6, showing a spacer provision step for the second gate electrode; and FIG. 8 is a schematic cross-sectional view of the portion of the integrated circuit illustrated in FIG. 7, showing a deep source and drain implant step.

DETAILED DESCRIPTION OF THE PREFERRED EXEMPLARY EMBODIMENTS
FIG. 1 shows an advantageous transistor structure with ultra-shallow, electrically induced source/drain extensions. FIGS. 1-8 illustrate an advantageous complementary metal oxide semiconductor (CMOS) fabrication process for forming the advantageous transistor structure on a substrate. The advantageous process and the operation of the transistor structure are described below with reference to FIGS. 1-8. With reference to FIG. 1, a transistor 12 is disposed on a semiconductor substrate 14, such as, a single crystal silicon wafer. Transistor 12 is part of a portion 10 of an integrated circuit (IC) manufactured on an IC wafer. Transistor 12 preferably has a gate length of less than 70 nanometers (nm) (e.g., approaching 50 nm). Substrate 14 can be any semiconductor material, including gallium arsenide (GaAs), silicon (Si), germanium (Ge), or other material. Alternatively, substrate 14 can be a thin-film or an epitaxial layer that is part of a silicon-on-insulator substrate. Transistor 12 includes a gate stack or structure 18, a source region 22, and a drain region 24.
Transistor 12 also includes an electrically induced source extension 23 and an electrically induced drain extension 25. Extensions 23 and 25 are "electrically induced" in that they are formed at least in part from an electrical field associated with gate structure 18. In the exemplary embodiment, source region 22 and drain region 24 are 60-120 nm deep (60-120 nm below a top surface 39 of substrate 14). Transistor 12 can be an N-channel or a P-channel field effect transistor (FET). Source and drain regions 22 and 24, respectively, can be planar (e.g., located entirely within substrate 14), as shown in FIG. 1, or can be raised or elevated source and drain regions. Source and drain regions 22 and 24, respectively, have a concentration of 10^19 to 10^20 dopants per cubic centimeter. Dynamic or electrically induced source and drain extensions 23 and 25, respectively, are preferably ultra-shallow extensions (e.g., a junction depth of less than 30 nm, such as 10-20 nm or 5-10 nm), which are thinner (i.e., shallower) than corresponding source and drain regions 22 and 24, respectively. When present, each of electrically induced source and drain extensions 23 and 25 has a width of 150-3000 Å (from left to right) (most preferably, 500-600 Å) and is integral with corresponding source and drain regions 22 and 24, respectively. Electrically induced source and drain extensions 23 and 25, respectively, are disposed partially underneath gate structure 18. Electrically induced source and drain extensions 23 and 25 help transistor 12 achieve substantial immunity to short-channel effects. Extensions 23 and 25 are formed under gate structure 18 as an inversion layer (e.g., accumulation mode). The induced inversion layer acts as an electrical extension of source region 22 and drain region 24. Generally, the inversion layer (extensions 23 and 25) can be very thin (e.g., less than 1000 Å) for providing good immunity to short-channel effects. The inversion layer is not provided in a channel region 41 centered under structure 18. Channel region 41 preferably has a concentration of 1×10^17 to 1×10^18 p-type dopants per cubic centimeter. The locations associated with extensions 23 and 25 can have the same dopant characteristics as channel region 41. Gate structure 18 is configured so that transistor 12 forms electrically induced source and drain extensions 23 and 25, respectively, in response to a bias. The bias is generally provided when power is provided to portion 10 of the IC. Thus, when the IC is on, extensions 23 and 25 are present. In this embodiment, extensions 23 and 25 are permanently present when the IC is operational, even though extensions 23 and 25 are not formed in a conventional doping process. Gate structure 18 includes a U-shaped electrode or conductor 30, a main gate electrode 31, a high-K dielectric layer 34, spacers 36, and a gate dielectric layer 38. U-shaped conductor 30 receives the bias, which electrically induces the formation of extensions 23 and 25. U-shaped conductor 30 is configured as a "horseshoe" shaped conductor over main gate electrode 31 (e.g., it surrounds main gate electrode 31 on at least three sides). Main gate electrode 31 prevents the inversion layer from being induced directly beneath it. U-shaped conductor 30 includes a first wing 42, a center portion 44, and a second wing 46. U-shaped conductor 30 controls a first parasitic transistor (TS) associated with wing 42, and a second parasitic transistor (TD) associated with wing 46. Main gate electrode 31 controls a main transistor (T).
The first parasitic transistor (TS) is associated with extension 23; extension 23 is formed when the bias is provided on wing 42. The second parasitic transistor (TD) is associated with extension 25; extension 25 is formed when the bias is provided on wing 46. Main gate electrode 31 controls the turning on and off of transistor 12. Center portion 44 of U-shaped gate conductor 30 serves to connect wings 42 and 46 and can be eliminated (if other connective structures are available on portion 10 of the IC). A channel length of transistor 12 is a distance 27 between extensions 23 and 25 when extensions 23 and 25 are present. Distance 27 is preferably less than 2,000 Å (e.g., 20 nm-200 nm). Dielectric layer 34 of transistor 12 advantageously has a high dielectric constant (K) value. For example, layer 34 can be formed from a material having a K value of greater than 20 (preferably, greater than 25). Layer 34 can be a high-K dielectric material, such as, titanium dioxide (TiO2), tantalum pentoxide (Ta2O5), aluminum oxide (Al2O3), or other insulators. Layer 34 can also be a composite of several insulating layers made from different materials (e.g., SiO2, TiO2, Ta2O5, Si3N4, Al2O3, etc.). Dielectric layer 34 is between main gate electrode 31 and U-shaped gate electrode 30. Layer 34 can have a width of 350-5000 Å and a thickness of 100-200 Å. The equivalent oxide thickness (EOT) of layer 34 is less than that of gate dielectric 38 so that U-shaped conductor 30 induces the inversion layer for extensions 23 and 25. Preferably, the EOT is configured so that conductor 30 induces the formation of extensions 23 and 25 when any bias is provided to conductor 30 (e.g., even with a zero voltage signal). The bias with respect to substrate 14 (e.g., -2 volts) allows extensions 23 and 25 to be formed when the voltage on conductor 30 is zero. Gate structure 18 is preferably 1500-3000 Å thick (i.e., in height) and 35-500 nm wide. Gate dielectric layer 38 is preferably a very thin (20-30 Å) silicon dioxide material formed in a deposit-and-etch process. Alternatively, gate dielectric layer 38 can be thermally grown. U-shaped conductor 30 is preferably a conductive material, such as, doped polysilicon, doped polysilicon/germanium, tungsten, titanium nitride, molybdenum, or other metal conductor. Conductor 30 is 300-500 Å thick at center portion 44 (from a bottom of a silicide layer 94 to a top of dielectric layer 34). Center portion 44 is 30-480 nm wide. Each of first and second wings 42 and 46, respectively, preferably has a width of 15-300 nm and a maximum thickness of 1500-3000 Å (from layer 94 to layer 34). Main gate electrode 31 is preferably a conductive material, such as, doped polysilicon, doped polysilicon/germanium, tungsten, titanium nitride, molybdenum, or other metal conductor. Conductor 31 is preferably 1000-2000 Å in height and 20-200 nm wide. Main conductor 31 is provided over gate dielectric 38. In operation, electrically induced source and drain extensions 23 and 25, respectively, are formed by inversion layers related to parasitic transistors (TS) and (TD) associated with first and second wings 42 and 46, respectively. By utilizing high-K dielectric layer 34, deep inversion layers are formed which can act as source and drain extensions 23 and 25 when the appropriate gate bias is provided to conductor 30. The gate bias can be 0 V, 1 V, 2 V, 3 V, etc.
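The EOT comparisons above follow the standard equivalent-oxide-thickness relation, which is textbook background rather than language recited in this disclosure:

```latex
\mathrm{EOT} = t_{\text{high-}K}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{\text{high-}K}},
\qquad \kappa_{\mathrm{SiO_2}} \approx 3.9
```

For instance, a 150 Å Ta2O5 film (within the stated 100-200 Å range) with κ ≈ 25 presents an EOT of roughly 150 × 3.9/25 ≈ 23 Å. The numbers are illustrative only, but they show how a physically thick, robust layer 34 can still appear electrically thin under wings 42 and 46, giving the parasitic transistors (TS and TD) their lower threshold voltages.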
Design parameters and system criteria can affect the selection of the appropriate gate bias for extensions 23 and 25. In one alternative embodiment, electrically induced source and drain extensions 23 and 25 can be controlled so that extensions 23 and 25 do not significantly contribute to off-state leakage current. In such an alternative embodiment, source and drain extensions 23 and 25, respectively, are present when, or just before, transistor 12 is turned on. Preferably, when a gate signal is provided to gate structure 18 that turns transistor 12 on, electrically induced source and drain extensions 23 and 25 are present. Extensions 23 and 25 can be induced by the gate signal or by a separate bias signal provided when the gate signal is provided. When a gate signal that turns transistor 12 off is provided to gate structure 18, extensions 23 and 25 are absent (e.g., disappear). Extensions 23 and 25 can be removed by removing the bias signal (i.e., making the bias signal equal to the substrate bias signal). Thus, transistor 12 can present an advantageous structure that has dynamic source/drain extensions 23 and 25. Such an embodiment can slow the operational speed of transistor 12 due to the parasitic capacitance associated with periodically providing extensions 23 and 25. Transistor 12, according to this alternative embodiment, is preferably employed in regions of IC 10 which are concerned with low leakage current and which do not require significant transistor speed. Transistor 12 can be designed to be in the on-state at various voltage levels (e.g., for an N-channel MOSFET, the gate voltage equals VDD or the supply voltage; for a P-channel MOSFET, the gate voltage equals VSS or ground). Alternatively, other voltage levels could be utilized, depending upon device parameters. With reference to FIGS. 2-6, the fabrication of transistor 12, including gate structure 18, is described below. Conventional CMOS processes are utilized to form most of the elements of transistor 12 shown in FIG. 2. With reference to FIG. 2, in an exemplary embodiment, substrate 14 includes a thin gate dielectric layer that is covered with a polysilicon layer. The polysilicon layer and the gate dielectric layer are etched to leave gate dielectric layer 38 and main gate conductor 31. Gate dielectric layer 38 is preferably 20-30 Å thick and is thermally grown or deposited on substrate 14. Layer 38 is 20-200 nm wide after etching. Main gate conductor 31 is preferably deposited by chemical vapor deposition (CVD). Gate conductor 31 is 1000-2000 Å high and 20-200 nm wide. In FIG. 3, a high-K dielectric layer 62 (corresponding to layer 34 (FIG. 1)) is provided over main gate conductor 31. High-K dielectric layer 62 is preferably a 10-100 Å thick layer of silicon nitride, aluminum oxide, titanium oxide, tantalum pentoxide, or other high-K dielectric material, depending on the dielectric constant. High-K dielectric layer 62 is deposited by CVD or by a sputtering tool. Layer 62 has an EOT that is 30-50% of that of layer 38. With reference to FIG. 4, a conductive layer 64 is provided over layer 62. Conductive layer 64 corresponds to U-shaped gate conductor 30 discussed with reference to FIG. 1. Layer 64 is preferably a 500-1000 Å thick conformal layer deposited by CVD or sputter deposition. Layer 64 can be a doped polysilicon, doped polysilicon/germanium, tungsten, titanium nitride, molybdenum, or other conductive layer. With reference to FIG. 5, a photolithographic process is utilized to etch layers 64 and 62.
The lithographic process utilizes a selectively developed photoresist material 66 above layer 64. With reference to FIG. 6, layers 64 and 62 are etched to leave U-shaped gate conductor 30 and high-K gate dielectric layer 34. Material 66 is stripped after etching layers 64 and 62 in a conventional process. With reference to FIG. 7, spacers 36 are formed on lateral sides of conductor 30. Spacers 36 are preferably a medium- or low-K dielectric material. Spacers 36 can be an oxide material, such as, silicon dioxide. In addition, spacers 36 can be a silicon nitride material. Spacers 36 preferably have a height of 1500-3000 Å and a width of 300-500 Å. Spacers 36 can be formed in a conventional CVD and etchback process. With reference to FIG. 8, source region 22 and drain region 24 are formed in an implantation process. Preferably, an ion implantation process is utilized to simultaneously dope regions 22 and 24 and conductors 30 and 31. After doping, a rapid thermal anneal technique is utilized to activate dopants in conductors 30 and 31 as well as in source region 22 and drain region 24. Alternatively, main conductor 31 can be deposited as a doped material or can be doped before etching or subsequent to etching. With reference to FIG. 1, source region 22, drain region 24, and conductor 30 are silicided in accordance with a conventional process. Conventional silicidation techniques can be utilized. For example, titanium silicide, cobalt silicide, tungsten silicide, or other silicides can be formed by depositing a metal layer and reacting it with the silicon of substrate 14 and conductor 30. Preferably, silicide layers 90, 92 and 94 are 200-400 Å thick. In FIG. 1, gate structure 18 is designed such that the equivalent thickness, with reference to thermal oxide, of layer 34 is approximately 30% to 50% of that of gate dielectric layer 38 underneath center portion 44. Therefore, the parasitic transistors (TS and TD) associated with U-shaped conductor 30 have lower threshold voltages than the main transistor (T) and will be turned on appropriately. Parasitic transistors (TS and TD) become inverted when the bias is provided to conductor 30. The inversion layers formed by the parasitic transistors (TS and TD) act as electrically induced source and drain extensions 23 and 25, respectively, for the main transistor (T). In the alternative embodiment in which extensions 23 and 25 are dynamically formed, transistor 12 can be designed in such a way that the threshold voltage of the parasitic transistors (TS and TD) is less than the threshold voltage of the main transistor (T) and greater than zero for an N-channel transistor [0 < VTH(TS and TD) < VTH(T)]. Conversely, for a P-channel transistor, the threshold voltage of the parasitic transistors (TS and TD) is less than zero and greater than the threshold voltage of the main transistor (T) [VTH(T) < VTH(TS and TD) < 0]. Therefore, when the gate voltage equals 0 (for N-channel), both the main transistor (T) and the two parasitic transistors (TS and TD) are turned off, and source and drain extensions 23 and 25 (formed by deep inversion layers of the two parasitic transistors TS and TD) disappear. The off-state leakage current is smaller because of the larger physical space (channel length 27 is increased) between source region 22 and drain region 24.
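Restated in one display, with the same content as the inequalities above but in consistent notation:

```latex
\text{N-channel: } 0 < V_{TH}(T_S, T_D) < V_{TH}(T)
\qquad\qquad
\text{P-channel: } V_{TH}(T) < V_{TH}(T_S, T_D) < 0
```

With this ordering, a 0 V gate signal keeps both the parasitic transistors and the main transistor off, the extensions vanish, and the effective source-to-drain separation grows, which is the stated mechanism for the reduced off-state leakage.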
It is understood that, while preferred embodiments, examples, materials, and values are given, they are for the purpose of illustration only. The apparatus and method of the invention are not limited to the precise details and conditions disclosed. For example, although a high-K dielectric material is mentioned, other materials can be utilized. Thus, changes may be made to the details disclosed without departing from the spirit of the invention, which is defined by the following claims. |
A system and a method are provided that include a first graphics processor in communication with a content source. In operation, the first graphics processor is configured to process content from the content source. The system/method further includes a second graphics processor in communication with the first graphics processor using a network. The second graphics processor is configured to further process the content for display. |
1. A system, comprising: a first graphics processor in communication with a content source, the first graphics processor for processing content from the content source; and a second graphics processor in communication with the first graphics processor over a network, the second graphics processor for further processing the content for display. 2. The system of claim 1, wherein at least one of the first graphics processor and the second graphics processor includes a graphics processing unit. 3. The system of claim 1, wherein the first graphics processor and the second graphics processor are asymmetric. 4. The system of claim 1, wherein the network comprises a wireless network. 5. The system of claim 4, further comprising a transmitter in communication with the first graphics processor, the transmitter for transmitting the content over the wireless network for reception by a receiver in communication with the second graphics processor. 6. The system of claim 1, wherein the processing performed by the first graphics processor is selected from the group consisting of decryption, decompression, post-processing, multiplexing, processing that provides error correction, packetization, graphics rendering, compositing, recompression, and re-encryption. 7. The system of claim 1, wherein the processing performed by the first graphics processor includes decryption, decompression, post-processing, multiplexing, processing that provides error correction, packetization, graphics rendering, compositing, recompression, and re-encryption. 8. The system of claim 1, wherein the processing performed by the first graphics processor is dynamically adaptable. 9. The system of claim 1, wherein the processing performed by the second graphics processor is selected from the group consisting of processing that provides decoding, decompression, depacketization, post-processing, demultiplexing, combining, and error correction. 10. The system of claim 1, wherein the processing performed by the second graphics processor includes processing that provides decoding, decompression, depacketization, post-processing, demultiplexing, combining, and error correction.
11. A subsystem, comprising: a first processor in communication with a second processor via a network, wherein a first amount of processing performed by the first processor is an inverse function of a second amount of processing performed by the second processor, and wherein video processing or graphics processing can be performed by the first processor and the second processor. 12. A method, comprising: receiving content in a graphics processor having a plurality of modules; and dynamically selecting one or more of the plurality of modules of the graphics processor to process the content to support communication over a wireless network link and subsequent display of the content using a display. 13. The method of claim 12, wherein the graphics processor includes a graphics processing unit. 14. The method of claim 12, wherein the modules comprise an encryption module. 15. The method of claim 12, wherein the modules comprise a compression module. 16. The method of claim 12, wherein the modules comprise a decoding module. 17. The method of claim 12, wherein the modules comprise a decompression module. 18. The method of claim 12, wherein the modules comprise a post-processing module. 19. The method of claim 12, wherein the modules comprise a graphics processing module. 20. The method of claim 12, wherein the modules comprise a synthesis module. 21. The method of claim 12, wherein the processing comprises preparing the content for transmission over the wireless network link. 22. A system, comprising: a processor capable of performing graphics processing or video processing, the processor including a plurality of modules adapted to be dynamically selected for processing content to support communication over a wireless network link and subsequent display of the content using a display. 23. The system of claim 22, wherein the processor is integrated with a computer and the processing further comprises preparing the content for transmission over the wireless network link. 24. The system of claim 22, wherein the processor is integrated with a display, and the processing further comprises receiving the content via the wireless network link and preparing the content for display using the display. |
Multi-graphics processor system and method for processing content communicated over a network for display

Field of Invention
The present invention relates to digital processing, and more particularly to graphics/video processing.

Background
Prior art FIG. 1A shows a system 100 for graphics/video processing according to the prior art. As shown, graphics processor 102 is directly coupled to display 104 in such a prior art system 100. To this end, all graphics and/or video processing required for the displayed content is performed by the graphics processor 102 and sent directly to the display 104 in a format arranged for display. Such a system 100 is ideal for conventional computer systems, but not necessarily convenient for other frameworks. For example, prior art FIG. 1B illustrates a system 150 for prior art graphics/video processing in a wireless network 106 environment. As shown, in such a configuration, graphics processor 102 communicates with display 104 via network 106. Such networks 106 include wireless network links that employ UWB (ultra wide band) technology, WUSB (wireless universal serial bus) technology, WiMedia technology, and/or any other desired network-related technology. While such a configuration may provide acceptable performance in certain high speed network environments, problems may still arise as a result of any bandwidth limitations associated with the network 106. Specifically, conventional formats associated with content, graphics processor output, processing architecture, etc. may not facilitate efficient network transmission. For example, content may be encrypted, compressed, etc. depending on circumstances. Therefore, when the bandwidth is insufficient, it may be difficult, and in some cases impossible, to display the content. Therefore, there is a need to overcome these and/or other problems associated with the prior art.

Overview
Systems and methods are provided that include a first graphics processor in communication with a content source. In operation, the first graphics processor is adapted to process content from a content source. In addition, a second graphics processor is provided that communicates with the first graphics processor using a network. The second graphics processor is further adapted to process the content for display.

Detailed Description
FIG. 2 illustrates a system 200 for processing content communicated over a network for display, according to an embodiment. As shown, a content source 202 is provided, which serves to supply content to a first graphics processor 204 for processing (e.g., graphics processing and/or video processing, etc.). In the present description, such content source 202 may refer to any source of content (e.g., a network, memory in the form of a digital versatile disc (DVD), a hard disk, etc.). Content can include graphics data and/or video that can be processed for display. Of course, in various embodiments, audio and/or metadata may optionally be included as content. As further shown, a second graphics processor 208 that communicates with the first graphics processor 204 via a network 206 is also provided. The second graphics processor 208 is also adapted to perform further processing (e.g., graphics processing and/or video processing, etc.) on the content for display. As shown, the display 210 can maintain communication with the second graphics processor 208 for such purposes.
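As a rough illustration of this split (an assumption-laden sketch, not the patented implementation), the two processors can be modeled as two halves of one pipeline with the network link in the middle. Every stage name below is a hypothetical stub standing in for the modules described later with reference to FIGS. 3-4.

```typescript
// Hypothetical sketch of the FIG. 2 split: graphics processor 204
// prepares content for the link; graphics processor 208 finishes it
// for display 210. All stages are stubs for illustration only.
type Frame = Uint8Array;
const identity = (f: Frame): Frame => f;

// Sending-side stages (first graphics processor 204).
const decode = identity;     // e.g., decrypt/decompress source content
const render = identity;     // e.g., composite graphics with video
const recompress = identity; // shrink the stream to fit the link

// Receiving-side stages (second graphics processor 208).
const decompress = identity;  // undo the link compression
const postProcess = identity; // e.g., scaling, color correction

function firstProcessor(content: Frame): Frame {
  return recompress(render(decode(content))); // then sent over network 206
}

function secondProcessor(received: Frame): Frame {
  return postProcess(decompress(received)); // then sent to display 210
}
```

The point of the structure is that how much work each half does can be rebalanced, which is the asymmetry and dynamic adaptability discussed next.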
Note that although these graphics processors are illustrated as being directly linked to the remaining respective components, such links represent communication only. Accordingly, the system 200 may or may not include additional components that communicate between the graphics processors. It should be noted that other embodiments are conceivable in which a processor that can perform video processing, but not necessarily graphics processing, is employed in place of the graphics processors described above. The network 206 may take any form, including, but not limited to, a local area network (LAN), a wireless network, a wide area network (WAN) such as the Internet, a peer-to-peer network, and the like. In various optional embodiments, the network 206 may include a wireless network link, which in this description may include any connection associated with a wireless network. For example, UWB (ultra wide band) technology, WUSB (wireless universal serial bus) technology, WiMedia technology, and the like can be employed for such a wireless network link. Further information regarding possible embodiments including wireless networks will be set forth hereinafter with reference to subsequent figures. In use, the first graphics processor 204 and the second graphics processor 208 are adapted to process such content. In various embodiments, the first graphics processor 204 and the second graphics processor 208 may or may not be asymmetric. For example, in an exemplary embodiment, the first amount of processing performed by the first graphics processor 204 can be an inverse function of the second amount of processing performed by the second graphics processor 208. Possible embodiments relating to such dynamic adaptability are further described below with reference to the subsequent figures. In still other embodiments, the processing performed by the first graphics processor 204 (and/or even by the second graphics processor 208) may be dynamically adaptable. For example, the processing performed by the first graphics processor 204 can vary depending on any one of a variety of factors. By way of example only, such processing may be adapted according to bandwidth, desired quality of service (QoS), at least one feature of the content or display, the type or amount of processing performed by the second graphics processor 208, and/or any other desired factor. Still further, the processing performed by the first graphics processor 204 and/or the second graphics processor 208 may include, but is not limited to, decryption, decoding, decompression, post-processing, multiplexing, demultiplexing, processing that provides error correction, packetization, depacketization, graphics rendering, compositing, recompression, and/or re-encryption. Of course, in the present description, such processing performed by the first graphics processor 204 and/or the second graphics processor 208 may be any processing that results, at least in part, in content that is transmitted over a network and/or displayed. In yet another optional embodiment, the first and/or second graphics processor may include one of a plurality of graphics processors operating in conjunction with each other. An example of such a technology is NVIDIA SLI™ technology. An example of a related embodiment can be further found by reference to application Ser. No. 10/990,712, filed on Nov. 17, 2004.
This application is incorporated herein by reference in its entirety. The various optional architectures and features will now be further described. The aforementioned framework may or may not be implemented together with these optional architectures and features, depending on the user's desires. It is strongly noted that the following information is presented for illustrative purposes and should not be construed as limiting in any way. Any of the following features can be optionally incorporated, with or without the other features described. FIG. 3 illustrates a system 300 for processing content prior to being communicated over a wireless network link, according to an example embodiment. As an option, the present system 300 may be implemented under the principles of the system 200 of FIG. 2. Of course, however, the system 300 can be implemented in any desired environment. Furthermore, the above definitions also apply in the following description. As shown, system 300 includes a graphics processor 306 that, in one embodiment, can function in conjunction with a computer 301. Such a computer may include a memory 302 and a central processing unit (CPU) 304 that communicate with the graphics processor 306. In one embodiment, graphics processor 306 may comprise a plurality of modules as shown. Each such module may be placed on a single semiconductor platform to form a graphics processing unit (GPU) (e.g., a discrete GPU, an iGPU (integrated GPU), etc.). In this description, a single semiconductor platform may refer to a single unitary semiconductor-based integrated circuit or chip. Note that the term single semiconductor platform can also refer to a multi-chip module that simulates on-chip operation and offers many improvements while utilizing traditional CPU and bus implementations. Of course, the various modules can be arranged individually or in various combinations of semiconductor platforms as desired by the user. Still referring to FIG. 3, the graphics processor 306 has a plurality of modules, including a decryption/decompression module 308, a post-processing module 310, a graphics rendering module 312, a compositing module 314, a recompression module 316, a re-encryption module 318, a multiplexer/error correction/packetization module 320, and an interface 322. Various combinations using (and not using) these various modules are described below; note, however, that depending on the content and/or processing requirements, any one or more of the aforementioned modules may or may not be included, and may or may not be used, at the transmitter and/or the receiver, as required. In use, decryption/decompression module 308 receives content from a content source (see, for example, content source 202 of FIG. 2). In embodiments where this content is encrypted and/or compressed, the decryption/decompression module 308 serves to decrypt and/or decompress such content for further processing. For example, if the content includes video in MPEG format, this formatting can be decompressed. Still further, the module 308 can serve to separate any video from the graphics data, for reasons that will become apparent below. Optionally (especially when the content includes video), the post-processing module 310 can perform any desired post-processing that may be required.
Such post-processing includes pixel processing and video processing (e.g., gamma correction, motion estimation or compensation, decompression, color space control, brightness, saturation, color temperature correction, sharpening, overlay processing, scaling, coding, deinterlacing, up/down scaling, etc.), but is not limited to these. Such examples are given for illustrative purposes only and should not be construed as limiting in any way, since any type of post-processing can be performed by the post-processing module 310. As a further option, if the content includes graphics data that can take advantage of the graphics processing capabilities of the graphics processor 306, the content can be input to the graphics rendering module 312 for any type of graphics processing. Such graphics processing includes, but is not limited to, pixel shading and texture shading. For example, electronic program guide (EPG) information can be incorporated into the video, and such EPG information can be the subject of the graphics processing described above. If the content includes separate graphics data (if any) and video (e.g., in separate streams) and the available bandwidth allows, the content may bypass one or more of the various modules described and proceed to the multiplexer/error correction related processing/packetization module 320 (which will be described in further detail below). See path 313. It is further noted that, since post-processing, graphics rendering, etc. may not be necessary or required, a mode of operation in which the decryption/decompression module 308 simply passes content through without such processing is possible. On the other hand, if graphics data and video are to be combined (e.g., into a single stream, etc.), the output of graphics rendering module 312 can be provided to compositing module 314 in turn. Thus, the compositing module 314 can serve to composite any uncompressed video content with the resulting graphics data. Such composited content can take any form; in some embodiments, it is a composited NTSC (National Television System Committee) format, PAL (phase alternating line) format, Y/C (S-video) format, SECAM (séquentiel couleur à mémoire) format, HDTV (high definition television) format, ATSC (Advanced Television Systems Committee) format, and/or any digital television format that can be compressed, or another format including a combination of video and graphics data. To this end, the combined graphics data and video can be recompressed by the recompression module 316 and further re-encrypted by the re-encryption module 318. Such recompression/re-encryption can be beneficial in various environments. For example, in an embodiment where content is communicated over a high definition channel (e.g., a throughput of 1920 × 1080 × 24 × 60 = 3.3 Gb/s), such recompression/re-encryption can serve to ensure that graphics data and the like are successfully communicated over such media. Still further, the content is processed by the multiplexer/error correction related processing/packetization module 320, regardless of whether it is supplied from the foregoing modules or directly from the decryption/decompression module 308 or post-processing module 310. Specifically, such a module 320 functions to perform any desired processing that provides error correction using any available parity information and to packetize the content.
The resulting packets may then be multiplexed via the interface 322. Also provided is a transmitter 324 in communication with the graphics processor 306. The transmitter 324 transmits the content via a wireless network 325 such that the content may be displayed utilizing a display 326. In various embodiments, the transmitter 324 may employ a wireless network link utilizing UWB technology, WUSB technology, WiMedia technology, and/or the like. More information regarding the display 326 will be set forth in greater detail with reference to FIG. 4. As mentioned earlier, the processing performed by the graphics processor 306 may be dynamically adaptable. In particular, such processing may be adapted as a function of bandwidth, a desired quality of service (QoS), a type or amount of processing performed by a second graphics processor (to be described hereinafter), and/or any other desired factors. To facilitate this feature, feedback from the transmitter 324 may be fed back to the appropriate modules of the graphics processor 306, as shown. For example, if the transmitter 324 detects a change in available bandwidth, it may respond to such change by instructing the recompression module 316 and/or the re-encryption module 318 to employ a more aggressive compression/encryption algorithm. Of course, such change may be effected at the expense of increased processing cycles. Similar adaptability may be exhibited in response to changes in the desired QoS, etc. More information regarding such dynamic adaptability will be set forth in greater detail with reference to FIG. 5. While not shown, the system 300 may also include secondary storage. The secondary storage may include, for example, a hard disk and/or a removable storage drive such as a floppy disk drive, a magnetic tape drive, a compact disk drive, a DVD drive, or solid-state storage (e.g., flash memory). In use, the removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. Computer programs, or computer control logic algorithms, may be stored in the system memory 302 and/or the secondary storage. Such computer programs, when executed, enable the system 300 to perform various functions. The system memory 302, storage, and/or any other storage are possible examples of computer-readable media. In one embodiment, the various functionality set forth herein may be implemented, in part, by the CPU 304, a chipset (i.e., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.), and/or any other integrated circuit for that matter. Still yet, the architecture and/or functionality set forth herein may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system environment. FIG. 4 illustrates a system 400 for processing content after being communicated over a wireless network link, in accordance with another example embodiment. As an option, the system 400 may be implemented in the context of the system 200 of FIG. 2. Of course, however, the system 400 may be implemented in any desired environment. Further, the aforementioned definitions apply equally in the following description. As shown, a computer system 404 is provided which operates to communicate content to the system 400 via a wireless network 425. In one embodiment, the computer system 404 may include the system 300 of FIG. 3. The system 400 is shown to include a graphics processor 408 that communicates with the network 425 via a receiver 406.
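The bandwidth-driven adaptation described above can be pictured with a small sketch. The thresholds and setting names below are purely illustrative assumptions; the description does not prescribe specific values.

```python
def on_bandwidth_change(available_mbps, cfg):
    """Feedback path from the transmitter: as available bandwidth shrinks,
    request a more aggressive recompression setting (module 316 analogue),
    trading extra processing cycles for a lower bitrate."""
    if available_mbps < 20:
        cfg["compression"] = "aggressive"
    elif available_mbps < 50:
        cfg["compression"] = "moderate"
    else:
        cfg["compression"] = "light"
    return cfg

cfg = {"compression": "light"}
assert on_bandwidth_change(15, cfg)["compression"] == "aggressive"
```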
As an option, the system 400 may take the form of a display 402 in which the graphics processor 408 is incorporated. With continuing reference to FIG. 4, the graphics processor 408 includes a plurality of modules: a demultiplexer/error correction related processing/depacketization module 410, a graphics decoding/decompression module 412, a video decompression module 414, a post-processing module 416, a compositing module 418, and a digital output 420. In use, content received from the receiver 406 is fed to the demultiplexer/error correction related processing/depacketization module 410, which performs functions that complement those of the multiplexer/error correction related processing/packetization module 320 of FIG. 3. Specifically, the content is processed to provide error correction, depacketized, and further demultiplexed. Further, any video may or may not be separated from any graphics data after decoding and/or decompression via the graphics decoding/decompression module 412. Optionally, the graphics data and video may remain combined (e.g., within a single stream), in which case they may be sent directly to the digital output 420 for display purposes at this point. See path 413. On the other hand, processing of any video may proceed by decompressing the same utilizing the video decompression module 414. In various embodiments where the content is of a premium nature, such video decompression may serve to support AACS (advanced access content system), WM-DRM (Windows Media digital rights management), CPPPP-compliant DRM, and/or the like. Still yet, post-processing may optionally take place utilizing the post-processing module 416. Of course, such post-processing may include any of the processing described hereinabove with respect to the similar module 310 of FIG. 3. Further, as mentioned earlier, a first amount of processing performed by the graphics processor 306 of FIG. 3 may be an inverse function of a second amount of processing performed by the graphics processor 408. For example, required post-processing may be shared between the corresponding post-processing modules 310 and 416, and may be adapted as a function of any discrepancies with respect to the video being processed. Still yet, the compositing module 418 may serve to operate in a manner similar to the like module 314 of FIG. 3. As mentioned previously, one or more of the aforementioned modules of each of the graphics processors of FIGS. 3-4 may or may not be used, depending on requirements related to the content/processing. Specifically, in use, at least one or more of the modules of the graphics processors is selected to support the communication of content over the network to be subsequently displayed. Such dynamic adaptability will now be described in more detail. FIG. 5 illustrates a method 500 for dynamically processing content being communicated over a wireless network link for display purposes, in accordance with an example embodiment. As an option, the method 500 may be implemented in the context of the system 200 of FIG. 2. Of course, however, the method 500 may be carried out in any desired environment. For example, it is further contemplated that the method 500 may be carried out in an environment with only one graphics processor (e.g., see the first graphics processor 204 and/or the second graphics processor 208 of FIG. 2). As shown, content is received in a graphics processor that includes a plurality of modules. See operation 502.
For example, in embodiments where the graphics processor includes the first graphics processor 204 of FIG. 2, etc., the various modules may include (as described hereinabove) one or more of an encryption module, a compression module, a decryption module, a decompression module, a post-processing module, a graphics processing module, and/or a compositing module. In such an embodiment, the graphics processor may be integrated with an associated computer, or may remain separate therefrom. In another embodiment where the graphics processor includes the second graphics processor 208 of FIG. 2, etc., the various modules may include (as described hereinabove) one or more of a decoding module, a decompression module, a depacketization module, a post-processing module, a demultiplexing module, a compositing module, and/or an error correction related processing module. In this embodiment, the graphics processor may be integrated with an associated display, or may remain separate therefrom. With continuing reference to FIG. 5, factors that affect the processing of the content are first identified. Note operation 504. As mentioned earlier, such factors may include bandwidth, a desired quality of service (QoS), at least one characteristic of the content (e.g., format, size, etc.) or of the display, a type or amount of processing performed by another graphics processor (if any), and/or any other desired factors. As a further option, various feedback may be utilized to provide the foregoing factors. To this end, one or more of the modules of the graphics processor may be dynamically selected to support the communication of the content over a network and/or the subsequent display thereof utilizing a display. Note operation 506. Note that such processing may vary depending on which end of the network the graphics processor resides. For example, in the above-described embodiment where the graphics processor includes the first graphics processor 204 of FIG. 2, etc., the processing involves preparing the content for being communicated over the network. Further, the processing performed by the selected modules may include decryption, decompression, post-processing, multiplexing, processing that provides error correction, packetization, graphics rendering, compositing, recompression, and/or re-encryption. Still yet, in the above-described embodiment where the graphics processor includes the second graphics processor 208 of FIG. 2, etc., the processing involves receiving the content and preparing the same for display utilizing the display. Again, the processing performed by the selected modules may include decoding, decompression, depacketization, post-processing, demultiplexing, compositing, and/or processing that provides error correction. Of course, any type of modules and associated processing capable of supporting the communication of content over a network, and the subsequent display thereof utilizing a display, may be provided and dynamically selected in the manner described above. Thus, various scenarios may be accommodated according to current requirements. For example, incoming compressed content may or may not be decrypted/decompressed, and various further processing (e.g., graphics processing, compositing, etc.) may or may not, in turn, be carried out. Again, all such selection can be performed depending on any desired factor.
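As a rough illustration of operations 504-506, the following sketch selects modules from a set of identified factors. The factor names and selection rules are hypothetical, chosen only to make the flow concrete; they are not the method's actual criteria.

```python
def select_modules(factors):
    """Operation 506 analogue: dynamically pick which modules to engage
    from the factors identified in operation 504."""
    selected = []
    if factors["input_encrypted"]:
        selected.append("decryption")
    if factors["needs_graphics"]:
        selected += ["decompression", "graphics_processing", "compositing"]
    if factors["bandwidth_mbps"] < factors["content_bitrate_mbps"]:
        selected.append("recompression")   # network cannot carry content as-is
    if factors["qos"] == "high":
        selected.append("error_correction")
    return selected

factors = {"input_encrypted": True, "needs_graphics": False,
           "bandwidth_mbps": 30, "content_bitrate_mbps": 45, "qos": "high"}
assert select_modules(factors) == ["decryption", "recompression", "error_correction"]
```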
Such factors include, but are not limited to, QoS requirements, network limitations (e.g., bandwidth, etc.), the format in which the content is received, the desired format of the content for network communication, user configuration/requirements, and so forth. In one exemplary environment, the content may be received in a compressed/encrypted format for which no further processing is desired (e.g., requirements such as bandwidth, QoS, etc. are all satisfied). In such case, the content need not necessarily be decrypted/decompressed, and may simply be transmitted as-is. In another embodiment, the content may require further processing, and may thus be decrypted/decompressed so that post-processing, graphics processing, and the like may be carried out. At this point, recompression/re-encryption may or may not be performed, based on the relevant needs. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the appended claims and their equivalents.
FIG. 1 illustrates a graphics/video processing system, in accordance with the prior art.
FIG. 1A illustrates a system for graphics/video processing in a wireless network environment, in accordance with the prior art.
FIG. 2 illustrates a system for processing content communicated over a network for display purposes, in accordance with one embodiment.
FIG. 3 illustrates a system for processing content prior to being communicated over a wireless network link, in accordance with an example embodiment.
FIG. 4 illustrates a system for processing content after being communicated over a wireless network link, in accordance with another example embodiment.
FIG. 5 illustrates a method for dynamically processing content being communicated over a wireless network link for display, in accordance with an example embodiment.
Explanation of symbols: 200 ... system; 202 ... content source; 204 ... first graphics processor; 206 ... network; 208 ... second graphics processor; 210 ... display. |
Methods, systems, and devices for life expectancy monitoring for memory devices are described. A memory device may monitor a parameter of a component of the memory device or the memory device overall, and may determine whether the parameter satisfies a threshold. The parameter may represent or be associated with a lifetime of the component, a level of wear of the component, or an operating parameter violation of the component, or any combination thereof. The memory device may communicate, to a host device, an indication of the parameter satisfying the threshold, and the host device may use the information in the indication to adjust one or more parameters associated with operating the memory device, among other example operations. |
CLAIMS
What is claimed is:
1. A method, comprising:
measuring, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both;
determining that the parameter satisfies a threshold based at least in part on a comparison of the parameter with the threshold; and
communicating, to a host device, an indication that the parameter satisfies the threshold based at least in part on the determining.
2. The method of claim 1, further comprising:
communicating an indication of a life expectancy of the memory device, wherein the threshold comprises a level of wear of the component that is associated with the life expectancy of the memory device.
3. The method of claim 2, wherein the level of wear comprises a threshold within a guard band of a range of values associated with an end of life of the memory device.
4. The method of claim 1, further comprising:
communicating an indication of a rate of degradation of the component that is based at least in part on the level of wear of the component, wherein the threshold comprises a threshold rate of degradation of the component.
5. The method of claim 4, wherein communicating the rate of degradation comprises:
communicating one or more bits that indicate the rate of degradation satisfies the threshold, or an amount of use of the component based at least in part on the rate of degradation, or both.
6. The method of claim 1, further comprising:
determining a type of the violation of the operating parameter of the component, the type comprising one of a non-destructive violation or a destructive violation; and
communicating an indication of the type of the violation of the operating parameter, wherein the threshold comprises a threshold violation of the operating parameter for the component.
7. The method of claim 6, further comprising:
determining a severity of the violation of the operating parameter; and
communicating an indication of the severity of the violation of the operating parameter.
8. The method of claim 6, further comprising:
determining a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof; and
communicating an indication of the quantity of violations associated with the violation of the operating parameter, the magnitude associated with the violation of the operating parameter, or the duration associated with the violation of the operating parameter, or any combination thereof.
9. The method of claim 6, further comprising:
determining a life expectancy of the memory device based at least in part on the parameter satisfying the threshold violation of the operating parameter, wherein the indication that the parameter satisfies the threshold comprises an indication of the life expectancy.
10. The method of claim 6, wherein the threshold violation of the operating parameter comprises a threshold within a guard band of the violation of the operating parameter.
11. The method of claim 6, wherein the non-destructive type of violation is associated with an error rate of the memory device, and wherein the destructive type of violation is associated with an increase in a degradation of the component.
12. The method of claim 1, further comprising:
communicating an indication of one or more suggested actions for operating the memory device based at least in part on determining that the parameter satisfies the threshold.
13. The method of claim 1, further comprising:
receiving, from the host device, an indication for operating the memory device based at least in part on communicating the indication that the parameter satisfies the threshold.
14. The method of claim 13, further comprising:
adjusting, based at least in part on receiving the indication for operating the memory device, one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof.
15. An apparatus, comprising:
a memory array comprising a plurality of memory cells; and
circuitry coupled with the memory array and operable to:
measure, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both;
determine that the parameter satisfies a threshold based at least in part on a comparison of the parameter with the threshold; and
communicate, to a host device, an indication that the parameter satisfies the threshold based at least in part on the determining.
16. The apparatus of claim 15, the circuitry further operable to:
communicate an indication of a life expectancy of the apparatus, wherein the threshold comprises a level of wear of the component that is associated with the life expectancy of the apparatus.
17. The apparatus of claim 15, the circuitry further operable to:
communicate an indication of a rate of degradation of the component that is based at least in part on the level of wear of the component, wherein the threshold comprises a threshold rate of degradation of the component.
18. The apparatus of claim 15, the circuitry further operable to:
determine a type of the violation of the operating parameter of the component, the type comprising one of a non-destructive violation or a destructive violation; and
communicate an indication of the type of the violation of the operating parameter, wherein the threshold comprises a threshold violation of the operating parameter for the component.
19. The apparatus of claim 18, further comprising:
one or more counters configured to determine a quantity of violations associated with the violation of the operating parameter or a duration associated with the violation of the operating parameter, or both, the indication including an indication of the quantity of violations associated with the violation of the operating parameter, or the duration associated with the violation of the operating parameter, or both.
20. The apparatus of claim 15, wherein the circuitry is further operable to:
receive, from the host device, an indication for operating the apparatus based at least in part on communicating the indication that the parameter satisfies the threshold.
21. A method, comprising:
receiving, from a memory device, an indication that a first parameter associated with a component of the memory device has satisfied a threshold, the first parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both;
determining a second parameter for operating the memory device based at least in part on receiving the indication that the first parameter has satisfied the threshold; and
communicating, to the memory device, an indication for operating the memory device based at least in part on the determining.
22. The method of claim 21, further comprising:
receiving an indication of a life expectancy of the memory device, wherein the threshold comprises a level of wear of the component that is associated with the life expectancy of the memory device.
23. The method of claim 21, further comprising:
receiving an indication of a rate of degradation of the component that is based at least in part on the level of wear of the component, wherein the threshold comprises a threshold rate of degradation of the component.
24. The method of claim 21, further comprising:
receiving an indication of a type of the violation of the operating parameter of the component, the type comprising one of a non-destructive violation or a destructive violation, wherein the threshold comprises a threshold violation of the operating parameter for the component.
25. The method of claim 24, further comprising:
receiving an indication of a severity associated with the violation of the operating parameter, a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof.
26. The method of claim 21, further comprising:
receiving an indication of one or more suggested actions for operating the memory device.
27. The method of claim 21, wherein determining the second parameter for operating the memory device comprises:
determining to adjust one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof, the indication for operating the memory device indicative of the one or more parameters. |
LIFE EXPECTANCY MONITORING FOR MEMORY DEVICES

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/505,028 by SCHAEFER et al., entitled "LIFE EXPECTANCY MONITORING FOR MEMORY DEVICES," filed October 19, 2021, and U.S. Provisional Patent Application No. 63/109,168 by SCHAEFER et al., entitled "LIFE EXPECTANCY MONITORING FOR MEMORY DEVICES," filed November 3, 2020, each of which is assigned to the assignee hereof, and each of which is expressly incorporated by reference in its entirety herein.

FIELD OF TECHNOLOGY

[0002] The following relates generally to one or more systems for memory and more specifically to life expectancy monitoring for memory devices.

BACKGROUND

[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often denoted by a logic 1 or a logic 0. In some examples, a single memory cell may support more than two states, any one of which may be stored. To access the stored information, a component may read, or sense, at least one stored state in the memory device. To store information, a component may write, or program, the state in the memory device.[0004] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile. Non-volatile memory devices, e.g., FeRAM, may maintain their stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source.
BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 illustrates an example of a system that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0006] FIG. 2 illustrates an example of a memory die that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0007] FIG. 3 illustrates an example of a model that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0008] FIG. 4 illustrates an example of a process flow that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0009] FIG. 5 illustrates an example of a process flow that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0010] FIG. 6 shows a block diagram of a memory device that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0011] FIG. 7 shows a block diagram of a host device that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein.[0012] FIGs. 8 and 9 show flowcharts illustrating a method or methods that support life expectancy monitoring for memory devices in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0013] A memory device may be included in various system applications, such as in a mission critical application. In some cases, failure of the memory device within the system may lead to malfunction or failure of the system, which may result in extra costs or danger to an end user of the system. As such, a system designer (e.g., a host device supplier, a system integrator, an original equipment manufacturer, or any combination thereof) may attempt to perform corrective action for the memory device (e.g., adjust or replace the memory device) before the memory device fails. However, the system designer may base an estimate of memory device failure on predictive modeling or other techniques that may not result in an accurate estimate of memory device lifetime (e.g., a quantity of time until memory device failure). Accordingly, the memory device may fail at a time that the system designer does not expect, which may lead to increased costs and safety failure, among other disadvantages.
Further, in some cases, the system designer may implement the memory device in the system such that the memory device may violate one or more operating parameters without the knowledge of the system designer. Such violations may contribute to the possibility of failure of the memory device (e.g., premature failure of the memory device).[0014] The present disclosure provides techniques for monitoring and reporting one or more parameters associated with a life expectancy of a memory device, among other aspects. For example, the memory device may include monitoring circuitry, which may monitor one or more parameters of the components of the memory device. The one or more parameters may include or be associated with a level of wear or degradation of the components of the memory device, or with an operating parameter violation of the memory device, or both. The memory device may measure a value of the one or more parameters and determine whether the value satisfies (e.g., is equal to or greater than) a threshold. In some cases, the threshold may represent one of multiple thresholds, where each threshold may represent a different level of wear or a different point in a lifetime of the memory device. In some cases, the threshold may represent a pass or fail point of the lifetime of the memory device.[0015] If the value satisfies the threshold, this may indicate that a lifetime milestone of the memory device has been reached, a degradation or wear level of the memory device has been reached, or an operating parameter of the memory device has been violated, or any combination thereof. The memory device may communicate, to a host device, an indication of the parameter satisfying the threshold, and the host device may use the information in the indication to adjust one or more parameters associated with operating the memory device (e.g., indicate a replacement of the memory device, adjust voltages or timings of the memory device). Such techniques may support increased memory device lifetimes as well as an increased accuracy in predicting and notifying a host device of memory device failure (e.g., an end of life), among other advantages.[0016] Features of the disclosure are initially described in the context of systems and dies as described with reference to FIGs. 1 and 2. Features of the disclosure are described in the context of a model and process flows as described with reference to FIGs. 3-5. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to life expectancy monitoring for memory devices as described with reference to FIGs. 6-9.
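For concreteness, the following Python sketch illustrates the measure-compare-report flow summarized above, using invented milestone names and normalized wear values. An actual device would implement this in monitoring circuitry and signal the host over a channel or register rather than printing.

```python
from dataclasses import dataclass

@dataclass
class WearThreshold:
    name: str     # e.g., a lifetime milestone or a pass/fail point
    value: float  # normalized parameter value at which it is reached

def milestones_reached(measured, thresholds):
    """Return every milestone whose threshold the measured value
    satisfies, where 'satisfies' means equal to or greater than."""
    return [t.name for t in thresholds if measured >= t.value]

thresholds = [
    WearThreshold("50% of lifetime", 0.50),
    WearThreshold("90% of lifetime", 0.90),
    WearThreshold("pass/fail point", 0.97),
]

measured_wear = 0.93  # value produced by the monitoring circuitry
for name in milestones_reached(measured_wear, thresholds):
    # A real device would signal the host (e.g., via a register or a
    # channel) so the host can adjust operation or plan a replacement.
    print(f"indicate to host: threshold reached: {name}")
```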
[0017] FIG. 1 illustrates an example of a system 100 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The system 100 may include a host device 105, a memory device 110, and a plurality of channels 115 coupling the host device 105 with the memory device 110. The system 100 may include one or more memory devices 110, but aspects of the one or more memory devices 110 may be described in the context of a single memory device (e.g., memory device 110).[0018] The system 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, or other systems. For example, the system 100 may illustrate aspects of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, or the like. The memory device 110 may be a component of the system operable to store data for one or more other components of the system 100.[0019] At least portions of the system 100 may be examples of the host device 105. The host device 105 may be an example of a processor or other circuitry within a device that uses memory to execute processes, such as within a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, a system on a chip (SoC), or some other stationary or portable electronic device, among other examples. In some examples, the host device 105 may refer to the hardware, firmware, software, or a combination thereof that implements the functions of an external memory controller 120. In some examples, the external memory controller 120 may be referred to as a host or a host device 105.[0020] A memory device 110 may be an independent device or a component that is operable to provide physical memory addresses/space that may be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with one or more different types of host devices. Signaling between the host device 105 and the memory device 110 may be operable to support one or more of: modulation schemes to modulate the signals, various pin configurations for communicating the signals, various form factors for physical packaging of the host device 105 and the memory device 110, clock signaling and synchronization between the host device 105 and the memory device 110, timing conventions, or other factors.
[0021] The memory device 110 may be operable to store data for the components of the host device 105. In some examples, the memory device 110 may act as a slave-type device to the host device 105 (e.g., responding to and executing commands provided by the host device 105 through the external memory controller 120). Such commands may include one or more of a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands.[0022] The host device 105 may include one or more of an external memory controller 120, a processor 125, a basic input/output system (BIOS) component 130, or other components such as one or more peripheral components or one or more input/output controllers. The components of the host device 105 may be coupled with one another using a bus 135.[0023] The processor 125 may be operable to provide control or other functionality for at least portions of the system 100 or at least portions of the host device 105. The processor 125 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these components. In such examples, the processor 125 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or an SoC, among other examples. In some examples, the external memory controller 120 may be implemented by or be a part of the processor 125.[0024] The BIOS component 130 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100 or the host device 105. The BIOS component 130 may also manage data flow between the processor 125 and the various components of the system 100 or the host device 105. The BIOS component 130 may include a program or software stored in one or more of read-only memory (ROM), flash memory, or other non-volatile memory.[0025] The memory device 110 may include a device memory controller 155 and one or more memory dies 160 (e.g., memory chips) to support a desired capacity or a specified capacity for data storage. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, memory array 170-N). A memory array 170 may be a collection (e.g., one or more grids, one or more
banks, one or more tiles, one or more sections) of memory cells, with each memory cell being operable to store at least one bit of data. A memory device 110 including two or more memory dies may be referred to as a multi-die memory or a multi-die package or a multi-chip memory or a multi-chip package. The memory device 110 (e.g., the device memory controller 155, one or more memory dies 160, one or more local memory controllers 165, one or more memory arrays 170) may be configured to operate in response to commands from the host device 105 (e.g., from the external memory controller 120, from the processor 125).[0026] The device memory controller 155 may include circuits, logic, or components operable to control operation of the memory device 110. The device memory controller 155 may include the hardware, the firmware, or the instructions that enable the memory device 110 to perform various operations and may be operable to receive, transmit, or execute commands, data, or control information related to the components of the memory device 110. The device memory controller 155 may be operable to communicate with one or more of the external memory controller 120, the one or more memory dies 160, or the processor 125. In some examples, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160.[0027] In some examples, the memory device 110 may receive data or commands or both from the host device 105. For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store data for the host device 105 or a read command indicating that the memory device 110 is to provide data stored in a memory die 160 to the host device 105.[0028] A local memory controller 165 (e.g., local to a memory die 160) may include circuits, logic, or components operable to control operation of the memory die 160. In some examples, a local memory controller 165 may be operable to communicate (e.g., receive or transmit data or commands or both) with the device memory controller 155. In some examples, a memory device 110 may not include a device memory controller 155, and a local memory controller 165 or the external memory controller 120 may perform various functions described herein. As such, a local memory controller 165 may be operable to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 120, or the processor 125, or a combination thereof. Examples of components that may be included in the device memory controller 155 or the local memory controllers 165 or both may include receivers for receiving signals (e.g., from
the external memory controller 120), transmitters for transmitting signals (e.g., to the external memory controller 120), decoders for decoding or demodulating received signals, encoders for encoding or modulating signals to be transmitted, or various other circuits or controllers operable for supporting described operations of the device memory controller 155 or local memory controller 165 or both.[0029] The external memory controller 120 may be operable to enable communication of one or more of information, data, or commands between components of the system 100 or the host device 105 (e.g., the processor 125) and the memory device 110. The external memory controller 120 may convert or translate communications exchanged between the components of the host device 105 and the memory device 110. In some examples, the external memory controller 120 or other component of the system 100 or the host device 105, or its functions described herein, may be implemented by the processor 125. For example, the external memory controller 120 may be hardware, firmware, or software, or some combination thereof implemented by the processor 125 or other component of the system 100 or the host device 105. Although the external memory controller 120 is depicted as being external to the memory device 110, in some examples, the external memory controller 120, or its functions described herein, may be implemented by one or more components of a memory device 110 (e.g., a device memory controller 155, a local memory controller 165) or vice versa.[0030] The components of the host device 105 may exchange information with the memory device 110 using one or more channels 115. The channels 115 may be operable to support communications between the external memory controller 120 and the memory device 110.[0031] Each channel 115 may be an example of a transmission medium that carries information between the host device 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system 100. A signal path may be an example of a conductive path operable to carry a signal. For example, a channel 115 may include a first terminal including one or more pins or pads at the host device 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be operable to act as part of a channel.
[0032] Channels 115 (and associated signal paths and terminals) may be dedicated to communicating one or more types of information. For example, the channels 115 may include one or more command and address (CA) channels 186, one or more clock signal (CK) channels 188, one or more data (DQ) channels 190, one or more other channels 192, or a combination thereof. In some examples, signaling may be communicated over the channels 115 using single data rate (SDR) signaling or double data rate (DDR) signaling. In SDR signaling, one modulation symbol (e.g., signal level) of a signal may be registered for each clock cycle (e.g., on a rising or falling edge of a clock signal). In DDR signaling, two modulation symbols (e.g., signal levels) of a signal may be registered for each clock cycle (e.g., on both a rising edge and a falling edge of a clock signal).[0033] In some examples, CA channels 186 may be operable to communicate commands between the host device 105 and the memory device 110 including control information associated with the commands (e.g., address information). For example, commands carried by the CA channel 186 may include a read command with an address of the desired data. In some examples, a CA channel 186 may include any quantity of signal paths to decode one or more of address or command data (e.g., eight or nine signal paths).[0034] In some examples, clock signal channels 188 may be operable to communicate one or more clock signals between the host device 105 and the memory device 110. Each clock signal may be operable to oscillate between a high state and a low state, and may support coordination (e.g., in time) between actions of the host device 105 and the memory device 110. In some examples, the clock signal may be single ended. In some examples, the clock signal may provide a timing reference for command and addressing operations for the memory device 110, or other system-wide operations for the memory device 110. A clock signal therefore may be referred to as a control clock signal, a command clock signal, or a system clock signal. A system clock signal may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors).[0035] In some examples, data channels 190 may be operable to communicate one or more of data or control information between the host device 105 and the memory device 110. For example, the data channels 190 may communicate information (e.g., bi-directional) to be written to the memory device 110 or information read from the memory device 110.[0036] In some examples, physical or operational aspects of the memory device 110 may degrade over time, and this degradation may be associated with a reduction of an ability to
reliably store information (e.g., at a memory array 170), a reduction of an ability to reliably read information (e.g., from a memory array 170), a reduction of an ability to process information (e.g., at a local memory controller 165, at a device memory controller 155), or a reduction of an ability to communicate information (e.g., within the memory device 110, between the memory device 110 and the host device 105), among other issues.[0037] Degradation of the memory device 110 may be associated with a cumulative duration of being powered (e.g., the memory device 110 being powered by the host device 105 via a power supply interface, one or more memory dies 160 being powered by the device memory controller 155), a cumulative duration or quantity of operations over which one or more memory dies 160 or memory arrays 170 are accessed, a cumulative duration or quantity of operations over which an operating parameter (e.g., a temperature of the memory device 110 or one or more memory dies 160, a voltage of the memory device 110 or one or more memory dies 160, a moisture or humidity level of an environment while operating the memory device or one or more memory dies 160, an access rate, or other parameter of the memory device 110 or a memory die 160) satisfies a threshold, and other conditions.[0038] Over time, one or more components or circuitry of the memory device 110 or one or more memory dies 160 may experience dielectric breakdown, ion or other constituent material migration or transformation, thermal stress or damage, mechanical stress or damage, fatigue, or other changes that affect operational reliability of the memory device 110. Thus, according to these and other examples, a memory device 110, or memory dies 160 thereof, may be associated with a finite life expectancy for supporting access operations.[0039] In accordance with examples as disclosed herein, the memory device 110 (e.g., the device memory controller 155, one or more memory dies 160) may include various components (e.g., logic, circuitry, sensors) configured for monitoring health and life expectancy of the memory device 110. Such monitoring may include or involve components internal to the memory device 110, such as a monitoring circuit 156 of a device memory controller 155, one or more monitoring circuits 166 of one or more local memory controllers 165, or various combinations thereof, that monitor for degradation of particular components, circuits, voltages, timings, or other characteristics of operating the memory device 110.[0040] In some examples, such components may include sensors, other circuits, or logic (among other examples) to monitor or detect voltages resulting from the memory device 110 performing an operation, or durations associated with the memory device 110 performing an
operation, or other signals or operating characteristics or combinations thereof. Such information may be compared (e.g., by the memory device 110, by the host device 105) to corresponding thresholds that are associated with a respective life expectancy level (e.g., a duration of remaining life, a percentage of remaining life). In various examples, such thresholds may be determined based on simulation, testing, or other analysis and configured at the memory device 110. The determined thresholds may be stored at or generated by components of the memory device 110 or host device 105 to support the described comparisons, which may be performed on a periodic basis (e.g., initiated by a time interval, initiated based on a quantity of operations), or initiated by a triggering condition at the memory device 110 or the host device 105 (e.g., a power cycle, a transition to an idle or power-down mode, an identified maintenance or diagnostic condition or triggering signal).[0041] In some examples, the memory device 110 (e.g., a device memory controller 155, a local memory controller 165) may include a non-volatile storage component for storing an indication of a life expectancy of the memory device 110, which may refer to a storage component that is included in or separate from the memory arrays 170 of the memory device 110. Such a non-volatile storage component may be physically coupled with or otherwise attached to a same substrate as a memory array 170 or a memory die 160 (e.g., a same chip or other semiconductor substrate), or a same substrate as the memory device 110 (e.g., a same printed circuit board (PCB) or other memory module, such as a substrate of a dual in-line memory module (DIMM)). In some examples, such a non-volatile storage component may be referred to as a register or a mode register, which may be read from or written to by a host device 105 (e.g., via channels 115) to determine parameters of the memory device 110. In some examples, such a storage component may not be accessible to a host device 105, but may be used by the memory device 110 to determine parameters of operating the memory device 110, or determine status signaling to transmit to a host device 105 (e.g., via channels 115).
a threshold. If the value satisfies the threshold, this may indicate that a lifetime milestone of the memory device 110 has been reached, a degradation or wear level of the memory device 110 has been reached, or an operating parameter of the memory device 110 has been violated, or any combination thereof.[0043] The memory device 110 may communicate (e.g., via one or more channels 115), to a host device 105, an indication of the one or more parameters satisfying the threshold, and the host device 105 may use the information in the indication to adjust one or more parameters associated with operating the memory device 110 (e.g., indicate a replacement of the memory device 110, adjust voltages or timings of the memory device 110). Such techniques may support increased memory device lifetimes as well as an increased accuracy in predicting and notifying a host device 105 of memory device failure (e.g., an end of life), among other advantages.[0044] FIG. 2 illustrates an example of a memory die 200 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The memory die 200 may be an example of the memory dies 160 described with reference to FIG. 1. In some examples, the memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory die 200 may include one or more memory cells 205 that may each be programmable to store different logic states (e.g., programmed to one of a set of two or more possible states). For example, a memory cell 205 may be operable to store one bit of information at a time (e.g., a logic 0 or a logic 1). In some examples, a memory cell 205 (e.g., a multi-level memory cell) may be operable to store more than one bit of information at a time (e.g., a logic 00, logic 01, logic 10, a logic 11). In some examples, the memory cells 205 may be arranged in an array, such as a memory array 170 (e.g., of a memory device 110) described with reference to FIG. 1.[0045] A memory cell 205 may store a charge representative of the programmable states in a capacitor. DRAM architectures may include a capacitor that includes a dielectric material to store a charge representative of the programmable state. In other memory architectures, other storage devices and components are possible. For example, nonlinear dielectric materials may be employed. The memory cell 205 may include a logic storage component, such as capacitor 230, and a switching component 235. The capacitor 230 may be an example of a dielectric capacitor or a ferroelectric capacitor. A node of the capacitor 230 may be coupled with a voltage source 240, which may be the cell plate reference voltage, such as Vpl, or may be ground, such as Vss.
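A toy model may help picture the relationship between the capacitor 230 and the switching component 235 just described. The class below is a deliberately simplified abstraction, not a circuit-level description; its names and voltage values are invented for illustration.

```python
class MemoryCell:
    """Toy model of a cell: a capacitor behind a switching component.
    The capacitor can exchange charge with the digit line only while
    the word line keeps the switch activated."""

    def __init__(self):
        self.charge = 0.0       # state held by the capacitor 230 analogue
        self.selected = False   # switching component 235 analogue

    def set_word_line(self, high):
        self.selected = high

    def write(self, digit_line_voltage):
        if self.selected:       # isolated from the digit line otherwise
            self.charge = digit_line_voltage

cell = MemoryCell()
cell.write(1.0)                 # switch deactivated: capacitor stays isolated
assert cell.charge == 0.0
cell.set_word_line(True)
cell.write(1.0)                 # switch activated: the state is stored
assert cell.charge == 1.0
```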
[0046] The memory die 200 may include one or more access lines (e.g., one or more word lines 210 and one or more digit lines 215) arranged in a pattern, such as a grid-like pattern. An access line may be a conductive line coupled with a memory cell 205 and may be used to perform access operations on the memory cell 205. In some examples, word lines 210 may be referred to as row lines. In some examples, digit lines 215 may be referred to as column lines or bit lines. References to access lines, row lines, column lines, word lines, digit lines, or bit lines, or their analogues, are interchangeable without loss of understanding or operation. Memory cells 205 may be positioned at intersections of the word lines 210 and the digit lines 215.[0047] Operations such as reading and writing may be performed on the memory cells 205 by activating or selecting access lines such as one or more of a word line 210 or a digit line 215. By biasing a word line 210 and a digit line 215 (e.g., applying a voltage to the word line 210 or the digit line 215), a single memory cell 205 may be accessed at their intersection. The intersection of a word line 210 and a digit line 215 in either a two-dimensional or three-dimensional configuration may be referred to as an address of a memory cell 205.[0048] Accessing the memory cells 205 may be controlled through a row decoder 220 or a column decoder 225. For example, a row decoder 220 may receive a row address from the local memory controller 260 and activate a word line 210 based on the received row address. A column decoder 225 may receive a column address from the local memory controller 260 and may activate a digit line 215 based on the received column address.[0049] Selecting or deselecting the memory cell 205 may be accomplished by activating or deactivating the switching component 235 using a word line 210. The capacitor 230 may be coupled with the digit line 215 using the switching component 235. For example, the capacitor 230 may be isolated from digit line 215 when the switching component 235 is deactivated, and the capacitor 230 may be coupled with digit line 215 when the switching component 235 is activated.[0050] The sense component 245 may be operable to detect a state (e.g., a charge) stored on the capacitor 230 of the memory cell 205 and determine a logic state of the memory cell 205 based on the stored state. The sense component 245 may include one or more sense amplifiers to amplify or otherwise convert a signal resulting from accessing the memory cell 205. The sense component 245 may compare a signal detected from the memory cell 205 to a reference 250 (e.g., a reference voltage). The detected logic state of the memory cell 205 may
be provided as an output of the sense component 245 (e.g., to an input/output component 255), and may indicate the detected logic state to another component of a memory device that includes the memory die 200.[0051] The local memory controller 260 may control the accessing of memory cells 205 through the various components (e.g., row decoder 220, column decoder 225, sense component 245). The local memory controller 260 may be an example of the local memory controller 165 described with reference to FIG. 1. In some examples, one or more of the row decoder 220, column decoder 225, and sense component 245 may be co-located with the local memory controller 260. The local memory controller 260 may be operable to receive one or more of commands or data from one or more different memory controllers (e.g., an external memory controller 120 associated with a host device 105, another controller associated with the memory die 200), translate the commands or the data (or both) into information that can be used by the memory die 200, perform one or more operations on the memory die 200, and communicate data from the memory die 200 to a host device 105 based on performing the one or more operations. The local memory controller 260 may generate row signals and column address signals to activate the target word line 210 and the target digit line 215. The local memory controller 260 may also generate and control various voltages or currents used during the operation of the memory die 200. In general, the amplitude, the shape, or the duration of an applied voltage or current discussed herein may be varied and may be different for the various operations discussed in operating the memory die 200.[0052] The local memory controller 260 may be operable to perform one or more access operations on one or more memory cells 205 of the memory die 200. Examples of access operations may include a write operation, a read operation, a refresh operation, a precharge operation, or an activate operation, among others. In some examples, access operations may be performed by or otherwise coordinated by the local memory controller 260 in response to various access commands (e.g., from a host device 105). The local memory controller 260 may be operable to perform other access operations not listed here or other operations related to the operating of the memory die 200 that are not directly related to accessing the memory cells 205.[0053] The local memory controller 260 may be operable to perform a write operation (e.g., a programming operation) on one or more memory cells 205 of the memory die 200. During a write operation, a memory cell 205 of the memory die 200 may be programmed to
store a desired logic state. The local memory controller 260 may identify a target memory cell 205 on which to perform the write operation. The local memory controller 260 may identify a target word line 210 and a target digit line 215 coupled with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digit line 215 (e.g., applying a voltage to the word line 210 or digit line 215) to access the target memory cell 205. The local memory controller 260 may apply a specific signal (e.g., write pulse) to the digit line 215 during the write operation to store a specific state (e.g., charge) in the capacitor 230 of the memory cell 205. The pulse used as part of the write operation may include one or more voltage levels over a duration.[0054] The local memory controller 260 may be operable to perform a read operation (e.g., a sense operation) on one or more memory cells 205 of the memory die 200. During a read operation, the logic state stored in a memory cell 205 of the memory die 200 may be determined. The local memory controller 260 may identify a target memory cell 205 on which to perform the read operation. The local memory controller 260 may identify a target word line 210 and a target digit line 215 coupled with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digit line 215 (e.g., applying a voltage to the word line 210 or digit line 215) to access the target memory cell 205. The target memory cell 205 may transfer a signal to the sense component 245 in response to biasing the access lines. The sense component 245 may amplify the signal. The local memory controller 260 may activate the sense component 245 (e.g., latch the sense component) and thereby compare the signal received from the memory cell 205 to the reference 250. Based on that comparison, the sense component 245 may determine a logic state that is stored on the memory cell 205.[0055] In some examples, physical or operational aspects of the memory die 200 may degrade over time, and this degradation may be associated with a reduction of an ability to reliably store information (e.g., at a memory cell 205), a reduction of an ability to reliably read information (e.g., from a memory cell 205), a reduction of an ability to process information (e.g., at a local memory controller 260), or a reduction of an ability to communicate information (e.g., within the memory die 200, via digit lines 215, via input/output component 255, between the memory die 200 and a device memory controller 155), among other issues.
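The sense comparison at the heart of the read operation can be sketched as follows. The signal and reference values are invented for illustration, loosely standing in for the signal developed on the digit line and the reference 250; this is a sketch of the comparison step only, not of the full access sequence.

```python
def sense(cell_signal, reference=0.5):
    """Compare the signal developed from the accessed cell against a
    reference, the way a sense amplifier resolves a logic state."""
    return 1 if cell_signal > reference else 0

# Signals developed by two hypothetical cells after word line activation:
assert sense(0.8) == 1   # above the reference: resolved as a logic 1
assert sense(0.2) == 0   # below the reference: resolved as a logic 0
```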
[0056] Degradation of the memory die 200 may be associated with a cumulative duration of the memory die 200 being powered (e.g., by a host device 105, by a device memory controller 155), a cumulative duration or quantity of access operations over which memory cells 205 are accessed or the local memory controller 260 is otherwise supporting access operations, a cumulative duration or quantity of access operations over which an operating parameter (e.g., a temperature, voltage, access rate, or other parameter of the memory die 200) satisfies a threshold, some combination thereof, or other conditions. For example, one or more components of the memory die 200 may experience dielectric breakdown, ion or other constituent material migration or transformation, thermal stress or damage, mechanical stress or damage, fatigue, or other changes that affect operational reliability of the memory die 200.[0057] In accordance with examples as disclosed herein, the memory die 200 (e.g., the local memory controller 260) may include various components (e.g., logic, circuitry, sensors) configured for monitoring health and life expectancy of the memory die 200. Such monitoring may include or involve components internal to the memory die 200, such as a monitoring circuit 261, which may be an example of a monitoring circuit 166 described with reference to FIG. 1. A monitoring circuit 261 may be configured to monitor for degradation of particular components, circuits, voltages, timings, and other characteristics of operating the memory die 200. In some examples, a monitoring circuit 261 may be configured to monitor for changes of a voltage level of a voltage source, for changes in a voltage resulting from an access operation, or for changes in threshold voltages of one or more transistors (e.g., switching components 235, word line or digit line selection components, transistors of a row decoder 220, a column decoder 225, a sense component 245, or a local memory controller 260).[0058] Additionally or alternatively, a monitoring circuit 261 may be configured to monitor for changes in durations or time constant behavior of performing various operations (e.g., a duration or time constant between activating a switching component and developing a signal that satisfies a threshold, a duration or time constant between accessing a memory cell 205 and developing a signal that satisfies a threshold, a duration, frequency, or phase shift of a clock signal or other timing signal generated at the memory die 200). The monitoring circuit 261 may be configured to perform comparisons between monitored parameters to one or more threshold values, which may be indicative of a life expectancy of the memory die 200, or component thereof (e.g., a life expectancy of the memory cells 205, the switching
components 235, a row decoder 220, a column decoder 225, a sense component 245, an input/output component 255, or a local memory controller 260).[0059] A memory device 110 may thus monitor and report one or more parameters associated with life expectancy. For example, monitoring circuitry (e.g., a monitoring circuit 261) may monitor one or more parameters of one or more components (e.g., memory cells 205, switching components 235, a row decoder 220, a column decoder 225, a sense component 245, an input/output component 255, a local memory controller 260). The one or more parameters may include or be associated with a level of wear or degradation of the component, or with an operating parameter violation, or both. The memory device 110 may measure a value of the one or more parameters and determine whether the value satisfies a threshold, and may communicate, to a host device 105, an indication of the one or more parameters satisfying the threshold. The host device 105 may use the information in the indication to adjust one or more parameters associated with operating the memory device 110.[0060] FIG. 3 illustrates an example of a model 300 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The model 300 may include a curve 305 of an expected change or degradation of a parameter of a memory device 110, or a component of a memory device 110, over time. The time may refer to an absolute or clock time, or may refer to a normalized time or duration such as a percentage of a projected design life (e.g., where 100% corresponds to an expected or designed operating life of the memory device 110). The memory device 110 may monitor one or more parameters using the techniques described herein.[0061] There may be an expected degradation or wear over time of various circuits of the memory device 110, or the memory device 110 as a whole. For example, over time, components or circuitry of the memory device 110 may experience dielectric breakdown, ion or other constituent material migration or transformation, thermal stress or damage, mechanical stress or damage, fatigue, or other changes that affect operational reliability or performance of the memory device 110. In some examples, curve 305 may illustrate an expected change of a parameter that results from such degradation or wear. The parameter illustrated by the model 300 may refer to a voltage resulting from an operation of the memory device 110, a duration or other timing to perform or otherwise associated with performing an operation of the memory device 110, a temperature, or some other characteristic resulting
from performing an operation of the memory device 110, which may degrade or otherwise change over the operating life of the memory device 110.[0062] Curve 305 may be determined based on analytical or statistical modeling of the operation or corresponding components of the memory device 110, testing or determination of (e.g., observation of) one or more operations or components of a memory device 110 or representative population of memory devices 110 (e.g., a test population), or other techniques or combinations thereof. In some examples, to generate curve 305, a worst-case or other usage assumption may be used to support a robust design of a memory device 110. In some examples, a host device 105 may use different degradation readouts from the memory device 110 (e.g., taken at different points in time or over the lifetime of the memory device 110) to determine a slope of curve 305, among other potential conditions or metrics, and estimate a wear-out point of the memory device 110 (e.g., and thereby generate curve 305).[0063] Levels of degradation or wear of a memory device 110 (e.g., or component thereof) at various (e.g., key) time intervals along a curve 305 may be selected as trip points for a life expectancy monitoring or flagging system. Such levels may be represented, for example, by various determined points, which may be design points 310 (e.g., design points 310-a through 310-c). Each of the design points 310 may be associated with a respective time (e.g., lifetime) and parameter value (e.g., design point 310-a being associated with a time, t1, and a parameter value, L1, and so on). Such design points 310 may be selected relative to how long a memory device 110 is designed to be used. Such design points may be associated with a life expectancy of a memory device 110, which may refer to a time duration or period over which the memory device 110 is expected to be operational, or operational within certain design parameters. For example, the design points may be associated with the memory device 110 being operational within a threshold rate of errors (e.g., a rate of correctable errors), operational within a threshold latency, operational within a threshold power consumption, or operational under certain assumed or predicted operating parameters or environmental conditions, among other conditions.[0064] In some examples, a life expectancy may refer to an inferred or predicted operational end point, at which the reliability of the memory device 110 may be uncertain or unknown, or may have a relatively high probability of failure (e.g., a probability of failure that satisfies a threshold). In various examples, a life expectancy of a memory device 110, or a component thereof, may be aligned with an expected design life of the memory device 110
itself, or may be considered in the context of an expected design life of a system that includes the memory device 110 (e.g., aligned with or designed to exceed the design life of the system that includes the memory device 110, designed with some fraction of the design life of the system that includes the memory device 110 such that some rate of replacement is expected or anticipated).[0065] In one example, a memory device 110 may be designed with a 20 year design life, and the design points 310-a through 310-c may be associated with times of t1 = 5 years, t2 = 10 years, and t3 = 15 years of operating the memory device 110, respectively. Although illustrated in the context of three design points 310, the techniques described herein may support any quantity of one or more design points 310, or associated times and parameter values, to support various granularity or resolution for evaluating life expectancy of a memory device 110, or one or more components thereof.[0066] For example, one or more design points 310 may be configured for providing relatively early notice of one or more wear-out mechanisms of the memory device 110 or a component thereof. The one or more design points 310 may additionally or alternatively be used to identify a component of the memory device 110 that may be experiencing a faster than expected wear-out or degradation rate (e.g., that may therefore fall short of a life expectancy of the component or the memory device 110). Notifications of wear-out mechanisms or degradation rate may provide information to a host device 105, which may be used (e.g., by a device, a system provider, an end user) to reduce a probability of failure of an overall system that includes the memory device (e.g., by adjusting one or more parameters to extend a life of the memory device 110 or by replacing the memory device 110 before failure). In some cases, the memory device 110 may support techniques for a device, a component, or a designer of the system (e.g., a provider of a host device 105) to define the one or more design points 310 for providing notifications (e.g., define one or more trigger states).[0067] The memory device 110 may be configured to monitor and report life expectancy, or a parameter associated with life expectancy. For example, the memory device 110 may estimate or measure a life expectancy based on a measured or determined parameter value, which may correspond to a life expectancy. The parameter may, for example, represent a level of wear (e.g., performing a quantity of operations that over time will lead to a level of wear), a violation of an operating parameter, a rate of degradation, or other parameter
associated with one or more components of the memory device 110, where a value of the parameter may correspond to a time or life expectancy of the memory device 110. The memory device 110 may determine or measure the parameter and report an associated life expectancy or the parameter itself to a host device 105 (e.g., based on the parameter satisfying a threshold value, such as a design point 310).[0068] The memory device 110 may include circuitry or other components for monitoring life expectancy and other parameters at the memory device 110. For example, the circuitry (e.g., monitoring circuitry) may measure or determine (e.g., periodically, such as one time a day or upon power down) a value of the parameter and may indicate the parameter, a corresponding life expectancy, or both, to the host device 105.[0069] In some examples, the times or parameter values of a model such as the model 300 may correspond to threshold values for which a comparison may be made when evaluating a life expectancy or remaining life of a memory device 110. In other words, the times and parameter values of curve 305 may provide a proxy or a prediction for degradation or life expectancy of memory devices 110, or components thereof, which may be used by logic or circuitry to report various life expectancy characteristics of a particular memory device 110.[0070] Measuring and reporting parameters or values associated with a life expectancy of the memory device 110 may support indication of one or more wear mechanisms for the memory device 110, an approaching end-of-lifetime of the memory device 110 (e.g., or a component thereof), or both. An indication associated with the life expectancy of the memory device 110 may support replacement of the memory device 110, adjustment of one or more operating parameters of the memory device 110 (e.g., to extend the life expectancy of the memory device 110), identification of one or more weak spots or components of the memory device 110 (e.g., faster than expected degradation), improved prediction and modeling, or any combination thereof.[0071] In one example, the parameter of curve 305 may represent a quantity of row active time, such as a quantity of time that a row voltage stays below or above a threshold (e.g., stays low, stays high). If the row voltage stays below the threshold for a long period, the lower voltage may result in voltage leakage, may disrupt one or more signals, or may also result in wear to one or more components of the memory device 110 (e.g., transistor degradation, hot carrier degradation, negative-bias temperature instability (NBTI), non-
conducting stress (NCS)). The parameter of curve 305 may, for example, represent a total quantity of time the row voltage has been below the threshold (e.g., since installation of the memory device 110 or during a current power cycle of the memory device 110). In some examples, the parameter of curve 305 may represent a percentage of operating time that the row voltage stays below the threshold.[0072] In some examples, the memory device 110 may be configured to report the parameter of curve 305 (e.g., or associated values) once an end of life of the memory device 110, or component thereof, is reached (e.g., based on the parameter). Additionally or alternatively, a user of the memory device 110 (e.g., a system manufacturer) may have an option of selecting a different report condition, such as a window from an end of life (e.g., within 10% of the end of life).[0073] In another example, curve 305 may represent a change or degradation over time of a voltage resulting from an operation of a memory device 110. Such an example may refer to a voltage directly resulting from an operation of a memory device 110 (e.g., an observed voltage signal, an observed threshold voltage for activating a transistor), or may refer to a difference between a voltage resulting from an operation of the memory device 110 and a baseline or initial condition. One example of such a relationship is illustrated by Table 1, associating each of three design points 310 over a 20 year expected design life with an operating time, a remaining life, an expected degradation (e.g., an expected voltage level, an expected operating condition level), and a concern level.

Table 1 - Example of expected voltage degradation over time

[0074] In the examples described herein, the memory device 110 may determine (e.g., using the monitoring circuitry) a value of the parameter illustrated by curve 305 and may use the value to determine a location on curve 305. The location on curve 305 may, for example,
represent the value and a corresponding life expectancy or operating time, such as illustrated by Table 1. The memory device 110 may indicate the value of the parameter, the associated operating time or life expectancy, or any combination thereof to a host device 105.[0075] Additionally or alternatively, a similar relationship may be established for one or more other monitored parameters at the memory device 110, such as a monitored charge level, a monitored current level, a monitored duration, a monitored frequency, or other characteristics. Although curve 305 illustrates a change in operating characteristic that is positively correlated (e.g., increasing) over time, the described techniques may be applicable to various change or degradation relationships that are positively or negatively correlated, including linear, exponential, polynomial, logarithmic, or discontinuous (e.g., stepped) relationships over time.[0076] FIG. 4 illustrates an example of a process flow 400 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The process flow 400 may be implemented by a host device 105-a and a memory device 110-a, which may be examples of the respective devices described with reference to FIGs. 1-3. The host device 105-a and the memory device 110-a may be coupled via a physical or logical interface, such as channels 115, that may support signaling between the respective devices. The memory device 110-a may illustrate an example of an apparatus that includes an array of memory cells 205 couplable to an interface with a processor or SoC (e.g., of the host device 105-a) and configured to operate in response to commands from the processor or the SoC.[0077] The memory device 110-a may include logic or circuitry (e.g., a monitoring circuit 156, one or more monitoring circuits 166, one or more monitoring circuits 261, or various combinations thereof) for monitoring one or more parameters associated with a life expectancy of the memory device 110-a. The logic or circuitry may be attached to a same substrate, for example, as the array of memory cells 205, which may be configured to support various operations described herein. In some examples, the array of memory cells 205 of the memory device 110-a may be volatile memory cells, among other alternatives, and the memory device 110-a may include a non-volatile storage component (e.g., one or more non-volatile memory cells, latches, fuses, or anti-fuses) configured to store an indication of a life expectancy of the memory device 110-a.
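Before turning to the numbered operations, the trip-point behavior described with reference to the design points 310 may be sketched in Python as a hypothetical illustration. The threshold values and remaining-life labels below are invented for the example and are not taken from the disclosure.

    DESIGN_POINTS = [
        # (parameter threshold, reported remaining life); values are assumptions
        (1.10, "75% remaining"),  # models design point 310-a
        (1.25, "50% remaining"),  # models design point 310-b
        (1.40, "25% remaining"),  # models design point 310-c
    ]

    def evaluate(measured_value):
        """Return an indication for the host device if a trip point is satisfied."""
        indication = None
        for threshold, remaining_life in DESIGN_POINTS:
            if measured_value >= threshold:  # compare, as at 410
                indication = {"value": measured_value,
                              "remaining_life": remaining_life}
        return indication  # reported to the host device, as at 415, or None

    # A measured value of 1.27 satisfies the first two trip points; the last
    # (most severe) satisfied point is reported.
    print(evaluate(1.27))  # {'value': 1.27, 'remaining_life': '50% remaining'}

Ordering the trip points from least to most severe lets a single pass report the most advanced level of wear that the measured value has reached, which mirrors walking a measured value along curve 305.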
[0078] At 405, the memory device 110-a may measure a parameter associated with a component of the memory device 110-a (e.g., by sampling, detecting, or determining the parameter). The parameter may be associated, for example, with a level of wear of the component, a degradation of the component, or both. The parameter may be measured by monitoring circuitry of the memory device 110-a and may represent or be associated with a voltage, a current, a timing, an amount of time, a temperature, a degradation (e.g., NBTI or hot carrier degradation), or other parameters of a component of the memory device 110-a. The monitoring circuitry of the memory device 110-a may be configured to monitor such parameters, among other examples. In some cases, the monitoring circuitry (or a portion thereof) may be configured to be smaller than a minimum feature or minimum feature size of the memory device 110-a, for example, in order to monitor a parameter associated with any component of the memory device 110-a.[0079] As described with reference to FIG. 3, the parameter may be representative of a level of wear of a component of the memory device 110-a, a rate of degradation of a component of the memory device 110-a, or any combination thereof. In one example, among others, the parameter may represent a current drawn by a sense amplifier of the memory device 110-a, where the current drawn by the sense amplifier may be representative of a wear-out of the sense amplifier. In another example, the parameter may represent a temperature of the component of the memory device 110-a.[0080] At 410, the memory device 110-a may determine that the parameter satisfies a threshold based on a comparison of the parameter with the threshold. For example, the memory device 110-a may determine that a level of wear or a degradation rate associated with the component satisfies a corresponding threshold (e.g., based on comparing a measured parameter with a related threshold stored or programmed or otherwise present in the memory device 110-a). Identifying that a degradation rate associated with the component satisfies a corresponding threshold may support identification of a “weak spot” within the memory device 110-a (e.g., a component degrading at a faster than expected rate). For example, the degradation rate satisfying the threshold may indicate that the component is experiencing a larger than expected wear-out (e.g., which may shorten the life of the component and the memory device 110-a).[0081] In some examples, the comparison of 410 may be associated with (e.g., followed by or support) determining an estimated remaining life (e.g., life expectancy) of the memory
device 110-a, or that an estimate of remaining life of the memory device 110-a satisfies a threshold of remaining life. In some examples, the threshold may be associated with an age or operating history of the memory device, and the memory device 110-a may determine that a degradation of the memory device 110-a satisfies a threshold degradation (e.g., indicating degradation that is faster or slower than expected). In some examples, the memory device 110-a may store (e.g., in a non-volatile storage component) an indication that a remaining life of the memory device 110-a satisfies a threshold of remaining life.[0082] In some examples, the operations at 405, or 410, or both, may be performed or initiated on a periodic basis (e.g., according to a duration of operation, according to a quantity of access operations, according to a monitoring interval) or alternatively on an aperiodic basis. In some examples, the operations at 405, or 410, or both, may be triggered by a condition at the host device 105-a or the memory device 110-a (e.g., triggered at a power cycle, triggered at a time of day, triggered at a deep power-down operation, triggered upon entering or exiting a power mode, triggered based on an access pattern, triggered upon a detection of a row hammer condition). In some examples, evaluation intervals or initiation conditions may be changed over time, such as shortening a testing interval based on a duration of operation, or a detected remaining life expectancy (e.g., based on flag bits). For example, if the memory device 110-a identifies an accelerated degradation or threshold life expectancy, a monitoring or evaluation interval may be shortened to support more frequent evaluation of the memory device 110-a.[0083] In one example, the parameter may represent a temperature of the component of the memory device 110-a, and the memory device 110-a may determine that a threshold temperature level has been reached. In some cases, the memory device 110-a may be configured with multiple thresholds (e.g., temperature thresholds), and may determine when a corresponding threshold is satisfied.[0084] At 415 (e.g., and as part of, or otherwise based on determining that the parameter satisfies the threshold), the memory device 110-a may communicate an indication that the parameter satisfies the threshold to the host device 105-a. The indication may be a proactive indication transmitted by the memory device 110-a, or may be in response to the memory device 110-a receiving a polling request from the host device 105-a (e.g., polling a mode register or other register of the memory device 110-a that stores the indication). In some cases, the memory device 110-a may proactively indicate (e.g., based on sending an
indication or signal to the host device 105-a) for the host device 105-a to read (e.g., poll) the register storing the indication.[0085] In some cases, the indication that the parameter satisfies the threshold may include an indication of a life expectancy of the memory device 110-a that may be determined by the memory device 110-a, for example, based on the measured parameter. In such cases, the indication may include an estimated percentage of remaining life of the memory device 110-a, or an estimated duration of remaining life of the memory device 110-a, or an indication that an estimate of remaining life of the memory device 110-a satisfies a threshold of remaining life.[0086] In some examples, the indication may include sensor signal levels associated with measuring the parameter or may include an indication of the parameter (e.g., a parameter value), or both, which may support the host device 105-a performing various calculations or evaluations based on such signal levels or the parameter value (e.g., the host device 105-a may include one or more comparators or other evaluation logic or circuitry for performing the calculations or evaluations).[0087] When the parameter is associated with a degradation rate or a weak spot of the memory device 110-a, the indication may identify the weak spot (e.g., or multiple weak spots) of the memory device 110-a. For example, if the degradation rate of the component satisfies the threshold, the memory device 110-a may indicate that the component is degrading faster than expected, or that the component may be a weak spot of the memory device 110-a.[0088] The indication may include a binary indication that the parameter satisfies the threshold, or may include a gradient that may, for example, indicate a value of the parameter or an associated value (e.g., a life expectancy). In some cases, the indication may include suggestions or guidance from the memory device 110-a for actions to be taken by the host device 105-a (e.g., in response to the indication). For example, the indication may indicate or signal (e.g., suggest) for the host device 105-a to adjust a voltage, adjust a temperature (e.g., by adjusting one or more other factors), or increase a refresh rate of the memory device 110-a (e.g., a refresh rate of memory cells of the memory device 110-a). In some cases, the indication may include a warning based on exceeding the threshold or based on an amount or magnitude of exceeding the threshold. In some cases, the indication may include an amount or gradient of overuse of the component (e.g., based on the parameter exceeding the
threshold), and may also include a suggestion to decrease a usage or a magnitude of usage of the component.[0089] At 420, the host device 105-a may determine a second parameter for operating the memory device 110-a (e.g., based on the indication of the parameter or the life expectancy of the memory device 110-a, or both). In some examples, the second parameter may be related to one or more access operations on the memory device 110-a (e.g., determining or adjusting a voltage, current, or timing parameter for accessing the memory device 110-a). In some examples, the determination of the second parameter may include determining which of a set of memory devices 110 or memory arrays 170 (e.g., including the memory device 110-a) to use for an access operation. For example, the host device 105-a may determine to refrain from performing an access operation on the memory device 110-a, or determine to perform an access operation on a different memory device 110. In some examples, determining the second parameter may include determining an estimated life expectancy parameter associated with an age or operating history of the memory device 110-a, for example, based on comparing information from the indication to the estimated life expectancy parameter (e.g., based on a determination of whether the memory device 110-a is degrading more quickly or more slowly than expected).[0090] In some examples, the indication of the parameter may be used by the host device 105-a to perform one or more responsive actions or determinations. For example, as described herein, the indication may include one or more suggested actions for the host device 105-a to take (e.g., associated with the second parameter). For example, one bit of a mode register of the memory device 110-a may be interpreted by the host device 105-a as an indication to throttle (e.g., slow down, reduce) a clock rate, such as lengthening a duration or period of a clock signal. Another bit of a mode register may be interpreted by the host device 105-a as an indication to use a power-down mode more often, which may occur at the expense of performance or latency. Another bit of a mode register may be interpreted by the host device 105-a as an indication to change an address scheme (e.g., if possible), such as accessing a different memory array 170, or accessing a different pattern of memory cells 205, among other examples. Thus, according to these and other examples, the memory device 110-a may use a mode register to support signaling to the host device 105-a one or more suggested actions (e.g., to perform for dynamic adjustment).
[0091] In some examples, the host device 105-a may determine to adjust a voltage parameter of the memory device 110-a (e.g., a voltage source level, a read or write bias, a reference voltage level), a timing parameter of the memory device 110-a (e.g., a duration or rate of performing access operations or portion thereof, a refresh interval, an idle duration), or both. In some examples, the host device 105-a or the memory device 110-a may identify a circuit slow-down, and the host device 105-a may determine to remedy the slow-down by increasing a voltage for performing subsequent access operations (e.g., adding 100 mV to a voltage supply circuit). In some examples, the host device 105-a may increase a refresh rate of one or more cells of the memory device 110-a.[0092] In another example, degradation may be related to a duty cycle, such as an excessive timing skew. In such examples, the host device 105-a may enable or disable delay components to re-center timing. In some examples, the host device 105-a may enable a redundant circuit (e.g., disabling a first circuit component or memory array 170 and enabling a second circuit component or memory array 170). In some examples, the host device 105-a may adapt a sensor or corresponding signal on a signal line of the memory device 110-a, or adapt a threshold source or corresponding signal on a threshold line of the memory device 110-a (e.g., enabling a different threshold source), so that a comparator does not continue to flag the parameter as satisfying the threshold.[0093] Although illustrated in the context of a response to a single parameter or single parameter measurement (e.g., at 405), in some examples, the described techniques may be performed in response to more than one parameter or associated evaluation. For example, the memory device 110-a or the host device 105-a may identify another parameter or another measurement of the parameter, and may compare the other parameter or other measurement to a corresponding threshold value. In such examples, the operations at 430 may be performed based on comparing the other parameter or other measurement to the corresponding threshold value (e.g., as well as the comparison of 410). In such cases, the indication of the parameter at 405 may include an indication of one of the parameters that indicates a higher level of degradation (e.g., a worst-case parameter) or may include an indication of multiple parameters (e.g., both parameters, including the worst-case parameter).[0094] At 425, the host device 105-a may communicate an indication for operating the memory device based on determining the second parameter. For example, the host device 105-a may, at least in part, implement or notify the memory device 110-a of the determined
second parameter for operating the memory device 110-a. Communicating the indication for operating the memory device 110-a may include transmitting an access command to the memory device 110-a, or another memory device 110 (not shown). The access command may include the second parameter, or may otherwise be determined according to the second parameter.[0095] In some examples, at 430, the memory device 110-a may adjust one or more parameters based on receiving (e.g., from the host device 105-a) the indication for operating the memory device 110-a. The one or more parameters may, for example, be associated with a temperature of the memory device 110-a, a refresh rate of the memory device 110-a, a voltage level of the memory device 110-a, an access parameter of the memory device 110-a, any other parameter described herein, or any combination thereof.[0096] In some examples, at 435, the host device 105-a may transmit an indication of a status of the memory device 110-a to a device different than the memory device (e.g., based on the indication of 425). The indication of the status may be transmitted to a device or component of a system including the memory device 110-a and the host device 105-a. In such cases, the indication of the status may be used by an operator or designer of the system or of the memory device 110-a for design of the system or design of an associated memory device 110. The indication of the status may indicate (e.g., to a user or system designer) that the memory device 110-a should be repaired or replaced, or that the memory device 110-a has reached a threshold level (e.g., an accelerated level) of degradation or impaired operation, among other indications. Such indications may include an output to a system or a device, such as a check engine light or other dash indication of a vehicle that includes the system, or an indicator displayed by a computing system that includes the system, among other examples.[0097] FIG. 5 illustrates an example of a process flow 500 and associated operations and signaling that support life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The process flow 500 may include a host device 105-b and a memory device 110-b, which may be examples of the respective devices described with reference to FIGs. 1-4. The host device 105-b and the memory device 110-b may be coupled via a physical or logical interface, such as channels 115, that may support signaling between the respective devices. The memory device 110-b may illustrate an example of an apparatus that includes an array of memory cells 205 couplable to an interface with a processor or SoC
(e.g., of the host device 105-b) and configured to operate in response to commands from the processor or the SoC.[0098] The memory device 110-b may include logic or circuitry (e.g., a monitoring circuit 156, one or more monitoring circuits 166, one or more monitoring circuits 261, or various combinations thereof) for monitoring one or more parameters associated with a violation of an operating parameter of the memory device 110-b. The logic or circuitry may be attached to a same substrate, for example, as the array of memory cells 205, which may be configured to support various operations described herein. In some examples, the array of memory cells 205 of the memory device 110-b may be volatile memory cells, and the memory device 110-b may further include a non-volatile storage component (e.g., one or more non-volatile memory cells, latches, fuses or anti-fuses) configured to store an indication of a violation of an operating parameter of the memory device 110-b.[0099] At 505, the memory device 110-b may measure a parameter associated with a component of the memory device 110-b. The parameter may be associated, for example, with a violation of an operating parameter of the component (e.g., a violation of an operating parameter specified by a data sheet of the memory device 110-b). The memory device 110-b may be configured to monitor all operating parameters for a violation, or may be configured to monitor a subset (e.g., one or more) of operating parameters for a violation. In some cases, the parameter may be associated with a problematic command sequence (e.g., that may cause malfunction or degradation of the memory device 110-b if overused) or another operation of the memory device 110-b that may cause, be, or indicate a malfunction. The examples described herein in relation to a violation of an operating parameter may also apply to the problematic command sequence or the operation of the memory device 110-b.[0100] The violation of an operating parameter of the component may be associated with a type of violation, such as a non-destructive violation or a destructive violation, among other options.[0101] A non-destructive violation may include, for example, a violation of an operating parameter for the memory device 110-b that does not result in damage or degradation to a component of the memory device 110-b. Such violations may be associated with a quality of service of the memory device 110-b (e.g., associated with one or more error rates, such as a bit error rate), for example, as opposed to a malfunction or degradation of the memory device 110-b. Examples of a non-destructive violation may include a violation of a timing parameter
of the memory device 110-b or a violation of a low-end voltage (e.g., voltage-in low) of the memory device 110-b. For example, a voltage-in low operating parameter may be set at 400 millivolts (mV) and a violation of the operating parameter may include supplying 450 mV.[0102] When associated with a non-destructive violation, the parameter may represent a clock based parameter (e.g., a timing parameter), a time based parameter (e.g., a timing duration parameter), a quantity of violations (e.g., a long term tracking of non-destructive violations), or any combination thereof. The memory device 110-b may include one or more counters (e.g., simple counters, counters coupled to a clock rate) to measure or determine the clock based parameter, the time based parameter, the quantity of violations, or any combination thereof.[0103] A destructive violation may include, for example, a violation of an operating parameter for the memory device 110-b that may result in damage or degradation to a component of the memory device 110-b (e.g., accelerated or increased wear-out, or destruction). Such violations may be associated with a wear-out or life expectancy of the memory device 110-b. Examples of a destructive violation may include an excessive power, temperature, or voltage (e.g., a power supply voltage or a voltage-in high voltage) applied to the component of the memory device 110-b, which may be recorded using a fuse, among other examples. In one example, a voltage-in high operating parameter may be set at 2 volts (V) and a violation of the operating parameter may include supplying 6 V. In another example, a violation of a row active time (e.g., going beyond an operating parameter of row active time) may increase wear-out of associated components.[0104] The memory device 110-b may track (e.g., using the monitoring circuitry by storing, determining, or otherwise capturing) a quantity of times, a length, a magnitude, or any combination thereof, associated with a destructive violation (e.g., a temperature or voltage violation). For example, the memory device 110-b may track a quantity of times, a length, and/or a magnitude of a row active time violation. In some cases, the length of a violation may be tracked using different granularities for different violations or for different intervals, where a user (e.g., a system designer or user of the host device 105-b) may have an option to select a granularity interval.[0105] At 510, the memory device 110-b may determine that the parameter satisfies a threshold based on a comparison of the parameter with the threshold. For example, the memory device 110-b may determine that an operating parameter associated with the
component satisfies a corresponding threshold, where the threshold may represent a violation of the operating parameter or may represent a beginning of or location within a guard band (e.g., a range of values) from violating the operating parameter. In some cases, the threshold may be set or selected by a user of the memory device 110-b (e.g., a user of the host device 105-b or a system operator, programmer, or designer).[0106] In some examples, the comparison of 510 may be associated with (e.g., followed by or support) determining an estimated remaining life (e.g., life expectancy) of the memory device 110-b, or that an estimate of remaining life of the memory device 110-b satisfies a threshold of remaining life. For example, the memory device 110-b may determine a remaining life or life expectancy based on a destructive violation of an operating parameter (e.g., using parameter data associated with the violation).[0107] In some examples, the operations at 505, or 510, or both, may be performed or initiated on a periodic basis (e.g., according to a duration of operation, according to a quantity of access operations, according to a monitoring interval). In some examples, the operations at 505, or 510, or both, may be triggered by a condition at the host device 105-b or the memory device 110-b (e.g., triggered at a power cycle, triggered at a time of day, triggered at a deep power-down operation, triggered upon entering or exiting a power mode, triggered based on an access pattern, triggered upon a detection of a row hammer condition). In some examples, evaluation intervals or initiation conditions may be changed over time, such as shortening a testing interval based on a duration of operation, or a detected remaining life expectancy (e.g., based on flag bits). For example, if the memory device 110-b identifies an accelerated degradation or threshold life expectancy, a monitoring or evaluation interval may be shortened to support more frequent evaluation of the memory device 110-b.[0108] At 515 (e.g., and as part of, or otherwise based on determining that the parameter satisfies the threshold), the memory device 110-b may communicate an indication that the parameter satisfies the threshold to the host device 105-b. The indication may be a proactive indication transmitted by the memory device 110-b, or may be in response to the memory device 110-b receiving a polling request from the host device 105-b (e.g., polling a mode register or other register of the memory device 110-b). In some cases, the memory device 110-b may proactively indicate for the host device 105-b to read the register storing the indication. For example, upon determining a violation of the operating parameter (e.g., satisfying the threshold), the memory device 110-b may store information in the register (e.g.,
one or more register bits) and may point to register bits to be checked by the host device 105-b (e.g., bits associated with monitoring the parameter).[0109] The indication may include an indication of the type of the violation, such as indicating a destructive violation or a non-destructive violation (e.g., associated with the parameter). In some cases of a non-destructive violation, the indication that the parameter satisfies the threshold may indicate that the violation is not damaging the component or the memory device 110-b (e.g., not adversely affecting the component). Such an indication may also indicate that there may be less margin for operating the component based on the non-destructive violation and may further indicate a possibility of future failures (e.g., indicate that failures may occur in some situations, such as in the presence of ground noise). In some cases, the indication may indicate a magnitude of the violation (e.g., an amount beyond the operating parameter or threshold) or may indicate a margin between the parameter and the violation (e.g., may indicate a percentage range from the violation, such as 10 percent from the violation).[0110] The indication may additionally or alternatively include a severity (e.g., a level or magnitude) of the violation, where the indication itself may be based on the severity. For example, a repeated occurrence of a violation or an estimated decrease in device lifetime may be more severe than a one-time voltage violation. In such cases, the indication may indicate a characteristic of the violation (e.g., a one-time or repeated violation) or may indicate a severity level of the violation. In some cases, as described with reference to FIG. 4, the indication may include one or more suggested actions for the host device 105-b. For example, the indication may indicate one or more suggested actions to take in order to reduce or eliminate the violation of the operating parameter.[0111] In some cases, the indication that the parameter satisfies the threshold may include an indication of a life expectancy of the memory device 110-b. For example, the indication may include an estimated percentage of remaining life of the memory device 110-b, or an estimated duration of remaining life of the memory device 110-b, or an indication that an estimate of remaining life of the memory device 110-b satisfies a threshold of remaining life. In some cases, the life expectancy of the memory device 110-b may be based on information associated with the violation of the operating parameter (e.g., based on a destructive violation and associated information). For example, upon identification of a destructive violation,
information associated with the violation (e.g., the parameter value) may be fed back to life expectancy estimation circuitry for improved estimation of the life expectancy.[0112] In some examples, the indication may include sensor signal levels associated with measuring the parameter or may include an indication of the parameter (e.g., a parameter value), or both, which may support the host device 105-b performing various calculations or evaluations based on such signal levels or the parameter value (e.g., the host device 105-b may include one or more comparators or other evaluation logic or circuitry for performing the calculations or evaluations).[0113] At 520, the host device 105-b may determine a second parameter for operating the memory device 110-b (e.g., based on the indication of the parameter or the life expectancy of the memory device 110-b, or both). In some examples, the second parameter may be related to one or more access operations on the memory device 110-b (e.g., determining or adjusting a voltage or timing parameter for accessing the memory device 110-b). The second parameter may, in some cases, be related to the violation of the operating parameter. For example, the second parameter may be determined by the host device 105-b in order to reduce or eliminate the violation of the operating parameter or to adjust a parameter associated with or affected by the violation of the operating parameter.[0114] In some examples, the determination of the second parameter may include determining which of a set of memory devices 110 or memory arrays 170 (e.g., including the memory device 110-b) to use for an access operation. For example, the host device 105-b may determine to refrain from performing an access operation on the memory device 110-b, or determine to perform an access operation on a different memory device 110. In some examples, determining the second parameter may include determining an estimated life expectancy parameter associated with an age or operating history of the memory device 110-b, and determining the second parameter based on comparing information from the indication to the estimated life expectancy parameter (e.g., based on a determination of whether the memory device 110-b is degrading more quickly or more slowly than expected).[0115] In some cases, the indication may include one or more suggested actions for the host device 105-b to take (e.g., associated with the second parameter). For example, one bit of a mode register of the memory device 110-b may be interpreted by the host device 105-b as an indication to throttle (e.g., slow down, reduce) a clock rate, such as lengthening a duration or period of a clock signal. Another bit of a mode register may be interpreted by the
host device 105-b as an indication to use a power-down mode more often, which may occur at the expense of performance or latency. Another bit of a mode register may be interpreted by the host device 105-b as an indication to change an address scheme (e.g., if possible), such as accessing a different memory array 170, or accessing a different pattern of memory cells 205. Thus, according to these and other examples, the memory device 110-b may use a mode register to support signaling to the host device 105-b one or more suggested actions (e.g., to perform for dynamic adjustment).[0116] In some examples, the host device 105-b may determine to adjust a voltage parameter of the memory device 110-b (e.g., a voltage source level, a read or write bias, a reference voltage level), a timing parameter of the memory device 110-b (e.g., a duration or rate of performing access operations or portion thereof, a refresh interval, an idle duration), or both. In some examples, the host device 105-b or the memory device 110-b may identify a circuit slow-down, and the host device 105-b may determine to remedy the slow-down by increasing a voltage for performing subsequent access operations (e.g., adding 100 mV to a voltage supply circuit).[0117] In another example, degradation may be related to a duty cycle, such as an excessive timing skew. In such examples, the host device 105-b may enable or disable delay components to re-center timing. In some examples, the host device 105-b may enable a redundant circuit (e.g., disabling a first circuit component or memory array 170 and enabling a second circuit component or memory array 170). In some examples, the host device 105-b may adapt a sensor or corresponding signal on a signal line of the memory device 110-b, or adapt a threshold source or corresponding signal on a threshold line of the memory device 110-b (e.g., enabling a different threshold source), so that a comparator does not continue to flag the parameter as satisfying the threshold.[0118] Although illustrated in the context of a response to a single parameter or single parameter measurement (e.g., at 505), in some examples, the described techniques may be performed in response to more than one parameter or associated evaluation. For example, the memory device 110-b or the host device 105-b may identify another parameter or another measurement of the parameter, and may additionally or alternatively compare the other parameter or other measurement to a corresponding threshold value. In such examples, the operations at 530 may be performed based on comparing the other parameter or other measurement to the corresponding threshold value (e.g., as well as the comparison of 510).
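The multi-parameter, worst-case reporting just described may be sketched as follows; this is a hypothetical Python illustration in which the worst case is ranked by relative overshoot of each threshold, a ranking rule that is an assumption of the example rather than part of the disclosure.

    def build_indication(measurements, thresholds):
        """Compare each measured parameter to its threshold (as at 510) and
        report the worst-case violating parameter along with all violations."""
        violations = {
            name: measurements[name] / thresholds[name]  # relative overshoot
            for name in measurements
            if measurements[name] >= thresholds[name]
        }
        if not violations:
            return None
        worst_case = max(violations, key=violations.get)
        return {"worst_case": worst_case, "violating": sorted(violations)}

    print(build_indication(
        {"row_active_time": 1.3, "vin_high": 2.1},
        {"row_active_time": 1.0, "vin_high": 2.0},
    ))
    # {'worst_case': 'row_active_time', 'violating': ['row_active_time', 'vin_high']}

Reporting either the single worst-case parameter or the full set of violating parameters corresponds to the two options described in the preceding paragraph.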
[0119] At 525, the host device 105-b may communicate an indication for operating the memory device based on determining the second parameter. For example, the host device 105-b may, at least in part, implement or notify the memory device 110-b of the determined second parameter for operating the memory device 110-b. Communicating the indication for operating the memory device 110-b may include transmitting an access command to the memory device 110-b, or another memory device 110 (not shown). The access command may include the second parameter, or may otherwise be determined according to the second parameter.[0120] In some examples, at 530, the memory device 110-b may adjust one or more parameters (e.g., operating parameters) based on receiving the indication for operating the memory device 110-b. The one or more parameters may, for example, be associated with a temperature of the memory device 110-b, a refresh rate of the memory device 110-b, a voltage level of the memory device 110-b, an access parameter of the memory device 110-b, any other parameter described herein, or any combination thereof.[0121] In some examples, at 535, the host device 105-b may transmit an indication of a status of the memory device 110-b to a device different than the memory device (e.g., based on the indication of 525). In some examples, the indication of the status may be transmitted to a device or component of a system including the memory device 110-b and the host device 105-b. In such cases, the indication of the status may be used by an operator or designer of the system or of the memory device 110-b for design of the system or an associated memory device 110. In some cases, the indication of the status may be used to track a quantity of operating parameter violations, which may be stored for evaluation of the system. For example, the tracked operating parameter violations may be used to understand exposure of the system to errors and degradation, as well as to quantify risks associated with current or future operating parameter violations. The tracked operating parameter violations may also be used to determine a cause of device failure or malfunction.[0122] The indication of the status may indicate (e.g., to a user or system designer) that the memory device 110-b should be repaired or replaced, or that the memory device 110-b has reached a threshold level (e.g., an accelerated level) of degradation or impaired operation, among other indications. Such indications may include a check engine light or other dash indication of a vehicle that includes the system, or an indicator displayed by a computing system that includes the system, among other indications.
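The violation tracking described above (a quantity of times, a length, and a magnitude per violation, stored for later evaluation of the system) may be sketched as a hypothetical Python illustration; the record fields follow the text, while the storage format and field names are assumptions of the example.

    from collections import defaultdict

    class ViolationLog:
        """Accumulates per-parameter violation records for later evaluation."""

        def __init__(self):
            self.records = defaultdict(list)

        def record(self, parameter, magnitude, duration_s):
            # One entry per violation: how far beyond the operating parameter
            # (magnitude) and for how long (duration).
            self.records[parameter].append((magnitude, duration_s))

        def summary(self, parameter):
            entries = self.records[parameter]
            return {
                "count": len(entries),
                "worst_magnitude": max((m for m, _ in entries), default=0.0),
                "total_duration_s": sum(d for _, d in entries),
            }

    log = ViolationLog()
    log.record("row_active_time", magnitude=0.2, duration_s=1.5)
    log.record("row_active_time", magnitude=0.5, duration_s=0.3)
    print(log.summary("row_active_time"))
    # {'count': 2, 'worst_magnitude': 0.5, 'total_duration_s': 1.8}

A summary of this kind is the sort of record that could later be read out to quantify exposure to errors or to help determine a cause of failure, as described in the preceding paragraph.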
[0123] FIG. 6 shows a block diagram 600 of a memory device 605 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The memory device 605 may be an example of aspects of a memory device as described with reference to FIGs. 1-5. The memory device 605 may include a parameter measurement component 610, a parameter threshold component 615, an indication communication component 620, an operating parameter violation component 625, a device operation component 630, and a life expectancy component 635. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0124] The parameter measurement component 610 may measure, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both. The parameter threshold component 615 may determine that the parameter satisfies a threshold based on a comparison of the parameter with the threshold.[0125] The indication communication component 620 may communicate, to a host device, an indication that the parameter satisfies the threshold based on the determining. In some examples, the indication communication component 620 may communicate an indication of one or more suggested actions for operating the memory device based on determining that the parameter satisfies the threshold.[0126] The operating parameter violation component 625 may determine a type of the violation of the operating parameter of the component, the type including one of a nondestructive violation or a destructive violation. In some examples, the operating parameter violation component 625 may communicate an indication of the type of the violation of the operating parameter, where the threshold includes a threshold violation of the operating parameter for the component.[0127] In some examples, the operating parameter violation component 625 may determine a severity of the violation of the operating parameter. In some examples, the operating parameter violation component 625 may communicate an indication of the severity of the violation of the operating parameter. In some examples, the operating parameter violation component 625 may determine a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof. In some examples, the operating parameter violation component 625
may communicate an indication of the quantity, the magnitude, or the duration, or any combination thereof.[0128] In some examples, the operating parameter violation component 625 may determine a life expectancy of the memory device based on the parameter satisfying the threshold violation of the operating parameter, where the indication that the parameter satisfies the threshold includes an indication of the life expectancy. In some cases, the threshold violation of the operating parameter includes a threshold within a guard band of the violation of the operating parameter. In some cases, the non-destructive type of violation is associated with an error rate of the memory device, and the destructive type of violation is associated with an increase in a degradation of the component.[0129] The device operation component 630 may receive, from the host device, an indication for operating the memory device based on communicating the indication that the parameter satisfies the threshold. In some examples, the device operation component 630 may adjust, based on receiving the indication for operating the memory device, one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof.[0130] The life expectancy component 635 may communicate an indication of a life expectancy of the memory device, where the threshold includes a level of wear of the component that is associated with the life expectancy of the memory device. In some examples, the life expectancy component 635 may communicate an indication of a rate of degradation of the component that is based on the level of wear of the component, where the threshold includes a threshold rate of degradation of the component. In some examples, the life expectancy component 635 may communicate one or more bits that indicate the rate of degradation satisfies the threshold, or an amount of use of the component based on the rate of degradation, or both. In some cases, the threshold level of wear includes a threshold within a guard band of a range of values associated with an end of life of the memory device.[0131] FIG. 7 shows a block diagram 700 of a host device 705 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The host device 705 may be an example of aspects of a host device as described with reference to FIGs. 1-5. The host device 705 may include an indication reception component 710, an operation determination component 715, and an operation communication component
720. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0132] The indication reception component 710 may receive, from a memory device, an indication that a first parameter associated with a component of the memory device has satisfied a threshold, the first parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both. In some examples, the indication reception component 710 may receive an indication of a life expectancy of the memory device, where the threshold includes a level of wear of the component that is associated with the life expectancy of the memory device.[0133] In some examples, the indication reception component 710 may receive an indication of a rate of degradation of the component that is based on the level of wear of the component, where the threshold includes a threshold rate of degradation of the component. In some examples, the indication reception component 710 may receive an indication of a type of the violation of the operating parameter of the component, the type including one of a nondestructive violation or a destructive violation, where the threshold includes a threshold violation of the operating parameter for the component.[0134] In some examples, the indication reception component 710 may receive an indication of a severity associated with the violation of the operating parameter, a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof. In some examples, the indication reception component 710 may receive an indication of one or more suggested actions for operating the memory device.[0135] The operation determination component 715 may determine a second parameter for operating the memory device based on receiving the indication that the first parameter has satisfied the threshold. In some examples, the operation determination component 715 may determine to adjust one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof, the indication for operating the memory device indicative of the one or more parameters.[0136] The operation communication component 720 may communicate, to the memory device, an indication for operating the memory device based on the determining.
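The interaction among the components of FIGs. 6 and 7 can be pictured with a short sketch. The following C fragment is a minimal, non-authoritative model of the device-side flow (measure a parameter, compare it against a threshold, indicate to the host); all type and function names, and the reading of "satisfies" as a greater-than-or-equal comparison, are assumptions made for illustration rather than details of the disclosed interface.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum violation_type { VIOLATION_NONE, VIOLATION_NON_DESTRUCTIVE, VIOLATION_DESTRUCTIVE };

struct wear_indication {              /* what the device reports to the host */
    uint32_t parameter;               /* measured wear or violation metric */
    uint32_t threshold;               /* threshold the parameter satisfied */
    enum violation_type type;         /* cf. operating parameter violation component 625 */
    uint32_t severity;                /* 0 = least severe */
};

/* Parameter measurement and threshold components (610, 615). */
static bool parameter_satisfies_threshold(uint32_t parameter, uint32_t threshold)
{
    return parameter >= threshold;    /* "satisfies" assumed to mean >= here */
}

/* Indication communication component (620): report only on a threshold hit. */
static bool maybe_indicate(uint32_t parameter, uint32_t threshold,
                           enum violation_type type, uint32_t severity,
                           struct wear_indication *out)
{
    if (!parameter_satisfies_threshold(parameter, threshold))
        return false;
    out->parameter = parameter;
    out->threshold = threshold;
    out->type = type;
    out->severity = severity;
    return true;
}

int main(void)
{
    struct wear_indication ind;
    /* e.g., a wear-level reading of 930 against a threshold of 900 */
    if (maybe_indicate(930, 900, VIOLATION_NON_DESTRUCTIVE, 1, &ind))
        printf("indicate to host: param=%u thr=%u type=%d severity=%u\n",
               (unsigned)ind.parameter, (unsigned)ind.threshold,
               (int)ind.type, (unsigned)ind.severity);
    return 0;
}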
[0137] FIG. 8 shows a flowchart illustrating a method or methods 800 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a memory device or its components as described herein. For example, the operations of method 800 may be performed by a memory device as described with reference to FIG. 6. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.[0138] At 805, the memory device may measure, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both. The operations of 805 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 805 may be performed by a parameter measurement component as described with reference to FIG. 6.[0139] At 810, the memory device may determine that the parameter satisfies a threshold based on a comparison of the parameter with the threshold. The operations of 810 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 810 may be performed by a parameter threshold component as described with reference to FIG. 6.[0140] At 815, the memory device may communicate, to a host device, an indication that the parameter satisfies the threshold based on the determining. The operations of 815 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 815 may be performed by an indication communication component as described with reference to FIG. 6.[0141] In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for measuring, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both, determining that the parameter satisfies a threshold based on a comparison of the parameter with the threshold, and
communicating, to a host device, an indication that the parameter satisfies the threshold based on the determining.[0142] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for communicating an indication of a life expectancy of the memory device, where the threshold includes a level of wear of the component that may be associated with the life expectancy of the memory device. In some examples of the method 800 and the apparatus described herein, the threshold level of wear includes a threshold within a guard band of a range of values associated with an end of life of the memory device.[0143] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for communicating an indication of a rate of degradation of the component that may be based on the level of wear of the component, where the threshold includes a threshold rate of degradation of the component. In some examples of the method 800 and the apparatus described herein, communicating the rate of degradation may include operations, features, means, or instructions for communicating one or more bits that indicate the rate of degradation satisfies the threshold, or an amount of use of the component based on the rate of degradation, or both.[0144] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for determining a type of the violation of the operating parameter of the component, the type including one of a non-destructive violation or a destructive violation, and communicating an indication of the type of the violation of the operating parameter, where the threshold includes a threshold violation of the operating parameter for the component. Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for determining a severity of the violation of the operating parameter, and communicating an indication of the severity of the violation of the operating parameter.[0145] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for determining a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof, and communicating an indication of the quantity, the magnitude, or the duration, or any combination thereof.
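Where the method tracks the quantity, magnitude, and duration figures described above, one plausible realization is a small set of counters of the kind the apparatus examples later recite. The struct layout and the choice of microseconds as the duration unit below are assumptions for the sketch, not a disclosed format.

#include <stdint.h>

struct violation_stats {
    uint32_t quantity;        /* number of violations observed */
    uint32_t max_magnitude;   /* worst excursion past the operating limit */
    uint64_t duration_us;     /* cumulative time spent in violation */
};

/* Record one observed violation of an operating parameter. */
static void record_violation(struct violation_stats *s,
                             uint32_t magnitude, uint64_t duration_us)
{
    s->quantity++;
    if (magnitude > s->max_magnitude)
        s->max_magnitude = magnitude;
    s->duration_us += duration_us;
}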
[0146] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for determining a life expectancy of the memory device based on the parameter satisfying the threshold violation of the operating parameter, where the indication that the parameter satisfies the threshold includes an indication of the life expectancy. In some examples of the method 800 and the apparatus described herein, the threshold violation of the operating parameter includes a threshold within a guard band of the violation of the operating parameter. In some examples of the method 800 and the apparatus described herein, the non-destructive type of violation may be associated with an error rate of the memory device, and where the destructive type of violation may be associated with an increase in a degradation of the component.[0147] Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for communicating an indication of one or more suggested actions for operating the memory device based on determining that the parameter satisfies the threshold. Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for receiving, from the host device, an indication for operating the memory device based on communicating the indication that the parameter satisfies the threshold. Some examples of the method 800 and the apparatus described herein may further include operations, features, means, or instructions for adjusting, based on receiving the indication for operating the memory device, one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof.[0148] FIG. 9 shows a flowchart illustrating a method or methods 900 that supports life expectancy monitoring for memory devices in accordance with examples as disclosed herein. The operations of method 900 may be implemented by a host device or its components as described herein. For example, the operations of method 900 may be performed by a host device as described with reference to FIG. 7. In some examples, a host device may execute a set of instructions to control the functional elements of the host device to perform the described functions. Additionally or alternatively, a host device may perform aspects of the described functions using special-purpose hardware.[0149] At 905, the host device may receive, from a memory device, an indication that a first parameter associated with a component of the memory device has satisfied a threshold,
the first parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both. The operations of 905 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 905 may be performed by an indication reception component as described with reference to FIG. 7.[0150] At 910, the host device may determine a second parameter for operating the memory device based on receiving the indication that the first parameter has satisfied the threshold. The operations of 910 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 910 may be performed by an operation determination component as described with reference to FIG. 7.[0151] At 915, the host device may communicate, to the memory device, an indication for operating the memory device based on the determining. The operations of 915 may be performed according to the methods described with reference to FIGs. 4 and 5. In some examples, aspects of the operations of 915 may be performed by an operation communication component as described with reference to FIG. 7.[0152] In some examples, an apparatus as described herein may perform a method or methods, such as the method 900. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, from a memory device, an indication that a first parameter associated with a component of the memory device has satisfied a threshold, the first parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both, determining a second parameter for operating the memory device based on receiving the indication that the first parameter has satisfied the threshold, and communicating, to the memory device, an indication for operating the memory device based on the determining.[0153] Some examples of the method 900 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of a life expectancy of the memory device, where the threshold includes a level of wear of the component that may be associated with the life expectancy of the memory device. Some examples of the method 900 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of a rate of degradation
of the component that may be based on the level of wear of the component, where the threshold includes a threshold rate of degradation of the component.[0154] Some examples of the method 900 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of a type of the violation of the operating parameter of the component, the type including one of a nondestructive violation or a destructive violation, where the threshold includes a threshold violation of the operating parameter for the component.[0155] Some examples of the method 900 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of a severity associated with the violation of the operating parameter, a quantity of violations associated with the violation of the operating parameter, a magnitude associated with the violation of the operating parameter, or a duration associated with the violation of the operating parameter, or any combination thereof. Some examples of the method 900 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of one or more suggested actions for operating the memory device.[0156] In some examples of the method 900 and the apparatus described herein, determining the second parameter for operating the memory device may include operations, features, means, or instructions for determining to adjust one or more parameters associated with a temperature of the memory device, a refresh rate of the memory device, a voltage level of the memory device, or any combination thereof, the indication for operating the memory device indicative of the one or more parameters.[0157] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined.[0158] An apparatus is described. The apparatus may include a memory array including a set of memory cells and circuitry coupled with the memory array and operable to measure, at a memory device, a parameter associated with a component of the memory device, the parameter associated with a level of wear of the component, a violation of an operating parameter of the component, or both, determine that the parameter satisfies a threshold based on a comparison of the parameter with the threshold, and communicate, to a host device, an indication that the parameter satisfies the threshold based on the determining.
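On the host side, the adjustment step of method 900 might look like the following sketch. The specific policy (halving the refresh interval on a destructive violation, trimming the supply by 25 mV, lowering the temperature target) is invented for illustration; the disclosure says only that such parameters may be adjusted.

#include <stdbool.h>
#include <stdint.h>

struct operating_params {
    uint32_t refresh_interval_us;  /* smaller interval = higher refresh rate */
    uint32_t voltage_mv;
    int32_t  temperature_target_c;
};

/* destructive models the "destructive violation" type of the indication. */
static void host_handle_indication(bool destructive, struct operating_params *p)
{
    if (destructive) {
        p->refresh_interval_us /= 2;       /* refresh twice as often */
        if (p->voltage_mv > 25)
            p->voltage_mv -= 25;           /* ease electrical stress */
        p->temperature_target_c -= 5;      /* request more cooling */
    } else {
        /* non-destructive: a modest refresh-rate bump is assumed sufficient */
        p->refresh_interval_us -= p->refresh_interval_us / 10;
    }
}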
[0159] Some examples of the circuitry may further be operable to communicate an indication of a life expectancy of the apparatus, where the threshold includes a level of wear of the component that may be associated with the life expectancy of the apparatus. Some examples of the circuitry may further be operable to communicate an indication of a rate of degradation of the component that may be based on the level of wear of the component, where the threshold includes a threshold rate of degradation of the component.[0160] Some examples of the circuitry may further be operable to determine a type of the violation of the operating parameter of the component, the type including one of a nondestructive violation or a destructive violation, and communicate an indication of the type of the violation of the operating parameter, where the threshold includes a threshold violation of the operating parameter for the component. Some examples of the apparatus may include one or more counters configured to determine a quantity of violations associated with the violation of the operating parameter or a duration associated with the violation of the operating parameter, or both, the indication including an indication of the quantity or the duration, or both.[0161] Some examples of the circuitry may further be operable to receive, from the host device, an indication for operating the apparatus based on communicating the indication that the parameter satisfies the threshold.[0162] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0163] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that
are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0164] The term “coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0165] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0166] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0167] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.[0168] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0169] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0170] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.[0171] For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0172] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0173] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-
purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0174] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
A method and apparatus for transmitting network traffic includes selecting a major node in a major ring, where the major node corresponds to a first transmission opportunity encoded in the major ring. The major node specifies a minor node in a minor ring representing a virtual port. The method and apparatus also includes transmitting network traffic to a virtual connection that uses the virtual port. Alternatively, transmitting network traffic involves processing a schedule that includes a sequence of transmission opportunities encoded in a schedule ring and satisfying a minimum data rate for a scheduled virtual connection by processing a corresponding first minimum number of transmission opportunities from the schedule, each such transmission opportunity allocated by a schedule node to the scheduled virtual connection, where the schedule node is included in the schedule ring. |
WHAT IS CLAIMED IS:

1. A machine-based method for transmitting network traffic, including: selecting a virtual port for transmission according to a sequence which allocates a plurality of transmission opportunities to a plurality of virtual ports including the virtual port; selecting a virtual connection from a plurality of virtual connections which use the virtual port; and transmitting to the virtual connection during an allocated transmission opportunity in the plurality of transmission opportunities.

2. The method of claim 1, wherein the sequence is encoded in a first array.

3. The method of claim 2, wherein the first array includes major nodes, a second array of minor nodes represents the virtual port, and a major node in the first array specifies a minor node in the second array.

4. The method of claim 3, wherein selecting a virtual connection includes: selecting a scheduled virtual connection specified by the minor node if the scheduled virtual connection is ready for transmission; and otherwise selecting an unscheduled virtual connection from the plurality of virtual connections.

5. The method of claim 4, wherein selecting the unscheduled virtual connection selects a virtual connection corresponding to network traffic in a priority queue that has a most urgent relative priority among a plurality of priority queues.

6. The method of claim 5, wherein the priority queue is selected according to emptiness indicators corresponding to the plurality of priority queues.

7. The method of claim 1, further comprising: performing an iteration of the plurality of transmission opportunities according to the sequence, to include attempting the processes of selecting the virtual port, selecting the virtual connection, and transmitting to the virtual connection for each transmission opportunity in the plurality of transmission opportunities.

8. The method of claim 7, further comprising: repeating the iteration of the plurality of transmission opportunities according to the sequence.

9. The method of claim 8, wherein the sequence has a modification indicator, and wherein repeating the iteration includes replacing the sequence with a new sequence before repeating the iteration, when the modification indicator so indicates.

10. A machine-based method for transmitting network traffic, including: processing a primary sequence which allocates a plurality of transmission opportunities to a secondary sequence, where the secondary sequence includes a plurality of references to a virtual connection that has a data rate specification which includes a minimum data rate; and satisfying the minimum data rate by transmitting to the virtual connection a corresponding minimum number of times according to the plurality of references.

11. The method of claim 10, wherein the primary sequence is encoded in a first array.

12. The method of claim 11, wherein the secondary sequence is encoded in a second array, and the first array includes a node specifying a position in the second array.

13. The method of claim 12, wherein the secondary sequence corresponds to a virtual port used by the virtual connection.
14. A machine-based method for transmitting network traffic, including: processing a primary sequence which allocates a plurality of transmission opportunities to a secondary sequence, where the secondary sequence represents a virtual port that has a data rate; and satisfying the data rate by transmitting to the virtual port a corresponding number of times according to the primary sequence.

15. The method of claim 14, wherein the primary sequence is encoded in a first array.

16. The method of claim 15, wherein the secondary sequence is encoded in a second array, and the first array includes a node specifying a position in the second array.

17. A machine-based method for transmitting network traffic, including: selecting a virtual connection for transmission according to a schedule sequence which allocates a plurality of transmission opportunities to a plurality of scheduled virtual connections and to a secondary sequence, where the secondary sequence allocates the plurality of transmission opportunities to a plurality of virtual ports; and transmitting to the virtual connection.

18. The method of claim 17, wherein selecting includes a stepping process that processes a node in the schedule sequence, where the node specifies a scheduled virtual connection in the plurality of scheduled virtual connections and specifies a secondary node in the secondary sequence, where the secondary node specifies a virtual port in the plurality of virtual ports.

19. The method of claim 18, wherein the stepping process includes, if the scheduled virtual connection is ready for transmission, selecting the scheduled virtual connection to be the selected virtual connection, and otherwise selecting a port virtual connection to be the selected virtual connection, where the port virtual connection uses an available virtual port.

20. The method of claim 19, wherein the scheduled virtual connection is a must-send virtual connection.

21. The method of claim 20, wherein the node also specifies a could-send virtual connection, and selecting the scheduled virtual connection to be the selected virtual connection includes selecting the must-send virtual connection in preference to the could-send virtual connection if the must-send virtual connection is ready for transmission.

22. The method of claim 19, wherein selecting the port virtual connection to be the selected virtual connection includes a port stepping process that, if the virtual port is not ready for transmission, steps through the secondary sequence to find a first available virtual port subsequent to the virtual port to use as the available virtual port.

23. The method of claim 19, wherein selecting the port virtual connection selects an unscheduled virtual connection from a plurality of unscheduled virtual connections that use the available virtual port.

24. The method of claim 23, wherein the unscheduled virtual connection corresponds to network traffic in a priority queue that has a most urgent relative priority among a plurality of priority queues that use the available virtual port.

25. The method of claim 24, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a new-data queue over the unscheduled virtual connection.
26. The method of claim 25, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a first-chance queue over the virtual connection associated with data from the new-data queue.

27. The method of claim 25, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a first-chance queue over the unscheduled virtual connection.

28. The method of claim 18, wherein the stepping process includes placing new network traffic on an associated new-data queue for an associated virtual port if the new network traffic arrives for the scheduled virtual connection at a time when the scheduled virtual connection does not have network traffic, where the scheduled virtual connection uses the associated virtual port.

29. The method of claim 28, when an emptiness indicator indicates that the scheduled virtual connection does not have existing network traffic.

30. The method of claim 29, wherein the emptiness indicator is a bit in a bit vector.

31. The method of claim 18, wherein the stepping process includes placing network traffic on an associated first-chance queue for an associated virtual port if the scheduled virtual connection has network traffic for transmission but the associated virtual port is not ready for transmission, where the scheduled virtual connection uses the associated virtual port.

32. The method of claim 17, wherein the schedule sequence is encoded in a first array.

33. The method of claim 32, wherein the secondary sequence is encoded in a second array, and the first array includes a first node specifying a second node in the second array.

34. The method of claim 33, wherein the second node specifies an entry in a table of virtual ports.

35. An article comprising a machine-readable storage medium that stores executable instructions to transmit network traffic, the instructions causing a machine to: select a virtual port for transmission according to a sequence which allocates a plurality of transmission opportunities to a plurality of virtual ports including the virtual port; select a virtual connection from a plurality of virtual connections which use the virtual port; and transmit to the virtual connection during an allocated transmission opportunity in the plurality of transmission opportunities.

36. The article of claim 35, wherein the sequence is encoded in a first array.

37. The article of claim 36, wherein the first array includes major nodes, a second array of minor nodes represents the virtual port, and a major node in the first array specifies a minor node in the second array.

38. The article of claim 37, wherein the instructions causing the machine to select the virtual connection include instructions causing the machine to: select a scheduled virtual connection specified by the minor node if the scheduled virtual connection is ready for transmission, and otherwise select an unscheduled virtual connection from the plurality of virtual connections.

39. The article of claim 38, wherein the selection of the unscheduled virtual connection selects a virtual connection corresponding to network traffic in a priority queue that has a most urgent relative priority among a plurality of priority queues.

40. The article of claim 39, wherein the priority queue is selected according to a vector of emptiness indicators corresponding to the plurality of priority queues.
41. The article of claim 35, further comprising instructions causing the machine to: perform an iteration of the plurality of transmission opportunities according to the sequence, to include attempting the processes of selecting the virtual port, selecting the virtual connection, and transmitting to the virtual connection for each transmission opportunity in the plurality of transmission opportunities.

42. The article of claim 41, further comprising instructions causing the machine to: repeat the iteration of the plurality of transmission opportunities according to the sequence.

43. The article of claim 42, wherein the sequence has a modification indicator, and wherein repeating the iteration includes replacing the sequence with a new sequence before repeating the iteration, when the modification indicator so indicates.

44. An article comprising a machine-readable storage medium that stores executable instructions to transmit network traffic, the instructions causing a machine to: process a primary sequence which allocates a plurality of transmission opportunities to a secondary sequence, where the secondary sequence includes a plurality of references to a virtual connection that has a data rate specification which includes a minimum data rate; and satisfy the minimum data rate by transmitting to the virtual connection a corresponding minimum number of times according to the plurality of references.

45. The article of claim 44, wherein the primary sequence is encoded in a first array.

46. The article of claim 45, wherein the secondary sequence is encoded in a second array, and the first array includes a node specifying a position in the second array.

47. The article of claim 46, wherein the secondary sequence corresponds to a virtual port used by the virtual connection.

48. An article comprising a machine-readable storage medium that stores executable instructions to transmit network traffic, the instructions causing a machine to: process a primary sequence which allocates a plurality of transmission opportunities to a secondary sequence, where the secondary sequence represents a virtual port that has a data rate; and satisfy the data rate by transmitting to the virtual port a corresponding number of times according to the primary sequence.

49. The article of claim 48, wherein the primary sequence is encoded in a first array.

50. The article of claim 49, wherein the secondary sequence is encoded in a second array, and the first array includes a node specifying a position in the second array.

51. An article comprising a machine-readable storage medium that stores executable instructions to transmit network traffic, the instructions causing a machine to: select a virtual connection for transmission according to a schedule sequence which allocates a plurality of transmission opportunities to a plurality of scheduled virtual connections and to a secondary sequence, where the secondary sequence allocates the plurality of transmission opportunities to a plurality of virtual ports; and transmit to the virtual connection.
52. The article of claim 51, wherein the instructions causing the machine to select the virtual connection include a stepping process that processes a node in the schedule sequence, where the node specifies a scheduled virtual connection in the plurality of scheduled virtual connections and specifies a secondary node in the secondary sequence, where the secondary node specifies a virtual port in the plurality of virtual ports.

53. The article of claim 52, wherein the stepping process includes, if the scheduled virtual connection is ready for transmission, selecting the scheduled virtual connection to be the selected virtual connection, and otherwise selecting a port virtual connection that uses an available virtual port.

54. The article of claim 53, wherein the scheduled virtual connection is a must-send virtual connection.

55. The article of claim 54, wherein the node also specifies a could-send virtual connection, and selecting the scheduled virtual connection to be the selected virtual connection includes selecting the must-send virtual connection in preference to the could-send virtual connection if the must-send virtual connection is ready for transmission.

56. The article of claim 53, wherein selecting the port virtual connection to be the selected virtual connection includes a port stepping process that, if the virtual port is not ready for transmission, steps through the secondary sequence to find a first available virtual port subsequent to the virtual port to use as the available virtual port.

57. The article of claim 53, wherein selecting the port virtual connection selects an unscheduled virtual connection from a plurality of unscheduled virtual connections that use the available virtual port.

58. The article of claim 57, wherein the unscheduled virtual connection corresponds to network traffic in a priority queue that has a most urgent relative priority among a plurality of priority queues that use the available virtual port.

59. The article of claim 58, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a new-data queue over the unscheduled virtual connection.

60. The article of claim 59, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a first-chance queue over the virtual connection associated with data from the new-data queue.

61. The article of claim 59, wherein selecting the port virtual connection includes preferring a virtual connection associated with data from a first-chance queue over the unscheduled virtual connection.

62. The article of claim 52, wherein the stepping process includes placing new network traffic on an associated new-data queue for an associated virtual port if the new network traffic arrives for the scheduled virtual connection at a time when the scheduled virtual connection does not have network traffic, where the scheduled virtual connection uses the associated virtual port.

63. The article of claim 52, when an emptiness indicator indicates that the scheduled virtual connection does not have existing network traffic.

64. The article of claim 63, wherein the emptiness indicator is a bit in a bit vector.
65. The article of claim 52, wherein the stepping process includes placing network traffic on an associated first-chance queue for an associated virtual port if the scheduled virtual connection has network traffic for transmission but the associated virtual port is not ready for transmission, where the scheduled virtual connection uses the associated virtual port.

66. The article of claim 51, wherein the schedule sequence is encoded in a first array.

67. The article of claim 66, wherein the secondary sequence is encoded in a second array, and the first array includes a first node specifying a second node in the second array.

68. The article of claim 67, wherein the second node specifies an entry in a table of virtual ports. |
DSL Transmit Traffic Shaper Structure and Procedure

TECHNICAL FIELD

This relates to networking, and more particularly to traffic management and controlling packet rates for transmission over many connections from a packet source or packet-forwarding device.

BACKGROUND

Digital Subscriber Line (DSL) service is a network communication protocol. DSL supports fixed bit rates at which packets may be sent over a network. In one common configuration, a customer contracts to receive DSL service from a service provider. On the service provider side, a DSL port connects to a DSL Access Multiplexer (DSLAM), which connects to a router. On the customer side, another DSL port interfaces to a modem that connects to customer premises equipment (CPE). An ATM network connects the service provider-side router and the CPE. Many ports may be aggregated in the network system and connected to the router with a single physical port interface.

For each port there may be many virtual connections. These represent stateful communication setups such as an ATM virtual circuit or Internet TCP connection. At each end of the virtual connection is a software application that can send and receive messages. The messages are carried across the network as packets or frames subdivided into 48-byte ATM cells. The interface in and out of the forwarding device is either 48-byte ATM cells or 64-byte frame segments.

Each virtual connection has a quality of service or rate specification. The ATM Forum Traffic Management Specification version 4.1, AF-TM-0121.000, published March 1999, specifies types of rates, including constant bit rate (CBR), variable bit rate (VBR), and unspecified bit rate (UBR). Variable bit rates can be contracted with a minimum cell rate (MCR), a sustained cell rate (SCR), a peak cell rate (PCR), or a combination of these. Additionally, some VBR virtual connections can be designated real-time (abbreviated as "rt-VBR"), which, among other things, can affect the virtual connections' tolerance of errors or delays in the communication channel. In particular, the tolerance of delay may affect how (or for how long) data for a real-time VBR virtual connection should be queued. Non-real-time VBR is abbreviated "nrt-VBR". A UBR virtual connection can have a priority categorization relative to other UBR traffic.

Ports can have peak data rates, describing the maximum rates at which they are capable of transmitting, typically in bits per second. Maximum burst size (MBS) is a parameter specific to a given network protocol and a given implementation. MBS describes the maximum number of cells that may be transmitted continuously from a port over a network link.

DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of functional units in a router/traffic shaper.
FIG. 2 is a block diagram of physical elements in a router/traffic shaper.
FIG. 3 shows the movement of a traffic cell in a router/traffic shaper.
FIG. 4 is a block diagram of a virtual connection table.
FIG. 5 illustrates a major ring and minor rings.
FIG. 6 is a block diagram of major and minor ring data structures.
FIG. 7 is a flowchart of a major node stepping process.
FIG. 8 is a flowchart of a queue selection process.
FIG. 9 is a block diagram of ring leader data structures and processes.
FIG. 10 illustrates a schedule ring and port rings.
FIG. 11 is a block diagram of service grades.
FIG. 12 is a block diagram of schedule and port ring data structures.
FIG. 13 is a flowchart of a shaping process.
FIG. 14 is a flowchart of a schedule ring stepping process.
FIG. 15 is a flowchart of a port ring stepping process.

DETAILED DESCRIPTION

The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

Referring to FIG. 1, a networked system 10 includes a router/traffic shaper 12 connected to a network 14. Router/traffic shaper 12 uses a procedure and structures to transmit cells or segments to the satisfaction of virtual connection rates and virtual port rates. The structures include a major ring and a minor ring denoted in FIG. 1 as rings 20 and 22, respectively, in rings 18. Network 14 supports DSL network traffic.

Router/traffic shaper 12 includes multiple processors 16. Ring leader processor 16a sets up major rings 20 and minor rings 22 as data structures that govern transmit timing of traffic departing router/traffic shaper 12 via network interface 24. Nodes of major ring 20 represent time slots on a transmit processor 16c organized in a sequence. The time slots are approximately equal in size, as measured in processor cycles of transmit processor 16c. Collectively, the nodes of major ring 20 represent a sequence of time slots in a transmission cycle of router/traffic shaper 12. Major ring 20 apportions these time slots to virtual ports 26a-n by associating major nodes with minor rings 22, since each minor ring 22 is uniquely associated with a virtual port 26. Each minor ring 22 has its own sequence of minor nodes, each of which can be associated with a scheduled virtual connection 28. Each minor ring 22 also manages all unscheduled virtual connections 28 associated with the relevant virtual port 26. Conceptually, therefore, a major ring 20 is a schedule of service to virtual ports 26, while a minor ring 22 is a schedule of service to virtual connections 28 within a given virtual port 26. Overall, major ring 20 and multiple minor rings 22 encode a schedule of service to virtual connections 28 belonging to multiple virtual ports 26 on router/traffic shaper 12.

Service rates for virtual connections 28 can be guaranteed by the encoding of major ring 20 and minor rings 22. Service to a virtual connection 28 is scheduled by allocating nodes of major ring 20 and minor ring 22. Specifically, virtual connection 28 is scheduled into virtual port 26 via a node of minor ring 22, and minor ring 22 is scheduled into major ring 20. Sufficient major nodes are allocated, with sufficiently regular spacing within major ring 20, to ensure that the service rate for virtual connection 28 is satisfied in terms of throughput and regularity.

A given minor ring 22 can be associated with more than one major node. Indeed, each major node with which minor ring 22 is associated increases the number of time slots allocated to minor ring 22, and therefore to its associated virtual port 26. Therefore, increasing the number of time slots allocated to minor ring 22 increases the rate at which data is transmitted to the associated virtual port 26.

Traffic includes packet cells 36a or segments 36b appropriate to a network protocol of network 14: commonly, 48-byte ATM cells or 64-byte frame segments. For simplicity, packet cells or segments 36 will be referred to as simply "cells" 36.

Multiple major rings 20 can co-exist.
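As a concrete (but non-authoritative) picture of this encoding, the arrays might be declared as in the C sketch below. Only the facts that both rings are arrays, that a major node names a minor ring (or nothing), and that a minor ring belongs to exactly one virtual port come from the description; the field names, widths, and ring sizes are assumptions made for illustration.

#include <stdint.h>

#define MAJOR_RING_NODES 256
#define MINOR_RING_NODES 32
#define SKIP_NODE        0xFFFFu      /* major node with no minor ring */

struct minor_node {
    uint16_t vc_index;                /* scheduled virtual connection, if any */
};

struct minor_ring {                   /* uniquely associated with one virtual port */
    uint16_t port;                    /* the associated virtual port */
    uint16_t cursor;                  /* next minor node to service */
    struct minor_node nodes[MINOR_RING_NODES];
};

struct major_node {
    uint16_t minor_ring;              /* index of a minor ring, or SKIP_NODE */
};

/* One major node per time slot in the transmission cycle. A minor ring
 * entered in k major nodes receives k slots per cycle, so a virtual port
 * rated at twice the step rate appears in two major nodes. */
struct major_ring {
    struct major_node nodes[MAJOR_RING_NODES];
    uint32_t step_rate_cycles;        /* processor cycles per time slot */
    uint32_t adjustment_cycles;       /* idle cycles appended per iteration */
};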
Virtual ports 26 partition virtual connections 28. Major rings 20 partition virtual ports 26.

ROUTER/TRAFFIC SHAPER

Referring to FIG. 2, router/traffic shaper 12 includes one or more processors 16. Processors 16 run threads that perform functions of ring leader processor 16a, receive processor 16b, and transmit processor 16c. Processor 16 can run more than one thread. Ring leader processor 16a runs a thread that manages major rings 20 and minor rings 22 stored in main memory 40. Receive processor 16b runs a thread that receives cells 36 from the network 14. Transmit processor 16c runs a thread that transmits cells 36 back onto the network 14. Network interface 24 contains physical ports 24a to which transmission media are attached, carrying communication between network interface 24 and network 14. Bus 42 interconnects processors 16, main memory 40, and network interface 24.

Virtual ports 26 can be any ports in network 14, whether remote or local to network interface 24. For instance, referring to FIG. 1, virtual ports 26a-n are ports on network interface 24, while ports 26x-z are ports on a network device 13.

Referring to FIG. 1, router/traffic shaper 12 transmits to one or more virtual connections 28, such as virtual connections 28a-d. In the example illustrated in FIG. 1, virtual connections 28a-c are virtual circuits connecting to customer premises equipment 34, while virtual connection 28d is an Internet TCP connection.

PROCESSORS

Referring to FIG. 3, receive processors 16b receive cells 36 from network 14. Each cell 36 is associated with virtual connection 28 in virtual port 26. In the example of FIG. 3, virtual port 26 is associated with physical port 24a in network interface 24. However, in general, virtual port 26 may conceptually represent a physical port not in network interface 24 of the local router/traffic shaper 12, but on a remote device. In this case, virtual port 26 is associated with physical port 24a in network interface 24 to the degree that traffic passes through physical port 24a en route to the remote port represented by virtual port 26.

Incoming cells 36 arrive in receive buffer 44 in main memory 40. Receive processors 16b validate cells 36 from receive buffer 44 and stage them in port queue 46 in main memory 40, pending transmission by transmit processor 16c. Receive processors 16b also perform lookups such as routing table lookups and associating incoming cell 36 with a destination virtual connection 28, which is the particular virtual connection 28 on which cell 36 will be transmitted. Destination virtual connection 28 is associated with a destination virtual port 26. Each virtual port 26 has an affiliated port queue 46. Receive processors 16b also perform classifications such as determining a data rate associated with the destination virtual connection 28.

Transmit processors 16c dequeue cells 36 from port queue 46. A transmit processor 16c performs a traffic shaping process 66 (shown in FIG. 6) to transmit cells 36 at specified bit rates appropriate to their destination virtual connections 28, as will be explained.

An example of a commercially available processor 16 is the IXP1200 Network Processor, which includes several, for example six, microengines. Each microengine executes machine-readable instructions and supports up to four simultaneous threads. The IXP1200 is manufactured by Intel Corporation.

VIRTUAL CIRCUITS AND VIRTUAL PORTS
4, a virtual connection 28 represents a stateful communication setup, such as an ATM virtual circuit or Internet TCP connection. VC table 50 is a table of virtual connections 28. VC table 50 includes VC entries 52. A VC entry 52 contains information for a given virtual connection 28, including VC index 52a, type 52b, MBS 52c, rate 52d, PCR 52e, port 52f, current burst count 52g, current rate 52h, and queue reference 52i. Virtual connection 28 has a quality of service or rate specification, or rate 52d. Virtual connection 28 also has VC index 52a, which gives the position of virtual connection 28 in VC table 50. Type 52b specifies a type of service rate for virtual connection 28, such as CBR, VBR, or UBR. Port 52f specifies a virtual port 26 that virtual connection 28 uses. Current burst count 52g and current rate 52h are dynamic properties of virtual connection 28 that are determined by the transmission of data onto virtual connection 28. Transmit processors 16c maintain the values of current burst count 52g and current rate 52h. Current burst count 52g and current rate 52h can be used to determine whether the current state of virtual connection 28 is within defined traffic parameters, for instance MBS 52c and rate 52d, respectively. Queue reference 52i gives an offset into VC bit vector 54 for virtual connection 28.

Virtual ports 26 have specified data rates that can be measured and constrained. The actual rate of service to virtual port 26 in the present embodiment is a function of the number of time slots allocated to it: more time slots yield a higher rate. Other factors affecting rate include the size of the time slot (in processor cycles, i.e., step rate 70b of major ring 20, shown in FIG. 6) and the amount of data transmit processor 16c can transmit in a cycle. One constraint consideration when configuring virtual port 26 is the rates of virtual connections 28 allocated to virtual port 26. Allocation is limited such that the sum of the minimum service rates for virtual connections 28 does not exceed the desired rate of virtual port 26. This ensures that all virtual connections 28 on virtual port 26 can be serviced to their minimum rates. All virtual port 26 rates associated with major ring 20 are multiples of step rate 70b of major ring 20.

VC bit vector 54 contains bits corresponding to VC connection queues 56 and VC queue heads 56a, as will be explained. Each VC connection queue 56 has a VC queue head 56a that anchors the queue and persists even when the queue is empty.
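By way of illustration only, the VC entry 52 layout described above might be rendered in C roughly as follows. This is a minimal sketch: the type widths and field names are assumptions, not part of the embodiment.

    #include <stdint.h>

    /* Hypothetical service-rate types for field 52b. */
    enum vc_type { VC_CBR, VC_VBR, VC_UBR };

    /* Sketch of one VC entry 52 in VC table 50. */
    struct vc_entry {
        uint32_t     vc_index;   /* 52a: position in VC table 50        */
        enum vc_type type;       /* 52b: CBR, VBR, or UBR               */
        uint32_t     mbs;        /* 52c: maximum burst size             */
        uint32_t     rate;       /* 52d: contracted service rate        */
        uint32_t     pcr;        /* 52e: peak cell rate                 */
        uint32_t     port;       /* 52f: virtual port 26 used           */
        uint32_t     cur_burst;  /* 52g: dynamic, kept by transmit proc */
        uint32_t     cur_rate;   /* 52h: dynamic, kept by transmit proc */
        uint32_t     queue_ref;  /* 52i: offset into VC bit vector 54   */
    };

    /* Policing check sketch: is the current state of the connection
     * within its defined traffic parameters? */
    static int vc_within_contract(const struct vc_entry *vc)
    {
        return vc->cur_burst <= vc->mbs && vc->cur_rate <= vc->rate;
    }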
RINGS

Referring now to FIG. 5, a major ring 20 is affiliated with multiple minor rings 22. There can be multiple major rings 20, each governing transmission to a subset of the total virtual ports 26. For example, if there are two thousand forty-eight (2048, or 2^11) virtual ports 26, eight major rings 20 could each be configured to represent two hundred fifty-six virtual ports 26. Major ring 20 and minor ring 22 are shown in FIG. 5 as circular structures to indicate their conceptual ring structure, i.e., iterations begun at the head of each ring will typically proceed to the end and then wrap around to the head again when the previous iteration is complete. Major ring 20 and minor ring 22 are each stored in memory 40 as an array. Major ring 20 has a base 58 at which ring 20 begins in memory 40. Similarly, minor rings 22a and 22b have bases 59a and 59b, respectively. Major ring 20 includes a sequence of major nodes 60. Minor ring 22 includes a sequence of minor nodes 62.

MAJOR RINGS

Major ring 20 is a data structure representing time slots scheduled on transmit processor 16c. Major ring 20 includes a sequence of major nodes 60. The sequence indicates the scheduled order of the transmission opportunities for minor rings 22. Traffic shaping process 66, as will be explained in more detail, cycles over the sequence of nodes 60 repeatedly to select virtual port 26, and more specifically virtual connection 28 within virtual port 26, to receive cells 36 for transmission. Thus, nodes 60 in major ring 20 encode a schedule of transmissions to virtual ports 26 and virtual connections 28.

Node 60 in major ring 20 represents a time slot in which traffic can be transmitted. Step rate 70b, as shown in FIG. 6 in the ring control block 70, also known as a base rate, measures the duration of a time slot of major ring 20 in terms of processor cycles. Step rate 70b specifies the interval, in terms of processor cycles, that should occur between transmissions. When virtual port 26 has a transmission rate approximately equal to step rate 70b, associated minor ring 22 is entered once in major ring 20. (That is, in this case associated minor ring 22 corresponds to a single node 60.) When virtual port 26 has a rate twice step rate 70b, virtual port 26 is entered in two nodes 60, and so forth. All virtual port 26 rates associated with a given major ring 20 are approximately multiples of step rate 70b.

There is a benefit to spacing the nodes 60 that reference a minor ring 22 such that the nodes 60 are widespread throughout major ring 20. If references to a given minor ring 22 are not widespread but are bunched, then a region tightly enclosing the bunched references represents a period of time in which the minor ring 22 has a disproportionate amount of its opportunity for service, while the rest of major ring 20 has a disproportionately small amount of such opportunity. If it should happen that the virtual port 26 associated with the minor ring 22 is blocked during that period of service, then the reduced opportunity for service means that virtual port 26 has fewer chances later to recover from a temporary blockage within the current cycle of major ring 20.

Node 60 can be without reference to any minor ring 22; in this case it is called a "skip node". Adding gaps to a transmission schedule is also known as "port conserving".

Referring to FIG. 6, major ring 20 has ring control block 70. Ring control block 70 is a data structure that includes base address 70a, step rate 70b, and adjustment 70c. Base address 70a is the address in main memory 40 at which the data structure for major ring 20 begins. Adjustment 70c contains a number of processor cycles to wait before beginning a next iteration of major ring 20. Adjustment 70c therefore allows the period of the cyclic iteration of major ring 20 to be adjusted, so that the period need not depend entirely on the number of nodes 60 in major ring 20.

The speed of major ring 20 is the amount of data it can transmit per unit time, usually bits per second. Speed depends on the size (in nodes) of major ring 20, step rate 70b, adjustment 70c, and the number of bits that transmit processor 16c can transmit per processor cycle. Creating different major rings 20 having different step rates 70b allows virtual connections 28 to be managed at various transmit granularities. A low-speed virtual connection 28 in general does not need as many, or as frequent, time slot opportunities as a higher-speed virtual connection 28.
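As a rough illustration of this rate arithmetic, the number of major nodes 60 a virtual port 26 needs, and an even spacing for them within major ring 20, could be computed as in the following C sketch. All names are hypothetical; the sketch assumes the port rate is at least, and an exact multiple of, the base rate, and it ignores collisions with slots already allocated.

    /* Sketch: allocate evenly spaced time slots on major ring 20.
     * ring_size - number of major nodes 60 in the ring
     * port_rate - desired rate of virtual port 26
     * base_rate - step rate 70b, the rate of a single slot
     */
    static void allocate_slots(unsigned ring_size, unsigned port_rate,
                               unsigned base_rate, unsigned minor_ring_id,
                               unsigned *slot_owner /* ring_size entries */)
    {
        unsigned slots  = port_rate / base_rate;  /* nodes 60 needed     */
        unsigned stride = ring_size / slots;      /* widespread, rather  */
                                                  /* than bunched        */
        for (unsigned i = 0; i < slots; i++)
            slot_owner[i * stride] = minor_ring_id;
    }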
MAJOR NODES

Referring still to FIG. 6, data structures in memory 40 include a ring control block 70, a major ring 20, and a minor ring 22. Traffic shaping process 66 is a method encoded in computer-executable instructions. Traffic shaping process 66 manages transmission of traffic onto virtual connections 28, using the major node stepping process 72 in collaboration with a queue selection process 74. The major node stepping process 72 repeatedly cycles over major nodes 60 in major ring 20. The stepping process 72 examines major nodes 60 to select minor rings 22, along with particular locations within minor ring 22 given by major nodes 60, for consideration by the queue selection process 74. The queue selection process 74 allocates a transmission opportunity to a particular virtual connection 28 on virtual port 26, based on service rates. The queue selection process 74 prioritizes virtual connections 28 that have minimum rate requirements above virtual connections 28 that have unspecified rate requirements.

A major node 60 includes fields for skip flag 60a, end flag 60b, v-port 60c, cycle delay 60d, minor node index 60e, and modify 60f. Skip flag 60a is one binary bit. When node 60 is associated with minor ring 22 (as is the case for the first, third, and fourth nodes 60 shown), skip flag 60a is set to zero and node 60 has fields for v-port 60c and minor node index 60e.

V-port 60c specifies virtual port 26 associated with node 60. Specifically, v-port 60c is an index into port table 82. Port table 82 contains entries 84 for each virtual port 26. The value of a given v-port 60c corresponds to the value of v-port index 84a for some entry 84 in port table 82. Entry 84 provides corresponding port queue vector 76 and minor base 59. Specifically, minor base 59 contains the address of minor ring 22 within main memory 40. Port queue pointer 84b provides an offset into port queues 46 that specifies the particular entry of port queue vector 76 to use.

Minor node index 60e gives the position of a specific minor node 62 within minor ring 22. In combination with v-port 60c, minor node index 60e allows major node 60 to reference both a specific minor ring 22 and a specific location (node 62) within minor ring 22. Traffic shaping process 66 can update minor node index 60e. For example, traffic shaping process 66 can increment minor node index 60e to refer to a next minor node 62 after a transmission involving a first minor node 62.

When node 60 is not associated with any minor ring 22 (as with the second node 60 shown, for example), skip flag 60a is set to one, and node 60 has cycle delay 60d. Cycle delay 60d fills unused cycles in the event major ring 20 is not fully populated. Cycle delay 60d causes the transmit thread to delay a number of cycles equal to step rate 70b of major ring 20 before proceeding to the next major node 60.

End flag 60b is one binary bit. The last node 60 in major ring 20 has end flag 60b set to one, indicating the transmit thread should wrap and start at the beginning of major ring 20. If end flag 60b equals one and modify 60f equals one, the transmit thread follows major ring reload process 98g, shown in FIG. 9, as will be explained. Ultimately, the transmit thread rereads ring control block 70 and examines major ring 20 given by base address 70a. In this manner, major ring 20 can be updated with little overhead to the transmit thread, by directing base address 70a to an updated version of major ring 20 and setting modify 60f to one.
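Purely for illustration, the major node 60 fields just enumerated might be packed in C as follows. The one-bit flags are dictated by the text; the widths of the remaining fields are guesses for the sketch.

    #include <stdint.h>

    /* Sketch of major node 60. Scheduling nodes use v_port and
     * minor_index; skip nodes use cycle_delay instead. */
    struct major_node {
        uint32_t skip        : 1;   /* 60a: 1 = skip node             */
        uint32_t end         : 1;   /* 60b: 1 = last node, wrap       */
        uint32_t modify      : 1;   /* 60f: 1 = reload (process 98g)  */
        uint32_t v_port      : 13;  /* 60c: index into port table 82  */
        uint32_t minor_index : 16;  /* 60e: position in minor ring 22 */
        uint32_t cycle_delay;       /* 60d: used only when skip == 1  */
    };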
MINOR RINGS

Still referring to FIG. 6, minor ring 22 contains data structures describing virtual port 26. Minor ring 22 contains a sequence, stored in memory 40 as a ring array, of nodes 62 representing time slots for transmission opportunities. The sequence indicates the scheduled order of the transmission opportunities for virtual connections 28 associated with virtual port 26. The sequence can be iterated over repeatedly. Node 62 of minor ring 22 is associated with a virtual connection 28 scheduled for transmission, if possible, at the time that node 62 is processed by the transmit thread. Other virtual connections 28 are available for transmission if the scheduled virtual connection 28 is unavailable or has no data awaiting transmission. Thus, broadly speaking, the data structures of minor ring 22 encode a schedule that prioritizes virtual connections 28 on virtual port 26 and allows other, less-prioritized virtual connections 28 to be selected on a stand-by basis.

Nodes 62 of minor ring 22 contain minor size 78 and scheduled VC 80. Minor size 78 is the size of minor ring 22. Scheduled VC 80 contains a value indicating the VC index 52a of the virtual connection 28 associated with node 62. Typically, this virtual connection 28 has a rate 52d that requires it to be serviced at least at a predetermined rate, i.e., a minimum. Virtual connection 28 is scheduled into virtual port 26, and virtual port 26 is scheduled into major ring 20, with sufficient frequency (i.e., sufficient major nodes referencing minor ring 22 associated with virtual port 26, with sufficient spacing within major ring 20) to ensure that the corresponding rate 52d is satisfied. Minor size 78 is stored redundantly on every node 62. Minor size 78 and scheduled VC 80 together fit in thirty-two (32) bits, making only one memory read necessary by the processor.

Minor ring 22 also includes minor ring rate 86, a data structure for storing the effective current rate of the virtual port 26 corresponding to the minor ring 22. Traffic shaping process 66 tests minor ring rate 86 to keep the performance of virtual port 26 within its prescribed v-port speed 97d (shown in FIG. 9).

Port queue vector 76 is a bit vector where each bit position corresponds to port queue 46 in a collection of port queues 46. The collection of port queues 46 has up to sixteen members, each of a different priority. Port queue 46 contains linked lists where each node is associated with virtual connection 28 by a value indicating VC index 52a of virtual connection 28. Virtual connections 28 referenced by port queues 46 have unspecified bit rates for their rate 52d; they are not guaranteed to transmit. If a given virtual connection 28 requires a guaranteed minimum bit rate, it is scheduled via the scheduled VC 80 field of some minor node 62. A bit position in port queue vector 76 is an emptiness indicator for a corresponding port queue 46. If a bit position in port queue vector 76 has a value of one, the corresponding port queue 46 has data awaiting transmission. Otherwise, port queue 46 is empty. When queuing packets for virtual connection 28, receive processors 16b set the corresponding bit in port queue vector 76 to one. When a transmit thread empties port queue 46, the transmit thread sets the corresponding bit to zero.
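Returning to the minor node layout, the observation that minor size 78 and scheduled VC 80 together occupy thirty-two bits, so that a single memory read suffices, can be illustrated with a packing sketch such as the following; the 16/16 split is an assumption.

    #include <stdint.h>

    /* Sketch: one 32-bit minor node 62 = minor size 78 | scheduled VC 80. */
    static inline uint32_t minor_node_pack(uint16_t size78, uint16_t vc80)
    {
        return ((uint32_t)size78 << 16) | vc80;
    }

    static inline uint16_t minor_node_size(uint32_t node)  /* minor size 78 */
    {
        return (uint16_t)(node >> 16);
    }

    static inline uint16_t minor_node_vc(uint32_t node)    /* scheduled VC 80 */
    {
        return (uint16_t)(node & 0xFFFFu);  /* zero: no scheduled VC */
    }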
Also shown in FIG. 6 is a table of VC connection queues 56. These also are linked list queues, each associated with some virtual connection 28. Given VC index 52a, the transmit thread can go to the associated VC connection queue 56 to get packet descriptor information for the packet at the head of the queue.

Referring now to FIG. 7, major node stepping process 72 steps sequentially through major nodes 60. Major node stepping process 72 reads major node 60 (procedure 72a). If skip flag 60a of node 60 has a value equal to one, major node stepping process 72 allows the processor processing its thread the option of processing a different thread until the cycle delay is complete. Major node stepping process 72 then reads the next major node 60, repeating until node 60 is associated with some minor ring 22. Major node stepping process 72 reads minor node 62 (procedure 72b). If modify 60f equals one (procedure 72c), major node stepping process 72 follows major ring reload process 98g, shown in FIG. 9, as will be explained. Ultimately, if modify 60f equals one, major node stepping process 72 reads a new ring control block 70 (procedure 72d) and proceeds to procedure 72f. If modify 60f equals zero (procedure 72c), however, major node stepping process 72 calculates the next minor node index 60e by adding one and wrapping to minor base 59 if the new minor node index 60e is equal to the sum of minor base 59 and minor size 78 (procedure 72e). Major node stepping process 72 then obtains from major node 60 information on virtual port 26 and minor node 62; selects virtual connection 28 for transmission using queue selection process 74 (procedure 72f); and transmits one or more cells 36. From there, major node stepping process 72 repeats, reading another major node 60 (procedure 72a), and so forth.

When major node stepping process 72 reaches the last major node 60 in the sequence of major nodes 60 in major ring 20, major node stepping process 72 returns to the first major node 60. This repetition or looping, which causes major node stepping process 72 to iterate repeatedly over all major nodes 60 in major ring 20, is sometimes called "cycling".

Transmission in procedure 72f is subject to traffic parameters of virtual connection 28, such as MBS 52c and rate 52d (shown in FIG. 4), as well as to the state of virtual port 26, such as v-port speed 97d or a port blockage due to flow control. For instance, to determine whether the current state of virtual connection 28 is within defined traffic parameters, major node stepping process 72 can compare current burst count 52g and current rate 52h to MBS 52c and rate 52d, respectively.

Referring now to FIG. 8, queue selection process 74 operates on minor node 62. If minor node 62 has a scheduled VC 80 value greater than zero (procedure 74a), queue selection process 74 tests virtual connection 28 associated with scheduled VC 80 for data to transmit (procedure 74b) by examining VC connection queues 56. If such data exists, queue selection process 74 selects scheduled VC 80 for transmission (procedure 74c). If the scheduled VC 80 value is zero, however, or if no such data exists, queue selection process 74 performs a priority selection of port queue vector 76 (procedure 74d); that is, queue selection process 74 considers virtual connections 28 having an unspecified bit rate (UBR). The priority selection uses a deficit round-robin algorithm to find the index of port queue 46 with the highest priority, among port queues 46 that have data to transmit (procedure 74e). If queue selection process 74 finds a suitable port queue 46, queue selection process 74 specifies virtual connection 28 associated with port queue 46 for transmission (procedure 74f). Otherwise, queue selection process 74 does not select any virtual connection 28 for transmission (procedure 74g).
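A loose C rendering of queue selection process 74 as just described follows (procedures 74a through 74g). The helper functions are hypothetical stubs, the deficit round-robin bookkeeping is elided to a simple priority scan, and minor_node_vc() is reused from the earlier packing sketch.

    #include <stdint.h>

    /* Hypothetical stubs for the sketch. */
    extern int      vc_has_data(uint16_t vc_index);        /* VC queues 56 */
    extern int      drr_highest_nonempty(uint32_t pqv);    /* 74e: index   */
                                                           /* or -1        */
    extern uint16_t queue_head_vc(int queue_index);        /* VC at head   */

    /* Sketch of queue selection process 74: returns a VC index 52a to
     * transmit on, or -1 when nothing is selected (procedure 74g). */
    static int queue_select(uint32_t minor_node, uint32_t pq_vector)
    {
        uint16_t vc = minor_node_vc(minor_node);     /* scheduled VC 80 */

        if (vc > 0 && vc_has_data(vc))               /* 74a, 74b */
            return vc;                               /* 74c      */

        int q = drr_highest_nonempty(pq_vector);     /* 74d, 74e */
        if (q >= 0)
            return queue_head_vc(q);                 /* 74f      */
        return -1;                                   /* 74g      */
    }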
RING LEADER

Ring leader processor 16a performs administrative and control tasks necessary to the operation of major rings 20 and minor rings 22. Such tasks include initializing, updating, and deleting major rings 20, minor rings 22, and virtual ports 26. Referring to FIG. 9, ring leader processor 16a maintains data structures including initialized pointer 90, ring leader table 91, ring load control block 92, ring table 94, and v-port list 96. Ring leader processor 16a performs processes including initialization process 98a, create ring process 98b, rebalance process 98c, populate process 98d, destroy v-port process 98e, activate ring process 98f, and major ring reload process 98g. Initialized pointer 90 is a global pointer set to either null or the location of the working ring leader table 91.

RING LOAD CONTROL BLOCK

Ring load control block 92 is a data structure that assists in making major rings 20 available for use, by loading them into main memory 40. Ring load control block 92 includes at least one ring load control longword 92a, which is a longword in main memory 40. Ring load control block 92 is located prior to the beginning of the memory range for ring control block 70 (shown in FIG. 6).

Referring to FIG. 9, ring load control longword 92a includes thirty-two (32) bits of four types: DeltaSet bit 92b, primed bit 92c, active bit 92d, and reserved bit 92e. A given ring load control longword 92a contains two bits designated reserved bit 92e. The remaining thirty bits are equally divided among DeltaSet bits 92b, primed bits 92c, and active bits 92d, such that a trio of one each DeltaSet bit 92b, primed bit 92c, and active bit 92d can correspond to one major ring 20. Thus, one ring load control longword 92a supports up to ten major rings 20. Multiple ring load control longwords 92a can be distinguished by the two bits designated reserved bits 92e (allowing a total of forty major rings 20). If more than one ring load control longword 92a is allocated, the first will be used for rings zero through nine, the second longword for rings ten through nineteen, and so on.

Ring leader processor 16a initializes all bits of ring load control longword 92a to a zero value. A thread running on transmit processor 16c and processing major ring 20 uses major ring reload process 98g to reload major ring 20 after initialization or modifications. Major ring reload process 98g allows multiple major rings 20 to be synchronized with other major rings 20, so that updates do not take effect until all microengines using major rings 20 in the synchronized set are prepared to reload.
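The trio-per-ring layout of ring load control longword 92a, and the microengine acknowledgment step described below, can be made concrete with a small sketch. The exact bit ordering is an assumption; the text fixes only the counts (two reserved bits, and ten bits each for DeltaSet, primed, and active).

    #include <stdint.h>

    /* Sketch: ring n (0-9 within one longword) owns one bit in each of
     * three ten-bit groups; the top two bits are reserved (92e). */
    #define DELTASET_BIT(n) (1u << (n))          /* 92b: bits 0-9   */
    #define PRIMED_BIT(n)   (1u << (10 + (n)))   /* 92c: bits 10-19 */
    #define ACTIVE_BIT(n)   (1u << (20 + (n)))   /* 92d: bits 20-29 */

    /* Microengine side: acknowledge membership in a synchronized set
     * and stop processing the ring until it has been reloaded. */
    static inline void ack_synchronized_set(volatile uint32_t *lw, int ring)
    {
        if (*lw & DELTASET_BIT(ring))
            *lw |= PRIMED_BIT(ring);
    }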
Note that this approach also allows the simple case of one major ring 20 being updated, via a synchronized set with one element, without regard to other major rings 20.

When multiple major rings 20 are used simultaneously, each major ring 20 can have its own step rate 70b. In this way, major rings 20 can be allocated varying percentages of the total bandwidth managed by router/shaper 12. Skip nodes 60 or other timing control mechanisms, as will be explained, can account for time not used by a given major ring 20.

For a given major ring 20, major ring reload process 98g uses the corresponding DeltaSet bit 92b to indicate that major ring 20 is implicated in a set of changes to be synchronized with other major rings 20. DeltaSet bit 92b may only be set and cleared by ring leader processor 16a.

Primed bit 92c indicates that the microengine associated with major ring 20 acknowledges that major ring 20 is part of the synchronized set. Primed bit 92c with value equal to one indicates that the microengine will stop processing major ring 20 until major ring 20 has been reloaded. Primed bit 92c may only be set by the relevant microengine and cleared by ring leader processor 16a. The relevant microengine is the one processing the associated major ring 20. The microengine sets primed bit 92c after reading modify 60f with value of one, checking the relevant DeltaSet bit 92b, and entering a wait state pending coordination of the synchronized set.

Active bit 92d indicates that the microengine associated with major ring 20 has reloaded major ring 20 and is now actively processing. Active bit 92d may only be set by the microengine and cleared by ring leader processor 16a.

RING LEADER TABLE

Ring leader table 91 stores global parameters used by ring leader processor 16a. Ring leader table 91 includes fields for max ring count 91a, max system speed 91b, processor core speed 91c, minimum ring speed 91d, maximum ring speed 91e, ring balance threshold 91f, ring load control pointer 91g, and ring table pointer 91h. Max ring count 91a stores the maximum number of major rings 20 that may be created by ring leader processor 16a. Max system speed 91b stores the maximum speed of all rings running under the control of ring leader processor 16a. Processor core speed 91c stores the operation speed of the microengine core, and is used to calculate step rate 70b and cycle delay 60d for major rings 20. Minimum ring speed 91d stores the slowest permissible speed for major rings 20. Minimum ring speed 91d is greater than or equal to 2. Maximum ring speed 91e stores the fastest permissible speed for major rings 20. Maximum ring speed 91e is greater than or equal to minimum ring speed 91d, and less than or equal to processor core speed 91c. Ring balance threshold 91f provides the percentage of a ring to be filled before starting a new ring of the same size. Ring load control pointer 91g stores a pointer to ring load control block 92. Ring table pointer 91h stores a pointer to ring table 94.

RING TABLE

Ring table 94 collects information useful to ring leader processor 16a and specific to individual major rings 20. Each entry in ring table 94 corresponds to one major ring 20 and includes the following fields: changed Boolean 94a, ring speed 94b, port count 94c, forced creation 94d, control block pointer 94e, working list pointer 94f, and ring number 94g.
Changed Boolean 94a is a Boolean value indicating whether the table entry for major ring 20 is being modified. Ring speed 94b stores the operation speed of major ring 20. Port count 94c stores the number of virtual ports 26 in the major ring 20. Forced creation 94d indicates that major ring 20 was explicitly created by the end user application, such as by using create ring process 98b with an explicit flag of one. Control block pointer 94e is a pointer to ring control block 70 for major ring 20. Working list pointer 94f is a pointer to v-port list 96, the linked list used to construct major ring 20. Ring number 94g numbers each entry of ring table 94 and uniquely identifies its corresponding major ring 20 within router/traffic shaper 12.

V-PORT LIST

Ring leader processor 16a uses v-port list 96 to construct major ring 20. V-port list 96 is a linked list of v-port nodes 97, each referencing one virtual port 26. Each v-port node 97 includes the following fields: v-port index pointer 97a, minor ring index pointer 97b, current delay offset 97c, v-port speed 97d, prior v-port pointer 97e, and next v-port pointer 97f. V-port index pointer 97a stores an index into port table 82. Minor ring index pointer 97b stores a value specifying virtual connection 28 to use as scheduled VC 80. Current delay offset 97c stores the amount of time to delay before allowing data to go out to virtual port 26, based on the last transmit. V-port speed 97d stores the speed at which virtual port 26 operates. V-port speed 97d can be used to calculate the number of nodes 62 needed in major ring 20 to be allocated to virtual ports 26. Prior v-port pointer 97e and next v-port pointer 97f are pointers to the prior and next v-port nodes 97 in v-port list 96, respectively.

INITIALIZATION PROCESS

Initialization process 98a defines and validates values used by ring leader processor 16a. Initialization process 98a returns a code indicating success or failure and sets initialized pointer 90 to point to ring leader table 91. Initialization process 98a accepts parameters including initialized pointer 90, a major tables parameter, a port minimum speed parameter, a port maximum speed parameter, and a system max speed parameter. Initialization process 98a returns the following values: -1 to indicate undefined failure; -2 to indicate that the ring leader system is unable to be initialized because it is already running; -3 to indicate a memory allocation error; -4 to indicate speed validation failure; or 0 to indicate success.

If initialized pointer 90 has been set previously, then the system has already been initialized, so initialization process 98a returns the corresponding code. Initialization process 98a reads the speed from the microengine and sets processor core speed 91c. Initialization process 98a sets the following other fields of ring leader table 91 based on values passed as arguments to initialization process 98a: max ring count 91a, max system speed 91b, minimum ring speed 91d, and maximum ring speed 91e. Initialization process 98a validates data, such as verifying that maximum ring speed 91e is greater than or equal to minimum ring speed 91d, and less than or equal to processor core speed 91c. Initialization process 98a sets null pointer values for all undefined pointer elements.
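The validations and return codes of initialization process 98a suggest a shape like the following C sketch; the structure layout and names are hypothetical.

    #include <stddef.h>

    /* Hypothetical layout of the fields touched by the sketch. */
    struct ring_leader_table {
        unsigned max_ring_count;        /* 91a */
        unsigned max_system_speed;      /* 91b */
        unsigned processor_core_speed;  /* 91c */
        unsigned minimum_ring_speed;    /* 91d */
        unsigned maximum_ring_speed;    /* 91e */
    };

    static struct ring_leader_table *initialized_ptr;  /* pointer 90 */

    /* Sketch of initialization process 98a with its return codes. */
    static int rl_init(struct ring_leader_table *rlt, unsigned min_speed,
                       unsigned max_speed, unsigned core_speed)
    {
        if (initialized_ptr != NULL)
            return -2;                   /* already running        */
        if (rlt == NULL)
            return -3;                   /* allocation error       */
        if (max_speed < min_speed || max_speed > core_speed)
            return -4;                   /* speed validation fails */
        rlt->processor_core_speed = core_speed;
        rlt->minimum_ring_speed   = min_speed;
        rlt->maximum_ring_speed   = max_speed;
        initialized_ptr = rlt;           /* success                */
        return 0;
    }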
CREATE RING PROCESS

Ring leader processor 16a uses create ring process 98b to force the creation of major ring 20 running at a given speed. The speed is passed as a parameter to create ring process 98b, along with parameters for ring leader table 91, a ring number parameter, and an Explicit flag. Create ring process 98b returns true or false.

Create ring process 98b allows the user application to pre-allocate a mandatory major ring 20 running at a preset ring speed by setting the Explicit flag. Create ring process 98b also performs validations, such as verifying that the ring number parameter contains a value from 0 to the max ring count 91a of ring leader table 91, and that major ring 20 is free for the specified ring number 94g, before creating major ring 20 using the specified ring number 94g. In the event that the specified ring number parameter is invalid, or in use, the next available ring number 94g will be used. Create ring process 98b returns false if initialized pointer 90 has not been initialized. Create ring process 98b validates the given speed to be within the range from minimum ring speed 91d to maximum ring speed 91e, inclusive. Create ring process 98b also verifies that the total of all the existing major ring 20 speeds within the system plus the desired speed of the new major ring 20 is less than or equal to the max system speed 91b.

Create ring process 98b sets changed Boolean 94a to one as it begins, and resets changed Boolean 94a to zero upon completion to prevent multiple definitions. If the Explicit flag is set, create ring process 98b sets forced creation 94d to one to prohibit removal of this major ring 20 during normal ring rebalancing; otherwise, create ring process 98b sets forced creation 94d to zero. Create ring process 98b sets ring speed 94b to the validated ring speed and sets port count 94c to zero.

REBALANCE PROCESS

Ring leader processor 16a calls rebalance process 98c to rebalance major rings 20. Rebalance process 98c attempts to maintain an equal number of active time slots (i.e., time slots allocated for virtual ports 26) on each major ring 20. Rebalance process 98c can also free a time slot so a new major ring 20 running at a different speed may be created in a subsequent operation. Rebalance process 98c accepts parameters including ring leader table 91, an empty ring desired flag, and an override forced creation flag. Rebalance process 98c returns true or false: true if major rings 20 have been rebalanced and changes activated on the microengines, false if no new major ring 20 has been found or initialized pointer 90 has not been initialized. If a free major ring 20 was requested via the parameters, a return value of true also indicates a free major ring 20 has been found. Rebalance process 98c makes an explicit call to activate ring process 98f for any major ring 20 that has changed, immediately prior to leaving (but not during) the routine.

Major rings 20 are balanced against the number of time slots being filled, which may be different than the number of virtual ports 26. Rebalance process 98c can balance major rings 20 according to the rules given in Table 1.

Table 1: Balancing Rules

1) Rebalance process 98c examines major rings 20 of equal speed and moves virtual ports 26 between each major ring 20 until an equal number of time slots is filled across all major rings 20 of that speed.
   a) Rebalance process 98c adds virtual port 26 to major ring 20 by calling populate process 98d, passing the new major ring 20 for virtual port 26 as the major ring 20 value.
   b) Rebalance process 98c deletes the old virtual port 26 by calling destroy v-port process 98e, passing the old ring number 94g.
2) All time slots for virtual ports 26 exist on the same major ring 20 and may not be split across major rings 20.
3) Rebalance process 98c examines smaller major rings 20 that are multiple factors of two, using two or more time slots on major ring 20 with smaller speed to create a sum total equal to the speed of virtual port 26.
4) Rebalance process 98c will not make moves until all elements on the major ring 20 being balanced have a place on another major ring 20.
5*) If the override forced creation flag passed to rebalance process 98c is set to false, all major rings 20 that were explicitly created are removed from the list of major rings 20 to be considered for removal.
6*) Rebalance process 98c examines the smallest major ring 20 (i.e., major ring 20 having the smallest number of entries) to see if nodes 60 of major ring 20 may be inserted into one or more major rings 20 half (or successive factors of two) the size of the smallest major ring 20. Multiple entries for virtual ports 26 are considered.
   a) Rebalance process 98c repeats this procedure using major rings 20 with successively smaller ring speeds until all major rings 20 have been examined for elimination.
   b) Once a given major ring 20 is identified, its changed Boolean 94a is set to true to prevent additions to that major ring 20.
7*) Rebalance process 98c finds "home" major rings 20 for virtual ports 26 and calls populate process 98d, using ring number 94g of each home major ring 20 found.
8*) All values for major ring 20 are reset to zero or null, as applicable. Rebalance process 98c resets major ring 20 using activate ring process 98f so the microengine tables are rebuilt.

* Steps 5 through 8 only apply if the empty ring desired flag passed to rebalance process 98c is set to true.

POPULATE PROCESS

Populate process 98d inserts virtual ports 26 into major rings 20. Populate process 98d accepts parameters including ring leader table 91, a ring number parameter, a port index pointer, a port minor index pointer, and a virtual port speed. Populate process 98d returns -1 to indicate undefined failure, -2 to indicate no space available to insert into, or a value between zero and max ring count 91a to indicate the ring number 94g where virtual port 26 was added successfully. If the ring number parameter is in the range 0 to max ring count 91a, populate process 98d will only attempt an addition into major ring 20 having that ring number 94g. Any other value for the ring number parameter results in a "best match" insertion.

Populate process 98d does not create major rings 20 until the balance percentage (ring balance threshold 91f) on all other rings of that speed is met. A "best match" insertion tries to find a suitable major ring 20 to receive virtual port 26. Populate process 98d selects major ring 20 as follows. Populate process 98d first examines all major rings 20 operating at the same speed as virtual port 26, to determine major ring 20 with the least number of entries. In the case of a tie, populate process 98d selects major ring 20 with the highest ring number 94g. In the event that all major rings 20 running at the given speed are full, populate process 98d creates a new major ring 20 for virtual port 26.
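A sketch of the "best match" selection just described follows: fewest entries wins, a tie goes to the highest ring number 94g, and all-full means the caller creates a new ring. The types and the ring_is_full() helper are hypothetical.

    /* Hypothetical per-ring view of ring table 94 entries. */
    struct ring_entry {
        unsigned ring_speed;   /* 94b */
        unsigned port_count;   /* 94c */
    };

    extern int ring_is_full(const struct ring_entry *r);  /* stub */

    /* Sketch: pick a major ring 20 for a port of the given speed,
     * or return -1 so the caller creates a new major ring 20. */
    static int best_match_ring(const struct ring_entry *tbl, int n_rings,
                               unsigned port_speed)
    {
        int best = -1;
        for (int i = 0; i < n_rings; i++) {
            if (tbl[i].ring_speed != port_speed || ring_is_full(&tbl[i]))
                continue;
            /* Fewest entries wins; an equal count later in the scan
             * replaces the earlier one, giving the higher number. */
            if (best < 0 || tbl[i].port_count <= tbl[best].port_count)
                best = i;
        }
        return best;
    }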
DESTROY V-PORT PROCESS

Destroy v-port process 98e removes a given virtual port 26 from a specified major ring 20. Activating the change requires a call to activate ring process 98f. Destroy v-port process 98e accepts parameters including ring leader table 91, a port pointer, and a ring number parameter. The port pointer points to a specific virtual port 26. Destroy v-port process 98e returns the following values: -1 to indicate undefined failure; -2 to indicate that the specified virtual port 26 could not be found; -3 to indicate the ring number parameter is not valid; or a value from 0 to max ring count 91a, indicating the specified virtual port 26 has been successfully removed from the specified major ring 20.

ACTIVATE RING PROCESS

Activate ring process 98f builds and signals updates to the microengines. Activate ring process 98f accepts parameters including ring leader table 91, a ring number parameter, and an update as set flag. Activate ring process 98f waits for all major rings 20 to load their new versions, such as by using major ring reload process 98g. Activate ring process 98f also clears all bits of ring load control longword 92a before exiting, if a transaction is done as a synchronized set. The update as set flag indicates whether a transaction is to be done as a synchronized set.

To inform the microengines that major ring 20 has been deleted, activate ring process 98f sets step rate 70b of the deleted major ring 20 to zero. Subsequent loading of that major ring 20 is done as a synchronized set (which may contain as few as one major ring 20). Step rate 70b of zero indicates major ring 20 is not in use; thus, step rate 70b is the last value saved during an initialization or update of major ring 20, as a non-zero step rate 70b value signals that a valid major ring 20 is now defined, if that control block was not in prior use.

More than one instance of activate ring process 98f can run at once. Activate ring process 98f waits until all other instances of itself have completed clearing ring load control longword 92a, such as from a prior synchronized set. If a ring activation is not part of a synchronized set, it may still continue as long as the DeltaSet bit 92b for the corresponding major ring 20 is not already set.

SCHEDULE RING AND PORT RING EMBODIMENT

In a second embodiment, a traffic shaper uses a procedure and data structures to transmit cells or segments to the satisfaction of both virtual connection rates and virtual port rates. The data structures include a schedule ring and a port ring. Features of the first embodiment are common to the second embodiment, except as otherwise indicated. Element numbers will be shared between the two embodiments for similar elements. This description will sometimes refer to the first embodiment as the "major/minor" embodiment and to the second embodiment as the "schedule/port" embodiment.

Referring to FIG. 10, a shaping process 100 operates on a schedule ring 102 and a port ring 104. Shaping process 100 is encoded as computing instructions to be performed by a transmit processor 16c (shown in FIG. 1) in router/traffic shaper 12. Schedule ring 102 and port ring 104 are data structures that encode a schedule of transmission opportunities for data in virtual connections 28 processed by router/traffic shaper 12. Receive processors 16b place such data in VC connection queues 56 (shown in FIG. 4) to await dequeuing and transmission by shaping process 100. Shaping process 100 iterates over schedule ring 102 once per transmission cycle.
Broadly speaking, shaping process 100 uses the schedule encoded in schedule ring 102 to satisfy contracted data rates for virtual connections 28, while also using port ring 104 and other data structures to provide rate control for virtual ports 26.

Schedule ring 102 and port ring 104 are shown in FIG. 10 as circular structures to indicate their conceptual ring structure, i.e., iterations begun at the head or base of each ring will typically proceed to the end and then wrap around to the head when the previous iteration is complete. Schedule ring 102 and port ring 104 are each stored in memory 40 as an array. Schedule ring 102 has a base 103 at which ring 102 begins in memory 40. Similarly, port ring 104 has a base 105. Schedule ring 102 includes a sequence of schedule nodes 106. Port ring 104 includes a sequence of port nodes 108.

CATEGORIZATION OF SERVICE RATES

ATM Forum defines service categories such as CBR and VBR for virtual connections 28 that use the ATM network protocol. Referring to FIG. 11, shaping process 100 defines service grades 110 over these categories, such that service grades 110 partition all service categories that shaping process 100 handles. Shaping process 100 handles all service categories within a service grade similarly. In other words, the service grades 110 represent functional groups within shaping process 100. Every virtual connection 28 handled by shaping process 100 is associated with a service grade 110.

Shaping process 100 includes service grades 110 for must-send 112, could-send 114, and unspecified 116. Must-send 112 includes CBR, nrt-VBR with unsatisfied SCR, and nrt-VBR with unsatisfied MCR. In general, must-send 112 includes service categories for contracts that have inflexible minimum data rates. Could-send 114 includes rt-VBR, nrt-VBR with satisfied SCR but below PCR, and nrt-VBR with satisfied MCR but below PCR. In general, could-send 114 includes service categories for contracts that have flexible minimum data rates. Unspecified 116 includes UBR virtual connections 28 that have various priority categories. Unspecified 116 is also the default service grade 110 for any virtual connection 28 which shaping process 100 has not affiliated with a service grade 110, or which is not scheduled for regular service by shaping process 100.

Must-send 112 and could-send 114 are "scheduled" service grades 110, based on the fact that schedule rings 102 have explicit references that can affiliate a transmission opportunity with a reference to a must-send 112 virtual connection 28, or a reference to a could-send 114 virtual connection 28, or both (as shown in FIG. 10). Unspecified 116 is an "unscheduled" service grade 110. In general, unspecified 116 includes all categories that shaping process 100 services on a standby basis relative to scheduled service grades 110.

Shaping process 100 can vary the classification of a virtual connection 28 dynamically based on a property of the virtual connection 28. For instance, shaping process 100 classifies an nrt-VBR virtual connection 28 that has an MCR and a PCR, but which during the current transmission cycle has not been serviced up to its MCR, as must-send 112. However, once this virtual connection 28 has been serviced to its MCR but below its PCR, shaping process 100 reclassifies it as could-send 114 for the remainder of the transmission cycle. In general, shaping process 100 prioritizes could-send 114 below must-send 112 but above unspecified 116.
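A simplified C sketch of this dynamic classification follows; it collapses the category list above to the nrt-VBR case described and treats everything else as UBR, so it is illustrative only. Names are hypothetical, and it reuses the struct vc_entry sketched earlier.

    /* Service grades 110: must-send 112, could-send 114, unspecified 116. */
    enum grade { MUST_SEND, COULD_SEND, UNSPECIFIED };

    /* Sketch: reclassify a connection during a transmission cycle. */
    static enum grade classify(const struct vc_entry *vc)
    {
        switch (vc->type) {
        case VC_CBR:
            return MUST_SEND;             /* inflexible minimum rate    */
        case VC_VBR:
            if (vc->cur_rate < vc->rate)  /* MCR/SCR not yet satisfied  */
                return MUST_SEND;
            if (vc->cur_rate < vc->pcr)   /* satisfied, still below PCR */
                return COULD_SEND;
            return UNSPECIFIED;           /* assumption: at PCR, treat  */
                                          /* as standby for this cycle  */
        default:
            return UNSPECIFIED;           /* UBR and unaffiliated VCs   */
        }
    }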
SCHEDULE RING

Referring to FIG. 12, schedule ring 102 is a data structure in main memory 40 containing a sequence of schedule nodes 106. The sequence of schedule nodes 106 indicates the schedule for transmission. Each schedule node 106 represents either a transmission opportunity on a transmit processor 16c (shown in FIG. 1) or a timing control feature, as will be explained. When a schedule node 106 represents a transmission opportunity, it references at least one scheduled virtual connection 28. The allocation of nodes 106 to virtual connections 28, where the allocation includes both the total nodes 106 assigned to each scheduled virtual connection 28 and the relative position of such schedule nodes 106 within schedule ring 102, provides a schedule for regular service to virtual connections 28.

Schedule ring 102 includes a schedule ring base 103, which denotes the beginning of the schedule ring 102 in main memory 40. Schedule rings 102 have a predetermined number of schedule nodes 106 (namely 65,536, or 2^16). This means that a sixteen-bit reference is sufficient to index the schedule nodes 106 such that the nodes can be addressed individually. A base address 70a in ring control block 70 references schedule ring base 103. Step size 70b contains the number of processor cycles available to each transmission opportunity, i.e., to each schedule node 106. Step size 70b is therefore related to the shaping granularity of the service that shaping process 100 can provide. Step size 70b depends in part on the number of distinct schedule rings 102 defined within router/traffic shaper 12.

Router/traffic shaper 12 is configured to manage traffic for a collection of virtual ports 26. In one typical configuration, the collection of virtual ports 26 corresponds to the physical ports 24a (shown in FIG. 2) of router/traffic shaper 12, plus perhaps additional ports on remote network devices. In order to maximize aggregate throughput to the collection of virtual ports 26, therefore, router/traffic shaper 12 is prepared to support the aggregate of the maximum sustained data rates for the collection of virtual ports 26.

To take a simple example, suppose that the collection of virtual ports 26 corresponds only to the physical ports 24a of router/traffic shaper 12 and that the aggregate throughput goal of router/traffic shaper 12 is a fixed OC-12 rate to the DSL side. In that case, one option is to configure one schedule ring 102 to provide all service at the OC-12 rate. This one schedule ring 102 would provide shaping granularity of 9492 bps (which is 622.08 Mbps divided by the number of time slots). Another option is to configure router/traffic shaper 12 with multiple schedule rings 102 dividing up the workload and achieving finer transmit granularity, i.e., smaller step rates 70b. Two OC-6 schedule rings 102 provide granularity of 4746 bps, and so forth. Similarly, multiple shaping processes 100 iterating over the same schedule ring 102 are another way to divide the workload and achieve finer transmit granularity.
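The granularity figures quoted here follow directly from dividing the line rate by the node count, as the small program below confirms; it uses only the numbers given in the text.

    #include <stdio.h>

    int main(void)
    {
        const double oc12_bps = 622.08e6;  /* OC-12 line rate           */
        const double nodes    = 65536.0;   /* schedule nodes 106 (2^16) */

        printf("one ring:  %.0f bps\n", oc12_bps / nodes);       /* 9492 */
        printf("two rings: %.0f bps\n", oc12_bps / 2 / nodes);   /* 4746 */
        return 0;
    }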
The schedule/port embodiment often requires less space in main memory 40 for its rings than the major/minor embodiment does. In particular, two factors, the number of specified-rate virtual connections 28 and the number of virtual ports 26, drive up memory requirements for the major/minor embodiment faster than for the schedule/port embodiment. For example, suppose that for small virtual ports 26, roughly one kilobyte of memory is needed, and that router/traffic shaper 12 handles roughly two thousand virtual ports 26. Then roughly two megabytes of memory (one kilobyte times two thousand) is necessary for the minor ring 22 entries alone. (The number of specified-rate virtual connections 28 does not affect port rings 104.) In contrast, the 65,536 entries in schedule ring 102 would require less than two megabytes, since entries in schedule ring 102 contain fewer than 32 bytes.

SCHEDULE NODE

Referring still to FIG. 12, there are at least two types of node 106: a skip node 106a used for timing control, and a schedule node 106b that represents a transmission opportunity.

Schedule node 106b includes fields for must send 122a, which references a virtual connection 28 that is must-send 112, and could send 122b, which references a virtual connection 28 that is could-send 114. When no virtual connection 28 is associated with must send 122a or could send 122b, the corresponding field contains a null pointer. A given virtual connection 28 can be referenced by more than one schedule node 106 in the same schedule ring 102. Schedule node 106b also contains a port node pointer 122c, which references a location in port ring 104.

Nodes 106 also have fields for skip flag 60a, end flag 60b, and modify 60f, whose functions have been described in the major/minor embodiment. Skip nodes 106a have skip flag 60a set. Instead of fields must send 122a, could send 122b, and port node pointer 122c, skip nodes 106a have a field for cycle delay 60d. In contrast, schedule nodes 106b have skip flag 60a not set and do not have a field for cycle delay 60d.

When multiple schedule rings 102 are used simultaneously, each schedule ring 102 can have its own step rate 70b. In this way, schedule rings 102 can be allocated varying percentages of the total bandwidth managed by router/shaper 12. Skip nodes 106a or other timing control mechanisms, as will be explained, can account for time not used by a given schedule ring 102.

PORT TABLE, PORT ENTRIES, AND PORT QUEUES

Referring still to FIG. 12, port table 124 is a data structure residing in main memory 40 containing port entries 126. Each port entry 126 corresponds to a virtual port 26. In general, port entry 126 contains information describing the current state of its affiliated virtual port 26, including references to queues for virtual port 26 that store data awaiting transmission by shaping process 100.

Port entry 126 includes port table index 126a. Shaping process 100 typically has random-access interactions with port entries 126 using port table index 126a. Port table index 126a holds a key value that uniquely identifies each port entry 126 in port table 124. Port entry 126 also includes virtual port reference 126b, deficit counter 126c, first chance queue reference 126d, new data queue reference 126e, UBR priority queue reference 126f, and bit vector 126g. Virtual port reference 126b affiliates port entry 126 with master information maintained by ring leader processor 16a, including performance parameters for the associated virtual port 26. Specifically, virtual port reference 126b contains a value that references a v-port node 97 (shown, for instance, in FIG. 9) by corresponding to its v-port index pointer 97a.
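Returning briefly to the two node types 106a and 106b described above, their layouts might share storage as in this sketch; the field widths, the use of 16-bit indices in place of pointers, and all names are assumptions.

    #include <stdint.h>

    /* Sketch of node 106: flags plus a payload selected by them. */
    struct sched_node {
        uint32_t skip   : 1;          /* 60a: 1 = skip node 106a        */
        uint32_t end    : 1;          /* 60b: wrap at end of ring       */
        uint32_t modify : 1;          /* 60f: reload signal             */
        union {
            uint32_t cycle_delay;     /* 60d: skip nodes 106a only      */
            struct {
                uint16_t must_send;   /* 122a: 0 = null, no must-send   */
                uint16_t could_send;  /* 122b: 0 = null, no could-send  */
                uint16_t port_node;   /* 122c: location in port ring 104 */
            } xmit;                   /* schedule nodes 106b only       */
        } u;
    };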
Deficit counter 126c supports port flow control. Deficit counter 126c stores the weight for associated virtual port 26 in shaping process 100's weighted round-robin allocation of transmission opportunities. Deficit counter 126c contains unsigned integer values. At the beginning of every transmission cycle, deficit counter 126c is re-initialized to a weight that reflects the maximum number of times associated virtual port 26 should be serviced. For instance, for ATM cells 36 that have constant payload size, the weight can be the number of packets in a single transmission cycle permissible at the maximum data rate of associated virtual port 26. Whenever data is transmitted on associated virtual port 26, shaping process 100 decrements deficit counter 126c by a number appropriate to the amount of data transmitted. When the weight is based on packet counts, the decrement interval is simply one per transmitted packet.

First chance queue reference 126d, new data queue reference 126e, and UBR priority queue reference 126f contain values that specify positions in a port queue heads array 128a. Port queue heads array 128a contains the heads of queues that store data awaiting transmission by shaping process 100, where the data is affiliated with a virtual port 26. In general, such data includes data for unscheduled virtual connections 28, as well as data from scheduled virtual connections 28 which has been dynamically rescheduled by shaping process 100, for instance after having been given a scheduled transmission opportunity that could not be serviced due to port blockage or lack of data.

Typically, port queue heads array 128a is stored as a contiguous block of main memory 40, sequenced such that simple offsets into port queue heads array 128a are possible. Also, all queues affiliated with a given port entry 126 are stored in contiguous blocks. Each queue associated with port queue heads array 128a is stored as a linked list, so each entry of port queue heads array 128a contains a pointer to the next node in its respective queue. A port queue group 128b is the collection of such queues for a given port entry 126. For each port queue group 128b, there exists a first chance queue 128d, a new data queue 128e, and a collection of UBR port priority queues 128f. As illustrated in FIG. 12 by directed dotted lines, for a given virtual port 26, first chance queue reference 126d specifies first chance queue 128d, new data queue reference 126e specifies new data queue 128e, and UBR priority queue reference 126f specifies the first of the collection of UBR port priority queues 128f. When port queue heads array 128a is stored as a contiguous block of main memory 40, subsequent members of the collection of UBR port priority queues 128f can be reached by simply offsetting a distance in main memory 40 from the first of the collection of UBR port priority queues 128f, where the distance is proportional to the member's position in the collection. Typically, the collection of UBR port priority queues 128f is ordered from highest-priority to lowest.
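The weighted round-robin use of deficit counter 126c described above reduces to a few operations; here is a sketch assuming packet-count weights, as in the constant-payload ATM case, with hypothetical names.

    /* Sketch: per-cycle port flow control via deficit counter 126c. */
    struct port_flow {
        unsigned deficit;   /* 126c: opportunities left this cycle */
        unsigned weight;    /* max services per transmission cycle */
    };

    static void cycle_start(struct port_flow *p)
    {
        p->deficit = p->weight;   /* re-initialized every cycle    */
    }

    static int may_service(const struct port_flow *p)
    {
        return p->deficit > 0;
    }

    static void on_packet_sent(struct port_flow *p)
    {
        p->deficit--;             /* one per packet when weights   */
    }                             /* are packet counts             */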
Bit vector 126g provides a quick way to detect whether a queue in the port queue group 128b contains data. Bit vector 126g is an ordered list of bits including one bit for every queue in the port queue group 128b. The order corresponds to each queue's position in the port queue group 128b's representation in port queue heads array 128a. Thus, the first bit in bit vector 126g corresponds to first chance queue 128d, the second bit corresponds to new data queue 128e, and subsequent bits correspond to UBR port priority queues 128f. When a bit in bit vector 126g is on, it indicates that the corresponding queue in the port queue group 128b contains data. A bit in bit vector 126g is therefore an emptiness indicator for its corresponding queue.

First chance queue 128d stores references to virtual connections 28, in FIFO order. First chance queue 128d is used for traffic to be transmitted to virtual port 26 at the first opportunity.

New data queue 128e stores references to virtual connections 28, in FIFO order. If a scheduled virtual connection 28 (such as referenced by must send 122a or could send 122b) has no data, the transmit thread will flag this at the corresponding entry in VC table 50. When data arrives, receive processor 16b will either discard according to soft or strict traffic management policing, or enqueue the data for virtual connection 28 and also place the VC index 52a on new data queue 128e. New data queue 128e is behind first chance queue 128d in priority and ahead of UBR port priority queue 128f.

UBR port priority queue 128f stores references to virtual connections 28, in FIFO order. Virtual port 26 has a set of four prioritized class of service queues for UBR virtual connections 28 in this embodiment.

PORT RING

Broadly speaking, port ring 104 schedules the allocation of transmission opportunities to virtual ports 26. This allocation provides one form of rate control, in that the throughput of a given virtual port 26 is constrained by the number of transmission opportunities it receives.

Referring to FIG. 12, port ring 104 is a data structure residing in main memory 40 containing a sequence of port nodes 108b and 108a. Port node 108b includes port reference 127c referencing a port entry 126, which corresponds to a virtual port 26. Port node 108a includes cycle delay 60d for timing control. The allocation of port nodes 108b to virtual ports 26, where the allocation includes the total port nodes 108b assigned to each scheduled virtual port 26 as well as the relative position of such port nodes 108b within port ring 104, provides a rubric for service to virtual ports 26. Unlike the schedule to virtual connections 28 encoded in schedule ring 102, the rubric for service to virtual ports 26 is not guaranteed and depends instead on transmission opportunities being available after shaping process 100 has attended to scheduled virtual connections 28. The rubric for service to virtual ports 26 provides a degree of fairness and control over unscheduled virtual connections 28.

Port node 108 includes skip flag 127a, end flag 127b, and modified bit 60f. Port nodes 108a have skip flag 60a set. Instead of a port reference 127c field, port nodes 108a have a field for cycle delay 60d. In contrast, port nodes 108b have skip flag 60a not set and do not have a field for cycle delay 60d. End flag 127b has the same role and same functions within port ring 104 that end flag 60b has within major ring 20, as described in the major/minor embodiment.
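Before turning to the shaping process itself, the emptiness test that bit vector 126g enables reduces to single-bit operations, as in this sketch; ffs() is the POSIX find-first-set routine, an assumption of the sketch.

    #include <stdint.h>
    #include <strings.h>   /* ffs() */

    /* Mark queue q non-empty or empty in bit vector 126g; bit 0 is
     * first chance queue 128d, bit 1 new data queue 128e, and the
     * remaining bits the UBR port priority queues 128f. */
    static inline void bv_set(uint32_t *bv, int q)   { *bv |=  (1u << q); }
    static inline void bv_clear(uint32_t *bv, int q) { *bv &= ~(1u << q); }

    /* First non-empty queue, or -1 if all queues in the port queue
     * group 128b are empty; a lower bit index is a higher priority. */
    static inline int bv_first_nonempty(uint32_t bv)
    {
        return bv ? ffs((int)bv) - 1 : -1;
    }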
SHAPING PROCESS

Shaping process 100 is a method that selects data for transmission by router/traffic shaper 12. Shaping process 100 selects data from a variety of queues, including queues for scheduled virtual connections 28, queues for unscheduled virtual connections 28, and first chance queues 128d and new data queues 128e. Shaping process 100 selects virtual connections 28 for transmission, subject to service rates. Shaping process 100 also provides port flow control.

Referring to FIG. 13, shaping process 100 works as follows. Shaping process 100 starts at the beginning of a transmission cycle (procedure 130a). Next, shaping process 100 selects the schedule node 106 at the beginning of schedule ring 102 as the current schedule node 106 (procedure 130b). Specifically, shaping process 100 uses base address 70a from ring control block 70 (shown in FIG. 12) to determine the first schedule node 106 in schedule ring 102. Shaping process 100 then uses schedule ring stepping process 132 (shown in FIG. 14) to select successive schedule nodes 106 and to select data for transmission (procedure 130c), subject to performance parameters of virtual connections 28, to rate control on virtual ports 26, and to timing control encoded into schedule ring 102 and port ring 104. After schedule ring stepping process 132 terminates, shaping process 100 resets deficit counters 126c used in weighted round-robin rate control on virtual ports 26 (procedure 130d). Next, shaping process 100 enters a wait state of length determined by the value of adjustment 70c (shown in FIG. 12) and, optionally, performs timing control such as recalculating adjustment 70c for the next iteration (procedure 130e). Shaping process 100 then begins the next transmission cycle anew (at procedure 130a).

Each schedule node 106 represents either cycle delay or a transmission opportunity. When the current schedule node 106 represents a transmission opportunity, schedule ring stepping process 132 first tries to transmit to a scheduled virtual connection 28 referenced by a current schedule node 106b (shown in FIG. 12). If a scheduled virtual connection 28 is not available for transmission, schedule ring stepping process 132 invokes a port ring stepping process 134. Port ring stepping process 134 tries to service virtual ports 26 for the duration of the current transmission opportunity, beginning with a virtual port 26 referenced by schedule node 106b.

Shaping process 100 performs the following actions for each transmission cycle. At the beginning of the transmission cycle, shaping process 100 uses base address 70a from ring control block 70 (shown in FIG. 12) to determine the first schedule node 106 in schedule ring 102, making this schedule node 106 the current schedule node 106. Shaping process 100 then invokes schedule ring stepping process 132. After this instance of schedule ring stepping process 132 concludes, shaping process 100 performs timing control.

For virtual connections 28 that are not UBR and have sufficient data in VC connection queues 56 awaiting transmission, shaping process 100 aims to satisfy the contracted rates of the virtual connections 28, subject to network conditions such as the performance of virtual ports 26 and network 30. For UBR virtual connections 28 that have sufficient data in VC connection queues 56 awaiting transmission, shaping process 100 aims to service the UBR virtual connections 28 as available bandwidth allows. Bandwidth is available, for example, when virtual connections 28 with contracted rates encounter network blockages or do not have enough data queued in VC connection queues 56 to fully occupy their allocated rates.
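The per-cycle skeleton in FIG. 13 might be rendered as in this sketch (procedures 130a through 130e); every helper is a hypothetical stub.

    /* Hypothetical stubs for the sketch. */
    extern void schedule_ring_step(unsigned first_node);   /* 130c */
    extern void reset_deficit_counters(void);              /* 130d */
    extern void wait_cycles(unsigned n);
    extern unsigned recalc_adjustment(void);               /* 70c  */

    /* Sketch of shaping process 100: one iteration per cycle. */
    static void shaping_process(unsigned base_node)
    {
        unsigned adjustment = 0;
        for (;;) {                            /* 130a: new cycle    */
            unsigned node = base_node;        /* 130b: first node   */
            schedule_ring_step(node);         /* 130c: select, send */
            reset_deficit_counters();         /* 130d               */
            wait_cycles(adjustment);          /* 130e: wait state   */
            adjustment = recalc_adjustment(); /* optional timing    */
        }
    }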
SCHEDULE RING STEPPING PROCESS

Referring to FIG. 14, schedule ring stepping process 132 is a procedure that iterates over schedule nodes 106 of a schedule ring 102. First, schedule ring stepping process 132 initializes a window counter to track the duration of the current transmission opportunity. The window counter is initialized to the step size 70b (shown in FIG. 12) of schedule ring 102.

Schedule ring stepping process 132 reads the current schedule node 106. If the current schedule node 106 is a skip node, i.e., has skip flag 60a set, then schedule ring stepping process 132 waits a number of processor cycles given by the step size 70b of schedule ring 102, then makes the next schedule node 106 in schedule ring 102 the current schedule node 106. Schedule ring stepping process 132 repeats this until either reaching the end of schedule ring 102 or finding a current schedule node 106 that is not a skip node (procedure 132a).

At this point, the current schedule node 106 references a scheduled virtual connection 28. Schedule ring stepping process 132 tests whether the virtual port 26 associated with scheduled virtual connection 28 is blocked (procedure 132b). For instance, the associated virtual port 26 may be blocked by network flow control. If the virtual port 26 is blocked and scheduled virtual connection 28 has data awaiting transmission, schedule ring stepping process 132 reschedules the data to the first chance queue 128d for the associated virtual port 26 (procedure 132c). In particular, schedule ring stepping process 132 determines whether scheduled virtual connection 28 has data by consulting VC table 50 (shown in FIG. 4) and examining the corresponding bit in VC bit vector 54. If the bit is set, scheduled virtual connection 28 has data. In other words, the bit is an emptiness indicator for scheduled virtual connection 28. Schedule ring stepping process 132 dequeues the data from the front of the corresponding VC connection queue 56 and enqueues it at the back of the first chance queue 128d for the associated virtual port 26.

Next, schedule ring stepping process 132 services port ring 104 at the port node 108 specified by the current schedule node 106 (procedure 132f). In particular, schedule ring stepping process 132 reads port node pointer 122c to determine a port node 108 on which to use port ring stepping process 134 (shown in FIG. 15). After servicing port ring 104 for the duration of the current transmission opportunity, port ring stepping process 134 returns control to schedule ring stepping process 132. Next, schedule ring stepping process 132 evaluates whether to continue iterating over schedule nodes 106 (procedure 132k). If the result is positive, schedule ring stepping process 132 loops back to procedure 132a to read the next schedule node 106. If the result is negative, schedule ring stepping process 132 terminates.
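The skip-node scan of procedure 132a, described above, can be sketched as follows. The names node.skip_flag, schedule_ring.next_node, and wait_cycles are illustrative assumptions; wait_cycles stands in for a processor-cycle delay.

```python
def step_past_skip_nodes(node, schedule_ring, step_size, wait_cycles):
    """Procedure 132a: while the current schedule node is a skip node (skip
    flag set), wait step_size processor cycles and advance to the next node.
    Stops at the end of the ring or at the first non-skip node."""
    while node is not None and node.skip_flag:
        wait_cycles(step_size)
        node = schedule_ring.next_node(node)  # None past the end of the ring
    return node
```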
If virtual port 26 was not found to be blocked (in procedure 132b), schedule ring stepping process 132 tests whether the scheduled virtual connection 28 referenced by the must-send 122a field of the current schedule node 106 is ready to transmit (procedure 132d). Readiness of a virtual connection 28 is indicated by having data on the corresponding VC queue 56 and having values for current burst count 52g and current rate 52h that are within the bounds set by MBS 52c and rate 52d (shown in FIG. 4). If the must-send 122a virtual connection 28 is ready, schedule ring stepping process 132 transmits data from the corresponding VC queue 56 (procedure 132e). Specifically, schedule ring stepping process 132 transmits as much data as possible, subject to the amount of processing that can be done in the current transmission opportunity (indicated by the window counter), and subject to the MBS and PCR of the virtual connection 28. Also, if the could-send 122b virtual connection 28 is also ready, schedule ring stepping process 132 reschedules data from the could-send 122b virtual connection 28 to the end of the first chance queue 128d for its associated virtual port 26.

Next, schedule ring stepping process 132 updates the states of various data structures to reflect the transmission of data (procedure 132g). Specifically, schedule ring stepping process 132 decrements deficit counter 126c corresponding to the virtual port 26 that transmitted the data. Schedule ring stepping process 132 also checks whether VC queue 56 is now empty of data, and if so, updates the corresponding bit in VC bit vector 54. Next, schedule ring stepping process 132 checks whether to continue (procedure 132k, described above) and proceeds from there.

If the must-send 122a virtual connection 28 was not ready (in procedure 132d), schedule ring stepping process 132 tests whether the scheduled virtual connection 28 referenced by the could-send 122b field of the current schedule node 106 is ready to transmit (procedure 132h). If the could-send 122b virtual connection 28 is ready, schedule ring stepping process 132 transmits data from the corresponding VC queue 56 (procedure 132i). Specifically, schedule ring stepping process 132 transmits as much data as possible, subject to the amount of processing that can be done in the current transmission opportunity (indicated by the window counter), and subject to the MBS and PCR of the virtual connection 28. Next, schedule ring stepping process 132 updates data structures (procedure 132g, described above) and proceeds from there.

If the could-send 122b virtual connection 28 was not ready (in procedure 132h), schedule ring stepping process 132 services port ring 104 (procedure 132f, described above) and proceeds from there. In general, schedule ring stepping process 132 returns repeatedly to process the next schedule node 106 (beginning in procedure 132a) or terminates (after exiting procedure 132k).
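The readiness test used in procedures 132d and 132h can be sketched as follows. The field names are illustrative stand-ins for the VC entry fields (current burst count 52g, current rate 52h, MBS 52c, rate 52d); whether the bounds are strict or inclusive is an assumption.

```python
def vc_ready(vc_entry, vc_queue):
    """A VC is ready when it has queued data and its current burst count and
    current rate are within the bounds set by its MBS and contracted rate."""
    return (len(vc_queue) > 0
            and vc_entry.current_burst_count <= vc_entry.mbs
            and vc_entry.current_rate <= vc_entry.rate)
```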
PORT RING STEPPING PROCESS

Referring to FIG. 15, port ring stepping process 134 is a procedure that iterates over at least a portion of port ring 104, given a starting port node 108 and a window counter that describes a current transmission opportunity. In other words, and in general, port ring stepping process 134 services port ring 104 for a specified finite period of time, starting from a given position within port ring 104. Port ring stepping process 134 monitors the processor cycles that it uses, so as not to exceed the current transmission opportunity. If at any point port ring stepping process 134 reaches the end of the current transmission opportunity, port ring stepping process 134 terminates and returns control to the process that invoked it. Shaping process 100 is port work conserving in its iteration of port ring 104.

First, port ring stepping process 134 reads an unblocked port ring node 108 (procedure 134a). Specifically, port ring stepping process 134 begins with a current port ring node 108, which, when port ring stepping process 134 is first invoked, is specified as a parameter. If the current port ring node 108 has skip flag 127a set, port ring stepping process 134 enters a wait state for timing control. If the current port ring node 108 has a port reference 127c, port ring stepping process 134 verifies that the associated virtual port 26 is not blocked. If the virtual port 26 is blocked, port ring stepping process 134 advances to the next port ring node 108 and begins testing again, until either finding a non-blocked virtual port 26 or reaching the end of the current transmission opportunity.

Next, having found a current port ring node 108 with a non-blocked virtual port 26, port ring stepping process 134 tests whether the corresponding first chance queue 128d is ready to transmit (procedure 134b). Readiness of a queue in a port queue group 128b (shown in FIG. 12) requires data on the queue. Additionally, for the virtual connection 28 associated with the first packet of data on the queue and the corresponding VC entry 52 (shown in FIG. 4), readiness requires values for current burst count 52g and current rate 52h that are within the bounds set by MBS 52c and rate 52d. If first chance queue 128d is ready, port ring stepping process 134 transmits data from that queue (procedure 134c). Specifically, port ring stepping process 134 transmits as much data as possible, subject to the amount of processing that can be done in the current transmission opportunity, and subject to the MBS and PCR of the virtual connection 28 associated with the data.

Next, port ring stepping process 134 updates the states of various data structures to reflect the transmission of data (procedure 134h). Specifically, port ring stepping process 134 decrements deficit counter 126c corresponding to the virtual port 26 that transmitted the data. Port ring stepping process 134 also checks whether any queue in port queue group 128b is now empty of data, among those that received transmitted data in the current invocation of port ring stepping process 134. If so, port ring stepping process 134 updates the corresponding bit in port bit vector 126g (shown in FIG. 12).

Next, port ring stepping process 134 evaluates whether to continue iterating over port nodes 108 (procedure 134i). If the result is positive, port ring stepping process 134 proceeds to procedure 134a to find the next non-blocked port node 108. If the result is negative, port ring stepping process 134 terminates.

If first chance queue 128d was not ready (in procedure 134b), port ring stepping process 134 tests whether the new data queue 128e associated with the current port node 108 is ready to transmit (procedure 134d). If new data queue 128e is ready, port ring stepping process 134 transmits data from that queue (procedure 134e). Specifically, port ring stepping process 134 transmits as much data as possible, subject to the amount of processing that can be done in the current transmission opportunity, and subject to the MBS and PCR of the virtual connection 28 associated with the data. Next, port ring stepping process 134 updates the states of various data structures to reflect the transmission of data (procedure 134h, described above) and proceeds from there.
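The per-port service order of procedures 134b through 134g, including the UBR branch described next, can be sketched as follows. queue_ready and transmit_from are hypothetical helpers, and the queue group attribute names are assumptions.

```python
def service_port(port, window):
    """Service one virtual port's queue group in priority order: first chance
    queue, then new data queue, then the UBR port priority queues in
    descending priority. `window` is the remaining transmission opportunity."""
    group = port.queue_group
    candidates = [group.first_chance, group.new_data] + list(group.ubr_queues)
    for queue in candidates:
        if queue_ready(queue):
            # Transmit as much as possible, subject to the window and to the
            # MBS and PCR of the VC associated with the data.
            return transmit_from(queue, window)
    return 0  # nothing was ready on this port
```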
If new data queue 128e was not ready (in procedure 134d), port ring stepping process 134 tests whether any UBR port priority queue 128f associated with the virtual port 26 for the current port node 108 is ready to transmit (procedure 134f). If there is a UBR port priority queue 128f ready, port ring stepping process 134 selects the queue with the highest priority from among the ready UBR port priority queues 128f, and transmits data from that queue (procedure 134g). If the duration of the current transmission opportunity permits, and if port ring stepping process 134 exhausts all available data from a first such UBR port priority queue 128f, port ring stepping process 134 transmits additional data from the next ready UBR port priority queue 128f, in descending order of priority, until it, too, is emptied. This process continues until all such data is transmitted or the current transmission opportunity expires. Next, port ring stepping process 134 updates the states of various data structures to reflect the transmission of data (procedure 134h, described above) and proceeds from there.

If no UBR port priority queue 128f was ready (in procedure 134f), port ring stepping process 134 evaluates whether to continue iterating over port nodes 108 (procedure 134i, described above) and proceeds from there. In general, port ring stepping process 134 repeatedly processes the next port node 108 (beginning in procedure 134a) or terminates (after exiting procedure 134i, or after the current transmission opportunity expires).

REQUEUEING

In certain situations, shaping process 100 will move enqueued data from one queue to another in response to virtual connection 28 states and their contracted rates. In particular, VBR virtual connections 28 having both an MCR and a PCR can sometimes have an inflexible demand for service (such as when the MCR is not satisfied), while at other times their demand for service is flexible (such as when the MCR is satisfied but the PCR has not been reached). Shaping process 100 moves such VBR virtual connections 28 between service grades 110 for must-send 112 and for could-send 114 (shown in FIG. 11) by moving associated data from must send queue 122a to could send queue 122b (shown in FIG. 12).

For a realtime VBR (rt-VBR) virtual connection 28 operating at peak cell rate, shaping process 100 assigns virtual connection 28 to could-send status. If there is no constant bit rate (CBR) conflict, virtual connection 28 will send at peak cell rate for a number of transmits constrained only by maximum burst size (MBS). If there is still untransmitted data after these transmits, virtual connection 28 will back off to below peak cell rate. If there is no data, shaping process 100 will flag the associated bit in VC bit vector 54. This flag suspends scheduling for virtual connection 28 until receive processor 16b places data for it on new data queue 128e. For a non-realtime VBR (nrt-VBR) virtual connection 28, shaping process 100 calculates a minimum cell rate based on sustained cell rate and assigns virtual connection 28 to must-send status for the length of an MBS transmission. Shaping process 100 then assigns virtual connection 28 to could-send status for an MBS transmission. Shaping process 100 will re-queue CBR virtual connections 28 at peak cell rate, as must-send virtual connections 28.
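The requeueing rule for VBR connections with both an MCR and a PCR can be sketched as follows. The field names are illustrative, and the rule is a simplified reading of the behavior described above.

```python
def classify_vbr_demand(vc):
    """Demand is inflexible (must-send) while the MCR is unmet, and flexible
    (could-send) once the MCR is met but the PCR has not been reached."""
    if vc.current_rate < vc.mcr:
        return "must-send"
    if vc.current_rate < vc.pcr:
        return "could-send"
    return "at-peak"  # no further demand beyond the peak cell rate
```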
ALTERNATE EMBODIMENTS

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the description. For example, a single processor 16 can serve multiple processor purposes, for instance by running more than one thread. Multiple shaping processes 100 may operate concurrently on a given transmit processor 16c, and multiple transmit processors 16c may concurrently perform instances of shaping process 100 within a given router/traffic shaper 12.

Schedule ring base 103 denotes both the beginning of schedule ring 102 in main memory 40 and the point from which shaping process 100 begins its traversal of schedule ring 102. In alternate embodiments, shaping process 100 could begin its traversal of schedule ring 102 from another point, iterate over the nodes 106 until reaching the end of schedule ring 102, wrap to the beginning, and continue iterating over the nodes 106 until achieving a complete traversal. The number of nodes 106 in schedule ring 102 is 65,536, which conveniently allows an integer index into schedule ring 102 to be represented in exactly sixteen bits, but other numbers are possible. The described embodiments specify support for a total of up to forty major rings 20, but the approach can be extended to support more than forty.

Schedule ring 102 and port ring 104 are described as residing in main memory 40. An advantage of putting these data structures in main memory 40 is that it provides rapid access to data and also allows software updates. Alternatively, all or portions of schedule ring 102 and port ring 104 could reside in other storage, including high-speed or cache memory, or non-volatile storage such as a disk drive.

Each shaping process 100 can have its own instance of a schedule ring 102. Alternatively, multiple shaping processes 100 can share a single schedule ring 102. In the latter case, problems could arise if multiple shaping processes 100 are allowed to service the same schedule node 106 at the same time, for instance contention at the VC connection queues 56. Thus, additional measures for contention resolution may be necessary, but such measures would be familiar to one skilled in the art.

The balancing rules cited in Table 1 are just an example of a balancing policy. Other policies are possible.

In general, a router/traffic shaper manages traffic for a collection of virtual connections. Each such virtual connection is either UBR or has a service contract, such as CBR or VBR. The router/traffic shaper includes receive processors that accept incoming data from the virtual connections and process the data into a collection of queues. The router/traffic shaper also includes transmit processors that perform a traffic shaping process. The traffic shaping process transmits data from the queues onto a network, to the satisfaction of the service contracts and QOS considerations among the collection of virtual connections.

The traffic shaping process uses data structures that encode a traffic schedule (or simply "schedule"). A schedule organizes transmission opportunities for virtual connection data enqueued by receive processors. A transmission opportunity is a time slot. It can be measured in processor cycles of transmit processors or as a "shaping granularity" of bandwidth, such as in bits per second. For each virtual connection with a contracted rate, the schedule allocates sufficient opportunities to satisfy the contract, i.e., to guarantee a level of service.
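The alternate wrap-around traversal described above pairs naturally with the sixteen-bit index property of a 65,536-node ring. The following sketch is illustrative only; the visit callback is an assumption.

```python
RING_NODES = 65_536  # one schedule node index fits in exactly sixteen bits

def traverse_once(start_index, visit):
    """Start at an arbitrary node, wrap past the end of the ring, and stop
    after exactly one complete traversal."""
    for offset in range(RING_NODES):
        visit((start_index + offset) & 0xFFFF)  # sixteen-bit wrap-around
```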
A traffic shaping process uses the schedule (encoded among data structures) to select the data that is transmitted by the router/traffic shaper. The schedule provides a basis for such selections, but actual transmission choices are subject to operating conditions such as port blockage and under-utilization of bandwidth. For instance, a given virtual connection with a contracted rate might have periods during which no data is being transmitted. The traffic shaping process can give the unused bandwidth to other virtual connections, such as UBR virtual connections of lesser priority, thereby increasing the total throughput.

The traffic shaping process also keeps track of the throughput of transmitted data to each virtual port, so as not to exceed the port data rate (or to exceed it by only a predetermined, desirable amount for transient periods of time). In this embodiment, as described in more detail above, a weighted round-robin algorithm is used to stop further transmissions on a port during a transmission cycle if that port has reached its desired rate.

The traffic shaping process iterates over the schedule repeatedly. Each iteration represents a predetermined time period known as a transmission cycle. During that period, the sum of the transmission opportunities within the schedule supports at least the aggregate bandwidth (in terms of bits per second) that the traffic shaper device is capable of transmitting or controlling. When a schedule allocates more opportunities to a virtual connection or port than are minimally necessary to ensure full service to the virtual connection or port, the virtual connection or port is "oversubscribed". One motivation for oversubscribing is to allow a virtual connection or port that was under-utilized during an early portion of the schedule iteration additional opportunities to reach its maximum rate. Thus, through oversubscription, the sum of the transmission opportunities within the schedule can support more than the aggregate bandwidth supported by the router/traffic shaper and by the collection of virtual ports. The traffic shaping process ensures that oversubscription does not lead to transmission rates that exceed maximum port rates or virtual connection peak rates.

Port work conserving is a technique for using transmission opportunities that would otherwise be unused. When a first port is not using its transmission opportunity, such as due to port blockage or lack of enqueued data, a port work conserving process offers the transmission opportunity to one or more other ports, thereby reducing the amount of un-transmitted data and allowing more data to be transmitted sooner.

The traffic shaping process generally needs to maintain steady timing. For instance, the traffic shaping process transmits to virtual ports that correspond to physical ports. The physical ports have data rates that work on timing cycles. Also, the traffic shaping process runs on transmit processors that have timing cycles of their own. Cycle adjustment features of the schedule and the port ring enable the traffic shaping process to maintain steady timing.
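The weighted round-robin port rate control described above can be sketched with a per-port deficit counter. The names are illustrative stand-ins for deficit counter 126c and its per-cycle quota.

```python
def port_may_transmit(port_entry):
    """A port whose deficit counter is exhausted has reached its desired rate
    and is skipped for the remainder of the transmission cycle."""
    return port_entry.deficit_counter > 0

def charge_port(port_entry, units_sent):
    """Decrement the counter as the port transmits; units_sent is an
    illustrative measure, e.g. cells transmitted."""
    port_entry.deficit_counter -= units_sent
```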
Priority rankings among UBR queues enable the traffic shaping process to favor higher-priority virtual connections over other virtual connections, even when none of the virtual connections has a contracted rate.

Formulating the schedule is beyond the scope of this description. A schedule is assumed to have been furnished to the router/traffic shaper. In the described embodiments, the traffic shaping process can process and enforce a supplied schedule. Potential advantages include simplifying the scheduling problem by first scheduling port rates, then scheduling virtual connection rates. The port rates are based on physical ports (at some location, whether local or remote) and are therefore unchanging for a long period of time, often on the order of months. Within each port, there are fewer virtual connections than in the router/traffic shaper overall. Thus, algorithms can schedule the next cell for a virtual connection more efficiently, due to the fact that there are fewer contending virtual connections. A second embodiment schedules unspecified-rate virtual connections on a per-virtual port basis, while scheduling specified-rate service (such as CBR, or VBR service with minimums) globally. This second embodiment has a similar advantage of simplifying the scheduling of unspecified-rate virtual connections, while also being advantageously space-efficient when scheduling specified-rate virtual connections.

Another advantage is that, in the described embodiments, this is a software implementation. A router/traffic shaper device using this approach can be modified, for instance to adjust to new algorithms or standards, without changes to the hardware. However, it should be understood that other embodiments could partially or wholly store the computing instructions in hardware in a read-only medium.

Various other well-known algorithms can be applied to priority selection 74d, such as weighted round-robin or weighted fair queuing. Accordingly, other embodiments are within the scope of the following claims.
A Ta barrier slurry for Chemical-Mechanical Polishing (CMP) during copper metallization contains an organic additive which suppresses formation of precipitates and copper staining. The organic additive is chosen from a class of compounds which form multiple strong adsorbent bonds to the surface of silica or copper, which provide a high degree of surface coverage onto the reactive species, thereby occupying potential reaction sites, and which are sized to sterically hinder the collisions between two reactant molecules which result in new bond formation. The organic additive-containing slurry can be utilized throughout the entire polish time. Alternatively, a slurry not containing the organic additive can be utilized for a first portion of the polish, and a slurry containing the organic additive or a polishing solution containing the organic additive can be utilized for a second portion of the polish.
CLAIMS:

1. A Chemical-Mechanical Polishing (CMP) method for polishing Ta barrier layers in integrated circuit metallization structures including copper and silica, said method including flowing polishing slurry containing silica abrasive, DI water, and a copper passivation agent onto a platen, inducing relative motion between said wafer and said platen and maintaining a force between said platen and said wafer, and removing said wafer from against said platen, said polishing occurring for a total polishing period of time, comprising, said polishing slurry further containing, for at least a portion of said total polishing period of time, an organic additive selected from the group consisting of: polyvinyl alcohol (PVA), PVA-poly(vinyl acetate) co-polymer, PVA-polyethylene co-polymer, sorbitol, glycerol, polyacrylamide (PAA), ethylene glycol, di(ethylene glycol), poly(ethylene glycol) (PEG), glycerol ethoxylate (GEO), dimethylsiloxane-ethylene oxide co-polymer (DMSiO-EO), polyethylene oxide surfactants, octylphenol polyethylene oxide, nonylphenol polyethylene oxide, polyoxyethylene lauryl ether, polyoxyethylene cetyl ether, perfluorinated analogs of polyethylene oxide surfactants, glycerol propoxylate (GPO), organic amines, N,N-diethylcyclohexylamine (DCA), and polyethyleneimine (PEI).

2. The method of claim 1, wherein said at least a portion of said total polishing period of time is the entire said total polishing period of time.

3. The method of claim 1, wherein said at least a portion of said total polishing period of time is substantially equal to or less than the last 10% of said total polishing period of time.

4. The method of claim 3, wherein said polishing slurry containing said organic additive is formed by Point-of-Use (POU) mixing of said organic additive with said polishing slurry containing said DI water, said silica abrasive, and said Cu passivation agent.

5. The CMP method of claim 4, wherein said organic additive comprises PEG-10,000 and said Cu passivation agent comprises 1,2,4-triazole.

6. The CMP method of claim 5, wherein said polishing slurry containing said organic additive comprises: 1.54 wt% 1,2,4-triazole; 0.5 wt% PEG-10,000; 93.6 wt% silica suspension containing 13.6 wt% SiO2; and 4.33 wt% DI water.

7. A polishing additive solution comprising: DI water; a copper passivation agent selected from the group consisting of 1,2,4-triazole, benzotriazole (BTA), imidazole, 5-methyl benzimidazole, polyaniline, indazole, and purine; and an organic additive selected from the group consisting of: polyvinyl alcohol (PVA), PVA-poly(vinyl acetate) co-polymer, PVA-polyethylene co-polymer, sorbitol, glycerol, polyacrylamide (PAA), ethylene glycol, di(ethylene glycol), poly(ethylene glycol) (PEG), glycerol ethoxylate (GEO), dimethylsiloxane-ethylene oxide co-polymer (DMSiO-EO), polyethylene oxide surfactants, octylphenol polyethylene oxide, nonylphenol polyethylene oxide, polyoxyethylene lauryl ether, polyoxyethylene cetyl ether, perfluorinated analogs of polyethylene oxide surfactants, glycerol propoxylate (GPO), organic amines, N,N-diethylcyclohexylamine (DCA), and polyethyleneimine (PEI).
8. In a Chemical-Mechanical Polishing (CMP) method for polishing Ta barrier layers in integrated circuit metallization structures including copper and silica, said method including flowing polishing slurry containing silica abrasive, DI water, and a copper passivation agent onto a platen, inducing relative motion between said wafer and said platen while maintaining a force between said platen and said wafer, and removing said wafer from against said platen, said polishing occurring for a first polishing period of time, the improvement comprising: decreasing said flow of said polishing slurry prior to said step of removing said wafer from against said platen; and flowing the polishing additive solution of claim 7 onto said platen for a second period of time while inducing relative motion between said wafer and said platen and maintaining a force between said platen and said wafer.

9. The method of claim 8, wherein the step of decreasing said flow of said slurry decreases said flow to zero.

10. The CMP method of claim 8, wherein said organic additive comprises PEG-10,000 and said copper passivation agent comprises 1,2,4-triazole.

11. The CMP method of claim 8, wherein said steps of decreasing said flow of said polishing slurry and flowing of said polishing additive solution are performed just prior to wafer de-chuck operation.

12. The CMP method of claim 11, wherein the step of decreasing said flow of said slurry decreases said flow to zero.

13. The CMP method of claim 12, wherein said organic additive comprises PEG-10,000 and said copper passivation agent comprises 1,2,4-triazole.

14. The CMP method of claim 13, wherein said polishing additive solution comprises: 3.0 wt% 1,2,4-triazole; 0.5 wt% PEG-10,000; and DI water.

15. The CMP method of claim 8, wherein said steps of decreasing said flow of said polishing slurry and flowing of said polishing additive solution are performed just prior to post-Ta CMP buff operation.

16. The CMP method of claim 15, wherein the step of decreasing said flow of said slurry decreases said flow to zero.

17. The CMP method of claim 16, wherein said organic additive comprises PEG-10,000 and said copper passivation agent comprises 1,2,4-triazole.

18. The CMP method of claim 17, wherein said polishing additive solution comprises: 2.0-3.0 wt% 1,2,4-triazole; 0.1-2.0 wt% PEG-10,000; and DI water; and wherein said post-CMP buff step utilizes 0.5-2.0 psi down force for 5-30 seconds.
PREVENTION OF PRECIPITATION DEFECTS ON COPPER INTERCONNECTS DURING CMP BY USE OF SOLUTIONS CONTAINING ORGANIC COMPOUNDS WITH SILICA ADSORPTION AND COPPER CORROSION INHIBITING PROPERTIES

Cross-reference to a Related Application

This application is a continuation-in-part of copending U.S. application Ser. No. 09/434,146, filed November 4, 1999.

Technical Field

This invention relates to the manufacture of integrated circuits, and in particular to Chemical-Mechanical Polishing of metal structures used in copper metallization.

Background Art

As integrated circuit devices shrink, with semiconductor device geometries approaching 0.18 micron minimum feature size, and as circuit speed and performance increase, copper has replaced aluminum as the preferred electrical interconnect material. The use of copper as an interconnect material in silicon integrated circuits has occurred in response to the need for lowered interconnect resistivity, good electromigration resistance, and good deposition characteristics which allow effective filling of vias and contacts. Copper metallization structures are often formed by a process known as Damascene, which is illustrated in Fig. 1. An insulating layer known as the Interlevel Dielectric (ILD) separates metal layers in a multilevel metallization structure. ILD dielectric layer 2, which may be comprised of a bottom layer 4 and a top, low dielectric constant layer 6, has regions 8 etched therein into which the metal lines will be inlaid. A barrier layer 10 is deposited, which serves to prevent diffusion of copper from the metal lines into the dielectric. This barrier layer is generally comprised of Ta or Ta compounds. A copper seed layer is then generally deposited, followed by an electroplated copper layer 14. The excess copper is then removed by a process known as Chemical-Mechanical Polishing (CMP). CMP enhances the removal of surface material over large distances and short distances by simultaneously abrading the surface while a chemical etchant selectively attacks the surface. For this purpose, CMP utilizes a polishing slurry containing both an abrasive and a chemically active component.

Typically, in copper Damascene processing, the CMP is performed in two steps. The first CMP step removes the excess copper from the wafer surface, and may also remove part or all of the underlying barrier layer 10. A second CMP step is then generally performed, with the objectives of 1) completely removing the conductive Ta layer from the dielectric surface between Cu lines, and 2) planarizing the surface to compensate for Cu dishing and erosion, illustrated in Fig. 2. To accomplish the second objective, the second CMP step must have a selectively higher polish rate of SiO2 than of Cu, thereby compensating for Cu dishing during over-polish. Of equal importance to these structural objectives is the quality of the polished surfaces, both Cu and SiO2, with respect to both surface damage/roughness and foreign materials on the surface. Post-CMP cleaning can only address removable solid materials and ionic contamination.

The preferred abrasive used in slurries for Ta barrier polishing is silica, although other abrasives such as alumina have been used. The advantages to using silica abrasive in place of the alumina abrasive commonly used in other CMP applications include: 1) increased Ta removal rate, 2) greater ability to polish the oxide dielectric film for planarization, and 3) the potential for minimizing damage to the oxide and Cu surfaces.
All of these advantages result from the high chemical reactivity of silica, resulting in a higher ratio of chemical to mechanical component of the polish than would occur using alumina abrasive. The hydrolysis of Si-O-Si bonds to Si-OH + HO-Si, and the reverse chemical process, namely, condensation of Si-OH + HO-Si to Si-O-Si + H2O, form the basis of much of the well documented chemistry of silica, as described by R. K. Iler in The Chemistry of Silica, Wiley-Interscience, New York, 1979. However, this high chemical reactivity poses difficult challenges in preventing unwanted reactions involving silica from occurring on the wafer surface.

A typical silica abrasive slurry used for Ta barrier polishing comprises 50-300 nm diameter silica particles suspended in an aqueous medium. To avoid the problem of copper corrosion during and after polish, copper corrosion inhibiting compounds such as benzotriazole or 1,2,4-triazole (hereinafter referred to as "triazole") are typically dissolved in the slurry medium, and the pH of the suspension is adjusted to a value between pH 7 and pH 10.5, which is the range empirically found to produce the lowest corrosion rates. Byproducts of the polishing process result in the slurry medium containing dissolved silica, dissolved copper, and dissolved tantalum, in addition to the formulating slurry ingredients.

In the prior art, two types of solid defects have been seen after CMP of copper features using silica slurries, and also after CMP of copper features using alumina slurries when SiO2 was present. These defects include precipitates and copper stains. The use of copper corrosion inhibiting compounds (also known as "Cu passivation agents") such as triazole compounds in the slurry has been found to greatly amplify the occurrence of these defects. The precipitated residues, which are comprised in part of conducting materials, adversely affect device yield and reliability, for example by causing shorting and/or line-to-line leakage. Residues and precipitates additionally prevent the dielectric barrier from effectively sealing the top surface of the copper line, resulting in copper diffusion into the dielectric as well as providing a surface electromigration path for copper atoms.

Disclosure of the Invention

It is an aspect of this invention to provide an improved CMP slurry for the polishing of Ta barrier layers in copper metallization during integrated circuit processing which yields a lowered incidence of silica precipitates and copper stains. It is a further aspect of this invention to provide a CMP slurry for the polishing of Ta barrier layers in copper metallization during integrated circuit processing which includes copper corrosion-inhibiting compounds such as triazole compounds, which further includes silica abrasive, and which yields a lowered incidence of silica precipitates and copper stains. It is a further aspect of this invention to provide a CMP slurry for the polishing of Ta barrier layers in copper metallization during integrated circuit processing which inhibits chemical reactions between silica, triazole, and copper.

Our invention meets these aspects by providing a CMP slurry for the polishing of Ta barrier layers underlying copper metallization which includes at least one additional slurry component which inhibits silica-triazole-copper reactions.
A set of chemical compounds has been successfully used in a CMP slurry to inhibit said reactions, including organic compounds which form hydrogen bonds to the surface of polymeric silica molecules with a high degree of surface coverage, and which also adsorb onto copper hydroxo species. Alternative embodiments are disclosed which employ the additive-containing slurry or a portion thereof at various times in the polishing process.

Brief description of the drawings

Fig. 1 illustrates a typical Damascene structure used in copper metallization systems.
Fig. 2 illustrates the dishing effect seen after copper CMP.
Fig. 3 is a drawing of an SEM photograph showing silica precipitates and copper staining following CMP of a Ta barrier layer during Damascene processing.
Fig. 4a is a drawing believed to show the bonding configuration between a PVA molecule and the silica surface.
Fig. 4b is a drawing believed to show the bonding configuration between a PAA molecule and the silica surface.
Fig. 4c is a drawing believed to show the bonding configuration between a PEG molecule and the silica surface.
Fig. 4d is a drawing believed to show the bonding configuration between a GEO molecule and the silica surface.
Fig. 4e is a drawing believed to show the bonding configuration between a DEG molecule and the silica surface.
Fig. 4f is a drawing believed to show the bonding configuration between a DMSiO-EO molecule and the silica surface.
Fig. 4g is a drawing believed to show the bonding configuration between a GPO molecule and the silica surface.
Fig. 4h is a drawing believed to show the bonding configuration between a DCA molecule and the silica surface.
Fig. 4i is a drawing believed to show the bonding configuration between a PEI molecule and the silica surface.

Modes for Carrying Out the Invention

The chemical literature describes the tendency of silica to form strong chemical bonds to the polybasic metal ions of such elements as copper and tantalum. Solutions of copper salts are known to coagulate or coprecipitate with silica at pH values greater than 5. Furthermore, the chemically-oxidized copper surfaces that remain after CMP provide ready nucleation sites for precipitation reactions to occur. The precipitated residues detected after CMP using a triazole-containing slurry comprise silica/copper hydroxide/triazole, hereinafter referred to as "silica precipitates", and copper/triazole, hereinafter referred to as "copper stains". Fig. 3 is a drawing of an SEM picture of copper lines 16 after Ta CMP, showing silica precipitates 18 and copper stains 20. These residues are chemically grown on the surfaces, and they are not readily removed during post-CMP cleaning. It is believed that similar residues will occur when using other Cu corrosion inhibiting compounds such as benzotriazole.

Our invention provides for a re-engineering of the Ta slurry chemistry and/or polish process during all or a portion of the polish step, by inclusion of an additional slurry or polishing component, in order to suppress the chemical reactions between triazole, silica, and copper which cause the formation of silica precipitates and copper stains. Inhibiting the chemical reaction between silica and copper, or between either or both of the two and triazole, has been achieved by adding one of a set of chemical species, each of said species exhibiting several characteristics. A first characteristic is that the chemical species strongly adsorbs onto the surface of silica and/or copper hydroxide.
A second characteristic of the adsorbing chemical species is the ability to provide a high degree of surface coverage onto the reactive species, thereby occupying potential reaction sites. A third characteristic which affects the degree of inhibition of the silica/copper reaction is the size of the adsorbing molecules. An optimally sized adsorbant will sterically hinder the collisions between two reactant molecules which result in new bond formation.

The additives described hereinafter will be analyzed primarily according to their interaction with the silica surface, which is comprised of silicon atoms bonded either to neutral oxygen atoms, negatively charged O- species, or to OH (hydroxyl) groups. The silica may be silica slurry particles, or it may be dissolved silica byproducts from CMP. Since it has been determined that copper stains contain copper but do not contain silica, in order to inhibit copper staining the additives must also form similar bonds to the copper surface or to copper ions in solution. The oxidized copper surface contains a combination of species including copper atoms bonding to neutral oxygen atoms or OH groups. Additionally, aqueous copper ions in solution can have hydroxyl groups replacing one or more of the water molecules bonding to the copper ions. Due to the similar configurations and bonding of surface oxygen and OH on the copper and silica surfaces, additives which adsorb onto the silica surface according to the aforementioned characteristics should exhibit like bonding behavior on the copper surface and/or the copper ions in solution.

The results included hereinafter for the various slurry additives were obtained using the inventive slurry throughout the polish process. Following the description of the various slurry additives, several alternative methods of applying the additive or additive-containing slurry to the polish sequence will be described.

A. DESCRIPTION OF SLURRY ADDITIVES

Hydrogen bonding additives

A category of chemical species which exhibits some or all of the above three characteristics comprises organic chemical substances which form multiple hydrogen bonds to the surfaces of polymeric silica molecules and of copper (hydroxo) species. If a hydrogen atom is bonded to a very electronegative atom such as oxygen, nitrogen, or fluorine, it is capable of forming another weak bond with another electronegative atom which has a pair of nonbonded electrons. This interaction, called the hydrogen bond, is a chemical bond which is weaker than covalent or ionic bonds because the dissociation energy of a hydrogen bond is only about 7 kcal/mole. However, the hydrogen bond is much stronger than the ordinary van der Waals bonds between molecules.

We have shown that several chemical species from the aforementioned category of organic chemical substances which form multiple hydrogen bonds to the surfaces of polymeric silica molecules and/or of copper (hydroxo) species can be successfully used in slurries for Ta barrier layer CMP to suppress the formation of silica precipitates and copper stains. These chemical species comprise:

1. poly(vinyl alcohol), 98% hydrolyzed
2. polyacrylamide
3. poly(ethylene glycol)
4. dimethylsiloxane-ethylene oxide co-polymer
5. glycerol propoxylate

The chemistry and testing of each of these species will be addressed separately.

1. Poly(vinyl alcohol), 98% hydrolyzed

This compound has been tested in a form having a molecular weight of 13,000-23,000, with an average molecular weight of 18,000.
The abbreviation for this compound is designated as PVA-18000. Its chemical structure is [-CH2CH(OH)-]~400. Fig. 4a illustrates what is believed to be the bonding configuration between the PVA molecule 22 and the silica surface 24. The CH2-CH bonds which form the backbone 26 of the alcohol molecule are tetrahedral rather than linear, yielding a long string-like structure which wraps repeatedly, forming into a quasi-spherical structure. The large number of protruding hydroxyl (OH) groups 28 form multiple hydrogen bonds with the surface oxygen atoms 30 on the silica particles. A large multiply-bonded complex is thereby formed which will not be likely to detach.

Four different Ta barrier slurry formulations containing PVA-18000 have been tested. Each one includes a silica abrasive manufactured by the Cabot Corporation, called Cabot SC113. Cabot SC113 is an aqueous suspension of silica containing 13 +/- 0.5 wt% silica in H2O, with a trace of KOH to adjust the pH to 10.3. The silica particle size distribution has a mean value of 204 nm with a standard deviation of 63 nm on a volume-averaged basis. Two of the slurry formulations also include a small amount of sodium dodecylbenzenesulfonate (NaDBS), an anionic surfactant which has been claimed in the literature to enhance the adsorption of PVA onto silica. In all of the slurry formulations described hereinafter, the weight percentages of compounds other than the Cabot SC113 silica suspension are noted, and the remainder of the slurry is comprised of the Cabot SC113. The control slurry as described below is the same for all tested additives. The tested slurry formulations are as follows:

Slurry F: Cabot SC113; 1,2,4-triazole (1.54 wt%); PVA-18000 (0.11 wt%); H2O (4.35 wt%)
Slurry G: Cabot SC113; 1,2,4-triazole (1.54 wt%); PVA-18000 (0.55 wt%); H2O (4.33 wt%)
Slurry 1a: Cabot SC113; 1,2,4-triazole (1.54 wt%); PVA-18000 (0.22 wt%); sodium dodecylbenzenesulfonate (0.05 wt%); H2O (4.34 wt%)
Slurry 1b: Cabot SC113; 1,2,4-triazole (1.54 wt%); PVA-18000 (0.55 wt%); sodium dodecylbenzenesulfonate (0.13 wt%); H2O (4.33 wt%)
Control slurry: Cabot SC113; 1,2,4-triazole (1.54 wt%); H2O (4.36 wt%)

The pH of each of these slurries, including the control slurry, is 8.9 +/- 0.1. All of the above compositions result in colloidal suspensions which are stable with respect to silica particle size distribution over a time period greater than two months. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results for these and all the other slurries tested are summarized in Table 1 at the end of the specification.

No silica precipitate or copper stain residues are discernable using visual (optical microscope) and SEM inspection at any wafer locations for slurries F, G, 1a, or 1b. In comparison, the control Ta barrier slurry results in heavy silica precipitate and stain residues located across the entire wafer. The same control Ta barrier slurry is used in all experiments described hereinafter.

PVA exists with average molecular weight ranging between approximately 9,000 and 186,000, and also exists in co-polymer form with both poly(vinyl acetate) and polyethylene. It is believed that these other forms of PVA will also act as precipitate inhibitors. It is believed that PVA or other polymeric alcohols with molecular weight equal to or greater than 18,000 are the most effective, but that a molecular weight of greater than 10,000 is acceptable. Concentrations of 0.1 wt% or greater are believed to be effective, possibly as low as 0.01 wt%.
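The formulation convention stated above (components other than the Cabot SC113 suspension are listed in wt%, the suspension making up the remainder) can be checked with simple arithmetic, as in the following illustrative sketch.

```python
def sc113_wt_percent(other_components_wt):
    """Return the wt% of Cabot SC113 suspension, which makes up the
    remainder of the formulation."""
    return 100.0 - sum(other_components_wt)

# Slurry G: 1.54 wt% 1,2,4-triazole, 0.55 wt% PVA-18000, 4.33 wt% DI water
print(sc113_wt_percent([1.54, 0.55, 4.33]))  # -> 93.58 wt% Cabot SC113
```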
Lower molecular weight alcohols and sugars are also appropriate candidates for suppressing silica precipitation and copper stains. A Ta slurry formulation using a lower molecular weight sugar, sorbitol, has been tested. Sorbitol, HOCH2(CHOH)4CH2OH, has a chemical structure which allows hydrogen bonding to silica through its hydroxyl groups. The tested slurry formulation is as follows:

Slurry 4b: Cabot SC113; 1,2,4-triazole (1.54 wt%); sorbitol (1.00 wt%); H2O (4.31 wt%)

The results from this slurry are summarized in Table 1. The addition of sorbitol was effective in reducing the degree of silica precipitation and copper stain found on the wafer relative to the control Ta slurry. However, some precipitation residues do remain, so that the use of sorbitol is judged to be not as effective as PVA-18000 in precipitate prevention. Many other low molecular weight, hydroxyl-containing compounds, glycerol by way of example, are believed to exhibit some degree of precipitate suppression.

2. Polyacrylamide

This compound has been tested in a first form having an average molecular weight of 10,000, and a second form having an average molecular weight of 1,500. The abbreviations for these compounds are designated as PAA-10000 and PAA-1500 respectively. Their chemical structures are [-CH2CH(CONH2)-]~141 and [-CH2CH(CONH2)-]~21 respectively. Fig. 4b illustrates what is believed to be the bonding configuration between the PAA molecule 32 and the silica surface 24. The CH2-CH bonds which form the backbone 34 of the PAA molecule are tetrahedral rather than linear, yielding a long string-like structure which wraps repeatedly, forming into a quasi-spherical structure. The PAA molecules can hydrogen bond to silica through the amide functional groups 36 in one of two modes: 1) through their amido hydrogens 38 to bridging oxygen atoms 30 on silica surface 24, or 2) through their carbonyl oxygens 40 to silanol sites 42 on silica surface 24. Multiple hydrogen bonds are formed, and a large multiply-bonded complex is thereby formed which will not be likely to detach.

Four different Ta slurry formulations containing PAA-10000 and PAA-1500 have been tested. The tested slurry formulations are as follows:

Slurry 12a: Cabot SC113; 1,2,4-triazole (1.54 wt%); PAA-10000 (0.10 wt%); H2O (4.45 wt%)
Slurry 12b: Cabot SC113; 1,2,4-triazole (1.54 wt%); PAA-10000 (1.00 wt%); H2O (5.25 wt%)
Slurry D: Cabot SC113; 1,2,4-triazole (1.54 wt%); PAA-1500 (0.10 wt%); H2O (4.45 wt%)
Slurry E: Cabot SC113; 1,2,4-triazole (1.54 wt%); PAA-1500 (1.00 wt%); H2O (5.25 wt%)

The pH of each of these slurries is 8.9 +/- 0.1. All of the above compositions result in colloidal suspensions which are stable with respect to silica size distribution over more than two months' time. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1. No silica precipitate or copper stain residues are discernable using visual and SEM inspection at any wafer locations for slurry 12b.
The wafers polished with slurries 12a, D, and E exhibit a moderate degree of silica precipitate and stain, although less than present on the wafers using the control slurry. This result indicates that both the concentration of polymer in the slurry and the weight or size of the polymer molecule are important factors in suppressing precipitate residues when using polyacrylamide. It is believed that molecular weight greater than or equal to 1,500 with concentration of 0.1 wt% or greater will be effective when using polyacrylamide as a precipitate/residue suppressant.

3. Poly(ethylene glycol)

This compound has been tested in a first form having an average molecular weight of 10,000, a second form having an average molecular weight of 1,000, and a third form having an average molecular weight of 200. The abbreviations for these compounds are designated as PEG-10000, PEG-1000, and PEG-200 respectively. Their chemical structures are H(OCH2CH2)~227OH, H(OCH2CH2)~22OH, and H(OCH2CH2)~4OH respectively. Fig. 4c illustrates what is believed to be the bonding configuration between the PEG molecule 44 and the silica surface 24. The O-CH2-CH2 bonds which form the backbone 46 of the PEG molecule are tetrahedral rather than linear, yielding a long string-like structure which wraps repeatedly, forming into a quasi-spherical structure. The PEG molecules hydrogen bond to silica through ether oxygens 48 to silanol sites 42 on silica surface 24. Multiple hydrogen bonds are formed, and a large multiply-bonded complex is thereby formed which will not be likely to detach.

Six different Ta barrier slurry formulations containing PEG-10000, PEG-1000, and PEG-200 have been tested. The tested slurry formulations are as follows:

Slurry 11a: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-10000 (0.10 wt%); H2O (4.35 wt%)
Slurry 11b: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-10000 (1.00 wt%); H2O (4.31 wt%)
Slurry 2b: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-1000 (0.10 wt%); H2O (4.35 wt%)
Slurry 2d: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-1000 (1.00 wt%); H2O (4.31 wt%)
Slurry 2a: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-200 (0.10 wt%); H2O (4.35 wt%)
Slurry 2c: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEG-200 (1.00 wt%); H2O (4.31 wt%)

The pH of each of these slurries is 8.9 +/- 0.1. All of the above compositions result in colloidal suspensions which are stable with respect to silica size distribution over greater than two months' time. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1.

No silica precipitate or copper stain residues are discernable using visual or SEM inspection at any wafer locations for slurry 11b. Slurry 11a has produced equally good results except for a statistical anomaly at one site on one wafer only. The wafers polished with slurries 2a, 2b, and 2d exhibit a minor degree of silica precipitate and stain, although much less than present on the wafers using the control slurry. The wafer polished with slurry 2c exhibits a moderate degree of silica precipitate and stain, although less than present on the control. These results indicate that PEG should be effective as a residue/precipitate inhibitor for molecular weights above 200 and for concentrations above 0.1 wt%.
Four additional Ta barrier slurry formulations containing compounds closely related to poly(ethylene glycol), namely structural isomers of poly(ethylene glycol) or low molecular weight ethylene glycol ether compounds, have been tested. Slurry 6c and slurry 6d have been formulated using glycerol ethoxylate Mn 1000, a structural isomer of PEG-1000, with a designated abbreviation of GEO-1000. Fig. 4d illustrates what is believed to be the bonding configuration between the GEO molecule 49 and the silica surface 24. The GEO molecule bonds to the silica surface similarly to the PEG molecule, with the GEO ether oxygens 50 forming hydrogen bonds to silanol sites 42 on silica surface 24. Slurry 6a and slurry 6b have been formulated using di(ethylene glycol), a low molecular weight compound with a designated abbreviation of DEG, having the molecular structure (HOCH2CH2)2O. Fig. 4e illustrates what is believed to be the bonding configuration between the DEG molecule 51 and the silica surface 24. The tested slurry formulations are as follows:

Slurry 6c: Cabot SC113; 1,2,4-triazole (1.54 wt%); GEO-1000 (0.10 wt%); H2O (4.35 wt%)
Slurry 6d: Cabot SC113; 1,2,4-triazole (1.54 wt%); GEO-1000 (1.00 wt%); H2O (4.31 wt%)
Slurry 6a: Cabot SC113; 1,2,4-triazole (1.54 wt%); DEG (0.10 wt%); H2O (4.35 wt%)
Slurry 6b: Cabot SC113; 1,2,4-triazole (1.54 wt%); DEG (1.00 wt%); H2O (4.31 wt%)

The pH of each of these slurries is 8.9 +/- 0.1. All of the above compositions result in colloidal suspensions which are stable with respect to silica size distribution over greater than two months' time. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1. Wafers polished with slurries 6a, 6b, and 6d exhibit a moderate degree of silica precipitate and stain, although less than present on the control. However, the wafer polished with slurry 6c shows little improvement in the degree of precipitation when compared to the control.

4. Dimethylsiloxane-Ethylene Oxide Co-Polymer

The abbreviation for this compound is designated as DMSiO-EO. Its chemical structure is (CH3)3SiO{SiO(CH3)[CH2CH2CH2(OCH2CH2)xOCH3]}m{SiO(CH3)2}nSi(CH3)3. Its molecular weight is in the range between 600 and 1,000. Fig. 4f illustrates what is believed to be the bonding configuration between the DMSiO-EO molecule 52 and the silica surface 24. The DMSiO-EO molecules hydrogen bond to silica through ether oxygens 54 to silanol sites 42 on silica surface 24.

DMSiO-EO is a classic surfactant molecule. 75% of the molecule's mass is comprised of polyethylene oxide branches 56, which are hydrophilic, i.e., which readily react with or dissolve in water. The remaining 25% of the molecule's mass is comprised of silicone tail 58, which is hydrophobic, i.e., which is not capable of reacting with or dissolving in water. As a result of these two components of the molecule, the complete DMSiO-EO molecule will mix with water, but will readily coat onto an available surface such as the silica surface. These surfactant characteristics of DMSiO-EO lead to a much greater adsorption onto polymeric silica and solid silica surfaces than occurs for other hydrogen bonding molecules that are not surface-active.
Consequently, beneficial effects are expected at reduced DMSiO-EO concentration levels. Two different Ta slurry formulations containing DMSiO-EO have been tested. The tested slurry formulations are as follows:

Slurry 3a: Cabot SC113; 1,2,4-triazole (1.54 wt%); DMSiO-EO (0.01 wt%); H2O (4.36 wt%)
Slurry 3b: Cabot SC113; 1,2,4-triazole (1.54 wt%); DMSiO-EO (0.10 wt%); H2O (4.35 wt%)

The pH of each of these slurries is 8.9 +/-0.1. Each of the above compositions results in a colloidal suspension which is stable with respect to silica size distribution over greater than two months' time. Using a standardized Cu Damascene process, each of the above slurries has been evaluated for 1) its polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second-step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1. No silica precipitate or copper stain residues are discernible using visual and SEM inspections at any wafer location for slurry 3b. The wafer polished with slurry 3a exhibits a very minor degree of localized silica precipitate, although much less than present on the wafers using the control slurry, and no copper staining. These results indicate that DMSiO-EO will be effective as a precipitate/residue inhibitor for concentrations of 0.01 wt% or greater. The portion of the DMSiO-EO molecule which bonds to the silica surface is the EO (ethylene oxide) portion. Therefore, its bonding is equivalent to that of the aforementioned PEG or GEO molecules. It is seen that the surfactant DMSiO-EO yielded as good or better results at 0.01 wt% than were seen for the equivalent non-surfactant PEG and GEO molecules at 0.1 wt%. It is therefore concluded that use of surfactant additives can decrease the additive concentration needed for suppression of precipitates and residues. DMSiO-EO is representative of a class of non-ionic surfactant compounds that can hydrogen bond with silica and/or copper and which therefore have the potential to suppress or prevent precipitate residues. Other such surfactants containing polyethylene oxide (PEO) include: octylphenol polyethylene oxide, nonylphenol polyethylene oxide, polyoxyethylene lauryl ether, and polyoxyethylene cetyl ether. There also exist perfluorinated analogs of these compounds. It is believed that these surfactants will act similarly to DMSiO-EO as precipitate/residue inhibitors.

5. Glycerol Propoxylate

This compound has been tested in a first form having an average molecular weight of 1500 and a second form having an average molecular weight of 260. The abbreviations for these compounds are designated as GPO-1500 and GPO-260, respectively. Their chemical structures are

CH2(OCH2CH2CH2)~8OH
CH(OCH2CH2CH2)~8OH
CH2(OCH2CH2CH2)~8OH

and

CH2OCH2CH2CH2OH
CHOCH2CH2CH2OH
CH2OCH2CH2CH2OH

respectively. Fig. 4g illustrates what is believed to be the bonding configuration between the GPO molecule 60 and the silica surface 24. The GPO molecules hydrogen bond to silica through ether oxygens 64 to silanol sites 42 on silica surface 24. Glycerol propoxylate is structurally analogous to the aforementioned ethylene glycol ether compound, glycerol ethoxylate. The additional carbon atom in each ether chain unit imparts a slightly greater hydrophobic character to the molecule than the ethylene glycol ethers. Four different Ta barrier slurry formulations containing GPO-1500 and GPO-260 have been tested.
The tested slurry formulations are as follows:

Slurry 7b: Cabot SC113; 1,2,4-triazole (1.54 wt%); GPO-1500 (0.10 wt%); H2O (4.35 wt%)
Slurry 7d: Cabot SC113; 1,2,4-triazole (1.54 wt%); GPO-1500 (1.00 wt%); H2O (4.31 wt%)
Slurry 7a: Cabot SC113; 1,2,4-triazole (1.54 wt%); GPO-260 (0.10 wt%); H2O (4.35 wt%)
Slurry 7c: Cabot SC113; 1,2,4-triazole (1.54 wt%); GPO-260 (1.00 wt%); H2O (4.31 wt%)

The pH of each of these slurries is 8.9 +/-0.1. All of the above compositions result in colloidal suspensions which are stable with respect to silica size distribution over greater than two months' time. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second-step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1. No silica precipitate or copper stain residues are discernible using visual and SEM inspection at any wafer location for slurry 7d. The wafers polished with slurries 7a, 7b, and 7c exhibit a minor degree of silica precipitate and stain, although much less than present on the wafers using the control slurry. According to these results, it is believed that GPO will be effective as a precipitate/residue inhibitor for molecular weights of 260 or above, and for concentrations of 0.1 wt% or above.

Organic Amines

Another type of Ta barrier slurry additive shares three of the characteristics of the aforementioned hydrogen-bonding organic additives which are believed to inhibit residue and precipitate formation, namely:

1. The additive chemical species strongly adsorbs onto the surface of silica and/or copper hydroxide.
2. The additive exhibits a high degree of surface coverage onto the reactive species, thereby occupying potential reaction sites.
3. The additive adsorbant molecules are of a size to sterically hinder the collisions between two reactant molecules which result in new bond formation.

A category of chemical species which exhibits the above three characteristics comprises organic amines, which form strong electrostatic, rather than hydrogen, bonds to the surfaces of polymeric silica molecules and of copper (hydroxo) species. In basic solutions silica acquires a net negative charge, due to the neutralization of the weakly acidic silanol (Si-OH) groups present on the surface. In the mildly basic pH range (7 < pH < 10), many substituted organic amines and polymeric amines are positively charged in aqueous solution, due to protonation of the amine functional groups. These compounds are known to adsorb onto silica, forming strong electrostatic bonds. A first organic amine compound known as N,N-diethylcyclohexylamine has been tested as a Ta barrier slurry additive. The abbreviation for this compound is designated as DCA. Its chemical structure is C6H11N(C2H5)2. Fig. 4h illustrates what is believed to be the bonding configuration between the DCA molecule 66 and the silica surface 24. Lone pair electrons of nitrogen atoms 68 in amine functional groups 70 are bonded to H+ 72, thereby causing the DCA molecule to become a positively charged ion. Negatively charged SiO- 74 on silica surface 24 provides an electrostatic bonding adsorption site for the DCA ion 66. Two different Ta barrier slurry formulations containing DCA have been tested.
The tested slurry formulations are as follows:

Slurry 5a: Cabot SC113; 1,2,4-triazole (1.54 wt%); DCA (0.10 wt%); H2O (4.35 wt%)
Slurry 5b: Cabot SC113; 1,2,4-triazole (1.54 wt%); DCA (1.00 wt%); H2O (4.30 wt%)

The pH of each of these slurries is 8.9 +/-0.1. The above compositions result in colloidal suspensions which are unstable with respect to silica size distribution over time. One day after preparation, both slurries exhibit moderate settling of large flocs on the bottom of the container. A third Ta barrier slurry has been formulated using a related organic amine compound, polyethyleneimine, Mn 1800, a branched polymeric organic amine. The abbreviation for this compound is designated as PEI-1800. Its chemical structure is [-NHCH2CH2-]x[-N(CH2CH2NH)CH2CH2-]y. Fig. 4i illustrates what is believed to be the bonding configuration between the PEI-1800 molecule 76 and the silica surface 24. The tested slurry formulation is as follows:

Slurry 5c: Cabot SC113; 1,2,4-triazole (1.54 wt%); PEI-1800 (0.013 wt%); H2O (4.37 wt%)

The pH of this slurry is approximately 9. The above composition results in a colloidal suspension which is unstable with respect to silica size distribution over time. Seven days after preparation, the slurry exhibits a significant amount of settling of large flocs on the bottom of the container. Using a standardized Cu Damascene process, all of the above slurries have been evaluated for 1) their polishing rates of unpatterned Cu, Ta, and SiO2 films, 2) the degree of Cu line dishing and Cu pattern erosion that results when used as a second-step Ta polish, and 3) the tendency of the slurry to produce precipitate residues on Cu features. The results are summarized in Table 1. The wafers polished with slurries 5a and 5c exhibit a moderate degree of silica precipitate and stain, although less than present on the wafers using the control slurry. The wafer polished with slurry 5b showed no improvement in the degree of precipitation when compared to the control. According to these results, it is believed that, although the use of electrostatically bonding additives such as organic amines may act to inhibit silica precipitates, the electrostatic charge associated with such additives tends to destabilize the slurry and leads to settling.

Table 1 summarizes the results from all the aforementioned tested slurries. Included in the table are the slurry compositions, an indication of whether precipitate formation was seen, and an SEM microscope defect inspection summary. Polish rates of Cu, Ta, and oxide, as well as dishing, erosion, and Cu line protrusion, are not included in the table, since the values of each were acceptable for all the slurry formulations tested.

SLURRY CODE | ADDITIVES | PRECIPITATE? | VISUAL (MICROSCOPE) INSPECTION SUMMARY
Control | None | Y | Heavy silica precipitate and stain residues across the wafer
D | 0.1% PAA-1500 | Y | Moderate ppt/stain, less than with control slurry
E | 1.0% PAA-1500 | Y | Moderate ppt/stain, less than with control slurry
F | 0.11% PVA-18000 | N | No silica precipitate or copper stain residues
G | 0.55% PVA-18000 | N | No silica precipitate or copper stain residues
1a | 0.22% PVA-18000, 0.05% NaDBS | N | No silica precipitate or copper stain residues
1b | 0.55% PVA-18000, 0.13% NaDBS | N | No silica precipitate or copper stain residues
2a | 0.1% PEG-200 | Y | Minor ppt/stain, much less than with control slurry
2b | 0.1% PEG-1000 | Y | Minor ppt/stain, much less than with control slurry
2c | 1.0% PEG-200 | Y | Moderate ppt/stain, less than with control slurry
2d | 1.0% PEG-1000 | Y | Minor ppt/stain, much less than with control slurry
3a | 0.01% DMSiO-EO | Y | Very minor ppt, much less than with control slurry
3b | 0.10% DMSiO-EO | N | No silica precipitate or copper stain residues
4b | 1.0% sorbitol | Y | Moderate ppt/stain, less than with control slurry
5a | 0.1% DCA | Y | Moderate ppt/stain, less than with control slurry
5b | 1.0% DCA | Y | No ppt/stain improvement over control slurry
5c | 0.013% PEI | Y | Moderate ppt/stain, less than with control slurry
6a | 0.1% DEG | Y | Moderate ppt/stain, less than with control slurry
6b | 1.0% DEG | Y | Moderate ppt/stain, less than with control slurry
6c | 0.1% GEO-1000 | Y | Little ppt/stain improvement over control slurry
6d | 1.0% GEO-1000 | Y | Moderate ppt/stain, less than with control slurry
7a | 0.1% GPO-260 | Y | Minor ppt/stain, much less than with control slurry
7b | 0.1% GPO-1500 | Y | Minor ppt/stain, much less than with control slurry
7c | 1.0% GPO-260 | Y | Minor ppt/stain, much less than with control slurry
7d | 1.0% GPO-1500 | N | No silica precipitate or copper stain residues
11a | 0.1% PEG-10000 | (N) | No ppt/stain except for one site on one wafer
11b | 1.0% PEG-10000 | N | No silica precipitate or copper stain residue
12a | 0.1% PAA-10000 | Y | Moderate ppt/stain, less than with control slurry
12b | 1.0% PAA-10000 | (N) | No silica precipitate or copper stain residue except for one site with very minimal ppt/stain

TABLE 1. Summary of additive results

It is seen that excellent precipitate/residue characteristics are achieved using slurries F, G, 1a, 1b, 3b, 7d, and 11b. Good results are achieved with slurries 3a, 11a, and 12b. These best results correspond to all the slurries which include the 0.11 to 0.55 wt% high molecular weight PVA-18000, the 0.01-0.10 wt% DMSiO-EO, the 1.0 wt% GPO-1500, the 0.1-1.0 wt% high molecular weight PEG-10000, and the 1.0 wt% high molecular weight PAA-10000. Comparison of the low molecular weight PAA and PEG with their corresponding high molecular weight additives clearly indicates a correlation between higher molecular weight and better residue and precipitate suppression. There are at least two possible mechanisms for the improved suppression at higher molecular weights. One likely mechanism is that the larger adsorbed additive molecules sterically hinder the collisions between the reactant slurry molecules. Another likely mechanism involves the probable kinetics of the polymer adsorption/desorption process onto silica. Higher molecular weight adsorbants having a larger number of bonding sites would tend to be more likely to remain adsorbed even if some of the hydrogen bonds were broken. The larger molecules would likely thereby have a lower frequency of desorption/adsorption and, as a result, more effectively suppress the reactions between slurry molecules. Similar arguments are believed to explain the relatively poor residue and precipitate suppression using GEO-1000, DEG, and sorbitol, all of which are lower in molecular weight than the additives which achieved the best results. Comparison of lower concentrations of PAA-10000 and GPO-1500 with higher concentrations of these additives indicates a correlation between higher additive concentration and better residue and precipitate suppression. This is believed to be due to the need for high surface coverage of the reactive molecules by the additive adsorbates. This explanation is supported by the observation that the surfactant additive DMSiO-EO is effective at lower concentrations than the equivalent non-surfactant additives.

B. ALTERNATIVE POLISHING PROCESS EMBODIMENTS

A first embodiment of the invention as described above utilizes the additive-containing slurry throughout the entire polish process. Several alternative embodiments of the invention employ the additive-containing slurry, or portions thereof, with differing methods and at differing times at the end of the polish process only. Among the advantages of the alternative embodiments are: 1) the maintenance of maximum Ta and oxide removal rates, 2) maintenance of selectivities, and 3) minimized clogging of filters. Advantages of the first embodiment include: 1) manufacturing simplicity, and 2) since oxide and Ta removal rates vary according to the concentration of the organic additive, utilizing the additive during the whole polishing process provides the ability to tailor the selectivity of the Ta removal rate to the oxide removal rate, e.g., suppressing the oxide removal rate more than the Ta rate, thereby decreasing erosion. The chemical additive which has been used to test the alternative embodiments is PEG-10,000. The Cu passivation agent used to test the alternative embodiments is 1,2,4-triazole.
It is anticipated that all of the organic additives which produced no precipitates or stains in the first embodiment will have a similar effect to the PEG, and that other Cu passivation agents, such as benzotriazole (BTA), imidazole, 5-methyl benzimidazole, polyaniline, indazole, and purine, can be combined with any of the aforementioned organic additives to produce a similar effect.

1. 1,2,4-triazole and PEG use during wafer de-chuck operation

The wafer de-chuck operation to facilitate removal of the wafer from the polishing pad is done after the polishing step is substantially complete. On the Mirra polisher from Applied Materials, which was used to obtain the data shown here, the wafer de-chuck operation is a 5-10 second operation in which vacuum is used to create suction cups in the membrane portion of the carrier which contacts the wafer. These suction cups are utilized to pick up the wafer from the polishing pad. During the de-chuck operation, there is no applied downward pressure on the wafer, but rotation continues. The weight of the carrier on the wafer results in a downward force on the wafer of approximately 0.5 psi, compared with 2-4 psi during polishing. Prior art methods comprise cessation of slurry flow, with DI water being dispensed onto the polish platen during the de-chuck operation. In a second embodiment of our invention, Ta CMP is completed using a slurry equivalent to the abovementioned control slurry, comprising triazole, H2O, and a silica abrasive such as Cabot SCE or SC113 or Silica Emulsion ER80500 by Arch Chemical, Phoenix, AZ. During the de-chuck operation, following decrease or cessation of slurry flow, a mixture of 3.0% 1,2,4-triazole and 0.5% PEG in DI water is dispensed onto the polish platen instead of just DI water. Results using this procedure show no visible Cu-silica precipitate formation or copper staining, in contrast to wafers polished using the prior art de-chuck operation with DI water alone. This method can be utilized with other polisher apparatus by dispensing the triazole-PEG solution onto the polish pad during the wafer pick-up after Ta CMP. It is anticipated that all of the organic additives which produced no precipitates or stains in the first embodiment will have a similar effect to the PEG in this second embodiment.

2. POU mixing of organic additives into slurry at the end of the polish cycle

A third embodiment of our invention utilizes Point-of-Use (POU) mixing of PEG-10,000 or any other of the aforementioned organic additives with the control slurry, at the end of the polish cycle only. POU is a method of mixing slurry components whereby components are dispensed from separate containers and mixed together in real time, close to dispense onto the platen. In this embodiment of the invention, for 90% of the polish time the aforementioned control Ta slurry or its equivalent is used without the addition of the organic additive, followed by POU mixing of the 0.5% PEG additive (or other additive as described above) with the control slurry for the last 10% of the polish time. Results using this procedure show no visible Cu-silica precipitate formation or copper staining.
3. Organic additive used in post Ta CMP buff step

A fourth embodiment of our invention utilizes a solution of PEG-10,000 (or other organic additive as described above) and 1,2,4-triazole (or other copper passivation agent) in DI water for a post Ta CMP buff step. Prior art has utilized a 20 second post-CMP buff step using a soft pad (polytex supreme, by way of example) while applying a small amount (i.e., 1-2.5 psi) of pressure following tungsten CMP. This prior art buff method, when applied after copper CMP using prior art slurries not containing organic additives, results in the precipitate and copper staining described above. In tests of this fourth embodiment of our invention, decreasing or ceasing the slurry and introducing a mixture of 0.1-2.0% PEG-10,000 and 2-3% 1,2,4-triazole in DI water for 5-30 seconds with 0.5-2.0 psi pressure during the post Ta CMP buff has prevented the formation of precipitate and copper staining on the Cu surface. This approach has been shown to have no substantial effect on polish rates and selectivities, and maintains the oxide polish rate for a high planarization efficiency. Furthermore, using this method, an improvement of 30-40% in copper and oxide surface roughness, measured by Atomic Force Microscopy (AFM), has been achieved over prior methods.

Industrial Applicability

The addition to Ta CMP slurries of certain organic chemical substances which form multiple hydrogen bonds with the surfaces of polymeric silica molecules and/or of copper (hydroxo) species has been shown to greatly suppress the formation of silica precipitates and copper stains. Use of the organic chemical substances in a polishing additive solution during the end portion of the polish cycle has shown similar effects. The elimination or substantial reduction of these defects in the copper metallization lines will result in improved reliability. It is not intended that our invention be restricted to the exact embodiments described herein. Other chemical substances than those listed, but which share the property of forming multiple hydrogen bonds with the surfaces of polymeric silica molecules and/or of copper (hydroxo) species, may be used without altering the inventive concept. These additives may also be used in Ta barrier slurries for copper CMP which use other abrasives such as alumina in place of silica, since the presence of dissolved SiO2 CMP byproducts in the slurry medium can also result in precipitates and copper staining. The scope of the invention should be construed in view of the claims.

WITH THIS IN MIND, WE CLAIM: |
A bit-cell includes a plurality of bridge structures and a driver including an input port and an output port. The input port is connected to each of the plurality of bridge structures and at most one of the plurality of bridge structures is connected to a signal source. A method includes removing a conductive element in a first particular conductive layer from a first bridge structure and adding a conductive element in the first particular conductive layer to connect a second bridge structure to a first signal source. |
What is claimed is:

1. A bit-cell comprising: a plurality of bridge structures; and a driver including an input port and an output port, the input port connected to each of the plurality of bridge structures, wherein at most one of the plurality of bridge structures is connected to a signal source.

2. The bit-cell of claim 1, wherein each of the plurality of bridge structures includes a first conductive stack connected to a second conductive stack by a conductive beam.

3. The bit-cell of claim 2, wherein the conductive beam comprises a metal.

4. The bit-cell of claim 2, wherein the first conductive stack includes a gap on a particular layer.

5. The bit-cell of claim 4, wherein at least one of the plurality of bridge structures includes a connection on the particular layer to a signal source.

6. The bit-cell of claim 1, wherein the driver comprises an inverter.

7. The bit-cell of claim 1, wherein the signal source comprises a power source.

8. A communication system comprising: a substrate; a communication circuit formed on the substrate and coupled to an antenna; and an identification register formed on the substrate and including a plurality of bit-cells, wherein each of the plurality of bit-cells can be changed during manufacturing of the communication circuit by changing only one metallization mask.

9. The communication system of claim 8, wherein at least one of the plurality of bit-cells comprises: a plurality of bridge structures formed on the substrate; and a driver formed on the substrate, the driver including an input port and an output port, the input port connected to each of the plurality of bridge structures, wherein at most one of the plurality of bridge structures is connected to a signal source.

10. The communication system of claim 9, wherein the substrate comprises a semiconductor.

11. The communication system of claim 10, wherein each of the plurality of bridge structures includes a first conductive stack connected to a second conductive stack by a conductive beam.

12. The communication system of claim 11, wherein the first conductive stack includes a plurality of conductive elements.

13. The communication system of claim 12, wherein each of the plurality of conductive elements is separated from adjacent conductive elements by a dielectric and connected to adjacent conductive elements by a via.

14. An interconnect comprising: a first conductive bridge structure formed on a substrate, the first conductive bridge structure including each of a plurality of metallization layers included in an integrated circuit, the first conductive bridge structure having a proximal end and a distal end, and the first conductive bridge structure forming a conductive path between the proximal end and the distal end; and a second conductive bridge structure formed on the substrate, the second conductive bridge structure including each of the plurality of metallization layers, the second conductive bridge structure having a proximal end and a distal end, the proximal end of the first bridge structure connected to the proximal end of the second bridge structure, the second conductive bridge structure forming a conductive path between the proximal end of the second bridge structure and the distal end of the second bridge structure, and the distal end of the first conductive bridge structure and the distal end of the second bridge structure being unconnected.
15. The interconnect of claim 14, further comprising a signal source connected to the distal end of the first bridge structure.

16. The interconnect of claim 15, wherein the signal source comprises a logic signal.

17. The interconnect of claim 14, wherein the distal end of the second conductive bridge structure is adjacent to a first power source contact.

18. The interconnect of claim 17, wherein the first power source contact comprises a conductive stack.

19. A method comprising: removing a conductive element in a first particular conductive layer from a first bridge structure; and adding a conductive element in the first particular conductive layer to connect a second bridge structure to a first signal source.

20. The method of claim 19, wherein removing the conductive element in the first particular conductive layer from the first bridge structure comprises removing the conductive element during fabrication of the first bridge structure by editing a metallization mask for the particular conductive layer.

21. The method of claim 20, wherein adding the conductive element in the first particular conductive layer to connect the second bridge structure to the first signal source comprises adding the conductive element by editing the metallization mask.

22. The method of claim 21, further comprising removing a conductive element in a second particular conductive layer in the second bridge structure.

23. The method of claim 22, further comprising adding a conductive element in the second particular conductive layer to connect a third bridge structure to a second signal source.

24. A computer system comprising: a processor; and a die including an identification register having a plurality of conductive bridge structures, the identification register coupled to the processor.

25. The computer system of claim 24, wherein the processor comprises a microprocessor.

26. The computer system of claim 25, wherein at least one of the plurality of conductive bridge structures includes a conductive stack having a gap.

27. A method comprising: providing an identification register on a die including a circuit having a plurality of metallization layers; and changing only one metallization mask to modify the circuit and the identification register.

28. The method of claim 27, wherein providing the identification register on the die including the circuit comprises providing a bit-cell including a plurality of conductive bridge structures.

29. The method of claim 27, wherein providing the identification register on the die including the circuit comprises providing a plurality of bit-cells, each of the plurality of bit-cells including a plurality of conductive bridge structures.

30. An apparatus comprising: logic formed on a die; and an information storage structure coupled to the logic, the information storage structure including one or more bit-cells, each of the one or more bit-cells including a plurality of conductive bridge structures.

31. The apparatus of claim 30, wherein the information storage structure includes one or more microcode instructions.

32. The apparatus of claim 30, wherein the logic comprises a processor. |
BIT-CELL AND METHOD FOR PROGRAMMING

Field

This invention relates to integrated circuits and, more particularly, to bit-cells used in integrated circuits.

Background

The bit-cells used in revision identification registers to identify a revision level of an integrated circuit are often synthesized, placed, and routed using automated tools. Often, multiple metal layers must be modified to implement these automatically generated revision identification registers. It is usually not possible to confine the modifications of the bit-cells in automatically generated revision identification registers to a single metal layer. Hence, even if a logic change to an integrated circuit only requires modification of a single metal layer, the corresponding changes to the revision identification registers may require changing more than one metal layer, which increases the cost of the change.

Brief Description of the Drawings

Fig. 1A is an illustration of a bit-cell including a plurality of bridge structures in accordance with some embodiments of the present invention.

Fig. 1B is an illustration of one of the plurality of bridge structures shown in Fig. 1A in accordance with some embodiments of the present invention.

Fig. 1C is a cross-sectional view, taken along the section line X, of the conductive stack, shown in Fig. 1B, illustrating the relationship between the coupling structure and two adjacent conductive elements in accordance with some embodiments of the present invention.

Fig. 1D is an illustration of the bridge structure, shown in Fig. 1B, in which the conductive stack, shown in Fig. 1B, is replaced with a conductive stack that has a gap in accordance with some embodiments of the present invention.

Fig. 1E is an illustration of a conductive stack and a conductive element that connect a signal source to the bridge structure shown in Fig. 1A in accordance with some embodiments of the present invention.

Fig. 1F is a schematic diagram of the driver shown in Fig. 1A in accordance with some embodiments of the present invention.

Fig. 2 is a block diagram of a communication system including a plurality of the bit-cells shown in Fig. 1A in accordance with some embodiments of the present invention.

Fig. 3 is an illustration of an interconnect including a first conductive bridge structure and a second conductive bridge structure, such as the conductive bridge structure shown in Fig. 1B, in accordance with some embodiments of the present invention.

Fig. 4 is a flow diagram of a method for modifying the bit-cell shown in Fig. 1A in accordance with some embodiments of the present invention.

Fig. 5 is a block diagram of a computer system including a processor and a die including an identification register, shown in Fig. 2, having a plurality of conductive bridge structures, shown in Fig. 1B.

Fig. 6 is a block diagram of an apparatus including an information storage structure including one or more bit-cells, shown in Fig. 1A, and logic formed on a substrate in accordance with some embodiments of the present invention.

Description

In the following description of some embodiments of the present invention, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments of the present invention which may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views.
These embodiments are described in sufficient detail to enable those skilled in the art to practice the present invention. Other embodiments may be utilized, and structural, logical, and electrical changes may be made, without departing from the scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.

Fig. 1A is an illustration of a bit-cell 100 including a plurality of bridge structures 102, 103, 104, 105, and 106 in accordance with some embodiments of the present invention. In addition to the plurality of bridge structures 102, 103, 104, 105, and 106, the bit-cell 100 includes a driver 108 having an input port 110 and an output port 112, and a signal source 114. The signal source 114 is connected to the bridge structure 102. Each of the plurality of bridge structures 102, 103, 104, 105, and 106 is connected to the input port 110 of the driver 108.

In operation, the signal source 114 provides a signal to the bridge structure 102. The bridge structure 102 provides a conductive path between the signal source 114 and the input port 110 of the driver 108. The driver 108 processes the signal and provides a processed signal at the output port 112. The bridge structures 103, 104, 105, and 106 are not connected to a signal source, so the bridge structures 103, 104, 105, and 106 do not provide a signal to the input port 110 of the driver 108. If the signal source 114 is disconnected from the bridge structure 102, then a signal source (not shown) can be connected to one of the bridge structures 103, 104, 105, or 106 to provide a signal to the driver 108.

Fig. 1B is an illustration of one of the plurality of bridge structures 102, 103, 104, 105, and 106 shown in Fig. 1A in accordance with some embodiments of the present invention. The bridge structure 102 includes conductive stacks 116 and 118. The conductive stack 116 is connected to the conductive stack 118 by a conductive beam 120. The conductive stack 116 includes a plurality of conductive elements 122, 123, 124, 125, and 126. The conductive stack 118 includes a plurality of conductive elements 128, 129, 130, 131, and 132. Each of the conductive elements 122, 123, 124, 125, and 126, and each of the conductive elements 128, 129, 130, 131, and 132, is connected to adjacent conductive elements or to the conductive beam 120 by a coupling structure 134.

Fig. 1C is a cross-sectional view, taken along the section line X, of the conductive stack 116, shown in Fig. 1B, illustrating the relationship between the coupling structure 134 and two adjacent conductive elements 124 and 125 in accordance with some embodiments of the present invention. The coupling structure 134 includes a dielectric 136 and a via 138. As can be seen in Fig. 1C, the via 138 is not centered within the coupling structure 134. Rather, the via 138 is located on one side of the coupling structure 134, and the dielectric 136 is located on the other side. The via 138 and the dielectric 136 swap sides in an adjacent coupling structure 134. The dielectric 136 is a non-conductor of electronic charge. In some embodiments, the dielectric 136 is silicon dioxide. The via 138 is a conductor of electronic charge. In some embodiments the via is a metal.
Exemplary metals suitable for use in connection with the fabrication of the via 138 include aluminum, copper, tungsten, and alloys of aluminum, copper, and tungsten. In some embodiments, the via is polysilicon. Referring again to Fig. 1B, the conductive beam 120, each of the plurality of conductive elements 122, 123, 124, 125, and 126, and each of the plurality of conductive elements 128, 129, 130, 131, and 132 are formed from a conductive material. In some embodiments, the conductive beam 120, each of the plurality of conductive elements 122, 123, 124, 125, and 126, and each of the plurality of conductive elements 128, 129, 130, 131, and 132 is formed from a metal. Exemplary metals suitable for use in connection with the fabrication of the conductive beam 120, the plurality of conductive elements 122, 123, 124, 125, and 126, and each of the plurality of conductive elements 128, 129, 130, 131, and 132 include aluminum, tungsten, and copper, and alloys of aluminum, tungsten, and copper. Each of the plurality of conductive elements 122, 123, 124, 125, and 126 and each of the plurality of conductive elements 128, 129, 130, 131, and 132 is connected to one or more adjacent elements. The conductive beam 120 is connected to conductive elements 122 and 132.

Fig. 1D is an illustration of the bridge structure 102, shown in Fig. 1B, in which the conductive stack 116, shown in Fig. 1B, is replaced with a conductive stack 140 that has a gap 142 in accordance with some embodiments of the present invention. The bridge structure 102 shown in Fig. 1D includes the conductive stack 118, the conductive beam 120, and the conductive stack 140. The conductive beam 120 connects the conductive stack 140 to the conductive stack 118. The conductive stack 140 includes the conductive elements 122, 123, 125, and 126 included in the conductive stack 116, shown in Fig. 1B; however, the conductive stack 140 does not include the conductive element 124 included in the conductive stack 116. The conductive stack 140 includes the gap 142 in place of the conductive element 124 (shown in Fig. 1B) of the conductive stack 116. In some embodiments, the bridge structure 102 is formed using a six-layer metallization process. The conductive element 126, in a six-layer metallization process, is included in the first metallization layer, and the conductive beam 120 is included in the sixth metallization layer. Each metallization layer in a six-layer metallization process is defined by a mask. The gap 142 is included on metallization layer three, and the mask used to define the conductive element 124 is modified to define the gap 142 in the conductive stack 140.

Fig. 1E is an illustration of a conductive stack 144 and a conductive element 146 that connect a signal source 148 to the bridge structure 103, shown in Fig. 1A, in accordance with some embodiments of the present invention. The conductive stack 144 includes a plurality of conductive elements 147, 148, 149, 150, 151, and 152. The materials and methods used in the fabrication of the conductive stacks 116 and 118 shown in Fig. 1B and described above are suitable for use in connection with the fabrication of the conductive stack 144. The materials and methods used in the fabrication of the plurality of conductive elements 122, 123, 124, 125, and 126 shown in Fig. 1B and described above are suitable for use in connection with the fabrication of the plurality of conductive elements 147, 148, 149, 150, 151, and 152.
The conductive element 146 connects the conductive stack 144 to the bridge structure 103. The materials and methods used in the fabrication of the plurality of conductive elements 122, 123, 124, 125, and 126 (shown in Fig. 1B) and described above are suitable for use in connection with the fabrication of the conductive element 146. The conductive element 146 is formed on the third metallization layer and connects the conductive element 150 of the conductive stack 144 to the conductive element 124 of the conductive stack 116 in the bridge structure 103. The conductive element 146 is defined in the metallization layer three mask.

Fig. 1F is a schematic diagram of the driver 108 shown in Fig. 1A in accordance with some embodiments of the present invention. The driver 108 is not limited to a particular type of circuit, a particular technology, or a particular power level. The driver 108 is an inverter having the input port 110 and the output port 112. Technologies suitable for use in the fabrication of the driver 108 include semiconductor technologies, such as silicon, germanium, and gallium arsenide. The driver 108 is not limited to processing a particular type of signal. Exemplary types of signals suitable for processing by the driver 108 include logic signals, such as digital signals, and power signals, such as power source signals.

Referring again to Figs. 1A, 1B, 1D, and 1E, in the bit-cell 100 a change to a particular layer (the third metallization layer in this embodiment) can change the signal provided at the output port 112 of the driver 108. The change can be accomplished by editing only a single mask. The change includes removing the conductive element 124 from the conductive bridge 102 (thereby disconnecting the signal source 114 from the driver 108) and adding the conductive element 146 between the conductive stack 144 and the conductive bridge 103 (thereby connecting the signal source 148 to the driver 108). Thus, if changing only a single mask level is sufficient to update an integrated circuit, then a change to the bit-cell 100 on the same mask level is sufficient to update the revision level (represented by the bit-cell 100) of the integrated circuit.
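As an illustration only, this single-mask programming mechanism can be captured in a small behavioral model. The following C sketch is not part of the disclosed circuit: it models each bridge structure as a boolean connection flag, the inverting driver 108 as logical negation, and a mask edit as clearing one flag and setting another. All identifiers (bitcell_t, mask_edit, read_id_register) and the treatment of a floating driver input are assumptions made for this sketch.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_BRIDGES 5  /* mirrors bridge structures 102-106 of Fig. 1A */

/* Behavioral model of one bit-cell: connected[i] is true when bridge i
 * has a conductive element tying it to the signal source on the
 * programmable (third) metallization layer. At most one may be true. */
typedef struct {
    bool connected[NUM_BRIDGES];
    bool source_value;          /* logic level driven by the source */
} bitcell_t;

/* The driver is an inverter: its output is the complement of the signal
 * arriving on whichever bridge is strapped to a source. */
static bool bitcell_output(const bitcell_t *c)
{
    for (int i = 0; i < NUM_BRIDGES; i++)
        if (c->connected[i])
            return !c->source_value;
    return true;  /* no bridge connected: treat the floating input as 0 */
}

/* A "mask edit" on one layer: remove the conductive element from one
 * bridge and add one connecting another bridge to a (new) source. */
static void mask_edit(bitcell_t *c, int remove_bridge, int add_bridge,
                      bool new_source_value)
{
    c->connected[remove_bridge] = false;
    c->connected[add_bridge]    = true;
    c->source_value             = new_source_value;
}

/* A revision identification register is then just a row of bit-cells;
 * bumping the revision code is one mask_edit() per affected cell. */
static unsigned read_id_register(const bitcell_t cells[], int n)
{
    unsigned id = 0;
    for (int i = 0; i < n; i++)
        id |= (unsigned)bitcell_output(&cells[i]) << i;
    return id;
}

int main(void)
{
    bitcell_t cell = { .connected = { true }, .source_value = true };
    printf("before edit: %d\n", bitcell_output(&cell)); /* !1 -> 0 */
    mask_edit(&cell, 0, 1, false);  /* reroute, cf. elements 124/146 */
    printf("after edit:  %d\n", bitcell_output(&cell)); /* !0 -> 1 */
    printf("register:    %u\n", read_id_register(&cell, 1));
    return 0;
}
```

Under these assumptions, changing the stored bit requires touching only the one data structure that stands in for the single metallization mask, which is the software analog of the single-layer edit described above.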
Fig. 2 is a block diagram of a communication system 200 including a plurality of the bit-cells 100 shown in Fig. 1A in accordance with some embodiments of the present invention. The communication system 200 includes a substrate 202, a communication circuit 204, and an identification register 206. The communication circuit 204 and the identification register 206 are formed on the substrate 202. The communication circuit 204 is coupled to an antenna 208. The identification register 206 includes the plurality of bit-cells 100. Each of the plurality of bit-cells 100 can be changed during manufacturing of the communication circuit 204 by changing only one metallization mask. The plurality of bit-cells 100 includes a plurality of bridge structures 102, 103, 104, 105, and 106 (shown in Fig. 1A) formed on the substrate 202. The bridge structures 102, 103, 104, 105, and 106 are formed from the metallization layers included in the fabrication of the communication circuit 204. The substrate 202 is not limited to a particular material. Exemplary substrate 202 materials suitable for use in connection with the fabrication of the communication circuit 204 include semiconductors, such as silicon, germanium, and gallium arsenide.

In operation, the identification register 206 can provide version information to the communication circuit 204. The communication circuit 204 is coupled to the antenna 208 to transmit and receive information.

Fig. 3 is an illustration of an interconnect 300 including a first conductive bridge structure 302 and a second conductive bridge structure 304, such as the conductive bridge structure 102 shown in Fig. 1B, in accordance with some embodiments of the present invention. The first and second conductive bridge structures 302 and 304 are formed on a substrate 306. The first conductive bridge structure 302 includes a proximal end 308 and a distal end 310. The second conductive bridge structure 304 includes a proximal end 312 and a distal end 314. The proximal end 308 of the first conductive bridge structure 302 is connected to the proximal end 312 of the second conductive bridge structure 304. The distal end 310 of the first conductive bridge structure 302 and the distal end 314 of the second conductive bridge structure 304 are unconnected. In some embodiments, the distal end 314 of the second conductive bridge structure 304 is adjacent to a first power source contact 316. In some embodiments, a signal source 318, such as a logical signal source, is connected to the distal end 310 of the first bridge structure 302. In some embodiments, the first power source contact 316 comprises a conductive stack, such as the conductive stack 144 shown in Fig. 1E.

Fig. 4 is a flow diagram of a method 400 for modifying the bit-cell 100 shown in Fig. 1A in accordance with some embodiments of the present invention. The method 400 includes removing a conductive element in a first particular conductive layer from a first bridge structure (block 402), and adding a conductive element in the first particular conductive layer to connect a second bridge structure to a first signal source (block 404). In some embodiments of the method 400, removing the conductive element in the first particular conductive layer from the first bridge structure (block 402) includes removing the conductive element during fabrication of the first bridge structure by editing a metallization mask for the particular conductive layer. In some embodiments of the method 400, adding the conductive element in the first particular conductive layer to connect the second bridge structure to the first signal source (block 404) includes adding the conductive element by editing the metallization mask. In some embodiments, the method 400 further includes removing a conductive element in a second particular conductive layer in the second bridge structure. In some embodiments, the method 400 further includes adding a conductive element in the second particular conductive layer to connect a third bridge structure to a second signal source.

Fig. 5 is a block diagram of a computer system 500 including a processor 502 and a die 504 including an identification register 206, shown in Fig. 2, having a plurality of conductive bridge structures 102, shown in Fig. 1B. The identification register 206 is coupled to the processor 502. In some embodiments, the processor 502 comprises a microprocessor. In some embodiments, at least one of the plurality of conductive bridge structures 102 includes a conductive stack 140 (shown in Fig. 1D) having a gap.
Fig. 6 is a block diagram of an apparatus 600 including an information storage structure 602 including one or more bit-cells 100, shown in Fig. 1A, and logic 604 formed on a substrate 606 in accordance with some embodiments of the present invention. The information storage structure 602 functions as a read-only memory coupled to the logic 604 forming a processor core, a microcontroller, or a microprocessor. A read-only memory can contain microcode instructions suitable for execution by the logic 604 or data for processing by the logic 604. When stored in the information storage structure 602, microcode instructions or data can be changed by editing a single metallization mask. Thus, if the logic 604 requires a change, for example, on metallization level three, and the microcode instructions or data also require a change, then the change to the microcode instructions or data can also be made by changing only metallization level three. Exemplary materials suitable for use in connection with the fabrication of the substrate 606 include semiconductors, such as silicon, germanium, or gallium arsenide.

Although specific embodiments have been described and illustrated herein, it will be appreciated by those skilled in the art, having the benefit of the present disclosure, that any arrangement which is intended to achieve the same purpose may be substituted for a specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof. |
A hierarchy of multiplexers is provided to generate functions of more inputs than the lookup table can handle. For example, a lookup table having 16 memory cells can generate functions of four input signals. By combining the outputs of two lookup tables in a multiplexer (F5) controlled by a fifth input signal, any function of five input signals can be generated. Using a sixth signal to select between the outputs of two such F5 multiplexers allows any function of six input signals to be generated, and so forth. In one embodiment, a configurable logic block (CLB) includes four slices, each having two four-input lookup tables (a total of eight lookup tables). The multiplexer hierarchy allows for all functions of eight input signals to be generated by selecting the output signal of one of the 16 lookup tables in a pair of CLBs. In addition to the eight lookup tables that generate functions of four input signals, the CLB includes four F5 multiplexers, where each F5 multiplexer receives input signals from two lookup tables and can generate all functions of five input signals when the two lookup tables receive the same four input signals and the F5 multiplexer is controlled by the fifth input signal. The CLB also includes two F6 multiplexers where each F6 multiplexer receives input signals from two of the F5 multiplexers. The CLB further includes an F7 multiplexer which receives the two F6 signals. The CLB also includes an F8 multiplexer which receives the F7 multiplexer output signal and an F7 multiplexer output signal from an adjacent CLB. |
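As an illustration of the selection arithmetic behind the F5/F6/F7/F8 hierarchy just described, the following C sketch models it in software. It is a behavioral sketch only, not the circuit itself; the function names (lut4, mux2, func8) and the packing of the eight input signals into a single 8-bit value are assumptions made for the example.

```c
#include <stdint.h>
#include <stdbool.h>

/* A 4-input lookup table is 16 configuration bits; the four inputs
 * form an address that selects one of those bits. */
static bool lut4(uint16_t config, unsigned addr4)
{
    return (config >> (addr4 & 0xF)) & 1;
}

/* Each wide-function multiplexer (F5, F6, F7, F8) picks one of two
 * narrower results under control of the next input signal. */
static bool mux2(bool a, bool b, bool sel)
{
    return sel ? b : a;
}

/* Any function of 8 inputs from sixteen 4-input lookup tables: inputs
 * 0-3 address all 16 tables in parallel, and inputs 4-7 steer the
 * F5/F6/F7/F8 multiplexer tree down to a single output. */
static bool func8(const uint16_t cfg[16], uint8_t in)
{
    bool f4[16], f5[8], f6[4], f7[2];
    for (int i = 0; i < 16; i++) f4[i] = lut4(cfg[i], in & 0xF);
    for (int i = 0; i < 8; i++)  f5[i] = mux2(f4[2*i], f4[2*i+1], (in >> 4) & 1); /* F5 */
    for (int i = 0; i < 4; i++)  f6[i] = mux2(f5[2*i], f5[2*i+1], (in >> 5) & 1); /* F6 */
    for (int i = 0; i < 2; i++)  f7[i] = mux2(f6[2*i], f6[2*i+1], (in >> 6) & 1); /* F7 */
    return mux2(f7[0], f7[1], (in >> 7) & 1);                                     /* F8 */
}
```

Under these assumptions, cfg[] holds a 256-entry truth table split into sixteen 16-bit slices, and func8(cfg, x) simply returns entry x, which illustrates why a tree of F5 through F8 multiplexers over sixteen lookup tables can realize any function of eight inputs.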
We claim:

1. A configurable logic element (CLE) slice for a field programmable gate array (FPGA) comprising: a first function generator having n input terminals and one output terminal; a second function generator having n input terminals and one output terminal; a first wide function multiplexer having input terminals coupled to the output terminals of the first and second function generators; and a second wide function multiplexer having two of its input terminals coupled to output terminals of wide function multiplexers external to the CLE slice.

2. A configurable logic block (CLB) comprising a plurality of CLE slices, each having a plurality of function generators, a first multiplexer coupled to the function generators, and a second multiplexer, the CLE slices further comprising: a plurality of first CLE slices, each having a second multiplexer coupled to a pair of first multiplexers; and a second CLE slice having a second multiplexer connected to a pair of second multiplexers in the first CLE slices.

3. The configurable logic block of claim 2, further comprising: a third CLE slice having a second multiplexer connected to the second multiplexer of the second CLE slice and a second multiplexer external to the CLB.

4. A field programmable gate array (FPGA) comprising: a plurality of configurable logic blocks (CLBs) arranged in a column, wherein each of the CLBs includes a first multiplexer configurable to provide an output signal that is any function of N input signals, and a second multiplexer configurable to provide an output signal that is any function of N+1 input signals, wherein the second multiplexer in each CLB has a first input terminal coupled to an output terminal of the first multiplexer in the CLB, and a second input terminal coupled to an output terminal of a first multiplexer in an adjacent CLB.

5. A configurable logic block (CLB) comprising: a first CLE slice having a plurality of function generators coupled to a first multiplexer, and a second multiplexer; a second CLE slice having a plurality of function generators coupled to a third multiplexer, and a fourth multiplexer; a third CLE slice having a plurality of function generators coupled to a fifth multiplexer, and a sixth multiplexer; a fourth CLE slice having a plurality of function generators coupled to a seventh multiplexer, and an eighth multiplexer; interconnect circuitry coupling output terminals of the first and third multiplexers to input terminals of the second multiplexer; interconnect circuitry coupling output terminals of the fifth and seventh multiplexers to input terminals of the sixth multiplexer; interconnect circuitry coupling output terminals of the second and sixth multiplexers to input terminals of the fourth multiplexer; interconnect circuitry coupling an output terminal of the fourth multiplexer to an input terminal of the eighth multiplexer, and to an input terminal of a ninth multiplexer in a first adjacent CLB; and interconnect circuitry coupling an input terminal of the eighth multiplexer to an output terminal of a tenth multiplexer in a second adjacent CLB.
6. A configurable logic element (CLE) slice comprising: a first function generator configurable to operate as a shift register; a second function generator configurable to operate as a shift register; a first wide function multiplexer coupled to output terminals of the first and second function generators; a second wide function multiplexer; first means for routing a first signal to the first function generator as a shift input signal and to the first wide function multiplexer as a control signal; and second means for routing a second signal to the second function generator as a shift input signal and to the second wide function multiplexer as a control signal.

7. A CLE slice as in claim 6, further comprising: a first carry multiplexer controlled by the first function generator; and a circuit for routing the first signal to an input terminal of the first carry multiplexer.

8. A CLE slice as in claim 7, further comprising: a second carry multiplexer controlled by the second function generator; and a circuit for routing the second signal to an input terminal of the second carry multiplexer; wherein the second carry multiplexer provides an output signal that is an input signal to the first carry multiplexer.

9. A CLE slice as in claim 7, wherein the circuit for routing the first signal to an input terminal of the first carry multiplexer comprises a multiplexer that receives the first signal as one input signal and receives an input signal to the first function generator as another input signal.

10. A CLE as in claim 6, wherein the first means further comprises a multiplexer for selecting between routing the first signal to the first function generator and routing an output signal from an adjacent function generator to the first function generator. |
FIELD OF THE INVENTION

The present invention relates to an architecture for enabling random access memory (RAM) structures in configurable logic blocks (CLBs) of a field programmable gate array (FPGA).

BACKGROUND OF THE INVENTION

Xilinx, Inc., the assignee of the present application, manufactures FPGAs, the complexity of which continues to increase. Freeman in U.S. Pat. No. Reissue 34,363, incorporated herein by reference, which is a re-issue of original U.S. Pat. No. 4,870,302, describes the first FPGA. An FPGA is an integrated circuit chip which includes a plurality of programmable input/output pads, a plurality of configurable logic elements, and a programmable interconnect structure for interconnecting the plurality of logic elements and pads. Each logic element implements a logic function of the n inputs to the logic element according to how the logic element has been configured. Logic functions may use all n inputs to the logic element or may use only a subset thereof. A few of the possible logic functions that a logic element can be configured to implement are: AND, OR, XOR, NAND, NOR, XNOR, and mixed combinations of these functions. One disclosed implementation of the logic element includes a configurable lookup table which is internal to the logic element and which includes 2^n individual memory cells, where n is the number of input signals the lookup table can handle. At configuration, in this architecture a bitstream programs the individual memory cells of the lookup table with a desired function by writing the truth table of the desired function to the individual memory cells. Although the programming is described as being performed serially, other techniques for parallel programming are also known. One memory cell architecture appropriate for use in the lookup tables is shown in FIG. 1 and described by Hsieh in U.S. Pat. No. 4,821,233, incorporated herein by reference. A memory cell of this architecture is programmed by applying the value to be written to the memory cell on the data input line, "Data," and strobing the corresponding address line, "ADDR." Further, although this architecture uses five transistors, other known configurations, e.g., six-transistor static memory cells, are also appropriate choices for implementing the memory cells of the lookup table. As shown in FIG. 1, inverter 726 may be included to increase the drive of memory cell 700 and to avoid affecting the value stored in memory cell 700 unintentionally via charge sharing with the read decoder.

After configuration, to use a lookup table, the input lines of the configured logic element act as address lines which select a corresponding memory cell in the lookup table. For example, a logic element configured to implement a two-input NAND gate would output the corresponding value {1,1,1,0} contained in the one of the four memory cells corresponding to the current input pair {00, 01, 10, 11}, respectively. This selection is performed by a decoding multiplexer which selects a memory cell from the lookup table on the basis of the logic levels of the input lines. A block diagram of an exemplary four-input lookup table composed of 16 memory cells 700_1 through 700_16 and a decoding multiplexer 200 is shown in FIG. 2. The multiplexer propagates a value stored in one of the memory cells 700_1-700_16 of the lookup table to an output X of the lookup table as selected by the four input signals F0-F3.

FIG. 3 is a schematic diagram of another embodiment of a lookup table.
In this embodiment, the lookup table is implemented using four memory cells 7001 -7004 and a two-input decoding multiplexer 200 with two input signals, F0 and F1. The two-input decoding multiplexer 200 is shown in detail as being implemented by a hierarchy of pass transistors which propagate the value stored in the selected memory cell to the output X of the logic element. In FIG. 3, the memory cells may be implemented as shown in FIG. 1. The above architecture was later augmented to enhance the functionality of the lookup tables. U.S. Pat. No. 5,343,406 to Freeman et al., incorporated herein by reference, describes how additional circuitry can enable lookup tables to behave as random access memories (RAMs) which can be both read and written after configuration of the logic device. When the option of allowing the user to write data to memory cells is available, there also must be provision for entering the user's data into these memory cells and reading from the memory cells. This capability is provided by including two means for accessing each dual function memory cell, one which is used to supply the configuration bitstream from off the chip, and another which is used during operation to storevalues from signals that are routed from the interconnect lines of the FPGA. FIG. 4 shows the memory cell architecture described in U.S. Pat. No. 5,343,406 which allows memory cell 750 to be programmed both during and after configuration. During configuration, memory cell 750 is programmed using the same process for programming the memory cell of FIG. 1. After configuration, memory cell 750 is programmed differently. A value to be written to memory cell 750 is applied through the interconnect structure of the FPGA to the second data line 705, and then the corresponding write-strobe line WS for the memory cell is pulsed. This pulse latches the value on line 705 into memory cell 750. Like the lookup table of FIG. 2 which uses a series of memory cells from FIG. 1, a series of memory cells from FIG. 4 are combinable into a lookup table. FIG. 5 is a block diagram showing a four-input lookup table with synchronous write capability. There is a write strobe generator 504 which receives a clock signal, CK, and a write enable signal, WE, and creates a single write strobe signal, WS, for the lookup table. To write a value to a desired memory cell, say 7505, the value is applied on line Din and the address of the desired memory cell 7505 is applied to the input lines F0-F3 of demultiplexer 500. The value then is latched into the desired memory cell 7505 by pulsing the write strobe. Conversely, to read a value stored in a different desired memory cell 7503, the address of the memory cell 7503 is applied to the input lines F0-F3 of decoding multiplexer 200 (without pulsing the write strobe), as was described with reference to FIGS. 2 and 3. FIG. 6 is a schematic illustration of a two-input lookup table with synchronous write capability. FIG. 6 includes four memory cells 7501 through 7504. Detail of demultiplexer 500 and multiplexer 200 is shown in FIG. 6. The implementation and operation of other logic array devices are described in "The Programmable Logic Data Book," pages 4-1 to 4-372, copyright 1996 by Xilinx, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124. This portion of "The Programmable Logic Data Book" is incorporated herein by reference. 
Because a 4-input lookup table is only capable of storing 16-bits of data, it would be desirable to have an architecture that enables a plurality of lookup tables to be combined to form larger random access memories (RAMs) of selectable sizes. It would also be desirable if this architecture would enable dual-port RAMs of selectable sizes. It would further be desirable if this architecture did not significantly increase the complexity of the configurable logic elements (CLEs) in the FPGA. One or more 4-input lookup tables, such as those illustrated in FIGS. 2 and 5, are typically used to implement combinatorial function generators in a configuration logic element (CLE). Some CLEs include a function generator to select between the outputs of two 4-input lookup tables in order to enable the CLE to implement any 5-input function. One such CLE, implemented in the Xilinx XC4000-Series FPGAs, is described in pages 4-11 through 4-23 of the Xilinx 1996 Data Book entitled "The Programmable Logic Data Book", available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124. The function generator can be replaced by a 2-to-1 multiplexer, with a signal selecting between the outputs of the two 4-input lookup tables, as disclosed in U.S. Pat. No. 5,349,250 entitled "Logic Structure and Circuit for Fast Carry" by Bernard J. New. Replacing the function generator with a 2-to-1 multiplexer still provides any function of up to five inputs and reduces the silicon area required to implement a the function generator. An FPGA using two 4-input lookup tables and a 2-to-1 multiplexer to implement a five input function generator is the XC5200.TM. family of products from Xilinx, Inc. The XC5200 CLE is described in pages 4-188 through 4-190 of the Xilinx 1996 Data Book. A configurable logic block (CLB) capable of generating 6-input functions is described as implemented in the VIRTEX.TM. FPGAs from Xilinx Inc. This CLB includes two CLE slices, and is described in "The Programmable Logic Data Book 1999" pages 3-1 to 3-60, copyright 1999 by Xilinx, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124. It would be desirable to have a CLE structure that is capable of efficiently implementing functions larger than 6-input functions. It would further be desirable if this CLE structure is easily expandable, without significantly increasing the complexity of the CLE structure. SUMMARY OF THE INVENTION The present invention provides means and method for programming a configurable logic element so that the logic element can implement any one of a shift register and a combinatorial logic function using a lookup table. In one embodiment, the invention further provides for implementing a random access memory in this same logic element. The lookup table includes a plurality of memory cells which are connected in series so that an output of a first memory cell is configurable as an input to a second memory cell of the same lookup table. Further, by connecting shift registers of plural logic elements in series, larger shift registers can be built from smaller shift registers. Previous architectures built n-bit shift registers out of n flip flops connected in series, thereby wasting interconnect resources and logic while achieving mediocre performance. In one mode, the memory cells which store the lookup table values are used as registers in a shift chain. When the logic element is in shift register mode, the Data-in value is shifted into the first cell and the value in each memory cell is shifted to the next cell. 
When the logic element is in random access memory mode, the Data-in value is written to a cell addressed by F3-F0, as discussed above. When the logic element is in pure lookup table mode, no value can be written after configuration and the logic element continues to generate the function loaded in during configuration. According to another aspect of the invention, shift registers formed in a single lookup table can be cascaded together through cascade multiplexers to form larger shift registers. Each cascade multiplexer receives two input signals, the output signal from the last memory cell in a previous lookup table, and an input signal from the interconnect structure (or other selectable source). The output signal from the cascade multiplexer provides the input signal to the first memory cell in the next lookup table. According to yet another aspect of the invention, a hierarchy of multiplexers is provided to generate functions of more inputs than the lookup table can handle. For example, a lookup table having 16 memory cells can generate functions of four input signals. By combining the outputs of two lookup tables in a multiplexer (F5) controlled by a fifth input signal, any function of five input signals can be generated. Using a sixth signal to select between the outputs of two such F5 multiplexers allows any function of six input signals to be generated, and so forth. In one embodiment, a configurable logic block (CLB) includes four slices, each having two four-input lookup tables (a total of eight lookup tables). The multiplexer hierarchy allows for all functions of eight input signals to be generated by selecting the output signal of one of the 16 lookup tables in a pair of CLBs. In addition to the eight lookup tables that generate functions of four input signals, the CLB includes four F5 multiplexers, where each F5 multiplexer receives input signals from two lookup tables and can generate all functions of five input signals when the two lookup tables receive the same four input signals and the F5 multiplexer is controlled by the fifth input signal. The CLB also includes two F6 multiplexers where each F6 multiplexer receives input signals from two of the F5 multiplexers. The CLB further includes an F7 multiplexer which receives the two F6 signals. The CLB also includes an F8 multiplexer which receives the F7 multiplexer output signal and an F7 multiplexer output signal from an adjacent CLB. In one embodiment, this hierarchy of eight multiplexers is controlled by the same lines that provide shift register input signals. In this embodiment, the eight lookup tables are paired into 4 slices so that the downstream lookup table in each slice receives a shift register input signal on the line that also controls the F5 multiplexer for the slice. The upstream lookup table of the slice receives a shift register input signal on the line that controls an F6, F7 or F8 multiplexer. This arrangement is advantageous because the structure can be configured as a variable length shift register, where the line carrying the most upstream signal is used for loading shift register data and the more downstream lines all control multiplexers. In accordance with another embodiment of the present invention, the plurality of function generators (lookup tables) present in the CLB are configured to form a random access memory (RAM). The width and depth of the RAM are selectable by controlling the routing of signals within the CLE slices. 
The hierarchy of multiplexers (e.g., the F5, F6, F7 multiplexers) are used to selectively route read data values from the lookup tables. Another set of multiplexers is used to selectively route write data values to the lookup tables. These multiplexers can be configured to provide a single write data value to all of the lookup tables to form a deep RAM. Alternatively, these multiplexers can be configured to provide one write data value to half of the lookup tables, and another write data value to the other half of the lookup tables. This pattern repeats down to the level where these multiplexers can be configured to provide a different write data value to each of the lookup tables. Advantageously, each of the CLE slices includes the same multiplexer pattern, and each lookup table is accompanied by a corresponding multiplexer. A write control circuit is also provided in each CLE slice to provide write enable signals to the lookup tables in the CLE slice. Each write control circuit generates the write enable signals in response to a plurality of write control signals received from various CLE slices. This advantageously enables the generation of many different patterns of write enable signals. Advantageously, each of the CLE slices includes an identical write control circuit. Dedicated routing resources are provided to enable read and write addresses to be provided to the CLE slices in a manner that enables the CLB to be operated as a dual-port RAM having selectable width and depth. The present invention will be more fully understood in view of the following description and drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic illustration of a first prior art memory cell architecture used in lookup tables in FPGAs where a value of the memory cell is stored during configuration. FIG. 2 is a block diagram of a prior art programmable 4-input look-up table implemented by a sixteen-to-one decoding multiplexer and a series of sixteen memory cells. FIG. 3 is an expanded view of a schematic illustration of a prior art two-input lookup table and a decoding multiplexer implemented by a hierarchy of pass gates. FIG. 4 is a schematic illustration of a second prior art memory cell architecture used in lookup tables where the value of the memory cell is stored at configuration and remains dynamically readable and writable after configuration. FIG. 5 is a block diagram of a prior art logic element that is configurable to implement either a sixteen-by-one random access memory or a four-input lookup table. FIG. 6 is a schematic illustration of a prior art logic element that is configurable to implement either a four-bit random access memory or a two-input lookup table. FIG. 7 is a schematic illustration of a memory cell architecture according to the present invention which can alternatively be configured as a shift register or a lookup table. FIGS. 7A and 7B are waveform diagrams showing non-overlapping signals Phi1 and Phi2 which cause a bit value to shift from a preceding memory cell into the current memory cell when Phi2 is asserted. FIG. 8 is a block diagram of a logic element according to the invention that can implement either a four-input lookup table or a 16-bit shift register. FIG. 9 is a circuit diagram of a logic element according to the invention that can implement either a 2-input lookup table or a 4-bit shift register, where the mode of the logic element controls the operation of the control logic, and may be stored in configuration memory. FIG. 
10 is a schematic illustration of a memory cell for implementing any of a lookup table, a shift register, or a RAM. FIG. 11 is a block diagram of a logic element that is configurable to implement any one of a four-input lookup table, a sixteen-bit shift register, and a sixteen-bit random access memory. FIG. 12 is a schematic diagram of a logic element according to the present invention that is configurable to implement any one of a two-input lookup table, a four-bit shift register, and a four-bit random access memory. FIG. 13 comprising FIGS. 13A through 13H shows waveform diagrams of the operation of the logic element when configured in shift-register mode. FIG. 14 is a block diagram of a logic element which includes both a shift register and a flip-flop. FIG. 15 is a block diagram of an FPGA. FIG. 16 shows a 64-bit variable length shift register formed by combining structures such as shown in FIG. 8. FIG. 17 shows a 64-bit variable length shift register formed using an architecture with an advantageous modification to the structure of FIG. 8. FIG. 18 shows a logic slice structure from which the 64-bit variable length shift register of FIG. 17 can be formed. FIG. 19 shows a layout of wiring for cascading adjacent lookup table slices by which interiors of adjacent lookup table slices can be identically laid out. FIG. 20 shows more detail of the structure of FIG. 19, illustrating the lookup table structures. FIG. 21 is a schematic diagram of a CLE slice S0 in accordance with one embodiment of the present invention. FIG. 22 is a block diagram illustrating a CLB that includes four CLE slices S0-S3, each of which is identical to the CLE slice S0 of FIG. 21. FIG. 23 is a block diagram of a CLB in accordance with another embodiment of the present invention. FIG. 24 is a block diagram of a CLB in accordance with yet another embodiment of the present invention. FIG. 25 is a block diagram illustrating selected multiplexers in the CLE slice of FIG. 21, as well as the associated function generators. FIG. 26 is a circuit diagram of the write control circuit of the CLE slice of FIG. 21 in accordance with one embodiment of the present invention. FIG. 27 is a block diagram illustrating the write control circuits in the CLE slices of FIG. 22 in accordance with one embodiment of the present invention. FIG. 28 is a block diagram illustrating the routing of the address signals to the function generators in the CLE slices of FIG. 22. DETAILED DESCRIPTION With an increase in logic gate density, a shift register can now be implemented as one element of a larger user-configurable integrated circuit logic array. In a first embodiment of the present invention, a logic element is configurable to implement both an n-bit shift register and a (log2 n)-input lookup table. FIG. 7 shows a schematic illustration of a memory cell 7702 of the logic element architecture according to the present invention which, when configured to be in shift register mode, advantageously enables a value to be shifted from a preceding memory cell 7701 into the memory cell 7702. Memory cell 7702 includes a pass transistor 706. The configuration value is written into memory cell 7702 by pulsing configuration control line 702 of transistor 706, while applying the configuration value to the data line 704. The output of memory cell 7702 is programmably connected to the input of a next memory cell 7703 by pass transistors 7202, inverter 7262, and a next pass transistor 7083 not shown in FIG. 7. As shown by the timing diagrams in FIGS. 
7A and 7B, during most of each cycle the clocking signal Phi1 on output control line 724 remains high, and thus the output signal 7342 of memory cell 7702 is applied through inverter 7262 to shift input line 7142 leading to the next memory cell 7703. When Phi1 goes low at time t1, pass transistor 7202 is turned off. Inverter 7262 continues for a short time to hold as an output signal the logic level previously asserted by memory cell 7702. In this way, the combination of transistor 7202 and inverter 7262 serves as a temporary latch. When a second clocking signal, Phi2, is asserted at time t2 on input control line 716, inverter 701 receives both the output of inverter 703 of memory cell 7702 and the output of inverter 7261 of the previous memory cell 7701. Each inverter 726 is designed to overpower the inverter 703 so that values can be shifted between adjacent memory cells. Therefore, the current value stored in memory cell 7702 is overwritten by the output of the previous memory cell 7701. When Phi2 returns low at time t3, memory cell 7702 is once again latched, holding its current value independent of changes in shift input line 7141. At time t4, Phi1 goes high, thus applying the new value to inverter 7262. Thus in one clock cycle, a bit shifts one cell. In contrast, if Phi1 and Phi2 mistakenly overlapped, the value of the output 734 of each memory cell 770 would propagate from preceding memory cell 7001 through memory cell 7702 to the next memory cell 7703. This would not produce the desired single bit shift. However, by using non-overlapping two-phase clocking, as shown in FIGS. 7A and 7B, the memory cells shift one bit per cycle of Phi1 and Phi2. FIG. 8 shows a logic element which implements a 16-bit shift register and 4-input lookup table according to a first embodiment of the invention. For simplicity, in FIG. 8 the structures within memory cells 770 of FIG. 7 have not been explicitly illustrated. In FIG. 8, when in shift register mode, a first memory cell 7701 of the memory is programmed with an initial value. The memory cell's value may be over written with a new value by applying the new value to the Din terminal of the first memory cell 7701 and strobing the clock line, CK. The strobing of CK in turn invokes the two-phase clocking cycle of FIGS. 7A and 7B. As data is moved synchronously from left to right in the shift register, i.e., from the first memory cell 7001 to a last memory cell 70016, the logic element can continue to act as a lookup table though the function changes with every clock cycle. As in the prior art lookup tables, the decoding multiplexer 200 outputs on output line X the contents of the memory cell selected by the user inputs, i.e., F0-F3. FIG. 9 shows a structure for implementing a 2-input lookup table or a 4-bit shift register, and shows internal structure of multiplexer 200 and memory cells 7701 through 7704. FIG. 9 is oriented on the page the same way as FIG. 8, and thus assists in understanding the relationship between the elements that make up the lookup table/shift register embodiment. In a second embodiment of the present invention, a logic element is configurable to implement an n-bit shift register, an n-bit random access memory, and a (log2 n)-input lookup table. FIGS. 10-12 illustrate this embodiment. FIG. 10 illustrates the memory cell. The memory cell of FIG. 10 can be loaded from three different sources. During configuration, memory cell 7902 is loaded by applying configuration data to line 704 and strobing control line 702 of transistor 706. 
When memory cell 7902 is in shift register mode, it is loaded through transistor 708, as discussed above. When memory cell 7902 is in RAM mode, it is loaded through demultiplexer 500 on line 7052. Write strobe line WS is pulsed, turning on transistor 707, and thus applying a data signal to node 730. FIG. 11 shows a logic element which implements any one of a 16-bit shift register, a 16-bit random access memory, and 4-input lookup table according to the second embodiment of the present invention. In this embodiment, a memory cell, say 7905, of the lookup table is programmed with an initial value during configuration, as discussed above. Subsequently, the initial value may be replaced in either of two ways, depending on the mode of the logic element: shift or RAM. When the lookup table including memory cells 790 is being used in RAM mode, each memory cell 790 receives its data input on RAM input line 705. To write to any memory cell 790, the write strobe line WS pulses, thereby driving the value of Din through demultiplexer 500 into the addressed memory cell via input line 730. The operation of the logic element in each of these modes is controlled by control logic 1000. Control bits which specify whether the logic element is in RAM mode, shift mode, or neither are inputs to control logic unit 1000. Control logic unit 1000 also receives the user clock signal and the write enable signal. From these inputs, control logic unit 1000 outputs Phi1, Phi2 and write strobe signal WS to either shift data between memory cells, to write to a particular memory cell, or to leave the memory cell data untouched. When in shift register mode, as in FIG. 8, data is moved synchronously from left to right in the shift register, i.e., from the first memory cell 7901 to a last memory cell 79016, as described above, by invoking a two-phase clocking cycle when CK is strobed. On the other hand, when the logic element is configured as a random access memory (RAM), the addressing lines F0-F3 select one of the memory cells (7901 through 79016) to be written to and read from by using the demultiplexer 500 and the decoding multiplexer 200, respectively. When in shift register mode, the first memory cell 7901 receives as its input the signal applied to line Din. When in RAM mode, memory cell 7901 receives an input signal on line 7051 from demultiplexer 500. In RAM mode, to write to a given memory cell, say 7005, the write enable line WE must be active. When the user clock signal CK is asserted in conjunction with the active WE signal, control logic unit 1000 generates a write strobe WS. When the write strobe WS is high, memory cell 7005 addressed by address lines F0-F3 of the demultiplexer 500 receives the value from data input line Din. This value overwrites the previous contents of the memory cell 7005. No other memory cells receive the value applied to Din since they are not addressed and therefore separated from Din by high impedance connections from the demultiplexer 500. FIG. 12 is a schematic illustration which shows more detail of a logic element according to the second embodiment of the present invention. Collectively, demultiplexer 500, decoding multiplexer 200, pass transistors 708 and 720, inverters 726, and RAM mode pass transistors 707 form an interconnection network and are combined with memory cells (7901 through 7904) and control logic unit 1000 to implement the logic element according to the second embodiment. 
If the logic element of the second embodiment is not configured as a shift register, then the logic element acts as either a random access memory or a lookup table. In either non-shift register mode, Phi2 is maintained at a low level, deactivating pass transistors 708, thereby blocking data from one memory cell 790i from affecting the next memory cell 790i+1. Also, in the non-shift register modes, Phi1 is maintained at a high logic level, thereby feeding the outputs of the memory cells (7901 to 7904) through to the decoding multiplexer 200. As before, the output of the logic element is selected by the decoding multiplexer 200 according to the user inputs F0 and F1. When the logic element of FIG. 12 is configured as a shift register, the RAM mode pass transistors 707 are turned off because WS is held low, isolating the memory cells from the outputs of demultiplexer 500. Memory cell 7901 is programmably connected to Din through transistor 7081. To shift values, control logic unit 1000 produces control signals Phi1 and Phi2, triggered while the write enable signal is active by a rising edge of the User Clock signal CK applied to control logic unit 1000 such that values are shifted from one memory cell to next memory cell, i.e., from memory cell 790i-1 to memory cell 790i, and from memory cell 790i to memory cell 790i+1. When control logic unit 1000 receives a rising edge of the user clock signal, control logic unit 1000 first pulls Phi1 low, then pulses Phi2 high long enough to overwrite the contents of the memory cells (7901 to 7904), and lastly reasserts Phi1 after Phi2 has fallen. It is important for extremely low clocking frequencies that Phi2 be only a pulse since Phi1 must be off while Phi2 is on. To accomplish this, the control logic is designed so that Phi1 and Phi2 do not rely on the falling edge of the User Clock signal 1008, but rather are self-timed. FIG. 13 comprising FIGS. 13A through 13H are waveform diagrams of the operation of the logic element of FIG. 12. When the logic element of FIG. 12 is configured in shift-register mode, setting F1 to 1 and F0 to 0 makes it function as a three-bit shift register. As shown in FIG. 13E, the input, Din, to the three-bit shift register is maintained continuously at a high logic level throughout the example. Upon receiving a rising edge 1104 of a first user clock pulse 1108, control logic unit 1000 pulls Phi1 to a low logic level, as shown in FIG. 13G, to deactivate pass transistors 720 (FIG. 12). After temporarily having isolated the outputs 7341 through 7344 of the memory cells (7901 through 7904) from inputs of inverters 7261 through 7264, the control logic unit 1000 asserts Phi2, which propagates outputs of inverters 7261 through 7264 to their corresponding next memory cells, i.e., memory cells 7902 through 7904. When Phi2 is asserted, the value on Din is written to first memory cell 7901. The non-overlapping Phi2 pulse is shown in FIG. 13F. As shown in FIG. 13D, the value stored in first memory cell 7901 (corresponding to 7341) changes shortly after Phi2 is asserted. This change is indicated by reference 1112. The new value of output 7341 of the first memory cell 7901 does not affect the second memory cell 7902 (corresponding to 7342) because Phi1 is temporarily inactive. After asserting Phi2 long enough for the memory cells (7901 to 7904) to reach their new states, Phi2 is lowered, thereby latching the data values. Only after Phi2 has been lowered does control logic unit 1000 raise Phi1. 
On receiving the rising edge of Phi1, the values of outputs 7341 through 7344 again pass through pass transistors 7201 through 7204. Reference numeral 1116 shows that the change in the output X of the three-bit shift register is synchronized with the rising edge of Phi1. As seen in FIGS. 13G and 13H, the reassertion of Phi1 and the lowering of the User Clock are independent, thus logic designers need not depend on exact timing relationships between these two edges. Of course, Phi1 must be reasserted before the inputs of inverters 7261 through 7264 float to an invalid voltage. FIG. 14 is a block diagram of a logic element which includes both a logic element 1200 and a flip-flop 1204. The purpose of the flip-flop is to improve the clock-to-out delay of the output of the logic element 1200. This is simple and efficient in Xilinx FPGAs because function generators are historically paired with flip-flops in Xilinx logic elements. Further, when an n-bit, synchronous shift register is required, the logic element can be configured so that the shift register 1200 is an (n-1)-bit shift register and flip-flop 1204 is the final register of the n-bit shift register. When configured in this alternative fashion, the final bit XQ is available upon the rising edge 1104 of the User Clock pulse 1108, rather than on the rising edge 1116 of Phi1. This provides a faster clock-to-out time for the overall n-bit shift register. By configuring the logic element to route XQ back to Din, the present invention can also perform circular shifts. As discussed above (FIGS. 13A-13H), a shift register having fewer stages than the number of memory cells in a lookup table can be formed by directing a bit other than the last bit to output terminal X. Lookup tables likewise may be cascaded to create shift registers of a greater size than supported by a single lookup table. For example, it is possible to create a 20-bit shift register in a logic array composed of 16-bit lookup tables by cascading two logic elements. A first full 16-bit shift register 1200 and a second full 16-bit shift register 1200 combine to produce a 32-bit shift register. Thus, to achieve a 20-bit shift register, user input lines F0-F3 of the first logic element are set to 1111 and user input lines F0-F3 of the second logic element are 0011, i.e., the second 16-bit shift register 1200 is programmed to pass the output of the fourth memory cell 7904, which is the final output of the 20-bit shift register. Additionally, in order to improve the clock-to-out delay of the cascaded shift registers, an alternate embodiment uses a first full 16-bit shift register 1200 addressed to 1111, a second full 16-bit shift register 1200 addressed to 0010 and the flip-flop 1204. The output, X, of the second shift register feeds the input of flip-flop 1204 of the second shift register. If desired, the flip-flops 1204 can also be used to extend the number bits that can be shifted within a logic element. Fully utilizing both 16-bit shift registers 1200 and their flip-flops 1204, cascaded shift registers can be built which are 17-bit, 34-bit, 51-bit, etc. The novel shift register logic element is typically implemented in an FPGA such as the FPGA of FIG. 15 having logic blocks 101, each comprising a portion of an interconnect structure and a logic element. The FPGA of FIG. 15 is further discussed by Tavana et al. in the application Ser. No. 08/618,445 incorporated herein by reference. FIG. 
16 shows a 64-bit variable length shift register formed by combining structures such as shown in FIG. 8. Variable length shift registers are desired when building FIFOs (first-in-first-out storage devices). Conventional FIFOs are commonly composed of a block of RAM addressed by READ and WRITE pointers which each increment through the block and cycle to the bottom upon reaching the top. When a word is written (pushed) into the FIFO, it is written to the address pointed to by the WRITE pointer, and the WRITE pointer is then incremented to point to the next address. When a word is read (popped) from the FIFO, it is taken from the address pointed to by the READ pointer and the READ pointer is incremented to the next address. Thus the data in a RAM based FIFO are never shifted. Rather, the READ and WRITE pointers are incremented independently. In the present case using a shift register, whenever a WRITE command is received, data are always written to one location in a shift register and all other data are shifted one step through the shift register. In response to a WRITE command, a READ pointer is incremented. In response to a READ command, the READ pointer is decremented. There is no WRITE pointer. (The READ address represents the end of the string of stored data.) Such a shift register can be used to implement a variable length FIFO. If a shift register FIFO is desired that is no more than 16 words deep, then such a FIFO can be built in an FPGA using only one lookup table configured as a shift register for each bit of the word to be stored. If a FIFO is desired that can store more than 16 words, a structure such as shown in FIG. 16 must be built for each bit of the word. For example, a 64-word FIFO with 8-bit words would require 8 of the structures shown in FIG. 16. The structure of FIG. 16 can store up to 64 bits, the DATA bits being written from the left on data input line Din and being read out on the line OUT. However, because the architecture of FIG. 8 provides only a single output from each LUT, (outputs are labeled X and Y), it is necessary to duplicate the data, an upper bank being used to store data for writing to subsequent lookup tables, and a lower bank being used for providing the particular data bit that has been addressed during a READ operation. A long shift register requires that the last sequential bit (77016) of each 16-bit shift register be shifted to the first bit of the subsequent shift register, and that every bit be addressable by the READ address applied to the LUT output multiplexers 200. (If the FIFO is nearly empty, the READ address points to a memory cell near the left of the picture, for example cell 7701 of LUT-G of slice S63. If the FIFO is nearly full, the READ address points to a memory cell near the right of the picture, for example cell 77016 of LUT-F of slice S64.) Data bits are routed from one slice to another using the general interconnect routing lines. (These lines are illustrated using dotted lines to indicate that they are programmably connectable and to distinguish from the routing lines that are part of the slice itself.) Using the architecture of FIG. 8, five slices S1 through S5 are used. A slice includes two lookup tables LUT-F and LUT-G, each comprising 16 memory cells 7701 through 77016, a multiplexer 200-F or 200-G, four LUT input lines F1 through F4 or G1 through G4 and a LUT output line X or Y. 
The slice also includes a clocking structure 800 receiving write enable signal WE, clock input signal CK, and a shift control signal from, for example, a configuration memory cell. Clocking structure 800 generates two non-overlapping clocking signals Phi1 and Phi2, as discussed earlier (See FIGS. 7A and 7B). These clocking signals Phi1 and Phi2 operate to shift bits to the right in response to clock signal CK when the shift memory cell contains a logic 1 and when the write enable signal WE is logic 1. In order to provide that the last bit 77016 of lookup table LUT-G of slice S61 is fed to lookup table LUT-F of slice S63, while simultaneously allowing an addressed bit to be read from any of four lookup tables (two in slice S63 and two in slice S64), it is necessary to duplicate three of the four lookup tables and to configure the lookup tables so that in one lookup table the last bit is always routed out through multiplexer 200-F or 200-G to the first bit of the next shift register, and in the duplicate lookup table, the addressed bit is read. Thus, the addressed bit is read from the addressed lookup tables LUT-G of slice S63, LUT-F of slice S63, LUT-G of slice S64, or LUT-F of slice S64 while the last bit of lookup table LUT-G of slice S61, LUT-F of slice S61, or LUT-G of slice S62 is shifted in to the first bit of lookup table LUT-F of slice S63, LUT-G of slice S64 of LUT-F of slice S64, respectively, regardless of which address is being read out. Since lookup table LUT-F of slice S64 is the last in the chain, it is not necessary to form a duplicate in lookup table LUT-F of slice S62. (Recall that the data stored in slice S61 is identical to the data stored in slice S63, and the data stored in LUT-G of slice S62 is identical to the data stored in LUT-G of slice S64.) As another aspect of the particular architecture of FIG. 8, discussed by Young, Chaudhary, and Bauer in pending U.S. patent application Ser. No. 08/806,997, the content of which is incorporated herein by reference, multiplexers are included for generating five (F5) and six (F6) input functions by combining the outputs of the four-input lookup tables LUT-F and LUT-G. But in that described embodiment, the same input signal that feeds the Din signal also serves as the control signal on the F5 multiplexer. Thus, it is not possible to use an address signal for controlling the F5 multiplexer when also using that signal for supplying data. Thus a fifth slice S65 is used. The LUT-F and LUT-G lookup tables and an F5 multiplexer of slice S65 are configured to implement a four-to-one multiplexer, the output signal from this multiplexer being the addressed bit. FIG. 17 shows a 64-bit variable length shift register formed using an architecture with an advantageous modification to the structure of FIG. 8. By changing the architecture to add a two-to-one multiplexer to the data input of each shift register and feeding the output signal of the last memory cell of the previous shift register to that multiplexer (in addition to the signal from the interconnect structure that exists in FIG. 8), a variable length shift register can be formed using no more than half the number of lookup tables of FIG. 16. The structure of FIG. 17 is configured as a 64-bit variable length shift register, just as is the structure of FIG. 16. But since the structure of FIG. 
17 includes multiplexers M71 and M72 as inputs to the respective lookup table shift registers, each lookup table has both a variable-tap output through multiplexer 200 and a fixed output from cell 77016. This is advantageous for making a FIFO because each lookup table now has the two outputs required when cascading together logic elements to build a long variable-tap shift register, so no duplication of logic is required. And the READ address dynamically addresses one of the 64 memory cells via the four lookup table input signals and the F5 and F6 multiplexers. Note that using the shift input of the newly added multiplexer M71 or M72 allows the BY or BX input of the newly added multiplexer to be used for another function, in this case controlling an F5 or F6 multiplexer. FIG. 18 shows a logic slice structure from which the 64-bit variable length shift register of FIG. 17 can be formed, and in particular shows connections of the F5 multiplexer and another multiplexer labeled FX. A preferred architecture combines four of these slices into one configurable logic block (CLB). The FX multiplexer can be an F6, F7, or F8 multiplexer, depending upon the position of the illustrated slice in the CLB, where an F6 multiplexer selects between outputs of two F5 multiplexers, an F7 multiplexer selects from two F6 multiplexers, and an F8 multiplexer selects from two F7 multiplexers. FIG. 18 illustrates that the BX input signal goes two places: to multiplexer M72 and to the control terminal of the F5 multiplexer. Similarly, the BY input signal goes to multiplexer M71 and to the control terminal of the FX multiplexer. Note that the input signals to the FX multiplexer are labeled FXin0 and FXin1. These input signals come from other F5 or FX multiplexers within the CLB, and they are most conveniently illustrated in FIG. 19. In a preferred embodiment, a logic slice structure such as that of FIG. 18 will include additional elements, for example flip flops, fast carry circuits, and routing structures (see, for example, U.S. Pat. Nos. 5,267,187 to Hsieh et al., and 5,349,250 to New, as well as U.S. patent application Ser. No. 08/806,997 referenced above). However, to avoid obscuring the present invention, these additional structures have not been shown here. FIG. 19 shows a layout of wiring for cascading adjacent lookup table slices by which interiors of adjacent lookup table slices can be identically laid out and by which a single input line BX or BY can serve a function in an earlier architecture as well as a new function discussed here (so the new architecture discussed here can implement designs that have been implemented in the previous architecture illustrated in FIG. 16). FIG. 19 illustrates one configurable logic block (CLB) comprising four slices, each having two lookup tables (LUTs). Each slice is equivalent to that of FIG. 18. Whereas FIG. 18 shows one F5 multiplexer and one FX multiplexer (in addition to the two M71 and M72 multiplexers discussed earlier), FIG. 19 shows the different interconnections to the FX multiplexer in different parts of one CLB. These wide function multiplexers are now labeled F6, F7, and F8 to show the number of input signals they can provide all function of. Thus, the F8 multiplexer selects from the output signals of two F7 multiplexers and an F7 multiplexer selects from two F6 multiplexers and so on. The lookup tables themselves provide all functions of four input signals. 
Note that the F8 multiplexer receives one input signal from the F7 multiplexer of its own CLB and another input signal from the F7 multiplexer of an adjacent CLB. Note also that one CLB includes four F5 multiplexers, two F6 multiplexers, one F7 multiplexer, and one F8 multiplexer. The novel and advantageous placement of these wide function multiplexers always allows the control signal BX or BY to serve the dual function of providing shift-in data and controlling a corresponding multiplexer. This is because only one of the BX or BY terminals will be used for shifting in data to a shift register, and the sharing is arranged so that the highest order multiplexer is placed at the beginning of the shift register for that length. In the case of a 64-bit shift register, two slices will be used (see FIG. 17). The address will be six bits long and will use two F5 multiplexers and one F6 multiplexer. Looking at FIG. 19, this can be accomplished in either the upper two slices S3 and S2 or in the lower two slices S1 and S0. In either case, data will be shifted in on line BY of slice S3 or S1, and multiplexer M71 of the slice will be set to receive the BY signal. The F7 or F8 multiplexer will not be used since the desired output signal is provided by the F6 multiplexer of slice S2 or S0. Thus there is no conflict that the line used for controlling the F7 or F8 multiplexer is used in this case as a data input line to the shift register. If a 128-bit shift register is desired, the entire CLB of FIG. 19 will be used. Data will be shifted in on the BY line of slice S3 and the output signal will be taken from the F7 multiplexer. The F8 multiplexer will not be used. Thus, again, there is no conflict in the fact that the line used for controlling multiplexer F8 is used to provide data to the shift register. Similarly, if a 256-bit shift register is desired, two CLBs of the type shown in FIG. 19 will be used, data being shifted in to the upper of the two CLBs and the output signal taken from the F8 multiplexer of the lower CLB. So again there is no conflict. Knowing this relationship, architectures can be provided having longer patterns of multiplexers for providing larger functions. All this is possible because for n-input lookup tables we need (n-1) lines for controlling multiplexers and 1 line for shifting in data to a shift register. The (n-1) multiplexer control signals plus 1 data-in signal exactly match the n lines provided. Shift registers of sizes other than powers of two can also be formed by combining the appropriate number of slices. For example, if a user wanted a 200-bit variable length shift register, this could be implemented in seven slices using 13 LUTs, seven F5 multiplexers, four F6 multiplexers, two F7 multiplexers, and one F8 multiplexer. The three LUTs not needed in the eight slices that feed the F8 multiplexer could be used for other functions. To avoid generating an erroneous output signal if one of the unused lookup tables is addressed, the control inputs for the F5 and F6 multiplexers associated with partially used slices are preferably tied to a constant value. FIG. 20 shows more detail of the structure of FIG. 19, illustrating the lookup table structures and clocking structures discussed earlier. Since the additional details of FIG. 20 have been discussed earlier, they are not discussed again here. FIG. 21 is a schematic diagram of CLE slice S0 in accordance with one embodiment of the present invention. 
CLE slice S0 includes G and F function generators 1001 and 1002, exclusive OR gates 1003-1004, D-Q flip flops 1005-1006, AND gates 1007-1008, write control logic 1009, multiplexers 1010-1031, inverter 1040, and multiplexers F5 and FX. Slice S0 includes shift register circuitry consistent with that described above. The shift input data (e.g., SHIFTIN or BY) is provided to G function generator 1001 by multiplexer 1010. Data is shifted out of G function generator 1001 to multiplexer 1016. Note that multiplexer 1016 is also coupled to the output terminals of multiplexers 1010 and 1012. Data is shifted into F function generator 1002 from multiplexer 1016. Data is then shifted out of F function generator 1002 as the SHIFTOUT signal. Write control circuit 1009 controls the writing of data values to G and F function generators 1001 and 1002. Multiplexers 1010-1031 are configured to control the routing of the various signals in slice S0. F function generator 1002 can be configured to implement a 4-input lookup table that provides an output signal F' that is any function of the input signals F4-F1. The output signal F' is routed to an input terminal of multiplexer F5. G function generator 1001 can be configured to implement a 4-input lookup table that provides an output signal G' that is any function of the input signals G4-G1. The output signal G' is routed to another input terminal of multiplexer F5. Multiplexer F5 is controlled by the bypass signal BX (or BX#, which is the inverse of BX). By routing the signals F1-F4 to the four input terminals of the G function generator 1001, multiplexer F5 can be used to provide an output signal F5' that can be any function of the five input signals F4-F1 and BX. The output signal G' is also routed to an input terminal of multiplexer 1025. In accordance with the described embodiment, multiplexer 1025 is configured to route the output signal G' as the output signal Y. Multiplexer FX is a 2-to-1 multiplexer having two input terminals coupled to receive the FXA and FXB input signals, which are provided by the general interconnect located outside of CLE slice S0. Multiplexer FX is controlled by the bypass signal BY (or BY#, which is the inverse of BY). As described in more detail below, multiplexer FX is capable of operating as any multiplexer wider than an F5 multiplexer (i.e., F6, F7, F8, F9, F10, etc.), depending on the configuration of the CLE slice in a larger CLB circuit. These wider multiplexers are capable of providing any function of greater numbers of input signals. Thus, an F6 multiplexer is capable of providing any function of up to six input signals, and an F10 multiplexer is capable of providing any function of up to ten input signals. In the CLB circuit described below in connection with FIG. 22, the largest FX multiplexer is an F8 multiplexer. FIG. 22 is a block diagram illustrating a CLB 1100 that includes four CLE slices S0-S3, each of which is identical to the CLE slice S0 of FIG. 21. FIG. 22 only illustrates G and F function generators and multiplexers F5 and FX in each of CLE slices S0-S3. Multiplexers F5 and FX are labeled as multiplexers F5N and FXN in CLE slice SN. For example, within CLE slice S2, multiplexers F5 and FX are labeled as multiplexers F52 and FX2. Similarly, the control signals BX and BY are labeled as control signals BXN and BYN in CLE slice SN. The output terminals of multiplexers F50 and F51 are connected to the input terminals of multiplexer FX0 in CLE slice S0. 
As a result, multiplexer FX0 is configured as an F6 multiplexer (i.e., a multiplexer capable of providing an output signal that is any function of six input signals). This F6 multiplexer is capable of providing an output signal that is any function of the four F/G input signals to CLE slices S0-S1 (note that the same four input signals are provided to each F and G function generator in CLE slices S0 and S1), the BX0 /BX1 input signal (note that the same input signal is provided to control the F50 and F51 multiplexers), and the BY0 input signal. The output terminals of multiplexers F52 and F53 are connected to the input terminals of multiplexer FX2 in CLE slice S2. As a result, multiplexer FX2 is also configured as an F6 multiplexer. This F6 multiplexer is capable of providing an output signal that is any function of the four F/G input signals to CLE slices S2-S3 (note that the same four input signals are provided to each F and G function generator in CLE slices S2 and S3), the BX2 /BX3 input signal (note that the same input signal is provided to control the F52 and F53 multiplexers), and the BY2 input signal. Because the F6 multiplexer has a total of 19 inputs, an F6 multiplexer can also be configured to provide some (but not all) functions of up to 19 input signals. For example, the F6 multiplexer can be used to implement an 8-to-1 multiplexer, which is a function of 11 input signals (i.e., 8 input signals+3 control signals). The output terminals of F6 multiplexers FX0 and FX2 are connected to the input terminals of multiplexer FX1 in CLE slice S1. As a result, multiplexer FX1 is configured as an F7 multiplexer (i.e., a multiplexer capable of providing an output signal that is any function of seven input signals). This F7 multiplexer is capable of providing an output signal that is any function of the four F/G input signals to CLE slices S0-S3 (note that the same four input signals are provided to each F and G function generator in CLE slices S0-S3), the BX0 /BX1 /BX2 /BX3 input signal (note that the same input signal is provided to control the F50, F51, F52, and F53 multiplexers), the BY0 /BY2 input signal (note that the same input signal is provided to control the FX0 and FX2 multiplexers), and the BY1 input signal, which is provided to control the FX1 multiplexer. Because the F7 multiplexer has a total of 39 inputs, an F7 multiplexer can also be configured to provide some (but not all) functions of up to 39 input signals. For example, the F7 multiplexer can be used to implement an 16-to-1 multiplexer, which is a function of 20 input signals (i.e., 16 input signals+4 control signals). The output terminal of F7 multiplexer FX1 is connected to an input terminal of multiplexer FX3 in CLE slice S3. The other input terminal of multiplexer FX3 is connected to an output terminal of an F7 multiplexer in an upper adjacent CLB (not shown). The F7 multiplexer in the upper adjacent CLB is configured in the same manner as multiplexer FX1 in CLB 1100. Because multiplexer FX3 is configured to receive input signals from two F7 multiplexers, multiplexer FX3 functions as an F8 multiplexer (i.e., a multiplexer capable of providing an output signal that is any function of eight input signals). Because the F8 multiplexer has a total of 79 inputs, an F8 multiplexer can also be configured to provide some (but not all) functions of up to 79 input signals. 
For example, the F8 multiplexer can be used to form a 32-to-1 multiplexer, which is a function of 37 input signals (i.e., 32 input signals+5 control signals). In addition, the F8 multiplexer can be used to form a 256-bit variable tap shift register. Note that the F8 multiplexer requires the use of 2 CLBS. The output terminal of F7 multiplexer FX1 is also connected to a lower adjacent CLB. More specifically, the output terminal of multiplexer FX1 is connected to an input terminal corresponding to the upper input terminal of multiplexer FX3. CLB 1100 is connected to a plurality of identical CLBs 1100, thereby providing an array of CLBs that are capable of providing F5, F6, F7 and F8 functions. The structure of the F8 multiplexer extends across CLB boundaries in a regular manner. As a result, CLB 1100 can be connected to either the upper adjacent CLB or the lower adjacent CLB to implement an F8 multiplexer. This advantageously provides flexibility in the configuration of the resulting FPGA. In addition, each of the CLE slices in the various CLBs has an identical logic (transistor) layout. This advantageously simplifies the configuration software of the resulting FPGA, as well as the physical layout of the CLB array on a silicon substrate. The above-described CLB structure can be easily expanded to provide for arbitrarily large functions. As described above in connection with FIG. 22, an F8 multiplexer structure can be created with four CLE slices. By doubling the number of CLE slices per CLB, a multiplexer structure having an additional input can be implemented. Thus, an F9 multiplexer can be created with eight CLE slices per CLB, and an F10 multiplexer can be created with sixteen CLE slices per CLB. FIG. 23 is a block diagram of a CLB 1200 in accordance with another embodiment of the present invention. CLB 1200 includes eight CLE slices identical to CLE slice S0. These slices S0-S7 are configured to provide a CLB array that is capable of providing an F9 multiplexer that can provide any function of up to nine input signals. CLE slices S0-S7 of FIG. 23 are illustrated in the same manner as CLE slices S0-S3 in FIG. 22. In CLB 1200, multiplexers FX0, FX2, FX4 and FX6 are all configured as F6 multiplexers. More specifically, the input terminals of multiplexer FX0 are connected to the output terminals of multiplexers F50 and F51. The input terminals of multiplexer FX2 are connected to the output terminals of multiplexers F52 and F53. The input terminals of multiplexer FX4 are connected to the output terminals of multiplexers F54 and F55. The input terminals of multiplexer FX6 are connected to the output terminals of multiplexers F56 and F57. Multiplexers FX1 and FX5 are configured as F7 multiplexers. More specifically, the input terminals of multiplexer FX1 are connected to the output terminals of F6 multiplexers FX0 and FX2. The input terminals of multiplexer FX5 are connected to the output terminals of F6 multiplexers FX4 and FX6. Multiplexer FX3 is configured as an F8 multiplexer. More specifically, the input terminals of multiplexer FX3 are connected to the output terminals of F7 multiplexers FX1 and FX5. Finally, multiplexer FX7 is configured as an F9 multiplexer. More specifically, one input terminal of multiplexer FX7 is connected to the output terminal of F8 multiplexer FX3. The other input terminal of multiplexer FX7 is connected to the output terminal of an F8 multiplexer in an upper adjacent CLB (not shown). 
This F8 multiplexer is located in a CLE slice identical to CLE slice S3 of CLB 1200. Note that the output terminal of F8 multiplexer FX3 in CLB 1200 is also routed to a lower adjacent CLB (not shown). More specifically, the output terminal of multiplexer FX3 is connected to the input terminal of the F9 multiplexer in the lower adjacent CLB. The structure of the F9 multiplexer extends across CLB boundaries. However, each of the CLE slices and each of the CLBs are identical. This advantageously simplifies the configuration software of the resulting FPGA, as well as the layout of the FPGA on silicon. In FIG. 21, CLE slice 1100 is defined to include a pair of function generators 1001-1002 and a pair of multiplexers F5 and FX. However, this is not necessary. In another embodiment, each CLE slice includes a single function generator and a single multiplexer that corresponds with either multiplexer F5 or multiplexer FX. FIG. 24 is a block diagram of a CLB 1200 in accordance with such an embodiment. CLB 1200 includes eight CLE slices S0 -S7, wherein each of the CLE slices S0 -S7 is defined to include one function generator and a corresponding multiplexer. (The other elements of CLE slices S0 -S7 are not shown for purposes of clarity.) Similar elements in FIGS. 22 and 24 are labeled with similar reference numbers. The CLB structures illustrated by FIGS. 22 and 24 are similar. However, in FIG. 24, each of the F5 and FX multiplexers receives input signals from the general interconnect structure, and does not receive input signals from within the CLE slice. Thus, each of CLE slices S0 -S7 includes a multiplexer that receives a user-defined control signal (i.e., BX or BY) and input signals from outside the CLE slice. (Note that a user-defined signal, as used herein, is not a signal provided by a configuration memory cell, but rather from a signal routed by the user on the general interconnect structure.) These identical CLE slices S0 -S7 can be cascaded as illustrated to form wide function multiplexers (e.g., F5, F6, F7, and F8 multiplexers). Returning to CLB 1100 of FIG. 22, in accordance with another embodiment of the present invention, CLE slices S0-S3 are connected in a manner that enables the function generators F0 -F3 and G0 -G3 in these CLE slices to be selectively connected to form random access memories (RAMs) of various sizes. As described above, each of the CLE slices S0-S3 has an identical transistor layout, thereby simplifying the design and configuration software of the resulting FPGA. In the described embodiment, CLB 1100 includes four CLE slices S0-S3 that can be configured to form RAMs having dimensions of 128.times.1, 64.times.2, 64.times.1, 32.times.4, 32.times.2, 32.times.1, 16.times.8, 16.times.4, 16.times.2 and 16.times.1. In other embodiments, this CLB structure can be expanded to include other numbers of CLE slices. In these embodiments, RAMs having other dimensions can be implemented. The manner of expanding the described CLB structure to include other numbers of CLE slices will be apparent to one of ordinary skill in the art in view of the following disclosure. As described above, each 4-input F and G function generator includes sixteen memory cells that can be accessed in response to four address signals. In the described example, each F function generator is addressed by four read address signals F1-F4 and four write address signals WF1-WF4. 
The read address signals F1-F4 are separate from the write address signals WF1-WF4 to enable dual port access to the F function generator. Each G function generator is similarly configured to be accessed in response to read address signals G1-G4 and write address signals WG1-WG4. Read Operations To read one of the sixteen data values stored in an F or G function generator, a read address F1-F4 or G1-G4 is applied to the function generator. In response, the F or G function generator provides a data value corresponding to the read address as an output signal F' or G'. In the described embodiment, multiplexers FX0-FX3 and F50 -F53 of CLE slices S0-S3 are connected as described above in connection with FIG. 22. As described in more detail below, these multiplexers are used to route read data values from function generators F0 -F3 and G0 -G3 to an appropriate output terminal. 128.times.1 More specifically, to operate CLB 1100 as a 128.times.1 RAM, the 128 memory cells in the F0 -F3 and G0 -G3 function generators of CLE slices S0-S3 are used to store 128 data values. The F0 -F3 and G0 -G3 function generators are addressed by the same four read address signals (i.e., F1/G1, F2/G2, F3/G3, F4/G4) during a read operation. These four read address signals are hereinafter referred to as address signals A1 -A4. A single bypass signal (i.e., BX0 /BX1 /BX2 /BX3) is used to control multiplexers F50, F51, F52, and F53, thereby selecting either the output signals of the F0 -F3 function generators or the output signals of the G0 -G3 function generators. The bypass signal BX0 /BX1 /BX2 /BX3 is thereby used as a fifth address signal A5. In the described embodiment, if the fifth address signal A5 has a logic "1" value, then multiplexers F50, F51, F52, and F53 route the output signals of the F0 -F3 function generators. Conversely, if the fifth address signal A5 has a logic "0" value, then multiplexers F50, F51, F52, and F53 route the output signals of the G0 -G3 function generators. Another bypass signal (i.e., BY0 /BY2) is used to control F6 multiplexers FX0 and FX2, thereby selecting either the output signals of the F50 and F52 multiplexers or the output signals of the F51 and F53 multiplexers. The bypass signal BY0 /BY2 is thereby used as a sixth address signal A6. In the described embodiment, if the sixth address signal A6 has a logic "1" value, then multiplexers FX0 and FX2 route the output signals of the F50 and F52 multiplexers, respectively. Conversely, if the sixth address signal A6 has a logic "0" value, then multiplexers FX0 and FX2 route the output signals of the F51 and F53 multiplexers, respectively. Another bypass signal (i.e., BY1) is used to control F7 multiplexer FX1, thereby selecting either the output signal of F6 multiplexer FX0 or the output signal of F6 multiplexer FX2 as the read data output signal. The bypass signal BY1 is thereby used as a seventh address signal A7. In the described embodiment, if the seventh address signal A7 has a logic "1" value, then multiplexer FX1 routes the output signal of the FX0 multiplexer as the read output data value. Conversely, if the seventh address signal A7 has a logic "0" value, then multiplexer FX1 routes the output signal of the FX2 multiplexer as the read output data value. As described in more detail below, the address signals A5 -A7 are also used to address the 128.times.1 RAM during write operations.
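The 128.times.1 read path just described can be summarized behaviorally. The sketch below is an illustration only; the helper name, the argument layout, and the bit ordering of the address are assumptions, not taken from the patent:

```python
def read_128x1(F, G, addr):
    """Behavioral sketch of the 128x1 read path.

    F, G: four 16-entry lookup tables each (slices S0-S3).
    addr: 7-bit read address, with A1 as the LSB (an assumption).
    """
    a14 = addr & 0xF        # A1-A4 select a cell within each LUT
    a5 = (addr >> 4) & 1    # BX: F5 muxes choose F ("1") or G ("0") outputs
    a6 = (addr >> 5) & 1    # BY0/BY2: controls F6 muxes FX0 and FX2
    a7 = (addr >> 6) & 1    # BY1: F7 mux FX1 produces the read data
    f5 = [F[i][a14] if a5 else G[i][a14] for i in range(4)]
    fx0 = f5[0] if a6 else f5[1]    # slice pair S0/S1
    fx2 = f5[2] if a6 else f5[3]    # slice pair S2/S3
    return fx0 if a7 else fx2
```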
As also described in more detail below, the unused bypass signal BY3 is used to provide a write data value to the 128.times.1 RAM during write operations. 64.times.2, 64.times.1 To operate CLB 1100 as a 64.times.2 RAM, the 64 memory cells in the F0, G0, F1 and G1 function generators of CLE slices S0 and S1 are used to store a first set of 64 data values, and the 64 memory cells in the F2, G2, F3 and G3 function generators of CLE slices S2 and S3 are used to store a second set of 64 data values. In general, one of the 64 data values in function generators F0, G0, F1 and G1 is read out through multiplexers F50, F51 and FX0 as a first bit of the two bit output signal. Similarly, a corresponding one of the 64 data values in function generators F2, G2, F3 and G3 is read out through multiplexers F52, F53 and FX2 as a second bit of the two bit output signal. More specifically, the F0 -F3 and G0 -G3 function generators are addressed by the same four read address signals A1 -A4 during a read operation. Multiplexers F50 -F53 are controlled by the fifth address signal A5 (i.e., BX0 /BX1 /BX2 /BX3), such that these multiplexers select either the output signals of the F0 -F3 function generators or the output signals of the G0 -G3 function generators. F6 multiplexers FX0 and FX2 are controlled by the sixth address signal A6 (i.e., BY0 /BY2), such that these multiplexers select either the output signals of multiplexers F50 and F52 or the output signals of multiplexers F51 and F53. In this manner, F6 multiplexer FX0 provides one bit of the read output signal, and F6 multiplexer FX2 provides the other bit of the read output signal in the 64.times.2 RAM. As described in more detail below, the address signals A5 -A6 are also used to address the 64.times.2 RAM during write operations. As also described in more detail below, the unused bypass signals BY1 and BY3 are used to provide write data values to the 64.times.2 RAM. A 64.times.1 RAM, which uses only CLE slices S0 and S1, is a subset of the 64.times.2 RAM, which uses CLE slices S0, S1, S2, and S3. The 64.times.1 RAM is accessed in the same manner as the 64.times.2 RAM. An independent 64.times.1 RAM can not be implemented in S2 and S3 because the write addresses of S2 and S3 are tied to S0 and S1. 32.times.4, 32.times.2, 32.times.1 To operate CLB 1100 as a 32.times.4 RAM, the 32 memory cells in the F0 and G0 function generators of CLE slice S0 are used to store a first set of 32 data values, the 32 memory cells in the F1 and G1 function generators of CLE slice S1 are used to store a second set of 32 data values, the 32 memory cells in the F2 and G2 function generators of CLE slice S2 are used to store a third set of 32 data values, and the 32 memory cells in the F3 and G3 function generators of CLE slice S3 are used to store a fourth set of 32 data values. In general, one of the 32 data values in function generators F0 and G0 is read out through multiplexer F50 as a first bit of the four bit output signal. Similarly, a corresponding one of the 32 data values in function generators F1 and G1 is read out through multiplexer F51 as a second bit of the four bit output signal. A corresponding one of the 32 data values in function generators F2 and G2 is read out through multiplexer F52 as a third bit of the four bit output signal. Finally, a corresponding one of the 32 data values in function generators F3 and G3 is read out through multiplexer F53 as a fourth bit of the four bit output signal. 
More specifically, the F0-F3 and G0 -G3 function generators are addressed by the same four read address signals A1 -A4 during a read operation. Multiplexers F50 -F53 are controlled by the fifth address signal A5 (i.e., BX0 /BX1 /BX2 /BX3), such that these multiplexers select either the output signals of the F0 -F3 function generators or the output signals of the G0 -G3 function generators. In this manner, multiplexers F50 -F53 provide the four bits of the read output signal in the 32.times.4 RAM. As described in more detail below, the address signal A5 is also used to address the 32.times.4 RAM during write operations. As also described in more detail below, the unused bypass signals BY0 -BY3 are used to provide write data values to the 32.times.4 RAM. A 32.times.2 RAM, which uses only CLE slices S0 and S1, is a subset of the 32.times.4 RAM. Similarly, a 32.times.1 RAM, which uses only CLE slice S0, is a subset of the 32.times.4 RAM. The 32.times.2 and 32.times.1 RAMs are accessed in the same manner as the 32.times.4 RAM. 16.times.8, 16.times.4, 16.times.2, 16.times.1 It is noted that CLB 1100 can be operated as a 16.times.8, 16.times.4, 16.times.2 or 16.times.1 RAM by using the data values read directly out of the lookup tables F0-F3 and G0-G3. In these RAMs, it is not necessary to use multiplexers F50 -F53 and FX0 -FX3 to select the read data values. As described in more detail below, in the 16.times.8, 16.times.4, 16.times.2 or 16.times.1 RAMs, the unused bypass signals BX0 -BX3 and BY0 -BY3 are used to provide up to eight write data values to these RAMs. In the foregoing manner, read data values for 128.times.1, 64.times.2, 64.times.1, 32.times.4, 32.times.2, 32.times.1, 16.times.8, 16.times.4, 16.times.2 and 16.times.1 RAMs can be routed out of CLB 1100 through multiplexers F50 -F53 and FX0 -FX3. Write Operations In order to operate CLB 1100 as a 128.times.1, 64.times.2, 64.times.1, 32.times.4, 32.times.2, 32.times.1, 16.times.8, 16.times.4, 16.times.2 and 16.times.1 RAM, it is necessary to provide a mechanism for routing input data values to the function generators F0 -F3 and G0 -G3 in a manner consistent with the various RAM configurations. As described in more detail below, this mechanism is largely provided by multiplexers 1010 and 1016 of CLE slice S0 (FIG. 21). In addition, it is necessary to provide a mechanism for providing write enable signals to the various function generators F0 -F3 and G0 -G3 in a manner consistent with the various RAM configurations. As described in more detail below, this mechanism is largely provided by write control logic 1009, along with multiplexers 1030-1031 and inverter 1040 (FIG. 21). Write Data Routing FIG. 25 is a block diagram illustrating the multiplexers corresponding with multiplexers 1010 and 1016 in CLE slices S0-S3, as well as function generators F0 -F3 and G0 -G3. These multiplexers are labeled with the reference numbers 1010N and 1016N, where N is the number of the slice in which the multiplexers are located. For example, multiplexers 1010 and 1016 in CLE slice S2 are labeled with the reference numbers 10102 and 10162, respectively. Many elements of CLE slices S0-S3 are not shown for purposes of clarity. In addition, the SHIFTIN input signals to multiplexers 10100 -10103 and the input signals from the G0 -G3 function generators to multiplexers 10160 -10163 are not shown in FIG. 25, as these signals are not material to the present embodiment.
Each of multiplexers 10100 -10103 is coupled to receive a corresponding one of alternate data input signals ALTDIG0 -ALTDIG3 and a corresponding one of bypass signals BY0 -BY3. Each of multiplexers 10160 -10163 is coupled to receive an output signal from a corresponding one of multiplexers 10100 -10103 and a corresponding one of bypass signals BX0 -BX3. The output signals provided by multiplexers 10100 -10103 are routed from CLE slices S0-S3 as data signals DIG0 -DIG3, respectively. Data signal DIG3 is routed to provide input data signals ALTDIG2 and ALTDIG1 in CLE slices S2 and S1. Data signal DIG1 is routed to provide input data signal ALTDIG0 in CLE slice S0. The output signals of multiplexers 10100 -10103 are also provided to G function generators G0 -G3 as write data input signals DINY0 -DINY3, respectively. The output signals of multiplexers 10160 -10163 are provided to F function generators F0 -F3 as write data input signals DINX0 -DINX3, respectively. Multiplexers 10100 -10103 and 10160 -10163 are controlled as follows to route write data values to function generators F0 -F3 and G0 -G3. 128.times.1 When CLB 1100 is to operate as a 128.times.1 RAM, multiplexers 10100 -10103 and 10160 -10163 are configured to route the bypass signal BY3 to the data input terminals of function generators F0 -F3 and G0 -G3. As a result, DINY3 =DINX3 =DINY2 =DINX2 =DINY1 =DINX1 =DINY0 =DINX0 =BY3. Note that the bypass signal BY3 is routed from CLE slice S3 to CLE slices S2 and S1 as the data signal DIG3. Similarly, the bypass signal BY3 is routed from CLE slice S1 to CLE slice S0 as the data signal DIG1. As described in more detail below, a write enable control signal will be applied to one of function generators F0 -F3 and G0 -G3, thereby enabling the write data input signal (BY3) to be written to this write-enabled function generator. The generation of this write enable control signal is controlled by the bypass signals BX0 -BX3 and BY0 -BY2 (i.e., the bypass signals other than BY3). 64.times.2, 64.times.1 When CLB 1100 is to operate as a 64.times.2 RAM, the bypass signal BY1 operates as a first write data input signal, and the bypass signal BY3 operates as a second write data input signal. More specifically, multiplexers 10100 -10101 and 10160 -10161 are configured to route the bypass signal BY1 to the write data input terminals of function generators F0 -F1 and G0 -G1. As a result, DINY1 =DINX1 =DINY0 =DINX0 =BY1. Similarly, multiplexers 10102 -10103 and 10162 -10163 are configured to route the bypass signal BY3 to the write data input terminals of function generators F2 -F3 and G2 -G3. As a result, DINY3 =DINX3 =DINY2 =DINX2 =BY3. As described in more detail below, during a write operation, a first write enable control signal is applied to one of function generators F0, F1, G0 and G1, and a second write enable control signal is applied to a corresponding one of function generators F2, F3, G2, and G3. In response, the write input data signals (BY1 and BY3) are written to the two function generators receiving the first and second write enable control signals. As described in more detail below, these first and second write enable control signals are generated in response to the bypass signals BX0 -BX3, BY0 and BY2 (i.e., the bypass signals not used as write data input signals). When CLB 1100 is to operate as a 64.times.1 RAM, CLE slices S0 and S1 are configured in the same manner described above for the 64.times.2 RAM. Thus, bypass signal BY1 is used as the write input data signal and the bypass signals BX0 -BX1 and BY0 are used to generate the required write enable signal. In the 64.times.1 RAM configuration, function generators F2 -F3 and G2 -G3 are free to perform other functions.
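For illustration (including the 32.times.4 case described next), the selection performed by multiplexers 1010N and 1016N in the single-port modes can be sketched as follows; the helper name and the signal ordering are assumptions for this sketch:

```python
def route_write_data(mode, BY):
    """Hypothetical helper: per-slice write data DIN for slices S0-S3,
    as selected by multiplexers 1010N/1016N. BY = [BY0, BY1, BY2, BY3]."""
    if mode == "128x1":
        return [BY[3]] * 4              # BY3 fans out to all eight LUTs
    if mode == "64x2":
        return [BY[1], BY[1], BY[3], BY[3]]
    if mode == "32x4":
        return list(BY)                 # one bypass signal per slice
    raise ValueError(mode)

# Each slice drives the same value onto DINXn (F LUT) and DINYn (G LUT).
assert route_write_data("64x2", ["d0", "d1", "d2", "d3"]) == ["d1", "d1", "d3", "d3"]
```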
32.times.4, 32.times.2, 32.times.1 When CLB 1100 is to operate as a 32.times.4 RAM, the bypass signals BY0 -BY3 operate as four write data input signals. Thus, multiplexers 10100 -10103 are configured to route the bypass signals BY0 -BY3 to function generators G0 -G3, respectively. Similarly, multiplexers 10160 -10163 are configured to route the bypass signals BY0 -BY3 to function generators F0 -F3, respectively. Thus, DINY3 =DINX3 =BY3, DINY2 =DINX2 =BY2, DINY1 =DINX1 =BY1, and DINY0 =DINX0 =BY0. As described in more detail below, during a write operation, a set of four write enable control signals is applied to either function generators F0 -F3 or to function generators G0 -G3. In response, the write input data signals (BY0 -BY3) are written to the four function generators receiving the write enable control signals. As described in more detail below, the set of four write enable control signals are generated in response to the bypass signals BX0 -BX3 (i.e., the bypass signals not used as write data input signals). When CLB 1100 is to operate as a 32.times.2 RAM, CLE slices S0 and S1 are configured in the same manner described above for the 32.times.4 RAM. Thus, bypass signals BY0 and BY1 are used as the write input data signals and the bypass signals BX0 -BX1 are used to generate the required write enable signals. In the 32.times.2 RAM configuration, function generators F2 -F3 and G2 -G3 are free to perform other functions. Similarly, when CLB 1100 is to operate as a 32.times.1 RAM, CLE slice S0 is configured in the same manner described above for the 32.times.4 RAM. Thus, bypass signal BY0 is used as the write input data signal and the bypass signal BX0 is used to generate the required write enable signals. In the 32.times.1 RAM configuration, function generators F1 -F3 and G1 -G3 are free to perform other functions. In the foregoing manner, multiplexers 10100 -10103 and 10160 -10163 provide a structure that enables the flexible application of write data values to function generators F0 -F3 and G0 -G3. Advantageously, many variations are possible, even though each of the CLE slices S0-S3 has an identical transistor layout. In the 128.times.1, 64.times.2, 32.times.4, and 16.times.8 RAMs, the write address terminals WF1-WF4 and WG1-WG4 of each of the function generators F0-F3 and G0-G3 are coupled to receive the A1 -A4 address signals. This is because these configurations all have shared read and write addresses. However, these write address signals are only effective within the associated function generator if the write enable signal corresponding to the function generator is asserted low. Write Enable Control Signals The mechanism for generating the write enable control signals for the various RAMs will now be described. Within each CLE slice, a pair of write enable control signals are generated by write control circuit 1009 (FIG. 22). In the present description, the write control circuits in CLE slices S0-S3 are labeled as write control circuits 10090 -10093, respectively. FIG. 26 is a circuit diagram of write control circuit 10090 of CLE slice S0 in accordance with one embodiment of the present invention. Write control circuit 10090 includes NAND gates 2501-2502, multiplexers 2503-2504 and inverter 2505.
If multiplexer 2503 is configured to route the SLICEWE0 signal, then multiplexer 2503 provides the SLICEWE0 signal to NAND gate 2502. If multiplexer 2504 is configured to pass the output signal provided by inverter 2505, then multiplexer 2504 provides the inverse of the SLICEWE0 signal (SLICEWE0#) to NAND gate 2501. Under these conditions, the SLICEWE0 signal is said to be `enabled` within write control circuit 10090. (Note that if multiplexers 2503 and 2504 are configured to pass logic "1" values, then NAND gates 2501 and 2502 will receive these logic "1" values, thereby effectively disabling the SLICEWE0 signal). Assuming that the SLICEWE0 signal is enabled in write control circuit 10090, NAND gate 2501 generates a write enable control signal WEG#0 in response to the SLICEWE2 signal, the SLICEWE1 signal, and the SLICEWE0# signal. Similarly, NAND gate 2502 generates a write enable control signal WEF#0 in response to the SLICEWE2 signal, the SLICEWE1 signal, and the SLICEWE0 signal. The WEG#0 and WEF#0 write control signals are provided to the write enable input terminals of function generators G0 and F0, respectively. When one of the WEG#0 and WEF#0 write control signals is asserted LOW, a write operation is enabled in the corresponding function generator G0 or F0. As described in more detail below, bypass output signals BYOUT and inverted bypass output signals BYINVOUT are generally provided as the SLICEWE2 and SLICEWE1 signals. The bypass output signals BXOUT are generally provided as the SLICEWE0 signals. FIG. 27 is a block diagram illustrating the write control circuits 10090 -10093 and function generators F0 -F3, G0 -G3 in CLE slices S0-S3 of CLB 1100 in accordance with the described embodiment. The other elements of CLE slices S0-S3 are not shown in FIG. 27 for purposes of clarity. Write control circuit 10090 is connected to receive bypass signals BY1, BY0, and BX0. Write control circuit 10091 is connected to receive bypass signals BY1, BY0 #, and BX1. Write control circuit 10092 is connected to receive bypass signals BY1 #, BY0, and BX0. Write control circuit 10093 is connected to receive bypass signals BY1 #, BY0 #, and BX1. Referring to FIG. 21, it is noted that the BY and BY# bypass signals, which are provided as output signals at the BYOUT and BYINVOUT terminals, can be disabled (i.e., set at logic "1" values) by configuring multiplexers 1030 and 1031 in the appropriate manner. Conversely, these multiplexers 1030 and 1031 can be configured to enable the BY and BY# signals at the BYOUT and BYINVOUT output terminals. 128.times.1 The write control structure of CLB 1100 operates as follows. When CLB 1100 is to be operated as a 128.times.1 RAM, the bypass signals BY0 -BY1, BY0 #-BY1 # and BX0 -BX1 provided to write control circuits 10090 -10093 are all enabled. Bypass signals BX0 -BX1 are identical, and correspond with the fifth address signal A5. Bypass signal BY0 corresponds with the sixth address signal A6, and bypass signal BY1 corresponds with the seventh address signal A7. 
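A simplified behavioral model of this circuit follows (a Python sketch under the assumptions above; it abstracts the configuration multiplexers 2503-2504 into a single `enabled` flag):

```python
def nand(*bits):
    return 0 if all(bits) else 1

def write_control(slicewe2, slicewe1, slicewe0, enabled=True):
    """Simplified model of one write control circuit 1009 (FIG. 26).

    Returns the active-low pair (WEG#, WEF#). When `enabled` is False,
    multiplexers 2503/2504 pass logic "1", disabling the SLICEWE0 input.
    """
    we0 = slicewe0 if enabled else 1            # multiplexer 2503
    we0_n = (1 - slicewe0) if enabled else 1    # inverter 2505 + mux 2504
    weg_n = nand(slicewe2, slicewe1, we0_n)     # NAND 2501 -> G generator
    wef_n = nand(slicewe2, slicewe1, we0)       # NAND 2502 -> F generator
    return weg_n, wef_n
```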
Table 1 below summarizes the manner in which write control circuits 10090 -10093 assert the write enable signals WEG#0 -WEG#3 and WEF#0 -WEF#3 in response to the address signals A7 -A5.

TABLE 1
A7-A5  WEG#3  WEF#3  WEG#2  WEF#2  WEG#1  WEF#1  WEG#0  WEF#0
000      0      1      1      1      1      1      1      1
001      1      0      1      1      1      1      1      1
010      1      1      0      1      1      1      1      1
011      1      1      1      0      1      1      1      1
100      1      1      1      1      0      1      1      1
101      1      1      1      1      1      0      1      1
110      1      1      1      1      1      1      0      1
111      1      1      1      1      1      1      1      0

As shown in Table 1, a different one of the function generators F0-F3, G0-G3 is write-enabled for each instance of the address signals A7 -A5. Thus, the addressing scheme of the write control structure corresponds with the addressing scheme of the read control structure described above. 64.times.2 or 64.times.1 When CLB 1100 is to be operated as a 64.times.2 or 64.times.1 RAM, the bypass signals BY0, BY0 #, and BX0 -BX1 provided to write control circuits 10090 -10093 are enabled. Bypass signals BY1 and BY1 # provided to write control circuits 10090 -10093 are disabled (i.e., set to logic "1" values) by appropriately configuring the multiplexers 1030-1031 in CLE slice S1. As described above, bypass signal BY1 is used as a write data value in this configuration. Bypass signals BX0 -BX1 are identical, and correspond with the fifth address signal A5. Bypass signal BY0 corresponds with the sixth address signal A6. Table 2 below summarizes the manner in which write control circuits 10090 -10093 assert the write enable signals WEG#0 -WEG#3 and WEF#0 -WEF#3 in response to the address signals A6 -A5.

TABLE 2
A6-A5  WEG#3  WEF#3  WEG#2  WEF#2  WEG#1  WEF#1  WEG#0  WEF#0
00       0      1      1      1      0      1      1      1
01       1      0      1      1      1      0      1      1
10       1      1      0      1      1      1      0      1
11       1      1      1      0      1      1      1      0

As shown in Table 2, a different pair of the function generators F0-F3, G0-G3 is write-enabled for each instance of the address signals A6 -A5. Thus, the addressing scheme of the write control structure corresponds with the addressing scheme of the read control structure described above. 32.times.4, 32.times.2 or 32.times.1 When CLB 1100 is to be operated as a 32.times.4, 32.times.2 or 32.times.1 RAM, the bypass signals BX0 -BX1 provided to write control circuits 10090 -10093 are enabled. Bypass signals BY0 -BY1 and BY0 #-BY1 # provided to write control circuits 10090 -10093 are disabled (i.e., set to logic "1" values) by appropriately configuring the multiplexers 1030 and 1031 in CLE slices S0-S3. As described above, bypass signals BY1 and BY0 are used as write data values in this configuration. Bypass signals BX0 -BX1 are identical, and correspond with the fifth address signal A5. Table 3 below summarizes the manner in which write control circuits 10090 -10093 assert the write enable signals WEG#0 -WEG#3 and WEF#0 -WEF#3 in response to the address signal A5.

TABLE 3
A5  WEG#3  WEF#3  WEG#2  WEF#2  WEG#1  WEF#1  WEG#0  WEF#0
0     0      1      0      1      0      1      0      1
1     1      0      1      0      1      0      1      0

As shown in Table 3, a different set of four function generators is write-enabled for each instance of the address signal A5. Thus, the addressing scheme of the write control structure corresponds with the addressing scheme of the read control structure described above. In the foregoing manner, the write enable signals are provided to function generators F0 -F3 and G0 -G3. Advantageously, a wide variety of write enable signal patterns can be provided to the function generators F0 -F3 and G0 -G3 in the CLB 1100, with relatively little overhead.
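Wiring four such circuits according to the bypass-signal pattern of FIG. 27 reproduces Table 1. The sketch below reuses the hypothetical write_control helper shown earlier and is an illustration only:

```python
def decode_128x1(a7, a6, a5):
    """Drive the four write control circuits with the FIG. 27 wiring
    (S0: BY1,BY0,BX0; S1: BY1,BY0#,BX1; S2: BY1#,BY0,BX0; S3: BY1#,BY0#,BX1)
    and return {slice: (WEG#, WEF#)}. BX0 = BX1 = A5 here."""
    by1, by0, bx = a7, a6, a5
    wiring = {
        0: (by1,     by0,     bx),
        1: (by1,     1 - by0, bx),
        2: (1 - by1, by0,     bx),
        3: (1 - by1, 1 - by0, bx),
    }
    return {s: write_control(*w) for s, w in wiring.items()}

# Row "000" of Table 1: only WEG#3 is asserted (low).
assert decode_128x1(0, 0, 0)[3] == (0, 1)
```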
In addition, because the transistor layout of each of the CLE slices is identical, the layout and software configuration of the resulting FPGA is simplified. The functionality of the bypass signals BX0 -BX3 and BY0 -BY3 in the 128.times.1, 64.times.2, 32.times.4 and 16.times.8 RAM embodiments is summarized below in Table 4.

TABLE 4
Signal  128.times.1  64.times.2  32.times.4  16.times.8
BY3     DATA         DATA        DATA        DATA
BY2     A6           A6          DATA        DATA
BY1     A7           DATA        DATA        DATA
BY0     A6           A6          DATA        DATA
BX3     A5           A5          A5          DATA
BX2     A5           A5          A5          DATA
BX1     A5           A5          A5          DATA
BX0     A5           A5          A5          DATA

In accordance with yet another embodiment of the present invention, CLB 1100 can be operated as a dual-port RAM of various sizes. In the above-described single-port RAM embodiments, the address signals A1 -A4 are provided to each of the function generators F0 -F3 and G0 -G3 used to implement the single-port RAM. The routing of address signals A1 -A4 in the single-port embodiments is therefore straightforward. However, in the case of a dual-port implementation, the routing of the address signals A1-A4 becomes more complex. FIG. 28 is a block diagram of CLB 1100, which illustrates the connections to the four address inputs of function generators F0 -F3 and G0 -G3 in accordance with one embodiment of the present invention. Address AA[4:1] is provided as a write address signal (i.e., WF1-WF4 or WG1-WG4) to function generators F0, G0, F2 and G2. Address AA[4:1] is also provided as a read address signal (i.e., F1-F4 or G1-G4) to function generators F0 and G0. Address AB[4:1] is provided as a write address signal (i.e., WF1-WF4 or WG1-WG4) to function generators F1, G1, F3 and G3. Address AB[4:1] is also provided as a read address signal (i.e., F1-F4 or G1-G4) to function generators F1 and G1. Address AC[4:1] is provided as a read address signal (i.e., F1-F4 or G1-G4) to function generators F2 and G2. Address AD[4:1] is provided as a read address signal (i.e., F1-F4 or G1-G4) to function generators F3 and G3. Note that in the above-described single-port embodiments, AA[4:1]=AB[4:1]=AC[4:1]=AD[4:1]=A4 -A1. However, in the dual-port embodiments addressing is implemented as follows. 64.times.1 Dual-Port CLB 1100 can be configured to operate as a 64.times.1 dual-port RAM in the following manner. In general, function generators F0 -F1 and G0 -G1 are used to implement a write port of the dual-port memory, and function generators F2 -F3 and G2 -G3 are used to implement a read-only port of the dual-port memory. Note that data values can also be read from function generators F0 -F1 and G0 -G1, thereby making the write port a read/write port, if desired. Data values are written to the 64.times.1 dual-port memory as follows. The write control circuits 10090 -10093 are configured in the manner described above for a 64.times.2 RAM array. As a result, write enable signals are provided to pairs of function generators as shown in Table 2. Data input multiplexers 10100 -10103 and 10160 -10163 are configured in the manner described above for a 64.times.2 RAM array. Thus, a single data signal is routed to both BY3 and BY1, and is thus provided to the data input terminal of each of the function generators F0 -F3 and G0 -G3. The desired write address signals A4 -A1 are applied to the write address terminals of function generators F0 -F3 and G0 -G3 as address signals AA[4:1] and AB[4:1].
As described above in connection with the 64.times.2 RAM, write operations will be enabled in one of function generators F0 -F1 and G0 -G1, and in a corresponding one of function generators F2 -F3 and G2 -G3. For example, a write operation may be enabled in function generators F0 and F2 (see Table 2). As a result, the data written to CLB 1100 is stored in two locations, namely, at one location in function generators F0 -F1 and G0 -G1, and at a corresponding location in function generators F2 -F3 and G2 -G3. Data can be read from the read-only port of the 64.times.1 dual-port RAM as follows. The desired read address signals A4 -A1 are applied to the read address terminals of function generators F2 -F3 and G2 -G3 as address signals AC[4:1] and AD[4:1]. As a result, read operations will be enabled in all four of these function generators F2 -F3 and G2 -G3 at the address location identified by the read address signals A4 -A1. The wide function multiplexers F52, F53 and FX2 are configured as described above in the 64.times.2 single-port RAM embodiment. These multiplexers F52, F53 and FX2 are controlled to select the appropriate read output signal from function generators F2 -F3 and G2 -G3 in response to the address signals A5 and A6. 32.times.2, 32.times.1 Dual-Port RAM CLB 1100 can be configured to operate as a 32.times.2 dual-port memory in the following manner. In general, function generators F0 -F1 and G0 -G1 are used to implement a write port of the dual-port memory, and function generators F2 -F3 and G2 -G3 are used to implement a read-only port of the dual-port memory. Note that data values can also be read from function generators F0 -F1 and G0 -G1, thereby making the write port a read/write port, if desired. Data values are written to the 32.times.2 dual-port memory as follows. The write control circuits 10090 -10093 are configured in the manner described above for a 32.times.4 RAM array. As a result, write enable signals are provided to sets of four function generators as shown in Table 3. Data input multiplexers 10100 -10103 and 10160 -10163 are configured in the manner described above for a 32.times.4 RAM array. A first data signal (BY2 /BY0) is provided to the data input terminal of each of the function generators F0, G0, F2, and G2. A second data signal (BY3 /BY1) is provided to the data input terminal of each of the function generators F1, G1, F3, and G3. The desired write address signals A4 -A1 are applied to the write address terminals of function generators F0 -F3 and G0 -G3 as address signals AA[4:1] and AB[4:1]. As described above in connection with the 32.times.4 RAM, write operations will be enabled in one of the function generators in each of the CLE slices S0 -S3. For example, write operations may be enabled at the address identified by write address A4 -A1 in function generators F0, F1, F2 and F3 (or in function generators G0, G1, G2 and G3) (see Table 3). As a result, the first data signal (BY2 /BY0) written to CLB 1100 is stored in two locations, namely, at one location in function generators F0 and G0 and at a corresponding location in function generators F2 and G2. Similarly, the second data signal (BY3 /BY1) written to CLB 1100 is stored in two locations, namely, at one location in function generators F1 and G1 and at a corresponding location in function generators F3 and G3. Data can be read from the read-only port of the 32.times.2 dual-port RAM as follows.
The desired read address signals A4 -A1 are applied to the read address terminals of function generators F2 -F3 and G2 -G3 as address signals AC[4:1] and AD[4:1]. As a result, read operations will be enabled in all four of these function generators F2 -F3 and G2 -G3 at the address location identified by the read address signals A4 -A1. The wide function multiplexers F50 -F53 are configured as described above in the 32.times.4 single-port RAM embodiment. Multiplexers F52 and F53 are controlled to select the appropriate read output signal from function generators F2 -F3 and G2 -G3 in response to the address signal A5. A 32.times.1 dual-port RAM can be implemented by using only half of the 32.times.2 dual-port RAM. For example, a 32.times.1 dual-port RAM can be implemented by using function generators F0 and G0 to form the write port, and function generators F2 and G2 to form the read-only port. 16.times.4, 16.times.2, 16.times.1 Dual-Port RAM CLB 1100 can be configured to operate as a 16.times.4 dual-port memory in the following manner. In general, function generators F0 -F1 and G0 -G1 are used to implement a write port of the dual-port memory, and function generators F2 -F3 and G2 -G3 are used to implement a read-only port of the dual-port memory. Note that data values can also be read from function generators F0 -F1 and G0 -G1, thereby making the write port a read/write port, if desired. Data values are written to the 16.times.4 dual-port memory as follows. The write control circuits 10090 -10093 are configured in the manner described above for a 16.times.8 RAM array. The input data values are routed through multiplexers 10100 -10103 and 10160 -10163 to function generators F0 -F3 and G0 -G3 as described above for a 16.times.8 RAM array. The desired write address signals A4 -A1 are applied to the write address terminals of function generators F0 -F3 and G0 -G3 as address signals AA[4:1] and AB[4:1]. As described above in connection with the 16.times.8 RAM, write operations will be enabled in each of the function generators in CLE slices S0 -S3. As a result, a first bit written to CLB 1100 is stored in two locations, namely, at one location in function generator F0 and at a corresponding location in function generator F2. Similarly, a second bit is stored at one location in function generator G0 and at a corresponding location in function generator G2. A third bit is stored at one location in function generator F1 and at a corresponding location in function generator F3. Finally, a fourth bit is stored in one location in function generator G1 and a corresponding location in function generator G3. Data can be read from the read-only port of the 16.times.4 dual-port RAM as follows. The desired read address signals A4 -A1 are applied to the read address terminals of function generators F2 -F3 and G2 -G3 as address signals AC[4:1] and AD[4:1]. As a result, read operations will be enabled in all four of these function generators F2 -F3 and G2 -G3 at the address location identified by the read address signals A4 -A1. As described above in the 16.times.8 single-port RAM embodiment, these four signals are routed directly from the function generators as read output signals. A 16.times.2 or 16.times.1 dual-port RAM can be implemented by using only a half or a quarter, respectively, of the 16.times.4 dual-port RAM. For example, a 16.times.1 dual-port RAM can be implemented by using function generator F0 to form the write port, and function generator F2 to form the read-only port.
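The dual-port behavior described in this section amounts to duplicating every write into both halves of the CLB while the two ports read independently. A minimal sketch (names and data layout are assumptions):

```python
class DualPort64x1:
    """Sketch of the 64x1 dual-port RAM: slices S0/S1 form the
    read/write port, while S2/S3 hold a mirrored copy for the
    read-only port."""
    def __init__(self):
        self.port_a = [0] * 64   # F0, G0, F1, G1 (write/read port)
        self.port_b = [0] * 64   # F2, G2, F3, G3 (read-only port copy)

    def write(self, waddr, bit):
        # The shared write address (AA/AB) stores the data in two
        # locations, one per half of the CLB.
        self.port_a[waddr] = bit
        self.port_b[waddr] = bit

    def read_a(self, raddr):     # addressed via AA/AB
        return self.port_a[raddr]

    def read_b(self, raddr):     # addressed independently via AC/AD
        return self.port_b[raddr]
```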
Numerous modifications and variations of the present invention are possible in light of the above teachings. Although FIGS. 7 and 10 show a memory cell programmed through only one node of the latch, the invention can also be used with memory cells in which some data signals are inverted and applied to both nodes of the latch, or in which different control signals are applied to different nodes of the latch. Further, in FIG. 10 the three transistors 706, 708, and 707 can be implemented as a multiplexer receiving input signals on lines 704, 714, and 705. And transistors 706, 708, 707, and 720 can be replaced by transmission gates. While particular multiplexer and demultiplexer implementations are shown, the invention can use other implementations as well. And, of course, different structures and methods for generating signals such as Phi1, Phi2, and WS can be used with the invention. Further, although the above embodiments show a single multiplexer with a single output terminal for selecting one signal from a plurality of memory cells, other embodiments can select more than one memory cell from which to provide an output signal. And although FIGS. 19 and 20 show a CLB with lookup tables and multiplexers for generating functions of up to 8 input signals, other embodiments can use CLBs with more lookup tables and higher order multiplexers, for example CLBs with 16 or 32 lookup tables with F9 and F10 multiplexers. A lookup table can have fewer or more than the 16 memory cells shown. For example, a 6-input lookup table would use 64 memory cells (configurable as a shift register) and the combining multiplexers would start with F7. Further, although the cascading aspect of the invention has been discussed in comparison to FIG. 8, this aspect also applies to structures with demultiplexing, such as shown in FIG. 11. More fundamentally, although the invention has been described above in connection with an FPGA, a shift register with cascade multiplexers can be formed in structures other than FPGAs, and need not be formed in connection with lookup tables. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described above.
A first device is determined as connected to a first one of a plurality of ports of a root complex. Addresses are assigned corresponding to a first hierarchy of devices including the first device. A second device is determined as connected through a mapping portal bridge at a second one of the ports of the root complex, the second device included in another second hierarchy of devices. A mapping table is generated that corresponds to the mapping portal bridge. The mapping table defines a translation between addressing used in a first view of a configuration address space of the system and addressing used in a second view of the configuration address space. The first view includes a view of the root complex and the second view includes a view corresponding to the second hierarchy of devices, the first hierarchy of devices being addressed according to the first view. |
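As one way to picture the mapping table just described, the following speculative Python sketch (all names and the table layout are assumptions, not taken from this disclosure) keys the translation by bus-device-function (BDF) address and converts between the root complex's primary view and the second hierarchy's view:

```python
def bdf(bus, dev, fn):
    """Pack an 8-bit bus, 5-bit device, and 3-bit function number."""
    return (bus << 8) | (dev << 3) | fn

class MappingPortalBridge:
    """Speculative sketch: translate configuration addresses between
    the primary (root complex) view and the secondary hierarchy's view."""
    def __init__(self):
        self._map = {}          # primary-view BDF -> secondary-view BDF

    def add_entry(self, primary, secondary):
        self._map[primary] = secondary

    def to_secondary(self, primary):
        # Request heading downstream, into the second hierarchy.
        return self._map[primary]

    def to_primary(self, secondary):
        # Completion heading upstream, back toward the root complex.
        for p, s in self._map.items():
            if s == secondary:
                return p
        raise KeyError(secondary)

mpb = MappingPortalBridge()
mpb.add_entry(bdf(2, 0, 0), bdf(0, 0, 0))   # hypothetical table entry
assert mpb.to_primary(bdf(0, 0, 0)) == bdf(2, 0, 0)
```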
1. At least one machine-accessible storage medium having stored thereon code that, when executed on a machine, causes the machine to:
determine that at least one first device is connected to a first port of a plurality of ports of a root complex of a system;
assign addresses corresponding to a first hierarchy of devices including the first device;
determine that a second device is connected through a mapping portal bridge at a second port of the plurality of ports of the root complex, the second device included in another, second hierarchy of devices; and
trigger generation of a mapping table corresponding to the mapping portal bridge, wherein the mapping table defines a translation between addressing used in a first view of a configuration address space of the system and addressing used in a second view of the configuration address space, the first view comprising a view of the root complex, the second view comprising a view corresponding to the second hierarchy of devices, and the addresses assigned to the first hierarchy of devices being according to the first view.
2. The storage medium of claim 1, wherein the code is further executable to assign addresses to the second hierarchy of devices according to the first view of the configuration address space.
3. The storage medium of claim 2, wherein each of the second hierarchy of devices is also assigned a corresponding address according to the second view of the configuration address space.
4. The storage medium of any one of claims 1 to 3, wherein the addresses in each of the first view and the second view of the configuration address space comprise respective bus-device-function (BDF) numbers.
5. The storage medium of claim 4, wherein the addresses assigned according to the first view of the configuration address space are assigned to optimize the assignment of bus numbers utilized in the first view.
6. The storage medium of claim 5, wherein the addresses assigned according to the second view of the configuration address space are assigned according to a different, second address assignment scheme.
7. The storage medium of claim 6, wherein the second scheme is agnostic to optimizing bus number assignments within the addresses of the second view.
8. The storage medium of claim 4, wherein the configuration address space comprises a PCIe configuration address space.
9. The storage medium of claim 4, wherein a first number of bus numbers are allowed in the first view of the configuration address space, a second number of bus numbers are assigned in the second view of the configuration address space, a third number of bus numbers are assigned in the first view of the configuration address space, and a sum of the second number and the third number of bus numbers exceeds the first number.
10. The storage medium of any one of claims 1 to 9, wherein the mapping portal bridge is implemented in a switching device that connects the second hierarchy of devices to the root complex.
11. The storage medium of any one of claims 1 to 10, wherein the mapping portal bridge is implemented in the second port.
12. The storage medium of any one of claims 1 to 11, wherein the mapping portal bridge is to use the mapping table to facilitate communication between the second hierarchy of devices and the root complex.
13. The storage medium of any one of claims 1 to 12, wherein the code is further executable to discover devices in each of the first device hierarchy and the second device hierarchy according to a respective search algorithm.
14. The storage medium of claim 13, wherein the
search algorithm comprises a depth-first search.
15. The storage medium of claim 13, wherein the search algorithm comprises a breadth-first search.
16. The storage medium of claim 13, wherein the search algorithm used to discover devices in the first hierarchy is different from the search algorithm used to discover devices in the second hierarchy.
17. The storage medium of claim 13, wherein the search algorithm used to discover devices in the first hierarchy is the same as the search algorithm used to discover devices in the second hierarchy.
18. The storage medium of any one of claims 1 to 17, wherein at least a portion of the addresses in the first view of the configuration address space is reserved for hot plugging.
19. A method comprising:
determining that at least one first device is connected to a first port of a plurality of ports of a root complex of a system;
assigning addresses corresponding to a first hierarchy of devices including the first device;
determining that a second device is connected through a mapping portal bridge at a second port of the plurality of ports of the root complex, the second device included in another, second hierarchy of devices; and
triggering generation of a mapping table corresponding to the mapping portal bridge, wherein the mapping table defines a translation between addressing used in a first view of a configuration address space of the system and addressing used in a second view of the configuration address space, the first view comprising a view of the root complex, the second view comprising a view corresponding to the second hierarchy of devices, and the addresses assigned to the first hierarchy of devices being according to the first view.
20. The method of claim 19, further comprising assigning addresses to the second hierarchy of devices according to the first view of the configuration address space.
21. The method of claim 20, wherein each of the second hierarchy of devices is also assigned a corresponding address according to the second view of the configuration address space.
22. The method of any of claims 19-21, wherein the addresses in each of the first view and the second view of the configuration address space comprise respective bus-device-function (BDF) numbers.
23. The method of claim 22, wherein the addresses assigned according to the first view of the configuration address space are assigned to optimize the assignment of bus numbers utilized in the first view.
24. The method of claim 23, wherein the addresses assigned according to the second view of the configuration address space are assigned according to a different, second address assignment scheme.
25. The method of claim 24, wherein the second scheme is agnostic to optimizing bus number assignments within the addresses of the second view.
26. The method of claim 22, wherein the configuration address space comprises a PCIe configuration address space.
27. The method of claim 22, wherein a first number of bus numbers are allowed in the first view of the configuration address space, a second number of bus numbers are assigned in the second view of the configuration address space, a third number of bus numbers are assigned in the first view of the configuration address space, and a sum of the second number and the third number of bus numbers exceeds the first number.
28. The method of any of claims 19-27, wherein the mapping portal bridge is implemented in a switching device that connects the second hierarchy of devices to the root complex.
29. The method of any of claims 19-28, wherein the
mapping portal bridge is implemented in the second port.
30. The method of any of claims 19-29, wherein the mapping portal bridge is to use the mapping table to facilitate communication between the second hierarchy of devices and the root complex.
31. The method of any of claims 19-30, further comprising discovering devices in each of the first device hierarchy and the second device hierarchy according to a respective search algorithm.
32. The method of claim 31, wherein the search algorithm comprises a depth-first search.
33. The method of claim 31, wherein the search algorithm comprises a breadth-first search.
34. The method of claim 31, wherein the search algorithm used to discover devices in the first hierarchy is different from the search algorithm used to discover devices in the second hierarchy.
35. The method of claim 31, wherein the search algorithm used to discover devices in the first hierarchy is the same as the search algorithm used to discover devices in the second hierarchy.
36. The method of claim 19, wherein at least a portion of the addresses in the first view of the configuration address space is reserved for hot plugging.
37. A system comprising means for performing the method of any of claims 19-36.
38. A system comprising:
a root complex comprising a plurality of ports for coupling to a plurality of device hierarchies; and
system software, executable by a processor, to:
determine that at least one first device is connected to a first port of the plurality of ports;
assign addresses corresponding to a first hierarchy of devices including the first device;
determine that a second device is connected through a mapping portal bridge at a second port of the plurality of ports of the root complex, the second device included in another, second hierarchy of devices; and
generate a mapping table corresponding to the mapping portal bridge, wherein the mapping table defines a translation between addressing used in a first view of a configuration address space of the system and addressing used in a second view of the configuration address space, the first view comprising a view of the root complex, the second view comprising a view corresponding to the second hierarchy of devices, and the addresses assigned to the first hierarchy of devices being according to the first view.
Speculative enumeration of bus-device-function address space

Cross-reference to related applications

This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/387,492, filed on Dec. 26, 2015 and entitled "SPECULATIVE ENUMERATION OF BUS-DEVICE-FUNCTION ADDRESS SPACE", and of U.S. Non-Provisional Patent Application Serial No. 15/079,922, filed on Mar. 24.

Technical field

This disclosure relates to computing systems and, in particular (though not exclusively), to address space mapping.

Background

The Peripheral Component Interconnect (PCI) configuration space is utilized by systems employing PCI, PCI-X, and PCI Express (PCIe) to perform configuration tasks for PCI-based devices. PCI-based devices have an address space for device configuration registers called the configuration space, and PCI Express introduces an extended configuration space for devices. The configuration space registers are typically mapped by the host processor to memory-mapped input/output locations. Device drivers, operating systems, and diagnostic software access the configuration space and can read and write information to the configuration space registers.

One of the improvements of the PCI local bus over other I/O architectures is its configuration mechanism. In addition to the normal memory-mapped and I/O port address spaces, each device function on the bus has a configuration space that is 256 bytes long, addressable by knowing the 8-bit PCI bus number, 5-bit device number, and 3-bit function number for the device (commonly referred to as the BDF or B/D/F, from the bus/device/function abbreviations). This allows up to 256 buses, with up to 32 devices per bus and 8 functions per device. A single PCI expansion card can respond as a device and must implement at least function number 0. The first 64 bytes of the configuration space are standardized; the remaining bytes are available for specification-defined extensions and/or vendor-defined purposes.

In order to allow more parts of the configuration space to be standardized without conflicting with existing uses, there can be a list of capabilities defined within the first 192 bytes of the PCI configuration space. Each capability has one byte that describes which capability it is and one byte that points to the next capability. The number of additional bytes depends on the capability ID. If capabilities are used, a bit in the status register is set, and a pointer to the first capability in the list is provided. Similar features have been provided in PCIe, such as the PCIe extended capability structure.
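As a concrete illustration of the BDF addressing described above, the sketch below packs the 8-bit bus, 5-bit device, and 3-bit function numbers and computes a register offset under a flat mapping; the flat-mapping arithmetic is an illustrative assumption (PCIe's extended configuration space per function is larger than the 256 bytes assumed here):

```python
def encode_bdf(bus, dev, fn):
    """8-bit bus, 5-bit device, 3-bit function -> 16-bit BDF."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8
    return (bus << 8) | (dev << 3) | fn

def config_offset(bus, dev, fn, register=0):
    """Byte offset of a register in a flat mapping of configuration
    space, assuming the 256 bytes per function described above."""
    return encode_bdf(bus, dev, fn) * 256 + register

# 256 buses x 32 devices x 8 functions fill the 16-bit BDF space.
assert encode_bdf(255, 31, 7) == 0xFFFF
```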
DRAWINGS

FIG. 1 illustrates an embodiment of a computing system including an interconnect architecture.
FIG. 2 illustrates an embodiment of an interconnect architecture that includes a layered stack.
FIG. 3 illustrates an embodiment of a packet or request received or generated within an interconnect fabric.
FIG. 4 illustrates an embodiment of a transmitter and receiver pair for an interconnect architecture.
FIG. 5 illustrates a representation of a system bus.
FIG. 6 illustrates a representation of an example enumeration of bus identifiers in a system.
FIG. 7 illustrates an embodiment of a mapping portal bridge (MPB).
FIG. 8 illustrates a representation of a corresponding address map and enumeration of bus identifiers in a system.
FIG. 9 illustrates a representation of at least a portion of an example capability register.
FIGS. 10A-10C are simplified block diagrams illustrating example techniques for enumerating devices within a system.
FIG. 11 is a simplified flow diagram illustrating an example technique for enumerating devices within a system.
FIG. 12 illustrates an embodiment of a block diagram of a computing system including a multi-core processor.
FIG. 13 illustrates another embodiment of a block diagram for a computing system.

Detailed description

In the following description, numerous specific details are set forth, such as examples of specific types of processor and system configurations, specific hardware structures, specific architectural and micro-architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, and specific processor pipeline stages and operations, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well-known components and methods, such as specific and alternative processor architectures, specific logic circuits/code for the described algorithms, specific firmware code, specific interconnect operations, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expressions of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, have not been described in detail in order to avoid unnecessarily obscuring the present invention. Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, system-on-chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below.
Moreover, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency.

As computing systems advance, the components therein are becoming more complex. As a result, the interconnect architecture that couples and communicates between the components is also increasing in complexity to ensure that bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, the singular goal of most fabrics is to provide the highest possible performance with maximum power saving. Below, a number of interconnects are discussed which would potentially benefit from aspects of the invention described herein.

One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to inter-operate in an open architecture, spanning multiple market segments: clients (desktops and mobile), servers (standard and enterprise), and embedded and communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocol to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among some of the advanced features supported by PCI Express.

Referring to FIG. 1, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. System 100 includes a processor 105 and a system memory 110 coupled to a controller hub 115. Processor 105 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor 105 is coupled to controller hub 115 through a front-side bus (FSB) 106. In one embodiment, FSB 106 is a serial point-to-point interconnect as described below. In another embodiment, link 106 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard.

System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100. System memory 110 is coupled to controller hub 115 through memory interface 116. Examples of a memory interface include a double-data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

In one embodiment, controller hub 115 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy.
Examples of controller hub 115 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often, the term "chipset" refers to two physically separate controller hubs, i.e., a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 105, while controller hub 115 is to communicate with I/O devices in a similar manner as described below. In some embodiments, peer-to-peer routing is optionally supported through root complex 115.

Here, controller hub 115 is coupled to switch/bridge 120 through serial link 119. Input/output modules 117 and 121, which may also be referred to as interfaces/ports 117 and 121, include/implement a layered protocol stack to provide communication between controller hub 115 and switch 120. In one embodiment, multiple devices are capable of being coupled to switch 120.

Switch/bridge 120 routes packets/messages from device 125 upstream, i.e., up the hierarchy towards a root complex, to controller hub 115, and downstream, i.e., down the hierarchy away from a root controller, from processor 105 or system memory 110 to device 125. In one embodiment, switch 120 is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device 125 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. Often in the PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device 125 may include a PCIe-to-PCI/PCI-X bridge to support legacy or other-version PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root-complex integrated endpoints.

Graphics accelerator 130 is also coupled to controller hub 115 through serial link 132. In one embodiment, graphics accelerator 130 is coupled to an MCH, which is coupled to an ICH. Switch 120, and accordingly I/O device 125, is then coupled to the ICH. I/O modules 131 and 118 are also to implement a layered protocol stack to communicate between graphics accelerator 130 and controller hub 115. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 130 itself may be integrated in processor 105.

Turning to FIG. 2, an embodiment of a layered protocol stack is illustrated. Layered protocol stack 200 includes any form of a layered communication stack, such as a QuickPath Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or other layered stack. Although the following discussion in reference to FIGS. 1-4 is presented in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, protocol stack 200 is a PCIe protocol stack including transaction layer 205, link layer 210, and physical layer 220. An interface, such as interfaces 117, 118, 121, 122, 126, and 131 in FIG. 1, may be represented as communication protocol stack 200.
Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCI Express uses packets to communicate information between components. Packets are formed in transaction layer 205 and data link layer 210 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side, the reverse process occurs, and packets get transformed from their physical layer 220 representation to the data link layer 210 representation and, finally (for transaction layer packets), to the form that can be processed by transaction layer 205 of the receiving device.

Transaction layer

In one embodiment, transaction layer 205 is to provide an interface between a device's processing core and the interconnect architecture, such as data link layer 210 and physical layer 220. In this regard, a primary responsibility of transaction layer 205 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). Transaction layer 205 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e., transactions with request and response separated in time, allowing a link to carry other traffic while the target device gathers data for the response.

In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in transaction layer 205. An external device at the opposite end of the link, such as controller hub 115 in Figure 1, counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed a credit limit. Upon receiving a response, an amount of credit is restored. An advantage of a credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered.
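A minimal sketch of this credit accounting follows. The names are hypothetical and the model is simplified to a single credit type; actual PCIe tracks separate posted/non-posted/completion credits, each split into header and data credits, per virtual channel:

    /* Simplified credit-based flow control accounting (illustrative only). */
    #include <stdbool.h>
    #include <stdint.h>

    struct flow_credits {
        uint32_t advertised;  /* credits advertised by the receiver at init */
        uint32_t consumed;    /* credits consumed by TLPs sent so far */
    };

    /* A TLP may be transmitted only if it does not exceed the credit limit. */
    static bool can_transmit(const struct flow_credits *fc, uint32_t tlp_cost)
    {
        return fc->consumed + tlp_cost <= fc->advertised;
    }

    static void on_transmit(struct flow_credits *fc, uint32_t tlp_cost)
    {
        fc->consumed += tlp_cost;
    }

    /* Credits are restored when the receiver frees buffer space and
     * returns a flow-control update. */
    static void on_credit_return(struct flow_credits *fc, uint32_t credits)
    {
        fc->consumed -= credits;
    }

Note how the transmitter never waits on a credit return unless the limit is actually reached, which is the stated advantage of the scheme.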
In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access the configuration space of PCIe devices. Transactions to the configuration space include read requests and write requests. Message transactions are defined to support in-band communication between PCIe agents.

Therefore, in one embodiment, transaction layer 205 assembles packet header/payload 156. The format for current packet headers/payloads may be found in the PCIe specification at the PCIe specification website.

Referring briefly to Figure 3, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, transaction descriptor 300 is a mechanism for carrying transaction information. In this regard, transaction descriptor 300 supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and association of transactions with channels.

Transaction descriptor 300 includes global identifier field 302, attributes field 304, and channel identifier field 306. In the illustrated example, global identifier field 302 is depicted comprising local transaction identifier field 308 and source identifier field 310. In one embodiment, global transaction identifier 302 is unique for all outstanding requests.

According to one implementation, local transaction identifier field 308 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, source identifier 310 uniquely identifies the requester agent within a PCIe hierarchy. Accordingly, together with source ID 310, local transaction identifier field 308 provides global identification of a transaction within the hierarchy domain.

Attributes field 304 specifies characteristics and relationships of the transaction. In this regard, attributes field 304 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, attributes field 304 includes priority field 312, reserved field 314, ordering field 316, and no-snoop field 318. Here, priority subfield 312 may be modified by an initiator to assign a priority to the transaction. Reserved attribute field 314 is left reserved for future, or vendor-defined, usage. Possible usage models using priority or security attributes may be implemented using the reserved attribute field.

In this example, ordering attribute field 316 is used to supply optional information conveying the type of ordering that may modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes that default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. Snoop attribute field 318 is utilized to determine whether transactions are snooped. As shown, channel ID field 306 identifies a channel that a transaction is associated with.
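The field layout just described can be pictured as a simple structure. The sketch below is illustrative only; the field widths are assumptions for the sketch and do not reproduce the actual PCIe TLP header format:

    #include <stdint.h>

    /* Illustrative layout of transaction descriptor 300; widths are
     * hypothetical and not taken from the PCIe specification. */
    struct transaction_descriptor {
        /* Global identifier field 302 */
        uint16_t local_txn_id;  /* field 308: unique per requester's outstanding requests */
        uint16_t source_id;     /* field 310: identifies the requester in the hierarchy */

        /* Attributes field 304 */
        unsigned priority : 2;  /* field 312: assigned by the initiator */
        unsigned reserved : 2;  /* field 314: future/vendor-defined usage */
        unsigned ordering : 2;  /* field 316: 0 = default ordering, 1 = relaxed */
        unsigned no_snoop : 1;  /* field 318: whether the transaction is snooped */

        /* Channel identifier field 306 */
        uint8_t  channel_id;
    };

Together, local_txn_id and source_id model the global identifier: the pair is unique across the hierarchy domain even though each requester only guarantees uniqueness of its own local IDs.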
Link layer

Link layer 210, also referred to as data link layer 210, acts as an intermediate stage between transaction layer 205 and physical layer 220. In one embodiment, a responsibility of data link layer 210 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between two components of a link. One side of data link layer 210 accepts TLPs assembled by transaction layer 205, applies packet sequence identifier 211, i.e., an identification number or packet number, calculates and applies an error detection code, i.e., CRC 212, and submits the modified TLPs to physical layer 220 for transmission across a physical medium to an external device.

Physical layer

In one embodiment, physical layer 220 includes logical sub-block 221 and electrical sub-block 222 to physically transmit a packet to an external device. Here, logical sub-block 221 is responsible for the "digital" functions of physical layer 220. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by physical sub-block 222, and a receiver section to identify and prepare received information before passing it to link layer 210.

Physical sub-block 222 includes a transmitter and a receiver. Logical sub-block 221 supplies symbols to the transmitter, which serializes the symbols and transmits them to an external device. Serialized symbols are supplied to the receiver from an external device, and the receiver transforms the received signals into a bit stream. The bit stream is de-serialized and supplied to logical sub-block 221. In one embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet into frames. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

As stated above, although transaction layer 205, link layer 210, and physical layer 220 are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e., a transaction layer; a second layer to sequence packets, i.e., a link layer; and a third layer to transmit the packets, i.e., a physical layer. As a specific example, a Common Standard Interface (CSI) layered protocol is utilized.

Referring next to Figure 4, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two low-voltage, differentially driven signal pairs: a transmit pair 406/411 and a receive pair 412/407. Accordingly, device 405 includes transmission logic 406 to transmit data to device 410 and receiving logic 407 to receive data from device 410. In other words, two transmitting paths, i.e., paths 416 and 417, and two receiving paths, i.e., paths 418 and 419, are included in a PCIe link.

A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. A connection between two devices, such as device 405 and device 410, is referred to as a link, such as link 415. A link may support one lane, each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider.

A differential pair refers to two transmission paths, such as lines 416 and 417, to transmit differential signals. As an example, when line 416 toggles from a low voltage level to a high voltage level, i.e., a rising edge, line 417 drives from a high logic level to a low logic level, i.e., a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e., cross-coupling, voltage overshoot/undershoot, ringing, and the like.
This allows for a better timing window, which enables faster transmission frequencies.

New and growing use models, such as PCIe-based storage arrays and Thunderbolt, are driving a significant increase in the depth and width of PCIe hierarchies. The PCI Express (PCIe) architecture is based on PCI, which defines a "configuration space" in which system firmware and/or software discover functions and enable/disable/control them. Addressing within this space is based on a 16-bit address (commonly referred to as the "BDF", or bus-device-function, number) comprised of an 8-bit bus number, a 5-bit device number, and a 3-bit function number.
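As an illustration of this 16-bit encoding, the following sketch (with hypothetical helper names) packs and unpacks a BDF triple:

    #include <stdint.h>

    /* A BDF is a 16-bit address: 8-bit bus, 5-bit device, 3-bit function.
     * Helper names here are illustrative, not from the PCIe specification. */
    static inline uint16_t bdf_pack(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return (uint16_t)((bus << 8) | ((dev & 0x1F) << 3) | (fn & 0x07));
    }

    static inline uint8_t bdf_bus(uint16_t bdf) { return (uint8_t)(bdf >> 8); }
    static inline uint8_t bdf_dev(uint16_t bdf) { return (bdf >> 3) & 0x1F; }
    static inline uint8_t bdf_fn(uint16_t bdf)  { return bdf & 0x07; }

For example, BDF [255,31,7] packs to 0xFFFF, which is why a single BDF space holds at most 64K unique functions.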
PCI allows a system to provide multiple independent BDF spaces, which are referred to as "segments". Each segment may have certain resource requirements, such as mechanisms for generating PCI/PCIe configuration requests, including the Enhanced Configuration Access Mechanism (ECAM) as defined in the PCIe specification. In addition, an input/output (I/O) memory management unit (IOMMU), such as Intel VT-d, can use BDF space as an index, but may not be configured to directly comprehend segments. Accordingly, in some instances, separate ECAMs and IOMMUs may have to be replicated for each segment defined in the system. FIG. 5 illustrates an example of a system that includes multiple segments (e.g., 505a-c). In this example, a segment is defined for each of three switches 510, 515, 520 connected to root complex 525. Separate IOMMUs and ECAMs can be implemented at root complex 525 to facilitate each segment (e.g., 505a-c). Additionally, in this example, a variety of switches (e.g., 530a-r), various endpoints (EP), and other devices are connected to the various buses in each segment. In some cases, the configuration space of a segment may reserve multiple bus addresses for potential hot-plug events, limiting the total number of bus addresses available within each segment. Furthermore, the allocation of bus numbers in one or more of the segments may be based on algorithms that are less focused on compactly occupying and using the available bus address space. This can result in wasted configuration address (i.e., BDF) space in some instances.

Conventional PCIe systems are configured to assign address spaces in a manner that is inefficient in its use of BDF space and bus numbers when applied to modern and emerging use cases. While, in practice, relatively few implementations may involve a single system that consumes all unique BDF values (e.g., the 64K defined under PCIe), deep hierarchies, such as those occurring, for example, in deep hierarchies of PCIe switches, can very quickly exhaust the available bus numbers in the BDF space. In addition, in hot-plug applications, large portions of the BDF space may typically be reserved for potential future use (i.e., for when future devices are hot-plugged into the system), taking additional swaths of bus numbers from the pool directly usable by the system. Although the segment mechanism can be applied to address this issue, segments themselves scale poorly because, as noted above, additional hardware resources (such as IOMMUs) are built into the CPU, platform controller hub (PCH), system on chip (SoC), root complex, etc., to support each segment. Thus, using segments to address deep hierarchies results in systems scaled to meet worst-case requirements, which is typically more than what most systems would require, resulting in significant waste of platform resources.

In addition, segments are difficult (and in some cases substantially impossible) to create outside of the root complex of the system (among other example issues).

In some implementations, a system can be provided to enable more efficient use of BDF space and to address at least some of the example issues above. Among other example advantages, this can allow for expansion of PCIe, Thunderbolt, on-chip fabrics (e.g., Intel On-chip System Fabric (IOSF), among others), and other interconnects to very large topologies, but without requiring dedicated resources in the root complex, as would be the case in a solution relying exclusively on segments or other alternatives. FIG. 6 shows a simplified block diagram illustrating an example system 600 including endpoints (e.g., 605, 610) and switches (e.g., 620, 625, 630) connected to a root complex 615 by a plurality of buses forming a hierarchy of the switch fabric. The example of FIG. 6 further illustrates example assignments of bus numbers to the buses in the system according to an example PCIe BDF assignment. In this example, bus number assignments (as designated by the circular markers (e.g., 650a-d, etc.)) are enumerated (or assigned) with two devices 605, 610 connected directly to root complex 615 and two switch-based hierarchies (corresponding to switches 620, 625). In a deep hierarchy, the available bus numbers in a single BDF space can be quickly consumed. Additionally, real-world systems typically allocate bus numbers far less efficiently, resulting in sparse (or "wasted") allocation of the BDF space.

Another problem present in use cases supporting hot add/remove (such as Thunderbolt and, in some cases, PCIe-based storage) is that bus number assignments in the BDF space may need to be "rebalanced" to account for hardware topology changes occurring in the running system. However, rebalancing can be very difficult for system software because, under typical circumstances, all PCI functions are then forced into a quiescent state (in connection with the rebalancing) to allow the BDF space to be re-enumerated by the system, followed by the re-enabling of the PCI functions. However, this process can be quite slow and often results in the system freezing, in the worst case for what can be a very long period of time (e.g., long enough to be disruptive to running applications, and easily visible to the end user). An improved system can also be provided to reduce the time it takes to apply a modified BDF space, enabling the rebalancing process to be performed in a small fraction of a millisecond or faster, without explicitly quiescing the PCI functions.

Finally, very large systems, or systems with (proprietary) mechanisms for supporting multiple root complexes, can be defined to use segments. The improved system can also be applied within such use cases to provide device management with minimal changes (as compared with a single-root system). In particular, the improved system can provide a mapping portal bridge (MPB), implemented using hardware (and/or software) logic of one or more devices in the system, to provide multiple views of, and remapping tables for, the BDF space.
This enables a "bridge" (which logically may be a root port or switch port) to translate one view of the BDF space into another view, in both directions, across the bridge, effectively creating a virtual segment.

A mapping portal bridge (MPB) can be implemented as a logic block (implemented in hardware, firmware, and/or software) provided at one or more ports of a root hub or switch to implement translation between two or more defined BDF spaces (such as a primary and a secondary BDF space) in some implementations. The MPB can be implemented in root ports and/or switch ports with a consistent software model (e.g., utilized by system software) and can be implemented recursively within a given topology, allowing for high scalability. Further, the MPB need not be tied to a specific usage model (e.g., it can alternatively be used in either or both of Thunderbolt (TBT) and conventional PCIe use cases). Additionally, the MPB supports implementation flexibility and engineering price/performance trade-offs. Moreover, consistency with existing PCIe system software stacks can be maintained (among other example advantages).

The MPB utilizes a mapping mechanism that allows the MPB to map all PCIe packets flowing across the MPB between a BDF primary space and a BDF secondary space. The BDF primary space refers to the view of the configuration address space seen on the primary side of the bridge (i.e., the side closer to the host CPU (e.g., at the root complex)). The BDF secondary space can refer to the view of the configuration address space created and managed for devices on the secondary side of the bridge (e.g., closer to the devices and downstream from the root complex or CPU). In PCIe, the same BDF assignments used for device configuration can also be used to identify the source (and sometimes the destination) of packets, to report errors, and in other functions.

FIG. 7 illustrates a block diagram 700 representing an example implementation of an MPB 705. The MPB 705 can contain (in whole or in part) local copies of, and/or pointers to, two mapping tables, one for the secondary-to-primary address space mapping (BDFsec-BDFpri) and the other for the primary-to-secondary address space mapping (BDFpri-BDFsec). In one example, the mapping tables can be stored in system memory 710, in which case MPB 705 (and system software) can read the tables from system memory (at 715) and/or maintain local copies (720, 725) of at least a portion of each mapping table (i.e., BDFsec-BDFpri and BDFpri-BDFsec) (e.g., to enhance performance). In still other examples, the BDFsec-BDFpri and BDFpri-BDFsec mapping tables can be stored directly at the MPB 705 without copies being maintained in system memory 710. The MPB 705 can further perform and manage the translations between the primary BDF space and one or more BDFsec spaces. In the case of multiple BDFsec spaces, a single mapping table can be used that contains columns mapping not only the BDFsec address but also the identity of the particular BDFsec space to a BDFpri address. In other cases, a separate mapping table can be maintained for each BDFsec space. The MPB can be implemented in hardware of a switch, hub, or port (such as a port of the root complex).
The MPB 705 can also include control logic 730 to enable/disable the mapping functionality (e.g., to selectively configure MPB 705 as an option at various ports of a switch or root complex (among other examples)).

In some examples, MPB 705 can be flexibly configured (e.g., using control logic 730) to operate either as an MPB (using the primary/secondary BDF space mapping mechanism) or as a conventional bridge (e.g., using conventional PCIe BDF addressing). When system software intends to enable MPB 705, it can configure the MPB to provide a unique one-to-one mapping between primary (BDFpri) and secondary (BDFsec) BDF addresses. In other words, a single primary-side BDF is to correspond to a single secondary-side BDF. Among other example advantages, this constraint ensures that the MPB does not have to track outstanding requests, as such tracking would add significant cost to the MPB. In some implementations, multiple MPBs can be deployed in a system. For instance, a separate MPB 705 can be provided for each BDFsec space. In such instances, BDFsec assignments behind these multiple, distinct MPBs are permitted to (and likely will) reuse the same BDF values (in their respective secondary BDF spaces), provided these are mapped to unique values in the BDFpri space.

FIG. 8 is a simplified block diagram showing the example of FIG. 6 modified through the use of MPBs implementing a BDFpri space/BDFsec space dichotomy. For instance, FIG. 8 shows how BDFsec spaces can be assigned in a system with two MPBs (a first MPB for virtual segment (vSEG) A (805a) and a second MPB for vSEG B (805b)). In this example, bus numbers 1 and 2 (at 806, 808) of root complex 615 can be maintained in the BDFpri space (810) and assigned for the two directly connected devices 605, 610. Hierarchies connected on buses 3-n, in this example, can be handled through BDFsec spaces (corresponding to vSEG A (805a) and vSEG B (805b)) and the one or more corresponding MPBs. For instance, buses 3-6 can be enumerated within BDFsec space vSEG A (805a), thereby providing a virtual segment for the devices and buses connected to the root complex through bus 3. A second virtual segment can be provided by defining a second BDFsec space, vSEG B (805b), for the devices and buses coupled to the root complex through buses 7-n. BDF addresses (and bus numbers) can be assigned within each corresponding BDFsec space according to any suitable scheme (including schemes that allocate these addresses inefficiently). Indeed, different BDFsec spaces can assign addresses differently, based on the types of endpoints or routing devices connected on the corresponding buses. Unlike the BDFsec spaces, the BDFpri space (i.e., the view of the configuration space held by the root complex) can be optimized to provide for compact and efficient allocation of bus addresses within the space (e.g., as illustrated in the example of FIG. 6).

Each BDFsec address in each of vSEG A and vSEG B can map to exactly one BDF address in the BDFpri space (e.g., according to mappings 815, 820). As an example, a first device within vSEG A can be assigned BDF "1:0:0" within the vSEG A BDFsec space, which is mapped to primary BDF "4:0:0" (as shown in mapping 815) (among other examples). A different device in another BDFsec space of the system (e.g., vSEG B) can be assigned the same BDFsec value as assigned in other BDFsec spaces (e.g., vSEG A). For instance, a second device can also be assigned BDF "1:0:0", but within the BDFsec space of vSEG B.
This second device, however, would map to a different BDF within the BDFpri space of the system (i.e., BDF "7:0:1", as shown in mapping 820), and so on.

As noted above, in some implementations, the mapping between BDFpri and BDFsec space can be performed based on mapping tables in system memory. Different packet types can be mapped (i.e., translated as they pass between BDFsec and BDFpri space) differently. For instance, for requests in both directions, the corresponding requester ID can be remapped (e.g., according to the appropriate mapping table). Configuration and ID-routed message requests can be routed by the bus/device/function number fields of their IDs. Completions, in both directions, can be routed by requester and completer IDs (among other examples).

The MPB's mapping hardware has access to the mapping tables located in system memory, with optional caching in the MPB. In some implementations, a single mapping table can be used for traffic in both directions, with the MPB processing logic determining mappings in the forward (e.g., downstream) and reverse (e.g., upstream) directions (e.g., through reverse lookup operations). In other implementations, it can be more efficient to provide two separate mapping tables per MPB, one for the forward direction and one for the reverse direction. In some instances, this may be less expensive than providing MPB hardware to perform reverse lookups for one of the two directions.

On its primary side (e.g., the side closest to the root complex), the MPB can be responsible for mapping a subset of the bus numbers of the BDFpri space to a corresponding BDFsec space. Accordingly, the range of bus numbers in the BDFpri space assigned to the MPB can be limited to the range indicated by [secondary bus number through subordinate bus number], since only packets within that range of the BDFpri space will ever be directed to the corresponding MPB. Thus, the table used to map BDFsec:BDFpri may involve a translation table large enough to cover the secondary-through-subordinate range of bus numbers, but not necessarily larger. In some implementations, a default 64K-entry table can be provided for implementation simplicity. Indeed, to map BDFpri to BDFsec, a 64K-entry translation table can be provided to make the full BDF space available on the secondary side, although this can be constrained to reduce hardware/memory requirements in some alternative implementations.

The mapping tables can be maintained by the system software that manages data communication in PCIe (or another interconnect implementing these features). For instance, in implementations utilizing two separate upstream and downstream tables, both tables can be maintained by system software. The system software ensures consistency between the tables, such that mapping BDFx to BDFy and back to BDFx' always gives the result x=x' (among other example considerations).
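As an illustration of the table-based translation described above, the following sketch (with hypothetical structure and function names; a real MPB would implement this in hardware) shows a one-to-one BDFsec-to-BDFpri lookup with per-entry valid bits:

    #include <stdbool.h>
    #include <stdint.h>

    /* One entry of a BDFsec-to-BDFpri translation table; names are
     * illustrative, not taken from any specification. */
    struct mpb_map_entry {
        bool     valid;    /* set only when initialized by system software */
        uint16_t bdf_pri;  /* the unique primary-space BDF for this entry */
    };

    /* Indexed directly by the 16-bit secondary BDF; a full table has 64K
     * entries, though the range may be constrained in some implementations. */
    static struct mpb_map_entry sec_to_pri[65536];

    /* Translate a secondary-side BDF to its primary-side BDF. Returns false
     * if no valid mapping exists (the packet cannot cross the MPB). */
    static bool mpb_translate_sec_to_pri(uint16_t bdf_sec, uint16_t *bdf_pri)
    {
        const struct mpb_map_entry *e = &sec_to_pri[bdf_sec];
        if (!e->valid)
            return false;
        *bdf_pri = e->bdf_pri;
        return true;
    }

Because the mapping is constrained to be one-to-one, the reverse (BDFpri-to-BDFsec) table can be kept consistent with this one by construction, which is what allows the MPB to avoid tracking outstanding requests.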
In some implementations, the MPB can implement a cache for storing at least a portion of the mapping tables locally at the MPB. System software can assume that the MPB caches translations from the mapping tables, and can support cache management by providing necessary cache management information to the hardware. It may be desirable to provide a mechanism to ensure, for system software, that the MPB cache is only updated under the control of the system software. Further, mechanisms, such as specific registers in the memory-mapped I/O (MMIO) space of the MPB, can be defined to provide the pointers to the mapping tables in system memory. Registers can also be used for cache management, for example, for system software to enable/disable caching, invalidate the MPB cache, and so on. Alternatively, in some implementations, these tables can be implemented directly at the MPB without copies being maintained in system memory. In this case, system software updates these tables directly at the MPB as needed.

An additional benefit of the MPB mapping table mechanism is that system software can update the mapping tables atomically, such as by creating a new mapping table and then "instantaneously" invalidating the MPB cache and redirecting the MPB from the old table to the new table. This can be done by defining a control register mechanism such that the MPB hardware is required, when instructed to do so by system software, to sample the register settings and then continue operating on the sampled settings until directed to resample. The resampling can be performed by the MPB hardware in such a way that the transition from the old to the new sampled settings takes effect atomically. By additionally providing a mechanism for system software to temporarily block traffic through the MPB (e.g., to "pause" traffic at the MPB to allow the mapping tables to be changed), it becomes more palatable to allow system software to modify bus number assignments in a running system ("rebalancing"), because the amount of time spent switching over the mapping tables can be kept quite short. Alternatively, if the mapping tables are maintained directly at the MPB, a mechanism such as double buffering can be employed, which enables the MPB to operate using one copy of a table while a candidate copy is updated by system software; at the direction of system software, the MPB then transitions to operating on the updated table (e.g., with the stale copy then being replaced with a newer updated version).
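A minimal sketch of such a double-buffered swap follows. The names are hypothetical, and the sketch assumes a single writer in system software publishing the new table through a single atomic pointer store:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Two table copies: one active for translation, one staged for update. */
    static uint16_t table_a[65536], table_b[65536];

    /* The translation path reads through this pointer on every lookup, so
     * publishing a new pointer atomically switches all subsequent lookups. */
    static _Atomic(uint16_t *) active_table = table_a;

    /* System software: update the inactive copy, then swap it in atomically. */
    void update_and_swap(const uint16_t *new_entries)
    {
        uint16_t *staged =
            (atomic_load(&active_table) == table_a) ? table_b : table_a;
        for (int i = 0; i < 65536; i++)
            staged[i] = new_entries[i];       /* fill the candidate copy */
        atomic_store(&active_table, staged);  /* the atomic transition */
    }

    /* Lookup path: always uses whichever table is currently published. */
    uint16_t translate(uint16_t bdf_sec)
    {
        return atomic_load(&active_table)[bdf_sec];
    }

The point of the design is that lookups never observe a half-updated table: every lookup sees either the old table or the new one in its entirety.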
Turning to FIG. 9, a representation of an example embodiment of register fields and bits for implementing a hardware/software interface of an example MPB is shown. Specifically, in this particular example, a PCIe extended capability can be defined for discovery/management in a system with MPBs. The extended capability can include fields such as an Outstanding Request (OR) field reflecting the MPB's count of non-posted (NP) requests in either direction (although non-posted requests are not explicitly or separately tracked); an MPB enable (E) bit to indicate if MPB functionality is enabled at an MPB-capable bridge (when the E bit is set (0 -> 1), the address registers are sampled by the MPB); and a mapping table update trigger (TR) field (among other examples). In some cases, additional fields can be provided, along with a windowing mechanism (with, e.g., a status bit) for generating "probe" configuration requests behind the MPB, such that system software can comprehend when a ConfigWrite has completed. In some cases, the windowing mechanism may support only one configuration request at a time (e.g., with no pipelining of requests). In still other examples, the capability structure can provide values for describing the nature of the local translation tables maintained at the MPB (e.g., to indicate the table size if not full-sized, etc.) (among other examples).

In some implementations, the BDF primary/secondary mapping and MPBs can be used in conjunction with segments. As noted above, MPBs can be used to implement virtual segments (vSEG) to at least partially replace segments and reduce the number of segments in a system design. In cases where segments are to be included, or where multiple-root systems are created (e.g., using a proprietary load/store fabric), the MPB can be extended to support mapping between portions of the hierarchy that support segments and portions that do not. For instance, in a system where BDFpri is augmented to support segments, the BDFpri space effectively becomes a segmented BDFpri (SBDFpri) space, because the segment acts as a "prefix" increasing the number of available bus numbers. In such systems, a mechanism such as a TLP prefix may be used to identify the particular segment. However, because many devices do not support such segment tagging mechanisms, the MPB mapping mechanism can be extended to support the mapping of BDFsec spaces that do not support segment tags into an SBDFpri space that does support segment tags (among other examples).

In some implementations, the MPB can accommodate probing. Here, a "probe" can refer to reading the first doubleword (DW) of a function's configuration space to see whether a valid vendor/device ID is present. However, using the in-memory tables during probing can involve repeated updates of the mapping tables, which is potentially problematic, particularly where the probing is performed at runtime. To avoid this, a mechanism can be provided for generating configuration requests through the MPB for probing. This mechanism may not be intended for use as the normal configuration generation path, but instead may be provided for probing (and, in some instances, may also be referred to as a "failsafe"). In one example, the mechanism can include a windowing mechanism in the extended MPB capability (e.g., based on the CFC/CF8 configuration access mechanism defined for PCI, with support added for the 4K configuration space). System software can use this mechanism for function discovery, such as through probing. Upon discovering a function, system software can then update the mapping tables to provide a translation for that function and continue further enumeration/configuration through the MPB's table-based translation mechanism (among other example features).

The system software can enumerate a BDFsec space according to any suitable or customary algorithm. In some cases, before initially probing a particular BDF on the secondary side, the enumerator configures the MPB to assign a "key" in BDFpri. This key can be used to generate configuration requests that are remapped to that particular BDF. The enumerator must "know" the BDFpri:BDFsec mapping during this initial enumeration, and this mapping can be applied for all uses of BDFs to provide services during system operation. In one example, system software can use an algorithm for enumerating PCI devices under a mapping portal bridge (MPB).

The scalability limits of resource allocation under conventional PCI enumeration algorithms are clear in the case of PCIe-based SSD devices (e.g., NVMe-based) and Thunderbolt hierarchies, where these limits do not allow large/deep hierarchies to be configured. This manifests as errors in which users are unable to use devices under such configurations. For example, there may be situations where more than 256 PCIe-based solid-state drives are to be connected to a single system, or there may be hot-plugged devices, such as Thunderbolt devices, where the resources reserved for a given portion of the tree are insufficient to configure the hot-plugged Thunderbolt units.
Under these circumstances, traditional PCI enumeration algorithms exhaust the scarce BDF resource very quickly, due to allocation mechanisms such as equally dividing and allocating the available bus numbers among all hot-plug-capable ports. As discussed earlier, the traditional approach of using rebalancing to redistribute resources is unsuitable for many use cases. The new algorithm proposed in this disclosure, in conjunction with the MPB, addresses these limitations to significantly improve resource configuration and scalability in such situations.

Traditional resource enumeration algorithms may be unable to enumerate the addresses (e.g., BDFs) used for devices residing under a mapping portal bridge (MPB), because traditional enumeration algorithms do not have visibility into these devices absent a proper mapping between the secondary and primary BDF spaces. Traditionally, PCI devices have been enumerated by system software by scanning the PCI bus over bus-device-function (BDF) numbers, starting at BDF[0,0,0] and proceeding through BDF[255,31,7]. In conventional enumeration techniques, for each BDF that can potentially correspond to a PCI function, the system software generates a configuration read transaction to read the vendor and device ID for that particular function. A valid vendor and device ID returned by the configuration read indicates that a function is present at that bus-device-function (BDF). Function 0 is to be implemented for each PCI device. Because of this, if function 0 of a device is not implemented, system software is free to skip the remaining function numbers of that device. This traditional enumeration algorithm does not work for devices residing under a mapping portal bridge (MPB), due to the potential remapping of bus, device, and function numbers. Accordingly, an enhanced enumeration algorithm can be provided that can be used by system software to determine the presence of an MPB and to enumerate the devices present under the MPB.

As indicated above, a BDF triple consisting of a bus number (0-255), a device number (0-31), and a function number (0-7) can identify each logical function in a PCI subsystem. The BDF forms an address that uniquely identifies each logical function within the PCI system. A PCI system or "subsystem" (e.g., a subsystem of a broader system containing non-PCI subsystems) can have 256 bus numbers within a segment. Each bus in the PCI subsystem can have 32 devices, and each device can have 8 functions. Traditionally, to enumerate these devices, system software scans through these bus, device, and function numbers. Enumeration can be performed by selecting a particular BDF and reading the vendor/device ID, as discussed above. Based on the results of the vendor/device ID read, the system software can record the result and perform additional configuration space tasks based on the requirements of the particular device/function and the policies established for the particular system. For each BDF combination, the system software generates a configuration read transaction to read the vendor and device ID of the function. A valid response to this configuration read transaction indicates that a function is present at that BDF. If a valid response is not received, system software can note that the BDF is not in use (unless a device is added and the BDF is put to use at a later time (e.g., hot add)).
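The traditional scan just described reduces to a triple loop over bus, device, and function numbers. A minimal sketch follows, with a hypothetical configuration-read helper; the all-ones return value for absent functions reflects the common convention that reads to non-existent functions return all 1s:

    #include <stdint.h>

    /* Hypothetical helper: issues a configuration read of the first DW of
     * the function's configuration space (vendor ID and device ID). */
    extern uint32_t config_read_id(uint8_t bus, uint8_t dev, uint8_t fn);

    #define INVALID_ID 0xFFFFFFFFu  /* no function responded */

    /* Traditional enumeration: scan BDF[0,0,0] through BDF[255,31,7]. */
    void enumerate_traditional(void)
    {
        for (int bus = 0; bus <= 255; bus++) {
            for (int dev = 0; dev <= 31; dev++) {
                /* Function 0 is to be implemented for each device, so the
                 * remaining functions may be skipped if it does not respond. */
                if (config_read_id(bus, dev, 0) == INVALID_ID)
                    continue;
                for (int fn = 0; fn <= 7; fn++) {
                    uint32_t id = config_read_id(bus, dev, fn);
                    if (id != INVALID_ID) {
                        /* Function present: record it and perform further
                         * configuration-space setup per system policy. */
                    }
                }
            }
        }
    }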
The PCI protocol uses a windowing mechanism to forward transactions from the primary side of a bridge to the secondary side of the bridge. Due to this windowing mechanism, traditional software algorithms for enumerating devices have had to allocate sufficient bus numbers to every hot-plug-capable bridge, in anticipation of further hierarchy being attached below such bridges. This exposes the scalability problem of the scarce BDF resource. Secondary BDF spaces can be used to achieve pre-allocation of portions of BDF space without consuming resources in the primary BDF space (the space natively referenced and used by the root complex) until such time as the resources are actually needed (at which point the corresponding mappings into the primary BDF space can be established by system software).

By creating a new hierarchy of bus, device, and function numbers starting from BDF[0,0,0] in each hierarchy, the mapping portal bridge (MPB) addresses the problem of both static (during boot) and dynamic (during hot add and hot remove) bus, device, and function resource allocation. Creating and supporting the new BDF spaces (or hierarchies, or views of configuration space) complicates device enumeration within these hierarchies. In one example, system software can enumerate an MPB-capable PCI bridge itself using traditional enumeration methods (e.g., as any other legacy device would be enumerated, and within the same BDF space utilized by the root complex). However, in some instances, if the MPB capability is then enabled (e.g., by system software), the BDF numbers generated by system software to scan for devices under the MPB-capable bridge may be invalid with respect to the mapping portal bridge (MPB), and the devices may need to be re-enumerated (including the building of the corresponding mapping tables). Indeed, while the MPB capability maps the primary-side BDF view (the hierarchy above the MPB) to the secondary-side BDF view (the hierarchy below the MPB) using a mapping table, in some cases the entries in this mapping table will be invalid, such as after the system is reset, until they are initialized by software. Each entry in the mapping table can be valid or invalid, and only those entries explicitly initialized by system software are marked as "valid" mappings. Until then, primary-side BDF numbers may not map correctly to secondary-side BDF numbers. Accordingly, an enhanced enumeration algorithm can be provided to populate this mapping table while also enumerating the devices under the MPB.

In one implementation, an enhanced algorithm for enumerating PCI devices under a mapping portal bridge (MPB) can support the speculative generation of mappings between the primary and secondary BDF spaces. In this context, a mapping can be speculative in that the devices/functions actually present in the system under the MPB may not yet have been discovered, but the system nonetheless configures the MPB to provide mappings for devices/functions that may later be found during the enumeration process. If these mappings end up actually being used, they will generally be maintained, while those that go unused may, in some instances, be "recycled" by the system. For instance, BDFs mapped under one port of a switch may be shifted to be available to another port of the switch (e.g., if hardware is hot-added below the second port requiring more BDFs than originally allocated). The mapping portal bridge (MPB) logic can create a new (e.g., secondary) hierarchy of bus-device-function numbers starting from BDF[0,0,0]. It makes this possible by creating a mapping table that maps the primary-side BDFs (the hierarchy above the MPB) to the secondary-side BDFs (the hierarchy below the MPB).
The algorithm uses a speculative approach to create the mappings in the BDF mapping tables of the mapping portal bridge. Once these mappings are created, the algorithm uses the traditional PCI enumeration algorithm to enumerate the PCI devices under the mapping portal bridge (MPB).

In one example, the enhanced algorithm can be built on the principles of the traditional enumeration algorithm (e.g., generating configuration transactions to read the vendor and device identifiers from configuration space while performing a tree search of the BDF space under the MPB), allowing devices under the MPB, as well as devices at other ports not using secondary BDF spaces, to be enumerated in the traditional manner, making it backward compatible. As noted above, the enhanced algorithm can allow for efficient filling of the mapping tables, with a minimal number of mapping table entries being used to support a hierarchy of a given size below the MPB while optimizing/maximizing BDF usage on the primary side. The algorithm uses simple data structures, making its implementation in system software straightforward.

In one example implementation, such as illustrated in the simplified block diagrams 1000a-c of Figures 10A-10C, an enhanced algorithm for enumerating devices within a secondary BDF space can utilize data structures including, as an example:

• Global bus number pool 1005 (bit vector 0-255), used to maintain the history of bus number assignments on the primary side of the MPB.

• Enumeration queue 1010, used to hold the secondary-side bus numbers to be scanned (e.g., supporting a breadth-first search algorithm).

As an illustrative example, system software may begin enumerating the PCI devices connected at the ports of root complex 615 using a conventional algorithm. If the system software detects a Type 1 device (e.g., a PCI bridge) with MPB capability, and if the system software intends to enable the MPB capability, the system software can switch over to using the enhanced enumeration algorithm to handle the hierarchy under the MPB. The enumeration queue 1010 is used to record which secondary-side bus numbers need to be scanned next. This can begin with the system software first enabling the MPB capability on the detected Type 1 device. Additionally, the system software can create an empty mapping table (e.g., stored in system memory or in memory of the MPB port). Since the mapping table (e.g., 1015) starts out blank, the traditional enumeration algorithm will not work for devices under the MPB bridge (e.g., 630), since no configuration request will flow from the primary side of the MPB to the secondary side unless there is a valid mapping in the MPB that maps the targeted BDF in BDFpri space to BDFsec space.

In one example, the BDF space of the secondary-side hierarchy below the mapping portal bridge (or the view of the configuration space within the secondary-side hierarchy) will always start with bus number 0. In such an implementation, the enhanced algorithm can begin by enqueuing bus number 0 into enumeration queue 1010 (as shown in Figure 10B) to reflect the use of bus number 0 within the secondary-side hierarchy. The enhanced algorithm can dequeue from the enumeration queue to determine the secondary-side bus number that is to be scanned next. For the scan, a speculative mapping is created between the next available bus number on the primary side and the dequeued bus number on the secondary side.
Accordingly, for each bus number dequeued from the enumeration queue, the algorithm obtains the next available bus number from the global bus number pool. It then creates a speculative mapping between that next available bus number from the global bus number pool and the dequeued secondary-side bus number, on the assumption that devices will be found under the MPB bridge (e.g., as shown by mapping table 1015 in Figure 10C).

For each speculative mapping, configuration read transactions can be generated for all device and function numbers under the speculative mapping, with the mapping translating the primary BDF to the secondary BDF so that the device and vendor IDs are read using the conventional enumeration algorithm. Likewise, the completions of the configuration reads can use the mapping to translate the BDF from BDFsec back to BDFpri. During this scan, if a Type 1 device is found, it is assigned secondary and subordinate bus numbers, and its secondary bus number is enqueued into queue 1010. Likewise, if a Type 0 device is found, it is allocated the resources it requires.

To illustrate an example, the following pseudo code represents an example implementation of the enhanced algorithm:

    /* Primary-side bus number. Size = 1 byte. */
    PriBusNum = 0;
    /* Secondary-side bus number. Size = 1 byte. Set to 0 to indicate that
       the first bus number on the secondary side is 0. */
    SecBusNum = 0;
    /* Enumeration queue. */
    Queue EnumQueue;
    /* Global primary-side bus number pool. */
    BitVector gPriBusNumPool;

    /* Enqueue the first bus number on the secondary side. */
    Enqueue(EnumQueue, SecBusNum);
    /* Loop until the enumeration queue is empty. */
    While (!IsEmpty(EnumQueue))
    {
        /* Get the secondary bus number by dequeuing the enumeration queue. */
        SecBusNum = Dequeue(EnumQueue);
        /* Get the next available bus number from the global bus number pool
           to serve as the primary bus number. */
        PriBusNum = GetNextAvailableBusNumber(gPriBusNumPool);
        /* Create speculative mappings between PriBus/PriDevFunc and
           SecBus/SecDevFunc. */
        CreateSpeculativeMapping(PriBusNum, SecBusNum);
        For each (Dev, Func) number on this bus
        {
            /* Perform traditional enumeration by generating a configuration
               read transaction for the vendor and device IDs. */
            If (Type 1 device found)
            {
                /* Assign secondary and subordinate bus numbers. */
                Enqueue(EnumQueue, AllocatedSecondaryBusNumber);
            }
            Else
            {
                /* Type 0 device: allocate the resources it requires. */
            }
        }
    }

In some implementations, the system software can use "don't care" bits to reduce the number of entries in the mapping table. For instance, if a range of BDFs is allocated under the MPB, rather than mapping each BDF separately, a mapping can be constructed that applies to only some of the bits, with the unmapped bits implicitly passing through the MPB unmodified. In such an implementation, system software can create a speculative mapping for each bus-device-function number combination falling outside the "don't care" bit mask. For each speculative mapping, the system software then uses the traditional enumeration algorithm across all bus-device-function numbers falling under the "don't care" mask. For example, in one implementation, the number of "don't care" bits can be equal to four. In this particular case, the system software can enumerate two device numbers (each with 8 functions) per mapping. When the "don't care" bits are exhausted, a new mapping is created. If no device is found under a previously established speculative mapping, a more rigorous implementation can be adopted to reuse the mapping entries.
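As a sketch of this "don't care" masking idea, the following (hypothetical names; assuming the low four bits of the 16-bit BDF are the "don't care" bits, which covers two device numbers of eight functions each, matching the example above) shows how one entry can translate a whole range:

    #include <stdbool.h>
    #include <stdint.h>

    /* With four "don't care" bits, one entry covers 16 consecutive BDFs:
     * two device numbers x eight functions. Names are illustrative only. */
    #define DONT_CARE_BITS 4u
    #define MASK (~((1u << DONT_CARE_BITS) - 1u) & 0xFFFFu)  /* 0xFFF0 */

    struct masked_map_entry {
        uint16_t sec_base;  /* secondary BDF with don't-care bits zeroed */
        uint16_t pri_base;  /* primary BDF with don't-care bits zeroed */
    };

    /* Translate: match on the masked bits, pass the low bits through
     * the bridge unmodified. */
    static bool translate(const struct masked_map_entry *e,
                          uint16_t bdf_sec, uint16_t *bdf_pri)
    {
        if ((bdf_sec & MASK) != e->sec_base)
            return false;
        *bdf_pri = e->pri_base | (bdf_sec & ~MASK);
        return true;
    }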
Furthermore, multiple secondary-side bus numbers can be mapped under the same primary-side bus number. These techniques can be used to maximize BDF usage on the primary side while keeping the mapping table compact.

Turning to FIG. 11, a simplified flowchart 1100 is shown illustrating an example technique for enumerating devices within a system. A first device can be detected 1105 at a first port of a root complex. The first device can be part of a first hierarchy. The first device (together with the other devices in the hierarchy connected at the first port) can be enumerated, or "assigned", an address 1110 according to a first, or primary, view of the configuration space. A second device can be detected 1115 at a second port of the root complex, connected via a mapping portal bridge. The mapping portal bridge can support addressing of the devices of a second hierarchy, connected beneath the mapping portal bridge, according to a secondary view of the configuration space. In response to detecting the mapping portal bridge connection, a mapping table can be generated 1120 to map addresses in the secondary space to addresses of the primary space. The mapping table can then be used to assign 1125 addresses to the devices of the second hierarchy (e.g., with the root complex and/or system software translating between primary-space addresses and secondary-space addresses when performing configuration tasks arising from the address enumeration).

Note that the apparatus, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems for utilizing the invention as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures.

Referring to Figure 12, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 1200 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SoC), or other device to execute code. Processor 1200, in one embodiment, includes at least two cores, core 1201 and core 1202, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1200 may include any number of processing elements that may be symmetric or asymmetric.

In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of holding a state for a processor, such as an execution state or an architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code.
A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

Physical processor 1200, as illustrated in Figure 12, includes two cores, cores 1201 and 1202. Here, cores 1201 and 1202 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 1201 includes an out-of-order processor core, while core 1202 includes an in-order processor core. However, cores 1201 and 1202 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated instruction set architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e., asymmetric cores), some form of translation, such as binary translation, may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core 1201 are described in further detail below, as the units in core 1202 operate in a similar manner in the depicted embodiment.

As depicted, core 1201 includes two hardware threads 1201a and 1201b, which may also be referred to as hardware thread slots 1201a and 1201b. Therefore, a software entity, such as an operating system, in one embodiment potentially views processor 1200 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 1201a, a second thread is associated with architecture state registers 1201b, a third thread is associated with architecture state registers 1202a, and a fourth thread is associated with architecture state registers 1202b. Here, each of the architecture state registers (1201a, 1201b, 1202a, and 1202b) may be referred to as a processing element, a thread slot, or a thread unit, as described above. As illustrated, architecture state registers 1201a are replicated in architecture state registers 1201b, so individual architecture states/contexts are capable of being stored for logical processor 1201a and logical processor 1201b. In core 1201, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1230, may also be replicated for threads 1201a and 1201b.
Some resources, such as re-order buffers in reorder/retirement unit 1235, ITLB 1220, and load/store buffers and queues, may be shared through partitioning. Other resources, such as general-purpose internal registers, page-table base register(s), low-level data cache and data TLB 1215, execution unit(s) 1240, and portions of out-of-order unit 1235, are potentially fully shared.

Processor 1200 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In Figure 12, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1201 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1220 to predict branches to be executed/taken, and an instruction translation buffer (I-TLB) 1220 to store address translation entries for instructions.

Core 1201 further includes decode module 1225 coupled to fetch unit 1220 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1201a, 1201b, respectively. Usually, core 1201 is associated with a first ISA, which defines/specifies instructions executable on processor 1200. Machine code instructions that are part of the first ISA often include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 1225 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, decoder 1225, in one embodiment, includes logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoder 1225, the architecture or core 1201 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. Note that decoder 1226, in one embodiment, recognizes the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoder 1226 recognizes a second ISA (either a subset of the first ISA or a distinct ISA).

In one example, allocator and renamer block 1230 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 1201a and 1201b are potentially capable of out-of-order execution, where allocator and renamer block 1230 also reserves other resources, such as reorder buffers to track instruction results. Unit 1230 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1200.
The reorder/retirement unit 1235 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out of order.

In one embodiment, scheduler and execution unit block 1240 includes a scheduler unit to schedule instructions/operations on the execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include floating point execution units, integer execution units, jump execution units, load execution units, store execution units, and other known execution units.

Lower-level data cache and data translation buffer (D-TLB) 1250 are coupled to execution unit(s) 1240. The data cache stores recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB stores recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

Here, cores 1201 and 1202 share access to a higher-level or further-out cache, such as a second-level cache associated with on-chip interface 1210. Note that higher-level or further-out refers to cache levels increasing or getting farther away from the execution unit(s). In one embodiment, the higher-level cache is a last-level data cache, the last cache in the memory hierarchy on processor 1200, such as a second- or third-level data cache. However, the higher-level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, may instead be coupled after decoder 1225 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e., a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).

In the depicted configuration, processor 1200 also includes on-chip interface module 1210. Historically, a memory controller, described in more detail below, has been included in a computing system external to processor 1200. In that scenario, on-chip interface 1210 is to communicate with devices external to processor 1200, such as system memory 1275, a chipset (often including a memory controller hub to connect to memory 1275 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in that scenario, bus 1205 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.

Memory 1275 may be dedicated to processor 1200 or shared with other devices in a system. Common examples of types of memory 1275 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices.
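The page-table and D-TLB discussion above can be illustrated with the address arithmetic involved. The sketch below assumes a single-level table and 4 KiB pages purely for illustration; real processors use multi-level tables, and the page size and table format are implementation-defined.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                        /* 4 KiB pages (assumed) */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

/* Hypothetical single-level page table: index by virtual page number
 * (VPN), read back a physical frame number (PFN). A D-TLB caches
 * exactly this kind of virtual/linear-to-physical translation. */
static const uint32_t page_table[16] = { [3] = 0x0007u }; /* VPN 3 -> PFN 7 */

static uint32_t translate(uint32_t vaddr)      /* assumes VPN < 16 here */
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
    uint32_t offset = vaddr & PAGE_MASK;       /* byte within page    */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    uint32_t va = 0x3ABCu;                     /* VPN 3, offset 0xABC */
    printf("0x%04X -> 0x%04X\n", va, translate(va)); /* 0x3ABC -> 0x7ABC */
    return 0;
}
```

The offset bits pass through unchanged; only the page-number bits are translated, which is why a TLB can cache translations at page granularity.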
Note that device 1280 may include a graphics accelerator, processor, or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.

Recently, however, as more logic and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on processor 1200. For example, in one embodiment, a memory controller hub is on the same package and/or die as processor 1200. Here, a portion of the core (an on-core portion) 1210 includes one or more controllers for interfacing with other devices, such as memory 1275 or a graphics device 1280. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 1210 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 1205 for off-chip communication. Yet, in an SOC environment, even more devices, such as a network interface, coprocessors, memory 1275, graphics processor 1280, and any other known computer devices/interfaces may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

In one embodiment, processor 1200 is capable of executing compiler, optimization, and/or translator code 1277 to compile, translate, and/or optimize application code 1276 to support or interface with the apparatus and methods described herein. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single-pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.

Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front end, i.e., generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back end, i.e., generally where analysis, transformations, optimizations, and code generation take place. Some compilers refer to a middle end, which illustrates the blurring of the delineation between a compiler's front end and back end. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc. in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime.
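As a hedged sketch of the two general compiler phases just described, the following models a compiler as an ordered list of passes, with the front end feeding the back end. The pass names and their behavior are generic placeholders, not any particular compiler's pipeline.

```c
#include <stdio.h>

/* A compiler as an ordered pass pipeline: the front end handles
 * syntactic/semantic processing; the back end handles analysis,
 * transformation, optimization, and code generation. Instrumentation
 * calls could be inserted between any two of these passes. */
typedef const char *(*pass_fn)(const char *ir);

static const char *parse(const char *ir)    { puts("front end: parse");   return ir; }
static const char *analyze(const char *ir)  { puts("back end: analyze");  return ir; }
static const char *optimize(const char *ir) { puts("back end: optimize"); return ir; }
static const char *codegen(const char *ir)  { puts("back end: codegen");  return ir; }

int main(void)
{
    pass_fn pipeline[] = { parse, analyze, optimize, codegen };
    const char *ir = "source text";           /* stand-in for real IR */

    for (unsigned i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++)
        ir = pipeline[i](ir);                 /* each pass consumes and produces IR */
    return 0;
}
```

A dynamic optimizer fits the same shape: the "passes" simply run at runtime over already-compiled binary code instead of over source text.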
Here, program code may include dynamically optimized code, binary code, or a combination thereof.

Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program, optimization code optimizer, or translator, either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software-related operations, or to optimize code; or (4) a combination thereof.

Referring now to Figure 13, shown is a block diagram of a second system 1300 in accordance with an embodiment of the present invention. As shown in Figure 13, multiprocessor system 1300 is a point-to-point interconnect system and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350. Each of processors 1370 and 1380 may be some version of a processor. In one embodiment, 1352 and 1354 are part of a serial, point-to-point coherent interconnect fabric, such as Intel's QuickPath Interconnect (QPI) architecture. As a result, the invention may be implemented within the QPI architecture.

While shown with only two processors 1370, 1380, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.

Processors 1370 and 1380 are shown including integrated memory controller units 1372 and 1382, respectively. Processor 1370 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1376 and 1378; similarly, second processor 1380 includes P-P interfaces 1386 and 1388. Processors 1370, 1380 may exchange information via a point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in Figure 13, IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors.

Processors 1370, 1380 each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. Chipset 1390 also exchanges information with a high-performance graphics circuit 1338 via an interface circuit 1392 along a high-performance graphics interconnect 1339.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low-power mode.

Chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one embodiment, first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 13, various I/O devices 1314 are coupled to first bus 1316, along with a bus bridge 1318 which couples first bus 1316 to a second bus 1320.
In one embodiment, second bus 1320 includes a low pin count (LPC) bus. Various devices are coupled to second bus 1320 including, for example, a keyboard and/or mouse 1322, communication devices 1327, and a storage unit 1328, such as a disk drive or other mass storage device which often includes instructions/code and data 1330. Further, an audio I/O 1324 is shown coupled to second bus 1320. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of Figure 13, a system may implement a multi-drop bus or other such architecture.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this invention.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage device (such as a disc) may be the machine-readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap.
For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase "to" or "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus, or element thereof, that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "capable of/to" and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. As noted above, use of "to," "capable of/to," or "operable to," in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms "reset" and "set," in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set.
Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media from which information may be received.

Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random-access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Example 1 discloses a method, system, and/or machine-readable storage medium with executable code to: determine that at least one first device is connected to a first port of a plurality of ports of a root complex of the system; assign addresses corresponding to a first hierarchy of devices including the first device; determine that a second device is connected, through a mapping portal bridge, to a second port of the plurality of ports of the root complex, the second device being included in another, second hierarchy of devices; and generate a mapping table corresponding to the mapping portal bridge.
The mapping table defines a translation between addressing used in a first scenario of a configuration address space of the system and addressing used in a second scenario of the configuration address space, where the first scenario includes a scenario of the root complex, the second scenario includes a scenario corresponding to the second hierarchy of devices, and the addresses assigned to the first hierarchy of devices are according to the first scenario.

In Example 2, the method, system, and medium of Example 1 may optionally also assign an address for the second device corresponding to the first scenario of the configuration address space.

In Example 3, in the method, system, and medium of any of Examples 1-2, each device in the second hierarchy may optionally also be assigned a corresponding address according to the second scenario of the configuration address space.

In Example 4, in the method, system, and medium of any of Examples 1-3, the addresses in each of the first scenario and the second scenario of the configuration address space may optionally be bus-device-function (BDF) numbers.

In Example 5, in the method, system, and medium of Example 4, the addresses assigned according to the first scenario of the configuration address space may optionally be assigned so as to optimize the assignment of bus numbers used in the first scenario.

In Example 6, in the method, system, and medium of Example 5, the addresses assigned according to the second scenario of the configuration address space may optionally be assigned according to a different, second address assignment scheme.

In Example 7, in the method, system, and medium of Example 6, the second scheme may be agnostic to optimizing bus number assignments within the addresses of the second scenario.

In Example 8, in the method, system, and medium of Example 4, the configuration address space may optionally include a PCIe configuration address space.

In Example 9, in the method, system, and medium of Example 4, a first number of bus numbers may optionally be allowed in the configuration address space, a second number of bus numbers may optionally be assigned in the second scenario of the configuration address space, a third number of bus numbers may optionally be assigned in the first scenario of the configuration address space, and the sum of the second number and the third number may exceed the first number.

In Example 10, in the method, system, and medium of any of Examples 1-9, the mapping portal bridge may optionally be implemented in a switch device connected to a port of the root complex.

In Example 11, in the method, system, and medium of any of Examples 1-10, the mapping portal bridge may optionally be implemented in the second port.

In Example 12, in the method, system, and medium of any of Examples 1-11, the mapping portal bridge is to use the mapping table to facilitate communication between the second hierarchy of devices and the root complex.

In Example 13, in the method, system, and medium of any of Examples 1-12, devices may optionally be discovered in each of the first device hierarchy and the second device hierarchy according to respective search algorithms.

In Example 14, in the method, system, and medium of Example 13, the search algorithm optionally includes a depth-first search.

In Example 15, in the method, system, and medium of Example 13, the search algorithm optionally includes a breadth-first search.

In Example 16, in the method, system, and medium of Example 13,
the search algorithm used to discover devices in the first hierarchy may optionally be different from the search algorithm used to discover devices in the second hierarchy.

In Example 17, in the method, system, and medium of Example 13, the search algorithm used to discover devices in the first hierarchy may be the same as the search algorithm used to discover devices in the second hierarchy.

In Example 18, in the method, system, and medium of any of Examples 1-17, at least a portion of the addresses in the first scenario of the configuration address space may optionally be reserved for hot-plugging.

Example 19 discloses a system including a root complex, which includes a plurality of ports coupled to a plurality of device hierarchies, and system software. The system software is executable by a processor to: determine that at least one first device is connected to a first port of the plurality of ports; assign addresses corresponding to a first hierarchy of devices including the first device; determine that a second device is connected, through a mapping portal bridge, to a second port of the plurality of ports of the root complex, the second device being included in another, second hierarchy of devices; and generate a mapping table corresponding to the mapping portal bridge. The mapping table defines a translation between addressing used in a first scenario of a configuration address space of the system and addressing used in a second scenario of the configuration address space, where the first scenario includes a scenario of the root complex, the second scenario includes a scenario corresponding to the second hierarchy of devices, and the addresses assigned to the first hierarchy of devices are according to the first scenario.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of "embodiment" and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment. |
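Examples 1 through 19 above turn on translating bus-device-function (BDF) addresses between two scenarios of the configuration address space. As a hedged illustration, the C sketch below uses the standard PCIe BDF packing (8-bit bus, 5-bit device, 3-bit function) together with an invented two-column table to show the kind of translation a mapping portal bridge might perform; the table format and the example entry are hypothetical, not the patent's actual data structure.

```c
#include <stdint.h>
#include <stdio.h>

/* Standard PCIe BDF packing: bus in bits [15:8], device in bits [7:3],
 * function in bits [2:0]. */
static uint16_t bdf(uint8_t bus, uint8_t dev, uint8_t fn)
{
    return (uint16_t)(((uint16_t)bus << 8) | ((dev & 0x1Fu) << 3) | (fn & 0x7u));
}

/* Hypothetical mapping-table entry: the same device's address in the
 * root complex's (primary) scenario and in the second-hierarchy
 * (secondary) scenario behind the mapping portal bridge. */
struct map_entry { uint16_t primary; uint16_t secondary; };

/* Translate a primary-scenario BDF to the secondary scenario;
 * returns 1 and writes *out on a hit, 0 on a miss. */
static int translate_bdf(const struct map_entry *t, int n,
                         uint16_t primary, uint16_t *out)
{
    for (int i = 0; i < n; i++) {
        if (t[i].primary == primary) { *out = t[i].secondary; return 1; }
    }
    return 0;
}

int main(void)
{
    /* Invented example: bus 0x20 in the primary scenario corresponds
     * to bus 0x00 in the hierarchy behind the bridge. */
    struct map_entry table[] = {
        { bdf(0x20, 2, 0), bdf(0x00, 2, 0) }
    };

    uint16_t sec;
    if (translate_bdf(table, 1, bdf(0x20, 2, 0), &sec))
        printf("secondary BDF: 0x%04X\n", sec);   /* prints 0x0010 */
    return 0;
}
```

Because only the table differentiates the two scenarios, software on the root-complex side can keep its optimized bus-number assignments (Example 5) while the hierarchy behind the bridge numbers its buses independently (Examples 6 and 7).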
Techniques are disclosed for forming column IV transistor devices having source/drain regions with high concentrations of germanium, which exhibit reduced parasitic resistance relative to conventional devices. In some example embodiments, the source/drain regions each include a thin p-type silicon or germanium or SiGe liner deposition, with the remainder of the source/drain material deposition being p-type germanium or a germanium alloy (e.g., germanium:tin or another suitable strain inducer, having a germanium content of at least 80 atomic % and 20 atomic % or less of other components). In some cases, evidence of strain relaxation may be observed in the germanium-rich cap layer, including misfit dislocations and/or threading dislocations and/or twins. Numerous transistor configurations can be used, including both planar and non-planar transistor structures (e.g., FinFETs and nanowire transistors), as well as strained and unstrained channel structures. |
16. The device of any of the preceding claims wherein at least one of the caps further comprises tin.
17. The device of any of the preceding claims wherein at least one of the caps further comprises misfit dislocations and/or threading dislocations and/or twins.
18. The device of any of the preceding claims wherein the caps are free of misfit dislocations, threading dislocations, and twins.
19. An electronic device comprising: a printed circuit board having an integrated circuit including one or more transistor devices as defined in any of the preceding claims.
20. The electronic device of claim 19 wherein the integrated circuit comprises at least one of a communication chip and/or a processor.
21. The electronic device of claims 19 or 20 wherein the electronic device is a computing device.
22. An integrated circuit, comprising: a substrate having a channel region; a gate electrode above the channel region; source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 40% of the total thickness; and metal-germanide source and drain contacts.
23. The circuit of claim 20 wherein the thickness ratio of liner thickness to cap thickness is 1:5, or less.
24. The circuit of claims 20 or 21 wherein at least one of the caps further comprises tin.
25. A method for forming a transistor device, comprising: providing a substrate having a channel region; providing a gate electrode above the channel region; and providing source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.
26. The method of claim 25 further comprising providing metal-germanide source and drain contacts.
27. The method of claims 25 or 26 wherein the thickness ratio of liner thickness to cap thickness is 2:5, or less.
28. The method of any of claims 25 through 27 wherein at least one of the liners and/or caps has at least one of a graded concentration of germanium and/or p-type dopant.
29. The method of any of claims 25 through 28 wherein at least one of the caps further comprises tin.
30. A transistor device, comprising: a silicon-containing substrate having a channel region; a gate electrode above the channel region; and source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.
31. A transistor device, comprising: a germanium substrate having a channel region; a gate electrode above the channel region; and source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.
32. The device of claim 31 wherein each liner is included in the composition of the corresponding cap. |
Column IV Transistors for PMOS Integration

RELATED APPLICATION
This application is a continuation-in-part of and claims priority to U.S. Application No. 12/975,278 filed December 21, 2010.

BACKGROUND
Increased performance of circuit devices including transistors, diodes, resistors, capacitors, and other passive and active electronic devices formed on a semiconductor substrate is typically a major factor considered during design, manufacture, and operation of those devices. For example, during design and manufacture or forming of metal oxide semiconductor (MOS) transistor semiconductor devices, such as those used in a complementary metal oxide semiconductor (CMOS), it is often desired to minimize the parasitic resistance associated with contacts, otherwise known as external resistance, Rext. Decreased Rext enables higher current from an equal transistor design.

BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 schematically illustrates components of resistance of a typical MOS transistor that includes source and drain tip regions.
Figure 2 is a method of forming a column IV transistor in accordance with an embodiment of the present invention.
Figures 3A to 3F illustrate structures that are formed when carrying out the method of Figure 2, in accordance with various embodiments of the present invention.
Figures 4A to 4G each shows a perspective view of a FinFET transistor structure formed in accordance with one embodiment of the present invention.
Figures 5A and 5B each shows a perspective view of a nanowire transistor structure formed in accordance with an embodiment of the present invention.
Figure 6 illustrates a computing system implemented with one or more transistor structures in accordance with an example embodiment of the present invention.
As will be appreciated, the figures are not necessarily drawn to scale or intended to limit the claimed invention to the specific configurations shown. For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of a transistor structure may have less than perfect straight lines and/or right angles, and some features may have surface topology or otherwise be non-smooth, given real-world limitations of the processing equipment and techniques used. In short, the figures are provided merely to show example structures.

DETAILED DESCRIPTION
Techniques are disclosed for forming column IV transistor devices having source and drain regions with high concentrations of germanium that exhibit reduced parasitic resistance relative to conventional devices. In some example embodiments, the source/drain regions of the resulting transistor structure each include a thin p-type silicon or germanium or silicon germanium (SiGe) liner layer, with the remainder of the source/drain material being p-type germanium or a germanium alloy comprising, for instance, germanium and tin, and having a germanium content of at least 80 atomic % (and 20 atomic % or less of other components, such as tin and/or other suitable strain inducers). In some example cases, evidence of strain relaxation may be observed in this germanium-rich layer, including misfit dislocations and/or threading dislocations.
Numerous transistor configurations and suitable fabrication processes will be apparent in light of this disclosure, including both planar and non-planar transistor structures (e.g., FinFETs and nanowire transistors), as well as strained and unstrained channel structures. The techniques are particularly well-suited for implementing p-type MOS (PMOS) devices, although other transistor configurations may benefit as well.

General Overview
As previously explained, increased drive current in transistors can generally be achieved by reducing device external resistance, Rext. However, PMOS transistor performance is a function of various component resistances within the device, as can be seen with reference to Figure 1. Channel resistance R1 can be modulated through carrier mobility, which is a function of compressive strain within the channel. External resistance Rext of the device includes tip resistance R2 (the tip region is also referred to as the source/drain extension), source/drain resistance R3, and contact resistance R4 (metal to semiconductor); a first-order series relation collecting these terms is sketched after this overview. All of these segmented resistances have a material component (e.g., energy barrier across an interface, carrier concentration and mobility), a geometry component (e.g., length, width, etc.), and a dynamic electrical load component (current crowding).

Thus, and in accordance with some embodiments of the present invention, replacing the typical silicon or SiGe alloy materials in the source/drain regions with a thin p-type liner and a high-germanium-content fill (with very high p-type doping concentration) minimizes the external resistance components (R2, R3, and R4). In addition, by introducing a highly compressively strained material, the channel hole mobility is maximized or otherwise increased, which in turn reduces channel resistance (R1). The net impact of decreased channel, tip, source/drain, and contact resistance is improved transistor current for a given voltage (relative to threshold voltage Vt, i.e., V - Vt).

In some example cases, the thin liner is p-type doped silicon or germanium or SiGe alloy, and is generally less than 50% of the total source/drain deposition layer thickness. The remaining source/drain deposition layer thickness is generally greater than 50% of the total and can be, for example, p-type doped germanium or a germanium alloy such as germanium:tin or germanium:tin:x (where x is, for example, silicon or another marginal component or process/diffusion-based artifact) having at least 80 atomic % germanium and 20 atomic % or less of other constituents (e.g., tin and/or any other suitable strain inducer and/or other marginal unintentional components). In some specific such example embodiments, the thickness ratio of the source/drain liner to the high-concentration germanium cap is about 1:5 or less (where the liner makes up about 20% or less of the total source/drain deposition layer thickness). In some such example cases, the liner thickness is one to several monolayers.

The techniques can be used to form transistor devices in any number of devices and systems. In some embodiments, such as CMOS devices having both n-type MOS (NMOS) and PMOS transistors, selectivity can be achieved in various ways. In one embodiment, for instance, deposition on NMOS source/drain locations can be avoided by having the NMOS regions masked off during PMOS deposition. In other embodiments, selectivity may include natural selectivity.
For instance, while boron doped germanium grows on p-type SiGe (or silicon) source/drain regions, it does not grow on insulator surfaces such as silicon dioxide (SiO2) or silicon nitride (SiN); nor does it grow on, for instance, exposed heavily phosphorus-doped silicon in n-type regions.

The techniques provided herein can be employed to improve device resistance in any number of transistor structures and configurations, including planar, flush or raised source/drain, non-planar (e.g., nanowire transistors and finned transistors such as double-gate and trigate transistor structures), as well as strained and unstrained channel structures. The source/drain areas can be recessed (e.g., using an etch process) or not recessed (e.g., formed on the top surface of the substrate). In addition, the transistor devices may optionally include source and drain tip regions that are designed, for instance, to decrease the overall resistance of the transistor while improving short channel effects (SCE), but such tip regions are not required. The transistor devices may further include any number of gate configurations, such as poly gates, high-k dielectric metal gates, replacement metal gate (RMG) process gates, or any other gate structure. Any number of structural features can be used in conjunction with the low resistance transistor techniques described herein.
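Collecting the component resistances of Figure 1 into one relation gives the first-order series sketch referenced in the overview above. This is an assumed simplification for illustration; as the text notes, each term also carries geometry and dynamic electrical load components, so the relation is approximate:

```latex
R_{\mathrm{ext}} \;=\; R_2 + R_3 + R_4,
\qquad
R_{\mathrm{total}} \;\approx\; R_1 + R_{\mathrm{ext}},
\qquad
I_{\mathrm{drive}} \;\propto\; \frac{V - V_t}{R_{\mathrm{total}}}
```

Under this reading, lowering any of R1 through R4 raises the drive current at a fixed overdrive V - Vt, which is the net impact described in the overview.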
As will further be appreciated, while defect free pure germanium may be desirable, it is generally difficult to grow defect free for deposition on, for example, a silicon substrate or even SiGe substrate having say 50 atomic% germanium. Surprisingly, however, if performance of a typical fully strained SiGe layer and such a germanium-rich layer having some defects (e.g., has 30 misfit and/or threading dislocations) were compared, then the defective germanium-rich layer would perform better. As will be appreciated, this result is generally not intuitive, as it runs counter to the conventional understanding with respect to thin film. In any case, while some embodiments of the 4 WO 2012/088097 PCT/US2011/066129 present invention may include germanium-rich caps that are lacking in crystal features such as misfit dislocations, threading dislocations and twins (defects resulting from a change in lattice orientation across a twin plane), other embodiments may include germanium-rich caps that have one or more such features. 5 Architecture and Methodology Figure 2 is a method of forming a column IV transistor in accordance with an embodiment of the present invention. Figures 3A to 3F illustrate example structures that are formed when carrying out the method of Figure 2, in accordance with various embodiments. One or more such transistors 10 may be formed in the fabrication of, for example, a processor or a communications chip or memory chip. Such integrated circuits can then be used in various electronic devices and systems. The example method includes forming 202 one or more gate stacks on a semiconductor substrate upon which a MOS device may be formed. The MOS device may comprise, for example, PMOS transistors, or both NMOS and PMOS transistors (e.g., for CMOS devices). Figure 3A 15 shows an example resulting structure, which in this case includes a PMOS transistor formed on substrate 300. As can be seen, the gate stack is formed over a channel region, and includes a gate dielectric layer 302, a gate electrode 304, and an optional hardmask 306. Spacers 310 are formed adjacent to the gate stack. The gate dielectric 302 can be, for example, any suitable oxide such as silicon dioxide (Si02) 20 or high-k gate dielectric materials. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may 25 be carried out on the gate dielectric layer 302 to improve its quality when a high-k material is used. In some specific example embodiments, the high-k gate dielectric layer 302 may have a thickness in the range of 5 A to around 100 A thick (e.g., 10 A). In other embodiments, the gate dielectric layer 302 may have a thickness of one monolayer of oxide material. In general, the thickness of the gate dielectric 302 should be sufficient to electrically isolate the gate electrode 304 from the source and 30 drain contacts. In some embodiments, additional processing may be performed on the high-k gate dielectric layer 302, such as an annealing process to improve the quality of the high-k material. 
The gate electrode 304 material can be, for example, polysilicon, silicon nitride, silicon carbide, or a metal layer (e.g., tungsten, titanium nitride, tantalum, tantalum nitride), although other suitable gate electrode materials can be used as well. The gate electrode 304 material, which may be a sacrificial material that is later removed for a replacement metal gate (RMG) process, has a thickness in the range of about 10 Å to 500 Å (e.g., 100 Å), in some example embodiments. The optional gate hard mask layer 306 can be used to provide certain benefits or uses during processing, such as protecting the gate electrode 304 from subsequent etch and/or ion implantation processes. The hard mask layer 306 may be formed using typical hard mask materials, such as silicon dioxide, silicon nitride, and/or other conventional insulator materials.

The gate stack can be formed as conventionally done or using any suitable custom techniques (e.g., a conventional patterning process to etch away portions of the gate electrode and the gate dielectric layers to form the gate stack, as shown in Figure 3A). Each of the gate dielectric 302 and gate electrode 304 materials may be formed, for example, using conventional deposition processes such as chemical vapor deposition (CVD), atomic layer deposition (ALD), spin-on deposition (SOD), or physical vapor deposition (PVD). Alternate deposition techniques may be used as well; for instance, the gate dielectric 302 and gate electrode 304 materials may be thermally grown. As will be appreciated in light of this disclosure, any number of other suitable materials, geometries, and formation processes can be used to implement an embodiment of the present invention, so as to provide a low resistance transistor device or structure as described herein.

The spacers 310 may be formed, for example, using conventional materials such as silicon oxide, silicon nitride, or other suitable spacer materials. The width of the spacers 310 may generally be chosen based on design requirements for the transistor being formed. In accordance with some embodiments, however, the width of the spacers 310 is not subject to design constraints imposed by the formation of the source and drain tip regions, given a sufficiently high p-doped germanium content (e.g., boron doped germanium) or SiGe alloy liner in the source/drain tip regions.

Any number of suitable substrates can be used to implement substrate 300, including bulk substrates, semiconductor-on-insulator substrates (XOI, where X is a semiconductor material such as silicon, germanium, or germanium-enriched silicon), and multi-layered structures, including those substrates upon which fins or nanowires can be formed prior to a subsequent gate patterning process. In some specific example cases, the substrate 300 is a germanium or silicon or SiGe bulk substrate, or a germanium or silicon or SiGe on-oxide substrate. Although a few examples of materials from which the substrate 300 may be formed are described here, other suitable materials that may serve as a foundation upon which a low resistance transistor device may be built fall within the spirit and scope of the claimed invention.
With further reference to Figure 3A, after the one or more gate stacks are formed, the method continues with some optional processing, which in this example embodiment includes etching 204 the source/drain regions of the transistor structure, and masking off 206 any NMOS source/drain regions of the structure (if present). As will be appreciated, the source/drain regions need not be recessed or otherwise etched. In such cases, the source/drain materials can be formed on the substrate 300 without any etching. While such non-recessed source/drain regions will not impact channel resistance, a bi-layer source/drain structure having a thin liner and a high germanium content cap can still be implemented to provide low contact resistance, in accordance with some embodiments. As will further be appreciated, not all embodiments will include n-type regions. In some example cases, for instance, the circuit being fabricated may include only PMOS devices. In such example cases, there would be no n-type source/drain regions to mask off. When n-type regions are present, any suitable masking technique can be used to protect the n-type regions during p-type processing.

In example embodiments where the source/drain regions are etched, source/drain cavities 312/314 result, as best shown in Figure 3A. The cavities effectively define the location of the source/drain regions. As can be further seen, substrate 300 has been etched not only to provide source/drain cavities 312/314, but also their respective tip areas 312A/314A, which undercut the gate dielectric 302. The cavities 312/314 and their respective tip areas 312A/314A can be formed as conventionally done using any number of suitable processes. In some example cases, this includes ion implantation to highly dope portions of substrate 300 adjacent to the gate stack, followed by annealing to drive the dopants further into substrate 300 to improve the etch rate of the intended source/drain areas. A dry etch process can then be used to etch the doped regions of substrate 300 to form cavities 312/314 and their respective tip areas 312A/314A. After the dry etch process has completed, a wet etch may be used, for instance, to clean and further etch the cavities 312/314 and their respective tip areas 312A/314A. Such wet etching, which can be carried out using conventional or custom wet etch chemistries, can be used to remove contaminants such as carbon, fluorine, chlorofluorocarbons, and oxides such as silicon oxide to provide a clean surface upon which subsequent processes may be carried out. In addition, and assuming a monocrystalline silicon substrate, the wet etching may also be used to remove a thin portion of substrate 300 along the <111> and <001> crystallographic planes to provide a smooth surface upon which a high quality epitaxial deposition may occur. In some example cases, the thin portion of substrate 300 that is etched away may be, for example, up to 5 nm thick and may also remove residual contaminants. The wet etching generally causes edges of the cavities 312/314 and their respective tip areas 312A/314A to follow the <111> and <001> crystallographic planes.

With further reference to Figure 2, the method continues with depositing 208 a p-type silicon or germanium or SiGe liner 313/315 in the p-type source/drain regions, and then depositing 210 a p-type germanium or germanium alloy in the p-type source/drain regions over the liner 313/315.
Each of these depositions can be carried out, for instance, using selective epitaxial deposition, although any suitable deposition process can be used. As can be seen with reference to Figure 3B, the p-type silicon or germanium or SiGe liners 313/315 are deposited into cavities 312/314 and their respective tip areas 312A/314A. In addition, and as best shown in Figure 3C, cavities 312/314 and tip areas 312A/314A have been further filled to provide a thick capping layer of p-type germanium or germanium alloy 318/320 over the p-type liners 313/315. Example p-type dopants include, for instance, boron, gallium, or any other suitable p-type dopant or dopants, as will be appreciated, and the claimed invention is not intended to be limited to any particular one.

In accordance with some specific example embodiments where the substrate 300 is a silicon or SiGe bulk substrate, or a semiconductor-on-insulator substrate (XOI, where X is silicon or SiGe), the source and drain cavities 312/314 along with their respective tip areas 312A/314A are filled with in-situ boron doped silicon or SiGe, thereby forming the corresponding liners 313/315, and then further filled with in-situ boron doped germanium or germanium-rich alloy to provide caps 318/320. In other example embodiments where the substrate 300 is a germanium bulk substrate or a germanium-on-insulator substrate, the source and drain cavities 312/314 along with their respective tip areas 312A/314A can be filled with in-situ boron doped germanium, thereby forming the corresponding liners 313/315, and then further filled with in-situ boron doped germanium-rich alloy (such as germanium:tin) to provide caps 318/320. As will be appreciated in light of this disclosure, the respective germanium and p-type dopant concentrations of the liners 313/315 and caps 318/320 can vary depending on factors such as the composition of the substrate 300, the use of grading for lattice matching/compatibility, and the overall desired thickness of the total source/drain deposition. Numerous material system and p-type doping configurations can be implemented, as will be appreciated in light of this disclosure.

For instance, in some example embodiments having a silicon or germanium or SiGe substrate, the germanium concentration of the liners 313/315 can be in the range of 20 atomic % to 100 atomic %, and the boron concentration is in the range of 1E20 cm^-3 to 2E21 cm^-3. To avoid lattice mismatch with an underlying silicon-containing substrate, the germanium concentration of the liners 313/315 can be graded, in accordance with some embodiments. For example, in one such embodiment, the liners 313/315 can be a graded boron doped SiGe layer with the germanium composition graded from a base level concentration compatible with the underlying silicon or SiGe substrate 300 up to 100 atomic % (or near 100 atomic %, such as in excess of 90 atomic % or 95 atomic % or 98 atomic %). In one specific such embodiment, the germanium concentration ranges from 40 atomic % or less to in excess of 98 atomic %. The boron concentration within liners 313/315 can be fixed, for example, at a high level, or alternatively can be graded. For instance, the boron concentration within liners 313/315 can be graded from a base concentration at or otherwise compatible with the underlying substrate 300 up to a desired high concentration (e.g., in excess of 1E20 cm^-3, in excess of 2E20 cm^-3, or in excess of 5E20 cm^-3).
In some such embodiments, the boron doped germanium caps 318/320 have a boron concentration in excess of 1E20 cm^-3, such as in excess of 2E20 cm^-3 or in excess of 2E21 cm^-3, or higher. This boron concentration in the caps 318/320 can be graded in a similar fashion as described with reference to the liners 313/315. In a more general sense, the boron concentrations can be adjusted as necessary to provide the desired degree of conductivity, as will be appreciated in light of this disclosure. The germanium concentration of the caps 318/320 can be, for instance, fixed at 100 atomic %. Alternatively, the germanium concentration of the caps 318/320 can be graded from a low to high concentration (e.g., from 20 atomic % to 100 atomic %), as will be appreciated in light of this disclosure, to account for lattice mismatch between the liners 313/315 and the desired peak germanium concentration of the caps 318/320. In still other embodiments, the caps 318/320 are implemented with a germanium alloy, where the blend can be, for example, at least 80 atomic % germanium and up to 20 atomic % of the alloying material, which in some embodiments is tin. Note that the tin concentration (or other alloying material) can also be graded, as will be appreciated. In one such case, channel strain is increased with a tin concentration in the range of 3 to 8 atomic % in the caps 318/320 (with the balance atomic percentage of the caps 318/320 substantially being germanium and any gradient material). In spite of relaxation, the lattice constants are still relatively large and capable of applying significant strain on the adjacent channel. Other suitable tin concentrations will be apparent, as will other suitable strain inducers.

Note that with a pure germanium substrate, the liners 313/315 can be implemented with germanium and need not be graded. In some such cases, the germanium concentration of the liners 313/315 can be fixed (e.g., 100 atomic %) and the caps 318/320 can be implemented with a germanium alloy (e.g., germanium:tin, or another suitable germanium alloy as previously described). As previously explained, the germanium concentration (or the tin or other alloying material concentration) in the caps 318/320 can be graded to effect the desired channel strain. In some such cases, further note that the germanium liners 313/315 can effectively be integrated with the germanium alloy caps 318/320 or otherwise be an undetectable component of the source/drain region deposition.

With respect to gradings, note that compatibility as used herein does not necessitate an overlap in concentration levels (for instance, the germanium concentration of the underlying substrate 300 can be 0 to 20 atomic % and the initial germanium concentration of the liners 313/315 can be 30 to 40 atomic %). In addition, as used herein, the term "fixed" with respect to a concentration level is intended to indicate a relatively constant concentration level (e.g., the lowest concentration level in the layer is within 10% of the highest concentration level within that layer). In a more general sense, a fixed concentration level is intended to indicate the lack of an intentionally graded concentration level.
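The graded-concentration language above does not specify a functional form for the grade. As one hypothetical illustration only, a linear grade across the liner thickness from a substrate-compatible base level to the peak level would be:

```latex
c_{\mathrm{Ge}}(z) \;=\; c_{\mathrm{base}} + \left(c_{\mathrm{peak}} - c_{\mathrm{base}}\right)\frac{z}{t_{\mathrm{liner}}},
\qquad 0 \le z \le t_{\mathrm{liner}}
```

For instance, taking c_base = 40 atomic % and c_peak = 98 atomic % (endpoints drawn from the ranges quoted earlier), the mid-thickness composition under this assumed linear profile would be about 69 atomic % germanium.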
In general, the liners 313/315 may be thicker in cases where they are configured with a graded germanium content to provide compatibility with a substrate 300 that has no or an otherwise low germanium content. In other cases where the substrate 300 is a germanium substrate or otherwise contains a relatively high concentration of germanium, the liners 313/315 need not be graded, and may therefore be relatively thinner (e.g., one to several monolayers). In yet still other cases where the substrate has no or an otherwise low germanium content, the liners 313/315 can be implemented with a relatively thin layer of silicon or otherwise low germanium content material, and the germanium content of the caps 318/320 can be graded as needed for compatibility. In any such cases, the liners 313/315 generally make up less than 50% of the total source/drain deposition layer thickness, with the caps 318/320 making up the remainder (generally greater than 50% of the total). In accordance with some such example embodiments where the liners 313/315 are not graded, the thickness ratio of liners 313/315 to caps 318/320 is about 2:5 or less (i.e., where the liner makes up about 40% or less of the total source/drain deposition layer thickness). In some specific such embodiments, the thickness ratio of liners 313/315 to caps 318/320 is about 1:5 or less (i.e., where the liner makes up about 20% or less of the total source/drain deposition layer thickness). In one such specific example case, the thickness of liners 313/315 is in the range of one-to-several monolayers to about 10 nm, and the total source/drain deposition layer thickness is in the range of 50 to 500 nm. Numerous source/drain liner and cap geometries and material configurations will be apparent in light of this disclosure. As will be appreciated in light of this disclosure, any number of other transistor features may be implemented with an embodiment of the present invention. For instance, the channel may be strained or unstrained, and the source/drain regions may or may not include tip regions formed in the area between the corresponding source/drain region and the channel region. In this sense, whether a transistor structure has strained or unstrained channels, or source/drain tip regions or no source/drain tip regions, is not particularly relevant to various embodiments of the present invention, and the claimed invention is not intended to be limited to any particular such structural features. Rather, any number of transistor structures and types, and particularly those structures having p-type or both n-type and p-type source/drain transistor regions, can benefit from employing a bi-layer source/drain configuration having a liner and high germanium concentration cap as described herein. A CVD process or other suitable deposition technique may be used for depositing 208 and 210. For example, depositing 208 and 210 may be carried out in a CVD reactor, an LPCVD reactor, or an ultra high vacuum CVD (UHVCVD) reactor. In some example cases, the reactor temperature may fall, for instance, between 600°C and 800°C and the reactor pressure may fall, for instance, between 1 and 760 Torr. The carrier gas may include, for example, hydrogen or helium at a suitable flow rate, such as between 10 and 50 SLM.
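Returning briefly to the thickness budget described above: the liner fraction of the total deposition is a one-line calculation. The following sketch (the helper name is an assumption for illustration) checks the 2:5 (about 40% or less) and 1:5 (about 20% or less) bounds:

# Minimal sketch, not from the disclosure: fraction of the total
# source/drain deposition thickness taken up by the liner.
def liner_fraction(liner_nm, cap_nm):
    return liner_nm / (liner_nm + cap_nm)

# Example: a 10 nm liner under a 90 nm cap is 10% of the deposition,
# comfortably inside the tighter ~1:5 (about 20% or less) embodiments.
print(f"{liner_fraction(10.0, 90.0):.0%}")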
In some specific embodiments, the deposition may be carried out using a germanium source precursor gas such as GeH4 that is diluted in H2 (e.g., the GeH4 may be diluted at 1-20%). For instance, the diluted GeH4 may be used at a 1% concentration and at a flow rate that ranges between 50 and 300 SCCM. For an in situ doping of boron, diluted B2H6 may be used (e.g., the B2H6 may be diluted in H2 at 1-20%). For instance, the diluted B2H6 may be used at a 3% concentration and at a flow rate that ranges between 10 and 100 SCCM. In some example cases, an etching agent may be added to increase the selectivity of the deposition. For instance, HCl or Cl2 may be added at a flow rate that ranges, for example, between 50 and 300 SCCM. Numerous variations on the source/drain bi-layer construction will be apparent in light of this disclosure. For instance, in some embodiments, the liners 313/315 are implemented with epitaxially deposited boron doped SiGe, which may be in one or more layers, and have a germanium concentration in the range of 30 to 70 atomic %, or higher. As previously explained, this germanium concentration of the SiGe liner may be fixed or graded so as to increase from a base level (near substrate 300) to a high level (e.g., in excess of 50 atomic %, near a base concentration of the germanium concentration of caps 318/320, which continue with the germanium gradient to 100 atomic %). The boron concentration in some such embodiments can be in excess of 1E20 cm-3, such as higher than 5E20 cm-3 or 2E21 cm-3, and may also be graded so as to increase from a base level near substrate 300 to a high level (e.g., in excess of 1E20 cm-3 or 2E20 cm-3 or 3E20 cm-3, etc., near caps 318/320). In embodiments where the germanium concentration of boron doped SiGe liners 313/315 is fixed, a thin graded buffer may be used to better interface the liners 313/315 with the boron doped caps 318/320. Note this buffer can be an intermediate layer or otherwise integrated into the composition of the caps 318/320. For purposes of this disclosure, such a buffer can be treated as part of the caps 318/320. The thickness of the boron doped SiGe deposited layer (or collection of layers) 313/315 may range, for example, from monolayers to 50 nm, and the layer (or collection of layers) 318/320 may have a thickness in the range, for example, of 51 to 500 nm, in accordance with some specific embodiments, although alternative embodiments may have other liner and cap thicknesses, as will be apparent in light of this disclosure. In some embodiments, note that cavities 312/314 may be created underneath the spacers during cyclical deposition-etch processing, and those cavities 312/314 can be backfilled by an epitaxial cap layer as well (which can have, for example, the same composition as the boron doped germanium caps 318/320). As will further be appreciated in light of this disclosure, the combination of high germanium concentration (e.g., in excess of 50 atomic % and up to pure germanium) and high boron concentration (e.g., in excess of 1E20 cm-3), as discussed herein, can be used to realize significantly higher conductance in the source and drain regions (R3 in Figure 1) as well as their respective tip regions (R2 in Figure 1) in PMOS transistor devices.
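As an aside, the example CVD recipe quoted in the preceding paragraphs can be collected in one place. The following sketch simply restates the illustrative ranges from the text as a plain data structure; it is a reference summary, not a qualified process recipe:

# Illustrative summary of the example deposition ranges given above.
cvd_recipe = {
    "reactor": ("CVD", "LPCVD", "UHVCVD"),
    "temperature_C": (600, 800),
    "pressure_Torr": (1, 760),
    "carrier_gas": {"species": ("H2", "He"), "flow_SLM": (10, 50)},
    "ge_precursor": {"species": "GeH4 in H2", "dilution_pct": (1, 20),
                     "flow_SCCM": (50, 300)},
    "boron_dopant": {"species": "B2H6 in H2", "dilution_pct": (1, 20),
                     "flow_SCCM": (10, 100)},
    "selectivity_etchant": {"species": ("HCl", "Cl2"),
                            "flow_SCCM": (50, 300)},
}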
Further, and as previously explained, since boron diffusion is sufficiently suppressed in high germanium composition layers relative to lower germanium composition layers, less adverse SCE degradation is realized with subsequent thermal anneals when compared to a lower germanium composition layer with equal p-type dopant species and doping levels, despite the high doping levels in the deposited stressor film. Barrier height lowering is also enabled by the higher concentration of germanium at the contact surface, resulting in lower contact resistance (R4 in Figure 1). In some example embodiments, a germanium concentration in excess of 80 atomic % and up to pure germanium (100 atomic %) can be used to achieve such benefits. Note that pure germanium is not required, however. For instance, some embodiments may have a germanium concentration in excess of 90 or 95 atomic %, but not be pure. As further seen with reference to Figure 3C, forming the source/drain tips 318A/320A in relatively close proximity to the channel region also imparts a larger hydrostatic stress on the channel. This stress increases the strain within the channel, thereby increasing mobility in the channel and increasing drive current. This stress can be further amplified by increasing the germanium concentration of the source/drain tips 318A/320A in the case of a silicon-containing substrate, and by increasing the tin concentration in the case of a germanium substrate. This is an improvement over diffusion-based processes where the tip regions generally do not induce a strain on the channel region. Once the source and drain regions are filled in accordance with an embodiment of the present invention, various conventional MOS processing can be carried out to complete fabrication of the MOS transistor, such as replacement gate oxide processes, replacement metal gate processes, annealing, and salicidation processes, that may further modify the transistor and/or provide the necessary electrical interconnections. For instance, after the epitaxial deposition of the source/drain regions along with their respective tips, and with further reference to Figure 2, the method may continue with removing 212 any masking from n-type regions and processing those regions as desired (if applicable, such as in a CMOS process), and depositing 214 an insulator over the transistor, and then planarizing that insulator layer as commonly done. The insulator layer may be formed using materials known for their applicability in insulator layers for integrated circuit structures, such as low-k dielectric (insulator) materials. Such insulator materials include, for example, oxides such as silicon dioxide (SiO2) and carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. In some example configurations, the insulator layer may include pores or other voids to further reduce its dielectric constant. Figure 3D illustrates an example insulator layer 322 that has been deposited and then planarized down to the hard mask 306.
As can be further seen with reference to Figure 3D', some embodiments of the present invention use a replacement metal gate process, and the method may include removing the gate stack (including the high-k gate dielectric layer 302, the sacrificial gate electrode 304, and the hard mask layer 306) using an etching process as conventionally done. In alternate implementations, only the sacrificial gate 304 is removed. If the gate dielectric 302 is removed, the method may include depositing a new gate dielectric layer into the trench opening. Any suitable high-k dielectric materials such as those previously described may be used here, such as hafnium oxide. The same deposition processes may also be used. Replacement of the gate dielectric 302 may be used, for example, to address any damage that may have occurred to the original gate dielectric layer during application of the dry and wet etch processes, and/or to replace a low-k or sacrificial dielectric material with a high-k or otherwise desired gate dielectric material. The method may then continue with depositing the metal gate electrode layer into the trench and over the gate dielectric layer. Conventional metal deposition processes may be used to form the metal gate electrode layer, such as CVD, ALD, PVD, electroless plating, or electroplating. The metal gate electrode layer may include, for example, a p-type workfunction metal, such as ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. In some example configurations, two or more metal gate electrode layers may be deposited. For instance, a workfunction metal may be deposited followed by a suitable metal gate electrode fill metal such as aluminum. Figure 3D' illustrates an example high-k gate dielectric layer 324 and a metal gate electrode 326 that have been deposited into the trench opening, in accordance with one embodiment. Note that such an RMG process may be carried out at a different time in the process, if so desired. With further reference to Figure 2, after insulator layer 322 is provided (and any desired pre-contact formation RMG process), the method continues with etching 216 to form the source/drain contact trenches. Any suitable dry and/or wet etch processes can be used. Figure 3E shows the source/drain contact trenches after etching is complete, in accordance with one example embodiment. The method then continues with depositing 218 contact resistance reducing metal and annealing, and then depositing 220 the source/drain contact plugs. Figure 3F shows the contact resistance reducing metals 325, which in some embodiments include silver, nickel, aluminum, titanium, gold, gold-germanium, nickel-platinum or nickel-aluminum, and/or other such resistance reducing metals or alloys. Figure 3F further shows the contact plug metal 329, which in some embodiments includes aluminum or tungsten, although any suitably conductive contact metal or alloy can be used, such as silver, nickel-platinum or nickel-aluminum or other alloys of nickel and aluminum, or titanium, using conventional deposition processes. Metalization of the source/drain contacts can be carried out, for example, using a germanidation process (generally, deposition of contact metal and subsequent annealing).
For instance, germanidation with nickel, aluminum, nickel-platinum or nickel-aluminum or other alloys of nickel and aluminum, or titanium with or without germanium pre-amorphization implants can be used to form a low resistance germanide. The boron doped germanium caps 318/320 allow for metal-germanide formation (e.g., nickel-germanium). The germanide allows for significantly lower Schottky-barrier height and improved contact resistance over that in conventional metal-silicide systems. For instance, conventional transistors typically use a source/drain SiGe epi process, with germanium concentration in the range of 30-40 atomic %. Such conventional systems exhibit Rext values of about 140 Ohm-um, limited by epi/silicide interfacial resistance, which is high and may impede future gate pitch scaling. Some embodiments of the present invention allow for a significant improvement in Rext in PMOS devices (e.g., a 2x or better improvement, such as an Rext of about 70 Ohm-um, or less), which can better support PMOS device scaling. Thus, transistors having a source/drain configured with a bi-layer source/drain structure as described herein can exhibit relatively lower Rext values compared to conventional transistors.

Non-Planar Configuration

A non-planar architecture can be implemented, for instance, using FinFETs or nanowire configurations. A FinFET is a transistor built around a thin strip of semiconductor material (generally referred to as the fin). The transistor includes the standard field effect transistor (FET) nodes, including a gate, a gate dielectric, a source region, and a drain region. The conductive channel of the device resides on/within the outer sides of the fin beneath the gate dielectric. Specifically, current runs along both sidewalls of the fin (sides perpendicular to the substrate surface) as well as along the top of the fin (side parallel to the substrate surface). Because the conductive channel of such configurations essentially resides along the three different outer, planar regions of the fin, such a FinFET design is sometimes referred to as a tri-gate FinFET. Other types of FinFET configurations are also available, such as so-called double-gate FinFETs, in which the conductive channel principally resides only along the two sidewalls of the fin (and not along the top of the fin). Figures 4A to 4G each show a perspective view of a FinFET transistor structure formed in accordance with one embodiment of the present invention. The previous discussion with respect to Figures 2 through 3F is equally applicable here, as will be appreciated. As can be seen, the example non-planar configuration shown in Figure 4A is implemented with a fin structure which includes a substrate 400 having a semiconductor body or fin 410 extending from the substrate 400 through shallow trench isolation (STI) layer 420. The substrate may be, for example, silicon, germanium, or SiGe. Figure 4B shows a gate electrode 440 formed over three surfaces of the fin 410 to form three gates (hence, a tri-gate device). A gate dielectric material 430 is provided between the fin 410 and gate electrode 440, and hard mask 450 is formed on top of the gate electrode 440. Figure 4C illustrates the resulting structure after deposition of insulating material and subsequent etch that leaves a coating of the insulator material on all vertical surfaces, so as to provide spacers 460.
Figure 4D illustrates the resulting structure after an additional etch treatment to eliminate excess insulating/spacer material from sidewalls of fin 410, thereby leaving only spacers 460 on opposite sidewalls of the gate electrode 440. Figure 4E illustrates the resulting structure after a recess etch to remove fin 410 in the source/drain region of substrate 400, thereby forming recess 470. Note that other embodiments may not be recessed (e.g., the source/drain region is flush with STI layer 420). Figure 4F illustrates the resulting structure after growth of epitaxial liner 480, which may be thin, p-type, and contain a significant fraction of silicon (e.g., silicon or SiGe having 70 atomic % silicon), or be pure germanium (e.g., a separate layer of germanium, or a non-detectable layer that is integrated or otherwise included in the composition of the caps 318/320). Figure 4G illustrates the resulting structure after growth of epitaxial source/drain cap 490, which can be p-type, and comprise primarily germanium but may contain less than 20 atomic % tin or other suitable alloying material, as previously explained. As will be appreciated in light of this disclosure, conventional processes and forming techniques can be used to fabricate the FinFET transistor structure having the bi-layer source/drain structure as described herein. As will further be appreciated, note that an alternative to the tri-gate configuration as shown is a double-gate architecture, which would include a dielectric/isolation layer on top of the fin 410. Further note that the example shapes of the liner 480 and cap 490 making up the source/drain regions shown in Figure 4G are not intended to limit the claimed invention to any particular source/drain types or formation processes, and other source/drain shapes will be apparent in light of this disclosure (e.g., round, square or rectangular source/drain regions may be implemented). Figure 5A shows a perspective view of a nanowire transistor structure formed in accordance with one embodiment of the present invention. A nanowire transistor (sometimes referred to as a gate-all-around FET) is configured similarly to a fin-based transistor, but instead of a fin, a nanowire is used and the gate material generally surrounds the channel region on all sides. Depending on the particular design, some nanowire transistors have, for instance, four effective gates. Figure 5A illustrates a nanowire channel architecture having two nanowires 510, although other embodiments can have any number of wires. The nanowires 510 can be implemented, for example, with p-type silicon or germanium or SiGe nanowires. As can be seen, one nanowire 510 is formed or otherwise provided in a recess of substrate 400 and the other nanowire 510 effectively floats in the source/drain material bi-layer construction comprising liner 580 and cap 590. Just as with the fin configuration, note that the nanowire 510 can be replaced in the source/drain regions with a bi-layer construction of source/drain material as described herein (e.g., relatively thin silicon or germanium or SiGe liner and relatively thick high concentration germanium cap). Alternatively, the bi-layer construction can be provided around the originally formed nanowire 510 as shown (where liner 580 is provided around nanowire 510, and cap 590 is then provided around liner 580).
Figure 5B also illustrates a nanowire configuration having multiple nanowires 510, but in this example case, non-active material 511 is not removed from between the individual nanowires during the nanowire forming process, which can be carried out using various conventional techniques, as will be appreciated in light of this disclosure. Thus, one nanowire 510 is provided in a recess of substrate 400 and the other nanowire 510 effectively sits on top of the material 511. Note that the nanowires 510 are active through the channel, but the material 511 is not. As can be seen, the bi-layer source/drain construction of liner 580 and cap 590 is provided around all other exposed surfaces of the nanowires 510.

Example System

Figure 6 illustrates a computing system 1000 implemented with one or more transistor structures configured in accordance with an example embodiment of the present invention. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including but not limited to a processor 1004 and at least one communication chip 1006, each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board or a daughterboard mounted on a main board or the only board of system 1000, etc. Depending on its applications, computing system 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 may include one or more transistor structures as described herein (e.g., having a bi-layer source/drain structure comprising a relatively thin p-type silicon or germanium or SiGe liner and a relatively thicker p-type high germanium content cap). These transistor structures can be used, for instance, to implement an onboard processor cache or memory array. In some embodiments, multiple functions can be integrated into one or more chips (for instance, note that the communication chip 1006 can be part of or otherwise integrated into the processor 1004). The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments of the present invention, the integrated circuit die of the processor includes onboard memory circuitry that is implemented with one or more transistor structures (e.g., PMOS or CMOS) as described herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1006 may also include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more circuits implemented with one or more transistor structures as described herein (e.g., on-chip processor or memory). As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein. In various implementations, the computing system 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the system 1000 may be any other electronic device that processes data or employs low resistance transistor devices as described herein (e.g., PMOS and CMOS circuitry). Numerous embodiments will be apparent, and features described herein can be combined in any number of configurations. One example embodiment of the present invention provides a transistor device. The device includes a substrate having a channel region, a gate electrode above the channel region, and source and drain regions formed on or in the substrate and adjacent to the channel region. Each of the source and drain regions has a total thickness comprising a p-type liner of silicon or germanium or silicon germanium, and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.
In some cases, the device is one of a planar, FinFET, or nanowire PMOS transistor. In some cases, the device further includes metal-germanide source and drain contacts. In some cases, the thickness ratio of liner thickness to cap thickness is 2:5, or less (liner is 40% or less of the total thickness). In some cases, the thickness ratio of liner thickness to cap thickness is 1:5, or less (liner is 20% or less of the total thickness). In some cases, each of the liners has a thickness in the range of about one monolayer to 10 nm, and each of the caps has a thickness in the range of about 50 nm to 500 nm. In some cases, at least one of the liners and/or caps has at least one of a graded concentration of germanium and/or p-type dopant. For instance, in some cases, at least one of the liners has a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %. In one such case, the high concentration is in excess of 90 atomic %. In some cases, at least one of the liners has a p-type dopant concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 1E20 cm-3. In one such case, the p-dopant of the one or more liners is boron. In some cases, at least one of the caps has a germanium concentration in excess of 95 atomic %. In some cases, at least one of the caps has a germanium concentration that is graded from a base level concentration compatible with the corresponding liner to a high concentration in excess of 80 atomic %. In some cases, at least one of the caps has a p-type dopant concentration that is graded from a base level concentration compatible with the corresponding liner to a high concentration in excess of 1E20 cm-3. In one such case, the p-dopant of the one or more caps is boron. In some cases, at least one of the caps further comprises tin. Numerous variations will be apparent. For instance, in some example cases the substrate is a silicon-containing substrate. In some such cases, the p-type liner comprises silicon or silicon germanium. In other example cases, the substrate is a germanium substrate. In some such cases, the p-type liner is p-type germanium. In some example such cases, each liner is included in the composition of the corresponding cap (such that a distinct and separate liner layer may not be discernible from a distinct and separate cap layer). In some cases, at least one of the caps further comprises misfit dislocations and/or threading dislocations and/or twins, while in other cases, the caps are free of misfit dislocations, threading dislocations, and twins. Another embodiment of the present invention includes an electronic device that includes a printed circuit board having an integrated circuit including one or more transistor devices as variously defined in this paragraph. In one such case, the integrated circuit comprises at least one of a communication chip and/or a processor. In some cases, the electronic device is a computing device. Another embodiment of the present invention provides an integrated circuit. The circuit includes a substrate (e.g., silicon, SiGe, or germanium) having a channel region, a gate electrode above the channel region, source and drain regions formed on or in the substrate and adjacent to the channel region, and metal-germanide source and drain contacts.
Each of the source and drain regions has a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is 40% or less of the total thickness. In some cases, the thickness ratio of liner thickness to cap thickness is 1:5, or less. In some cases, at least one of the caps further comprises tin. Another embodiment of the present invention provides a method for forming a transistor device. The method includes providing a substrate having a channel region, providing a gate electrode above the channel region, and providing source and drain regions formed on or in the substrate and adjacent to the channel region. Each of the source and drain regions has a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness. In some cases, the method includes providing metal-germanide source and drain contacts. In some cases, the thickness ratio of liner thickness to cap thickness is 2:5, or less. In some cases, at least one of the liners and/or caps has at least one of a graded concentration of germanium and/or p-type dopant. In some cases, at least one of the caps further comprises tin (or other suitable strain inducer). The foregoing description of example embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. For instance, while some embodiments of the present invention utilize in situ boron doping of germanium, other embodiments may use an intrinsic germanium that after its deposition is subsequently subjected to p-type dopant implantation and annealing processes to provide the desired p-type doping concentration. Moreover, some embodiments may include source and drain regions fabricated as described herein, but still use conventional processing (e.g., implantation and annealing) to form the tips of the source and drain regions. In such embodiments, the tips may have a lower germanium and/or p-type dopant concentration than the main source/drain region, which may be acceptable in some applications. In still other embodiments, only tips of the source and drain regions may be configured with the high germanium and p-type dopant concentrations and the main portions of the source and drain regions may have conventional or otherwise lower germanium/dopant concentrations. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.

CLAIMS

What is claimed is:

1. A transistor device, comprising: a substrate having a channel region; a gate electrode above the channel region; and source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.

2. The device of claim 1 wherein the device is one of a planar, FinFET, or nanowire PMOS transistor.
3. The device of claims 1 or 2 further comprising metal-germanide source and drain contacts.

4. The device of any of the preceding claims wherein the thickness ratio of liner thickness to cap thickness is 2:5, or less.

5. The device of any of the preceding claims wherein the thickness ratio of liner thickness to cap thickness is 1:5, or less.

6. The device of any of the preceding claims wherein each of the liners has a thickness in the range of about one monolayer to 10 nm, and each of the caps has a thickness in the range of about 50 nm to 500 nm.

7. The device of any of the preceding claims wherein at least one of the liners and/or caps has at least one of a graded concentration of germanium and/or p-type dopant.

8. The device of claim 7 wherein at least one of the liners has a germanium concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 50 atomic %.

9. The device of claim 8 wherein the high concentration is in excess of 90 atomic %.

10. The device of any of claims 7 through 9 wherein at least one of the liners has a p-type dopant concentration that is graded from a base level concentration compatible with the substrate to a high concentration in excess of 1E20 cm-3.

11. The device of claim 10 wherein the p-dopant of the one or more liners is boron.

12. The device of any of claims 7 through 11 wherein at least one of the caps has a germanium concentration in excess of 95 atomic %.

13. The device of any of claims 7 through 12 wherein at least one of the caps has a germanium concentration that is graded from a base level concentration compatible with the corresponding liner to a high concentration in excess of 80 atomic %.

14. The device of any of claims 7 through 13 wherein at least one of the caps has a p-type dopant concentration that is graded from a base level concentration compatible with the corresponding liner to a high concentration in excess of 1E20 cm-3.

15. The device of claim 14 wherein the p-dopant of the one or more caps is boron.

16. The device of any of the preceding claims wherein at least one of the caps further comprises tin.

17. The device of any of the preceding claims wherein at least one of the caps further comprises misfit dislocations and/or threading dislocations and/or twins.

18. The device of any of the preceding claims wherein the caps are free of misfit dislocations, threading dislocations, and twins.

19. An electronic device comprising: a printed circuit board having an integrated circuit including one or more transistor devices as defined in any of the preceding claims.

20. The electronic device of claim 19 wherein the integrated circuit comprises at least one of a communication chip and/or a processor.

21. The electronic device of claims 19 or 20 wherein the electronic device is a computing device.

22. An integrated circuit, comprising: a substrate having a channel region; a gate electrode above the channel region; source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 40% of the total thickness; and metal-germanide source and drain contacts.
23. The circuit of claim 22 wherein the thickness ratio of liner thickness to cap thickness is 1:5, or less.

24. The circuit of claims 22 or 23 wherein at least one of the caps further comprises tin.

25. A method for forming a transistor device, comprising: providing a substrate having a channel region; providing a gate electrode above the channel region; and providing source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or germanium or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.

26. The method of claim 25 further comprising providing metal-germanide source and drain contacts.

27. The method of claims 25 or 26 wherein the thickness ratio of liner thickness to cap thickness is 2:5, or less.

28. The method of any of claims 25 through 27 wherein at least one of the liners and/or caps has at least one of a graded concentration of germanium and/or p-type dopant.

29. The method of any of claims 25 through 28 wherein at least one of the caps further comprises tin.

30. A transistor device, comprising: a silicon-containing substrate having a channel region; a gate electrode above the channel region; and source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of silicon or silicon germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.

31. A transistor device, comprising: a germanium substrate having a channel region; a gate electrode above the channel region; and source and drain regions formed on or in the substrate and adjacent to the channel region, each of the source and drain regions having a total thickness comprising a p-type liner of germanium and a p-type cap having a germanium concentration in excess of 80 atomic %, wherein the liner is less than 50% of the total thickness.

32. The device of claim 31 wherein each liner is included in the composition of the corresponding cap. |
Disclosed embodiments relate to an interleaved pipeline of floating-point (FP) adders. In one example, a processor is to execute an instruction specifying an opcode and locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, the opcode indicating execution circuitry, for each FP element (M, N) of the destination matrix, is to: launch K instances of a pipeline having a first, MULTIPLY stage, during which an FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix are multiplied; concurrently, in an EXPDIFF stage, determine an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix; and in a second, ADD-BYPASS stage, accumulate the product with the previous FP value and, concurrently, bypass the accumulated sum to a subsequent pipeline instance. |
1. A processor comprising: decode circuitry to decode an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, an M by N destination matrix, and an opcode indicating execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K pipeline instances over K cycles, each pipeline instance comprising: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and element (K, N) of the second source matrix; concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, and, if rounding is determined to be required, causing a next pipeline instance to add a one; wherein the product, before the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum to a subsequent instance of the pipeline; and execution circuitry to execute the decoded instruction as per the opcode.

2. The processor of claim 1, wherein the execution circuitry is to complete execution of the K instances of the pipeline over K-plus-one cycles.

3. The processor of claim 1 or 2, wherein the execution circuitry, during the MULTIPLY stage, is to perform rounding of the generated product, as necessary.

4. The processor of any of claims 1 to 3, wherein the execution circuitry, during the ADD-BYPASS stage, is to perform saturation, as necessary, on the accumulated sum.

5. The processor of any of claims 1 to 4, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16.

6. The processor of any of claims 1 to 5, wherein the first source, second source, and destination matrices are each located in one of a collection of vector registers of a register file, a collection of tile registers, and a plurality of memory locations representing a matrix.

7. The processor of any of claims 1 to 6, wherein the execution circuitry saves a state after performing the K pipeline instances on each element (M, N) of the destination matrix, and, in the case of a fault, uses the saved state after recovering from the fault to continue execution.

8. The processor of any of claims 1 to 7, wherein the EXPDIFF and ADD-BYPASS pipeline stages of the first executed instance of the pipeline receive the previous FP value of the element (M, N) of the destination matrix from its location as specified by the instruction, and the EXPDIFF and ADD-BYPASS pipeline stages of subsequent executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the ADD-BYPASS stage of an immediately preceding instance of the pipeline.

9. The processor of any of claims 1 to 8, wherein the instruction further specifies a multibit writemask, each bit of which is to mask or otherwise to allow writing of a corresponding element (M, N) of the destination matrix.

10. The processor of claim 9, wherein each of the masked elements is to be either zeroed or merged.

11. A method to be performed by a processor, the method comprising: decoding, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, an M by N destination matrix, and an opcode indicating execution circuitry, for
each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles; and executing, using execution circuitry, the decoded instruction as per the opcode; and wherein each instance of the pipeline comprises: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix; concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.

12. The method of claim 11, wherein the execution circuitry is to complete execution of the K instances of the pipeline over K-plus-one cycles.

13. The method of claim 11 or 12, wherein the execution circuitry, during the MULTIPLY stage, is to perform rounding of the generated product, as necessary.

14. The method of any of claims 11 to 13, wherein the execution circuitry, during the ADD-BYPASS stage, is to perform saturation, as necessary, on the accumulated sum.

15. The method of any of claims 11 to 14, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 16. |
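Before the detailed description, the claimed two-stage interleaving can be illustrated functionally. The following sketch is an assumption-laden model rather than the hardware: Python floats stand in for the FP format, and the EXPDIFF mantissa alignment is folded into the native add. It shows only the scheduling claimed above, in which instance k multiplies in cycle k and accumulates (with bypass to the next instance) in cycle k+1, so K instances complete in K+1 cycles:

# Functional sketch of the interleaved MULTIPLY / ADD-BYPASS pipeline.
def tile_fp_fma(A, B, C):
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):
        for n in range(N):
            acc = C[m][n]                      # previous FP value of (m, n)
            prod = [None] * K
            for cycle in range(K + 1):
                if cycle >= 1:                 # ADD-BYPASS stage of instance cycle-1:
                    acc += prod[cycle - 1]     # accumulate and bypass the sum
                if cycle < K:                  # MULTIPLY (and EXPDIFF) stage
                    prod[cycle] = A[m][cycle] * B[cycle][n]
            C[m][n] = acc
    return C

A = [[1.0, 2.0], [3.0, 4.0]]   # M=2, K=2
B = [[5.0, 6.0], [7.0, 8.0]]   # K=2, N=2
C = [[0.5, 0.5], [0.5, 0.5]]   # M=2, N=2 accumulator tile
print(tile_fp_fma(A, B, C))    # [[19.5, 22.5], [43.5, 50.5]]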
FIELD OF INVENTION

The field of invention relates generally to computer processor architecture, and, more specifically, to systems and methods for executing an interleaved pipeline of floating-point adders.

BACKGROUND

Matrices are increasingly important in many computing tasks such as machine learning and other bulk data processing. Deep Learning is a class of machine learning algorithms. Deep learning architectures, such as deep neural networks, have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design.

Inference and training, two tools used for deep learning, are tending towards low precision arithmetic. Maximizing throughput of deep learning algorithms and computations may assist in meeting the needs of deep learning processors, for example, those performing deep learning in a data center.

Matrix-matrix multiplication (a.k.a., GEMM or General Matrix Multiplication) is a common compute-heavy operation on today's processors. Special hardware for matrix multiplication (e.g., GEMM) is a good option for improving the peak compute (and energy efficiency) of certain applications, such as deep learning. Some of these applications, including deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (i.e., more than the inputs).

A common operation performed in the context of machine learning is a matrix (tile) floating-point fused multiply-accumulate (FMA) instruction, be it in single-precision or double-precision. Improving the power and performance of FMA instructions is expected to improve the power and performance of applications that use those instructions, including machine learning training and inference applications.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

Figure 1A illustrates an embodiment of configured tiles;
Figure 1B illustrates an embodiment of configured tiles;
Figure 2 illustrates several examples of matrix storage;
Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator;
Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator;
Figure 6 illustrates an embodiment of matrix multiply-accumulate operation using tiles ("TMMA");
Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction;
Figure 10 illustrates an embodiment of a subset of the execution of an iteration of chained fused multiply-accumulate instruction;
Figure 11 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment;
Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry;
Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles;
Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using
tiles;
Figure 15 illustrates an example of a matrix expressed in row major format and column major format;
Figure 16 illustrates an example of usage of matrices (tiles);
Figure 17 illustrates an embodiment of a method of usage of matrices (tiles);
Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment;
Figure 19 illustrates an embodiment of a description of the matrices (tiles) to be supported;
Figures 20(A)-(D) illustrate examples of register(s);
Figures 21A-B illustrate floating-point multiply-accumulate pipelines;
Figure 21A illustrates a basic floating-point multiply-accumulate pipeline;
Figure 21B illustrates an interleaved matrix (tile) floating-point multiply-accumulate pipeline, according to some embodiments;
Figure 22A is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction, according to some embodiments;
Figure 22B is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline, according to some embodiments;
Figure 23 illustrates an embodiment of a processor executing a flow to process a matrix (tile) floating-point multiply-accumulate (TILEFPFMA) instruction;
Figure 24 is a block diagram illustrating a format of a matrix (tile) floating-point multiply-accumulate (TILEFPFMA) instruction according to some embodiments;
Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments;
Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments;
Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments;
Figure 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments;
Figure 26B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the full opcode field according to one embodiment;
Figure 26C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the register index field according to one embodiment;
Figure 26D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the augmentation operation field according to one embodiment;
Figure 27 is a block diagram of a register architecture according to one embodiment;
Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments;
Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments;
Figures 29A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments;
Figure 29B is an expanded view of part of the processor core in Figure 29A according to
embodiments;
Figure 30 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments;
Figures 31-34 are block diagrams of exemplary computer architectures;
Figure 31 shows a block diagram of a system in accordance with one embodiment of the present invention;
Figure 32 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;
Figure 33 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;
Figure 34 is a block diagram of a System-on-a-Chip (SoC) in accordance with an embodiment of the present invention; and
Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

In many mainstream processors, handling matrices is a difficult and/or instruction intensive task. For example, rows of a matrix could be put into a plurality of packed data (e.g., SIMD or vector) registers and then operated on individually. For example, adding two 8x2 matrices may require a load or gather into four packed data registers depending upon data sizes. Then a first add of packed data registers corresponding to a first row from each matrix is performed and a second add of packed data registers corresponding to a second row from each matrix is performed. Then the resulting packed data registers are scattered back to memory. While for small matrices this scenario may be acceptable, it is often not acceptable with larger matrices.

DISCUSSION

Described herein are mechanisms to support matrix operations in computer hardware such as central processing units (CPUs), graphic processing units (GPUs), and accelerators. The matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory such as registers. Throughout this description, these 2-D data structures are referred to as tiles. Note that a matrix may be smaller than a tile (use less than all of a tile) or utilize a plurality of tiles (the matrix is larger than the size of any one tile).
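One plausible reading of the 8x2 example above can be sketched in a few lines; the lane width and register packing here are assumptions for illustration, with plain lists standing in for packed registers:

# Two 8x2 matrices (16 elements each) gathered into four hypothetical
# 8-lane packed registers (two per matrix), combined with two packed
# adds, then scattered back to an 8x2 layout.
def add_8x2(a, b):
    flat_a = [x for row in a for x in row]          # gather matrix A
    flat_b = [x for row in b for x in row]          # gather matrix B
    regs_a = [flat_a[:8], flat_a[8:]]               # two registers for A
    regs_b = [flat_b[:8], flat_b[8:]]               # two registers for B
    sums = [[x + y for x, y in zip(ra, rb)]         # first add, second add
            for ra, rb in zip(regs_a, regs_b)]
    flat_c = sums[0] + sums[1]
    return [flat_c[i:i + 2] for i in range(0, 16, 2)]  # scatter back

a = [[r, r + 1] for r in range(8)]
b = [[10, 20] for _ in range(8)]
print(add_8x2(a, b))

A single tile operation replaces all of this loading, adding, and scattering, which is the point of the tile instructions described next.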
Throughout the description, matrix (tile) language is used to indicate operations performed using tiles that impact a matrix; whether or not that matrix is larger than any one tile is not typically relevant.

Each tile may be acted upon by different operations such as those that are detailed herein and include, but are not limited to: matrix (tile) multiplication, tile add, tile subtract, tile diagonal, tile zero, tile transform, tile dot product, tile broadcast, tile row broadcast, tile column broadcast, tile multiplication, tile multiplication and accumulation, tile move, etc. Additionally, support for operators such as the use of a scale and/or bias may be used with these operations or in support of non-numeric applications in the future, for instance, OpenCL "local memory," data compression/decompression, etc. Also described herein are instructions for performing matrix (tile) floating-point multiply-accumulate (TILEFPFMA) operations.

Portions of storage (such as memory (non-volatile and volatile), registers, cache, etc.) are arranged into tiles of different horizontal and vertical dimensions. For example, a tile may have a horizontal dimension of 4 (e.g., four rows of a matrix) and a vertical dimension of 8 (e.g., 8 columns of the matrix). Typically, the horizontal dimension is related to element sizes (e.g., 2-, 4-, 8-, 16-, 32-, 64-, 128-bit, etc.). Multiple datatypes (single-precision floating-point, double-precision floating-point, integer, etc.) may be supported.

EXEMPLARY USAGE OF CONFIGURED TILES

In some embodiments, tile parameters can be configured. For example, a given tile may be configured to provide tile options. Exemplary tile options include but are not limited to: a number of rows of the tile, a number of columns of the tile, whether the tile is VALID, and whether the tile consists of a PAIR of equal-sized tiles.

Figure 1A illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 102 have stored thereon four 1 kB tiles, tile t0 104, tile t1 106, tile t2 108, and tile t3 110. In this example, the 4 tiles do not consist of pairs, and each have elements arranged in rows and columns. Tile t0 104 and tile t1 106 have K rows and N columns of 4-byte elements (e.g., single-precision data), where K equals 8 and N equals 32. Tile t2 108 and tile t3 110 have K rows and N/2 columns of 8-byte elements (e.g., double-precision data). As the double-precision operands are twice the width of single-precision, this configuration is consistent with a palette, used to provide tile options, supplying at least 4 names with total storage of at least 4 kB. In operation, the tiles can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available tiles, varies.

Figure 1B illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 122 have stored thereon 2 pairs of 1 kB tiles, the first pair being tile t4L 124 and tile t4R 126, and the second pair being tile t5L 128 and tile t5R 130. As shown, the pairs of tiles are divided into a left tile and a right tile. In other embodiments, the pair of tiles are divided into an even tile and an odd tile. In this example, the 4 tiles each have elements arranged in rows and columns. Tile t4L 124 and tile t4R 126 have K rows and N columns of 4-byte elements (e.g., single-precision floating-point data), where K equals 8 and N equals 32.
Tile t5L 128 and tile t5R 130 have K rows and N/2 columns of 8-byte elements (e.g., double-precision floating-point data). As the double-precision operands are twice the width of single-precision, this configuration is consistent with a palette, used to provide tile options, supplying at least 2 names with total storage of at least 4 kB. The four tiles of Figure 1A use 4 names, each naming a 1 kB tile, whereas the 2 pairs of tiles in Figure 1B can use 2 names to specify the paired tiles. In some embodiments, tile instructions accept a name of a paired tile as an operand. In operation, the tiles can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available tiles, varies.

In some embodiments, tile parameters are definable. For example, a "palette" is used to provide tile options. Exemplary options include, but are not limited to: the number of tile names, the number of bytes in a row of storage, the number of rows and columns in a tile, etc. For example, a maximum "height" (number of rows) of a tile may be defined as:

Tile Max Rows = Architected Storage / (The Number of Palette Names ∗ The Number of Bytes per row).

For instance, 16 kB of architected storage with a palette of 4 names and 64-byte rows would yield 16384 / (4 ∗ 64) = 64 maximum rows. As such, an application can be written such that a fixed usage of names will be able to take advantage of different storage sizes across implementations.

Configuration of tiles is done using a tile configuration ("TILECONFIG") instruction, where a particular tile usage is defined in a selected palette. This declaration includes the number of tile names to be used, the requested number of rows and columns per name (tile), and, in some embodiments, the requested datatype of each tile. In some embodiments, consistency checks are performed during the execution of a TILECONFIG instruction to determine that it matches the restrictions of the palette entry.

EXEMPLARY TILE STORAGE TYPES

Figure 2 illustrates several examples of matrix storage. In (A), a tile is stored in memory. As shown, each "row" consists of four packed data elements. To get to the next "row," a stride value is used. Note that rows may be consecutively stored in memory. Strided memory accesses allow for access from one row to the next when the tile storage does not map the underlying memory array row width.

Tile loads from memory and stores to memory are typically strided accesses from the application memory to packed rows of data. Exemplary TILELOAD and TILESTORE instructions, or other instruction references to application memory as a TILE operand in load-op instructions, are, in some embodiments, restartable to handle (up to) 2*rows of page faults, unmasked floating-point exceptions, and/or interrupts per instruction.

In (B), a matrix is stored in a tile comprised of a plurality of registers such as packed data registers (single instruction, multiple data (SIMD) or vector registers). In this example, the tile is overlaid on three physical registers. Typically, consecutive registers are used; however, this need not be the case.

In (C), a matrix is stored in a tile in non-register storage accessible to a fused multiply accumulate (FMA) circuit used in tile operations. This storage may be inside of or adjacent to an FMA. Additionally, in some embodiments, discussed below, the storage may be for a data element and not an entire row or tile.

The supported parameters for the TMMA architecture are reported via CPUID.
In some embodiments, the list of information includes a maximum height and a maximum SIMD dimension. Configuring the TMMA architecture requires specifying the dimensions for each tile, the element size for each tile, and the palette identifier. This configuration is done by executing the TILECONFIG instruction.

Successful execution of a TILECONFIG instruction enables subsequent TILE operators. A TILERELEASEALL instruction clears the tile configuration and disables the TILE operations (until the next TILECONFIG instruction executes). In some embodiments, XSAVE, XSTORE, etc. are used in context switching using tiles. In some embodiments, 2 XCR0 bits are used in XSAVE, one for TILECONFIG metadata and one bit corresponding to actual tile payload data.

TILECONFIG not only configures the tile usage, but also sets a state variable indicating that the program is in a region of code with tiles configured. An implementation may enumerate restrictions on other instructions that can be used with a tile region such as no usage of an existing register set, etc.

Exiting a tile region is typically done with the TILERELEASEALL instruction. It takes no parameters and swiftly invalidates all tiles (indicating that the data no longer needs any saving or restoring) and clears the internal state corresponding to being in a tile region.

In some embodiments, tile operations will zero any rows and any columns beyond the dimensions specified by the tile configuration. For example, tile operations will zero the data beyond the configured number of columns (factoring in the size of the elements) as each row is written. For example, with 64-byte rows and a tile configured with 10 rows and 12 columns, an operation writing FP32 elements would write each of the first 10 rows with 12∗4 bytes of output/result data and zero the remaining 4∗4 bytes in each row. Tile operations also fully zero any rows after the first 10 configured rows. When using a 1K tile with 64-byte rows, there would be 16 rows, so in this example, the last 6 rows would also be zeroed.

In some embodiments, a context restore instruction (e.g., XRSTOR), when loading data, enforces that the data beyond the configured rows for a tile will be maintained as zero. If there is no valid configuration, all rows are zeroed. XRSTOR of tile data can load garbage in the columns beyond those configured. It should not be possible for XRSTOR to clear beyond the number of columns configured because there is not an element width associated with the tile configuration.

Context save (e.g., XSAVE) exposes the entire TILE storage area when writing it to memory. If XRSTOR loaded garbage data into the rightmost part of a tile, that data will be saved by XSAVE. XSAVE will write zeros for rows beyond the number specified for each tile.

In some embodiments, tile instructions are restartable. The operations that access memory allow restart after page faults. The computational instructions that deal with floating-point operations also allow for unmasked floating-point exceptions, with the masking of the exceptions controlled by a control and/or status register.

To support restarting instructions after these events, the instructions store information in the start registers detailed below.

MATRIX (TILE) OPERATION SYSTEMS

EXEMPLARY HARDWARE SUPPORT

Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator.
In this illustration, a host processor/processing system 301 communicates commands 311 (e.g., matrix manipulation operations such as arithmetic or matrix manipulation operations, or load and store operations) to a matrix operations accelerator 307. However, it is shown this way for discussion purposes only. As detailed later, this accelerator 307 may be a part of a processing core. Typically, commands 311 that are tile manipulation operator instructions will refer to tiles in register-register ("reg-reg") or register-memory ("reg-mem") format. Other commands such as TILESTORE, TILELOAD, TILECONFIG, etc., do not perform data operations on a tile. Commands may be decoded instructions (e.g., micro-ops) or macro-instructions for the accelerator 307 to handle.

In this example, a coherent memory interface 303 is coupled to the host processor/processing system 301 and matrix operations accelerator 307 such that they can share memory. Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator. As shown in Figure 4, the host processor 401 and matrix operations accelerator circuitry 405 share the same memory 403. Figure 5 illustrates an embodiment where the host processor 501 and matrix operations accelerator 505 do not share memory but can access each other's memory. For example, processor 501 can access tile memory 507 and utilize its host memory 503 as normal. Similarly, the matrix operations accelerator 505 can access host memory 503, but more typically uses its own memory 507. Note these memories may be of different types.

In some embodiments, tiles are supported using an overlay over physical registers. For example, a tile may utilize 16 1,024-bit registers, 32 512-bit registers, etc., depending on the implementation. In some embodiments, the matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory such as registers. Throughout this description, these 2-D data structures are referred to as tiles or tile registers.

In some embodiments, the matrix operations accelerator 307 includes a plurality of FMAs 309 coupled to data buffers 305 (in some implementations, one or more of these buffers 305 are stored in the FMAs of the grid as shown). The data buffers 305 buffer tiles loaded from memory and/or tiles to be stored to memory (e.g., using a tileload or tilestore instruction). Data buffers may be, for example, a plurality of registers. Typically, these FMAs are arranged as a grid of chained FMAs 309 which are able to read and write tiles. In this example, the matrix operations accelerator 307 is to perform a matrix multiply operation using tiles T0, T1, and T2. At least one of the tiles is housed in the FMA grid 309. In some embodiments, all tiles in an operation are stored in the FMA grid 309. In other embodiments, only a subset is stored in the FMA grid 309. As shown, T1 is housed and T0 and T2 are not. Note that A, B, and C refer to the matrices of these tiles, which may or may not take up the entire space of the tile.

Figure 6 illustrates an embodiment of a matrix multiply-accumulate operation using tiles ("TMMA").

The number of rows in the matrix (TILE A 601) matches the number of serial (chained) FMAs comprising the computation's latency.
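One column of such a chained FMA grid can be pictured with a short functional model. The sketch below is illustrative only (the function name and argument layout are assumptions): each stage multiplies one element of A by the corresponding "broadcast" element of B and adds the incoming summand from the stage above, passing the sum down the chain; as noted next, an implementation may recirculate on a smaller grid without changing the computation.

    def chained_fma_column(a_elems, b_elems, c_in):
        # a_elems, b_elems: the K element pairs feeding the K chained FMA stages.
        # c_in: the incoming source/destination value from the C tile.
        summand = c_in
        for a, b in zip(a_elems, b_elems):
            summand = summand + a * b   # one FMA stage; sum passes to next stage
        return summand                  # final output of the column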
An implementation is free to recirculate on a grid of smaller height, but the computation remains the same.

The source/destination vector comes from a tile of N rows (TILE C 605) and the grid of FMAs 611 performs N vector-matrix operations resulting in a complete instruction performing a matrix multiplication of tiles. Tile B 603 is the other vector source and supplies "broadcast" terms to the FMAs in each stage.

In operation, in some embodiments, the elements of matrix B (stored in a tile B 603) are spread across the rectangular grid of FMAs. Matrix A (stored in tile A 601) has the elements of a row transformed to match up with the columnar dimension of the rectangular grid of FMAs. At each FMA in the grid, an element of A and B are multiplied and added to the incoming summand (from above in the Figure) and the outgoing sum is passed to the next row of FMAs (or the final output).

The latency of a single step is proportional to K (row height of matrix B), and dependent TMMAs typically have enough source-destination rows (either in a single tile or across tiles) to hide that latency. An implementation may also split the SIMD (packed data element) dimension M (row height of matrix A) across time steps, but this simply changes the constant that K is multiplied by. When a program specifies a smaller K than the maximum enumerated by the TMMA, an implementation is free to implement this with "masking" or "early outs."

The latency of an entire TMMA is proportional to N*K. The repeat rate is proportional to N. The number of MACs per TMMA instruction is N*K*M.

Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on signed sources wherein the accumulator is 2x the input data size.

A first signed source (source 1 701) and a second signed source (source 2 703) each have four packed data elements. Each of these packed data elements stores signed data such as floating-point data. A third signed source (source 3 709) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 701 and 703 are half that of the third signed source (initial value or previous result) 709. For example, the first and second signed sources 701 and 703 could have 32-bit packed data elements (e.g., single-precision floating-point) while the third signed source 709 could have 64-bit packed data elements (e.g., double-precision floating-point).

In this illustration, only the two most significant packed data element positions of the first and second signed sources 701 and 703 and the most significant packed data element position of the third signed source 709 are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 705, and the data from the second most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 707. In some embodiments, these multiplier circuits 705 and 707 are reused for other packed data element positions.
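Whether the multiplier circuits are reused serially or replicated for parallel operation (as noted in the next paragraph), the arithmetic of one such iteration is the same. A minimal functional sketch follows; it models only the most significant result position, and the function name and indexing are illustrative assumptions, not the disclosed circuit:

    def chained_fma_pair_iteration(src1, src2, src3):
        # src1, src2: four packed signed elements each (e.g., 32-bit FP).
        # src3: two packed signed elements at twice the width (e.g., 64-bit FP).
        p_hi = src1[3] * src2[3]    # multiplier circuit 705 (most significant pair)
        p_lo = src1[2] * src2[2]    # multiplier circuit 707 (next pair)
        s = p_hi + p_lo             # addition circuitry 711
        # adder 713 (or reuse of 711): accumulate with the most significant
        # element of the wider third source.
        return s + src3[1]

The returned value is either stored to the corresponding destination position or passed on to the next iteration, as described below.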
In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 709. The results of each of the multiplications are added using addition circuitry 711.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 709 (using a different adder 713 or the same adder 711).

Finally, the result of the second addition is either stored into the signed destination 715 in a packed data element position that corresponds to the packed data element position used from the signed third source 709 or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on signed sources wherein the accumulator is 2x the input data size.

A first signed source (source 1 801) and a second signed source (source 2 803) each have four packed data elements. Each of these packed data elements stores signed data such as integer data. A third signed source (source 3 809) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 801 and 803 are half that of the third signed source 809. For example, the first and second signed sources 801 and 803 could have 32-bit packed data elements (e.g., single-precision floating-point) while the third signed source 809 could have 64-bit packed data elements (e.g., double-precision floating-point).

In this illustration, only the two most significant packed data element positions of the first and second signed sources 801 and 803 and the most significant packed data element position of the third signed source 809 are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 801 and 803 are multiplied using a multiplier circuit 805, and the data from the second most significant packed data element positions of the first and second signed sources 801 and 803 are multiplied using a multiplier circuit 807. In some embodiments, these multiplier circuits 805 and 807 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source (initial value or previous iteration result) 809. The results of each of the multiplications are added to the signed third source 809 using addition/saturation circuitry 813.

Addition/saturation (accumulator) circuitry 813 preserves a sign of an operand when the addition results in a value that is too big. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the destination or next iteration.
When the accumulator 813 is floating-point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

Unsigned saturation means the output values are limited to a maximum unsigned number for that element width (all 1s). Signed saturation means a value is limited to be in the range between a minimum negative number and a maximum positive number for that element width (for bytes, for example, the range is from -128 (= -2^7) to 127 (= 2^7 - 1)).

The result of the addition and saturation check is stored into the signed result 815 in a packed data element position that corresponds to the packed data element position used from the signed third source 809 or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size.

A first signed source (source 1 901) and a second unsigned source (source 2 903) each have four packed data elements. Each of these packed data elements has data such as floating-point or integer data. A third signed source (initial value or result 915) has a packed data element which stores signed data. The sizes of the first and second sources 901 and 903 are a quarter that of the third signed source 915. For example, the first and second sources 901 and 903 could have 16-bit packed data elements (e.g., word) and the third signed source 915 could have 64-bit packed data elements (e.g., double-precision floating-point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first and second sources 901 and 903 and the most significant packed data element position of the third signed source 915 are shown. Of course, other packed data element positions would also be processed if there are any.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 905, data from the second most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 907, data from the third most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 909, and data from the least significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 911. In some embodiments, the signed packed data elements of the first source 901 are sign extended and the unsigned packed data elements of the second source 903 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 905-911 are reused for other packed data element positions.
In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 915. The results of each of the multiplications are added using addition circuitry 913.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 915 (using a different adder 917 or the same adder 913).

Finally, the result 919 of the second addition is either stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the signed third source 915 or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply-accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply-accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size.

A first signed source 1001 and a second unsigned source 1003 each have four packed data elements. Each of these packed data elements stores data such as floating-point or integer data. A third signed source 1015 (initial or previous result) has a packed data element, which stores signed data. The sizes of the first and second sources are a quarter that of the third signed source 1015 (initial or previous result). For example, the first and second sources could have 16-bit packed data elements (e.g., word) and the third signed source 1015 (initial or previous result) could have 64-bit packed data elements (e.g., double-precision floating-point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 and the most significant packed data element position of the third signed source 1015 are shown. Of course, other packed data element positions would also be processed if there are any.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1005, data from the second most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1007, data from the third most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1009, and data from the least significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1011. In some embodiments, the signed packed data elements of the first signed source 1001 are sign extended and the unsigned packed data elements of the second unsigned source 1003 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 1005-1011 are reused for other packed data element positions.
In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the third signed source 1015 (initial or previous result). The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the third signed source 1015 (initial or previous result) using adder/saturation 1013 circuitry.

Addition/saturation (accumulator) circuitry 1013 preserves a sign of an operand when the addition results in a value that is too big or too small for signed saturation. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the destination. When the accumulator 1013 is floating-point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

The result 1019 of the addition and saturation check is stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the third signed source 1015 (initial or previous result) or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 11 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment. Note the source (to the multipliers) and accumulator values may be signed or unsigned values. For an accumulator having 2X input sizes (in other words, the accumulator input value is twice the size of the packed data element sizes of the sources), table 1101 illustrates different configurations. For byte sized sources, the accumulator uses word or half-precision floating-point (HPFP) values that are 16-bit in size. For word sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. For SPFP or 32-bit integer sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size.

For an accumulator having 4X input sizes (in other words, the accumulator input value is four times the size of the packed data element sizes of the sources), table 1103 illustrates different configurations. For byte sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. For word sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size in some embodiments.

For an accumulator having 8X input sizes (in other words, the accumulator input value is eight times the size of the packed data element sizes of the sources), table 1105 illustrates a configuration. For byte sized sources, the accumulator uses 64-bit integer values.

As hinted at earlier, matrix operations circuitry may be included in a core, or as an external accelerator. Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry.
In this illustration, multiple entities are coupled with a ring interconnect 1245.

A plurality of cores, core 0 1201, core 1 1203, core 2 1205, and core N 1207, provide non-tile-based instruction support. In some embodiments, matrix operations circuitry 1251 is provided in a core 1203, and in other embodiments matrix operations circuitry 1211 and 1213 are accessible on the ring interconnect 1245.

Additionally, one or more memory controllers 1223-1225 are provided to communicate with memory 1233 and 1231 on behalf of the cores and/or matrix operations circuitry.

Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1303 performs branch predicting of instructions, decoding of instructions, and/or both from instructions stored in instruction storage 1301. For example, instructions detailed herein may be stored in instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals using microcode 1305. The branch prediction and decode circuitry 1303 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.

The branch prediction and decode circuitry 1303 is coupled to allocate/rename 1307 circuitry which is coupled, in some embodiments, to scheduler circuitry 1309. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1309 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler circuitry 1309 is coupled to, or includes, physical register file(s) 1315. Each of the physical register file(s) 1315 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc. In one embodiment, the physical register file(s) 1315 comprises vector registers circuitry, write mask registers circuitry, and scalar registers circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1315 is overlapped by a retirement circuit 1317 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).
The retirement circuit 1317 and the physical register file(s) 1315 are coupled to the execution circuitry 1311.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1311 is a set of one or more execution circuits, including scalar circuitry 1321, vector/SIMD circuitry 1323, and matrix operations circuitry 1327, as well as memory access circuitry 1325 to access cache 1313. The execution circuits perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scalar circuitry 1321 performs scalar operations, the vector/SIMD circuitry 1323 performs vector/SIMD operations, and matrix operations circuitry 1327 performs matrix (tile) operations detailed herein.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs fetch and length decoding stages; 2) the branch and decode circuitry 1303 performs a decode stage; 3) the allocate/rename 1307 circuitry performs an allocation stage and renaming stage; 4) the scheduler circuitry 1309 performs a schedule stage; 5) physical register file(s) (coupled to, or included in, the scheduler circuitry 1309 and allocate/rename 1307 circuitry) and a memory unit perform a register read/memory read stage, and the execution circuitry 1311 performs an execute stage; 6) a memory unit and the physical register file(s) unit(s) perform a write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file(s) unit(s) perform a commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 1390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1403 performs branch predicting of instructions, decoding of instructions, and/or both from instructions stored in instruction storage 1401. For example, instructions detailed herein may be stored in instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments at least some instructions are decoded into one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals using microcode 1405. The branch prediction and decode circuitry 1403 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.

The branch prediction and decode circuitry 1403 is coupled to allocate/rename 1407 circuitry which is coupled, in some embodiments, to scheduler circuitry 1409. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

The scheduler circuitry 1409 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler circuitry 1409 is coupled to, or includes, physical register file(s) 1415. Each of the physical register file(s) 1415 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc. In one embodiment, the physical register file(s) 1415 comprises vector registers circuitry, write mask registers circuitry, and scalar registers circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers.
The physical register file(s) 1415 is overlapped by a retirement circuit 1417 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuit 1417 and the physical register file(s) 1415 are coupled to the execution circuitry 1411.

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The execution circuitry 1411 includes a set of one or more execution circuits 1427 and a set of one or more memory access circuits 1425 to access cache 1413. The execution circuits 1427 perform matrix (tile) operations detailed herein.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs fetch and length decoding stages; 2) the branch and decode circuitry 1403 performs a decode stage; 3) the allocate/rename 1407 circuitry performs an allocation stage and renaming stage; 4) the scheduler circuitry 1409 performs a schedule stage; 5) physical register file(s) (coupled to, or included in, the scheduler circuitry 1409 and allocate/rename 1407 circuitry) and a memory unit perform a register read/memory read stage, and the execution circuitry 1411 performs an execute stage; 6) a memory unit and the physical register file(s) unit(s) perform a write back/memory write stage; 7) various units may be involved in the exception handling stage; and 8) a retirement unit and the physical register file(s) unit(s) perform a commit stage.

The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

LAYOUT

Throughout this description, data is expressed using row major data layout.
Column major users should translate the terms according to their orientation. Figure 15 illustrates an example of a matrix expressed in row major format and column major format. As shown, matrix A is a 2x3 matrix. When this matrix is stored in row major format, the data elements of a row are consecutive. When this matrix is stored in column major format, the data elements of a column are consecutive. It is a well-known property of matrices that A^T ∗ B^T = (BA)^T, where superscript T means transform. Reading column major data as row major data results in the matrix looking like the transform matrix.

In some embodiments, row-major semantics are utilized in hardware, and column major data is handled by swapping the operand order, with the result being a transform of the matrix; for subsequent column-major reads from memory, however, it is the correct, non-transformed matrix.

For example, if there are two column-major matrices to multiply:

    a b     g i k     ag+bh ai+bj ak+bl
    c d  ∗  h j l  =  cg+dh ci+dj ck+dl
    e f               eg+fh ei+fj ek+fl
    (3x2)   (2x3)     (3x3)

The input matrices would be stored in linear memory (column-major) as:

    a c e b d f

and

    g h i j k l.

Reading those matrices as row-major with dimensions 2x3 and 3x2, they would appear as:

    a c e        g h
    b d f   and  i j
                 k l

Swapping the order and matrix multiplying:

    g h     a c e     ag+bh cg+dh eg+fh
    i j  ∗  b d f  =  ai+bj ci+dj ei+fj
    k l               ak+bl ck+dl ek+fl

The transform matrix is out and can then be stored in row-major order:

    ag+bh cg+dh eg+fh
    ai+bj ci+dj ei+fj
    ak+bl ck+dl ek+fl

and used in subsequent column major computations, it is the correct untransformed matrix:

    ag+bh ai+bj ak+bl
    cg+dh ci+dj ck+dl
    eg+fh ei+fj ek+fl

Exemplary Usage

Figure 16 illustrates an example of usage of matrices (tiles). In this example, matrix C 1601 includes two tiles, matrix A 1603 includes one tile, and matrix B 1605 includes two tiles. This figure shows an example of the inner loop of an algorithm to compute a matrix multiplication. In this example, two result tiles, tmm0 and tmm1, from matrix C 1601 are used to accumulate the intermediate results. One tile from the matrix A 1603 (tmm2) is reused twice as it is multiplied by two tiles from matrix B 1605. Pointers are adjusted to load a new A matrix (tile) and two new B matrices (tiles) from the directions indicated by the arrows. An outer loop, not shown, adjusts the pointers for the C tiles.

The exemplary code as shown includes the usage of a tile configuration instruction and is executed to configure tile usage, load tiles, a loop to process the tiles, store tiles to memory, and release tile usage.

Figure 17 illustrates an embodiment of usage of matrices (tiles). At 1701, tile usage is configured. For example, a TILECONFIG instruction is executed to configure tile usage, including setting a number of rows and columns per tile. Typically, at least one matrix (tile) is loaded from memory at 1703. At least one matrix (tile) operation is performed at 1705 using the matrices (tiles). At 1707, at least one matrix (tile) is stored out to memory, and a context switch can occur at 1709.

EXEMPLARY CONFIGURATION

TILE CONFIGURATION HARDWARE SUPPORT

As discussed above, tile usage typically needs to be configured prior to use. For example, full usage of all rows and columns may not be needed. Not only does not configuring these rows and columns save power in some embodiments, but the configuration may be used to determine if an operation will generate an error.
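One such error check is dimension compatibility. A minimal sketch of a configuration-time validator follows; the function name and the error-reporting mechanism are illustrative assumptions, not the disclosed hardware behavior:

    def check_matmul_dims(a_rows, a_cols, b_rows, b_cols):
        # Multiplying an (a_rows x a_cols) matrix by a (b_rows x b_cols)
        # matrix requires the inner dimensions to match; otherwise the
        # operation is rejected (faulted).
        if a_cols != b_rows:
            raise ValueError("inner dimensions differ: %d vs %d" % (a_cols, b_rows))
        return (a_rows, b_cols)  # dimensions of the result matrix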
For example, a matrix multiplication of the form (N x M) ∗ (L x N) will typically not work if M and L are not the same.

Prior to using matrices using tiles, in some embodiments, tile support is to be configured. For example, how many rows and columns per tile, tiles that are to be used, etc. are configured. A TILECONFIG instruction is an improvement to a computer itself as it provides for support to configure the computer to use a matrix accelerator (either as a part of a processor core, or as an external device). In particular, an execution of the TILECONFIG instruction causes a configuration to be retrieved from memory and applied to matrix (tile) settings within a matrix accelerator.

TILE USAGE CONFIGURATION

Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment. A memory 1801 contains the tile description 1803 of the matrices (tiles) to be supported.

Instruction execution resources 1811 of a processor/core 1805 store aspects of a tile description 1803 into tile configurations 1817. The tile configurations 1817 include a palette table 1813 to detail what tiles for a palette are configured (the number of rows and columns in each tile) and a marking that matrix support is in use. In particular, instruction execution resources 1811 are configured to use tiles as specified by the tile configurations 1817. The instruction execution resources 1811 may also include a machine specific register or configuration register to indicate tile usage. Additional values such as in-use and start values are also set. The tile configurations 1817 utilize register(s) 1819 to store tile usage and configuration information.

Figure 19 illustrates an embodiment of a description of the matrices (tiles) to be supported. This is the description that is to be stored upon an execution of a STTILECFG instruction. In this example, each field is a byte. In byte [0], a palette ID 1901 is stored. The palette ID is used to index a palette table 1813 which stores, per palette ID, a number of bytes in a tile, and bytes per row of the tiles that are associated with this ID as defined by the configuration.

Byte 1 stores a value to be stored in a "startRow" register 1903 and byte 2 stores a value to be stored in a register, startP 1905. To support restarting instructions after break events such as those detailed above, the instructions store information in these registers. The startRow value indicates the row that should be used for restart. The startP value indicates the position within the row for store operations when pairs are used and, in some embodiments, indicates the lower half of the row (in the lower tile of a pair) or higher half of the row (in the higher tile of a pair). Generally, the position in the row (the column) is not needed.

With the exception of TILECONFIG and STTILECFG, successfully executing matrix (tile) instructions will set both startRow and startP to zero.

Any time an interrupted matrix (tile) instruction is not restarted, it is the responsibility of software to zero the startRow and startP values. For example, unmasked floating-point exception handlers might decide to finish the operation in software and change the program counter value to another instruction, usually the next instruction. In this case the software exception handler must zero the startRow and startP values in the exception presented to it by the operating system before resuming the program.
The operating system will subsequently reload those values using a restore instruction.

Byte 3 stores an indication of pairs (1 bit per tile) of tiles 1907.

Bytes 16-17 store the number of rows 1913 and columns 1915 for tile 0, bytes 18-19 store the number of rows and columns for tile 1, etc. In other words, each 2-byte group specifies a number of rows and columns for a tile. If a group of 2 bytes is not used to specify tile parameters, they should have the value zero. Specifying tile parameters for more tiles than the implementation limit or the palette limit results in a fault. Unconfigured tiles are set to an initial state with 0 rows, 0 columns.

Finally, the configuration in memory typically ends with an ending delineation such as all zeros for several consecutive bytes.

EXEMPLARY TILE AND TILE CONFIGURATION STORAGE

Figures 20(A)-(D) illustrate examples of register(s) 1819. Figure 20(A) illustrates a plurality of registers 1819. As shown, each tile (TMM0 2001 ... TMMN 2003) has a separate register, with each register storing a row and column size for that particular tile. StartP 2011 and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate tiles are configured for use.

Figure 20(B) illustrates a plurality of registers 1819. As shown, each tile has separate registers for its rows and columns. For example, TMM0 rows configuration 2021, TMM0 columns configuration 2023, StartP 2011, and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate tiles are configured for use.

Figure 20(C) illustrates a single register 1819. As shown, this register stores the tile configurations (rows and columns per tile) 2031, StartP 2011, and StartRow 2013 in a single register as packed data. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate tiles are configured for use.

Figure 20(D) illustrates a plurality of registers 1819. As shown, a single register stores tile configuration (rows and columns per tile) 2031. StartP and StartRow are stored in separate registers 2011 and 2013. One or more status registers 2015 are set (e.g., TILES_CONFIGURED = 1) to indicate tiles are configured for use.

Other combinations are contemplated, such as combining the start registers into a single register where they are shown separately, etc.

MATRIX (TILE) FLOATING-POINT MULTIPLY-ACCUMULATE (TILEFPFMA) INSTRUCTION

As mentioned above, a common operation performed in the context of machine learning is a matrix (tile) floating-point fused multiply-accumulate (FMA) instruction, be it in single-precision or double-precision. Improving the power and performance of FMA instructions is expected to improve the power and performance of applications that use those instructions, including machine learning training and inference applications.

Accordingly, disclosed methods and systems perform a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline that performs tile and matrix floating-point fused multiply-accumulations at a rate of one FMA per cycle. In other words, an interleaved pipeline as disclosed is to perform each FMA operation one cycle after its immediately preceding FMA operation has produced its source (with which to accumulate its generated product).
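To make the overlap concrete, the cycle structure of such an interleaved pipeline can be modeled in a few lines. The sketch below is a timing illustration only, not the disclosed circuit; the function name and the string labels are assumptions:

    def interleaved_fma_schedule(K):
        # Per cycle, the MULTIPLY for step k overlaps the ACCUMULATE for
        # step k-1, so K chained multiply-accumulates finish in K + 1
        # cycles rather than the 2*K cycles of a fully serialized pipeline.
        schedule = []
        for cycle in range(K + 1):
            ops = []
            if cycle < K:
                ops.append("MULTIPLY[%d]" % cycle)
            if cycle >= 1:
                ops.append("ACCUMULATE[%d]" % (cycle - 1))
            schedule.append((cycle, ops))
        return schedule

For K = 16, this model completes in 17 cycles, consistent with the 16 + 1 cycle figure cited for the interleaved pipeline below.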
This optimization allows easier scheduling of matrix (tile) multiplications (at least by allowing FMA operations to be scheduled one after the other without intervening overhead), better utilization of system FMA circuitry (at least by eliminating the need to buffer intermediate results), reduced power consumption (at least by requiring fewer instructions that are tightly packed to operate one after the other without intervening overhead), and reduced hardware costs (at least by reducing routing costs and the number of clocked elements (flops)).

The Matrix (Tile) Multiplication operation (i.e., TMUL) is given by Equation 1, where the dimensions of the A, B, and C matrices are given by: C[n, m], A[m, k], B[k, n]. In other words, the A matrix has M rows by K columns, the B matrix has K rows by N columns, and the C matrix has M rows by N columns.

Cij += Σ (n = 0 to k-1) Ain ∗ Bnj     (Equation 1)

Where K, M, and N can be any one of 1, 2, 3, 4, 8, and 16, subject, of course, to the maximum available matrix (tile) dimensions of the processor.

A basic, non-interleaved pipeline approach, in which one fused multiply-accumulate (FMA) instruction waits until execution of the previous FMA instruction is completed before it starts, implies long latencies and relatively low performance. Such an approach is illustrated and described with respect to Figure 21A.

Disclosed embodiments, on the other hand, execute a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline, by which the multiplication operation of the next FMA occurs in parallel with the accumulation operation of the current FMA operation. For example, given two FMA operations, disclosed embodiments perform (Accumulate 1) concurrently with (Multiply 2), the product of which is accumulated with the result of Accumulate 1. Such an approach, as applied in disclosed embodiments, is further illustrated and described with respect to Figures 21B, 22A-B, and 23. A format of the TILEFPFMA instruction is illustrated and described with respect to Figures 24, 25A-B, and 26A-C.

The interleaved pipeline enables dramatic TMUL latency reduction and better memory utilization.
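For reference, the arithmetic being scheduled is just the recurrence of Equation 1 above. Ignoring pipelining entirely, a naive functional model might look as follows (illustrative only; the name and loop structure are assumptions):

    def tmul_reference(A, B, C, M, K, N):
        # C[m][n] += sum over k of A[m][k] * B[k][n], per Equation 1.
        for m in range(M):
            for n in range(N):
                acc = C[m][n]
                for k in range(K):
                    acc += A[m][k] * B[k][n]
                C[m][n] = acc
        return C

The disclosed interleaved pipeline changes how the inner accumulation is scheduled, not what it computes.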
As a result, improved performance and power savings are achieved.

As illustrated and described herein, an embodiment of a processor performing a TILEFPFMA instruction includes decode circuitry to decode an instruction specifying locations of a M by K first source matrix, a K by N second source matrix, a M by N destination matrix, and an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K pipeline instances over K cycles, each pipeline instance including: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and element (K, N) of the second source matrix; concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, and, if rounding is determined to be required, causing a next pipeline instance to add a one; wherein the product, before the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum to a subsequent instance of the pipeline; and execution circuitry to execute the decoded instruction as per the opcode.

The EXPDIFF stage operates while the previous FP value is still unknown: it works in parallel with the mantissa calculation for the previous FP value (which is being calculated by the ADD-BYPASS stage of the previous pipeline instance).

This requires making the exponent independent of late mantissa adjustments. Disclosed embodiments address this requirement by not making the late mantissa adjustments at all.

Disclosed embodiments obviate the need for late adjustments using 1) no rounding, and 2) fuzzy-Jbit:

NO ROUNDING: No rounding means just determining whether rounding is required and, if so, causing the next add operation to add a one. This way, disclosed embodiments improve timing of the mantissa calculation as well as removing much of the hardware required for rounding. Moreover, since no rounding is actually done, no adjustment is required for mantissa and exponent.

FUZZY-JBIT LOCATION: Fuzzy-Jbit location, as used by disclosed embodiments, is an internal FP format for single accuracy. The fuzzy-Jbit format allows the j-bit to be in one of two positions: 23 or 24. This format increases the mantissa by 1 bit but allows avoidance of late adjustment according to the j-bit position of the final result.

Each pipeline instance further includes a second, ADD-BYPASS stage, during which the product is accumulated with the previous FP value, the accumulated sum to be stored to the element (M, N) of the destination matrix. Before performing the accumulation, the product generated during the MULTIPLY stage is to be brought into alignment by shifting its mantissa by the exponent difference. The pipeline instance is further to, concurrently during the ADD-BYPASS stage, bypass the accumulated sum for use by a subsequent instance of the pipeline.

A format of a TILEFPFMA instruction is further illustrated and described with respect to Figures 24, 25A-B, and 26A-D.

Figures 21A-B illustrate example embodiments of floating-point multiply-accumulate pipelines. Figure 21A illustrates a traditional floating-point multiply-accumulate pipeline. As shown, pipeline 2100 is to perform 16 fused multiply-accumulations (FMAs) in 32 cycles.
Figures 21A-B illustrate example embodiments of floating-point multiply-accumulate pipelines. Figure 21A illustrates a traditional floating-point multiply-accumulate pipeline. As shown, pipeline 2100 is to perform 16 fused multiply-accumulations (FMAs) in 32 cycles.

Disclosed embodiments, on the other hand, interleave the pipeline so as to effectively perform those calculations in about half the time. The interleaved pipeline of disclosed embodiments is further illustrated and described with respect to Figures 21B, 22A-B, and 23.

Figure 21B illustrates an interleaved matrix (tile) floating-point multiply-accumulate pipeline, according to some embodiments. As shown, pipeline 2150 is to perform 16 fused multiply-accumulations (FMAs) in 16 cycles, or about twice as fast as the non-interleaved pipeline of Figure 21A.

In operation, a processor performing a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. The processor performing the TILEFPFMA instruction further includes decode circuitry to decode the instruction, and execution circuitry to execute the instruction as per the opcode.

Specifically, execution circuitry performing the TILEFPFMA instruction, as further illustrated and described with respect to Figures 22B and 23, is to launch (or, in other words, initiate, implement, execute) K instances of a pipeline over K cycles, each instance of the pipeline comprising: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix; concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.

Advantageously, the interleaved pipeline of disclosed embodiments, as illustrated and described with respect to Figures 21B, 22A-B, and 23, performs the matrix multiplication operations called for by the TILEFPFMA instruction roughly twice as fast as the non-interleaved pipeline of Figure 21A. In particular, as shown, 16 FMA operations are completed in 16 + 1 cycles.

The execution circuitry of disclosed embodiments is further illustrated and described with respect to Figures 21A, 22A-B, 23, 28A-B, and 29A-B.
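The timing claim above (K FMAs completing in K + 1 cycles) can be sketched with a small Python model of the two overlapped stages; the function and variable names are illustrative:

```python
def interleaved_fma(a_elems, b_elems, c_prev):
    """Model the interleaved pipeline for one destination element.

    In each cycle, the MULTIPLY stage of instance k overlaps the
    ADD-BYPASS stage of instance k-1; the accumulated sum is bypassed
    forward rather than written back and re-read. K instances therefore
    drain in K + 1 cycles. This is purely a timing illustration.
    """
    K = len(a_elems)
    acc = c_prev
    product = None
    cycles = 0
    for k in range(K + 1):                       # K + 1 cycles to drain
        cycles += 1
        next_product = a_elems[k] * b_elems[k] if k < K else None  # MULTIPLY
        if product is not None:
            acc = acc + product                  # ADD-BYPASS (bypass to next instance)
        product = next_product
    return acc, cycles

acc, cycles = interleaved_fma([1.0] * 16, [2.0] * 16, 0.0)
print(acc, cycles)  # 32.0 17 -- 16 FMAs complete in 16 + 1 cycles
```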
EXEMPLARY EXECUTION

Figure 22A is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction according to some embodiments. A processing system 2200 performing a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. The processor is further to execute, using execution circuitry, the decoded instruction as per the opcode.

The term "launching" an instance of the pipeline, as used herein, means to initiate execution of the pipeline, to start execution of the pipeline, or simply to execute the pipeline. The processor performing the TILEFPFMA instruction further includes execution circuitry 2206 to execute the instruction as per the opcode. A format of a TILEFPFMA instruction 2201 is further illustrated and described with respect to Figures 24, 25A-B, and 26A-D.

More specifically, each instance of the pipeline includes, in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix. Concurrently, an EXPDIFF stage entails determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix. The pipeline, in a second, ADD-BYPASS stage, is to accumulate the product with the previous FP value and store the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference, and, concurrently in the ADD-BYPASS stage, to bypass the accumulated sum for use by a subsequent instance of the pipeline.

To illustrate execution of the TILEFPFMA instruction, Figure 22A illustrates an exemplary 4-row by 4-column first matrix (tile) 2202 and an exemplary 4-row by 4-column second matrix (tile) 2204. Also shown is execution circuitry 2206, illustrating the mathematical operations to be performed for each element (M, N) of the destination matrix (tile). Execution circuitry to execute TILEFPFMA instruction 2201 is further illustrated and described with respect to Figures 22B, 23, 28A-B, and 29A-B.

Figure 22B is a block diagram illustrating execution of a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction using an interleaved pipeline, according to some embodiments. As shown, a processor performing a matrix (tile) floating-point fused multiply-accumulate (TILEFPFMA) instruction is to decode, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix, and an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles. Each instance of the pipeline includes, in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix. Concurrently, in an EXPDIFF stage, the processor is to determine an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix. In a second, ADD-BYPASS stage, the processor is to accumulate the product with the previous FP value and store the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference, and, concurrently in the ADD-BYPASS stage, to bypass the accumulated sum for use by a subsequent instance of the pipeline.

A format of a TILEFPFMA instruction 2251 is further illustrated and described with respect to Figures 24, 25A-B, and 26A-D.
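The EXPDIFF-and-align step described above can be sketched as follows, with values held as (integer mantissa, exponent) pairs; signs, normalization, and the fuzzy-Jbit format are deliberately omitted, so this is a simplified model rather than the hardware's datapath:

```python
def align_and_add(prod_mant, prod_exp, acc_mant, acc_exp):
    """Align the product to the accumulator by the exponent difference,
    then add mantissas. Values are (integer mantissa, exponent) pairs;
    sign handling and renormalization are omitted for clarity.
    """
    diff = acc_exp - prod_exp                 # EXPDIFF stage
    if diff >= 0:
        prod_mant >>= diff                    # shift the product's mantissa right
        result_exp = acc_exp
    else:
        acc_mant >>= -diff                    # the accumulator is the smaller value
        result_exp = prod_exp
    return acc_mant + prod_mant, result_exp   # ADD stage

# 1.5 * 2^3 (= 12) accumulated with 1.0 * 2^5 (= 32); mantissas carry 8 fraction bits.
mant, exp = align_and_add(0b110000000, 3 - 8, 0b100000000, 5 - 8)
print(mant * 2.0 ** exp)  # 44.0
```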
The processor performing the TILEFPFMA instruction further includes decode circuitry (not shown) to decode the instruction, and execution circuitry to execute the instruction as per the opcode.

Also shown is an interleaved pipeline 2250 used to perform the TILEFPFMA instruction as illustrated in Figure 22A. According to disclosed embodiments, execution circuitry is to launch K instances of a pipeline over K cycles, each instance of the pipeline comprising: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix; concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix; in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.

Here, a first instance 2252 of the pipeline is launched at time t = 0. Pipeline 2252 is to retrieve and operate on specified instances of the A (first source), B (second source), and C (destination) matrices. Here, the EXPDIFF and ADD-BYPASS pipeline stages of the first executed instance 2252 of the pipeline receive and use the previous FP value of the element (M, N) of the destination matrix from its location as specified by the instruction, while the EXPDIFF and ADD-BYPASS pipeline stages of subsequent executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the ADD-BYPASS stage of the immediately preceding instance of the pipeline. Second, third, and fourth instances of the pipeline are launched at 2254, 2256, and 2258. Arcs 2262, 2264, and 2266 show element (M, N) of the destination matrix being bypassed from the first, second, and third instances of the pipeline to their respective following pipeline instances.

Advantageously, a processor performing the TILEFPFMA instruction as illustrated in Figure 22B is to complete execution of the K instances of the pipeline over K-plus-one cycles. Execution circuitry of disclosed embodiments is further illustrated and described with respect to Figures 21B, 28A-B, and 29A-B.

EXEMPLARY METHOD(S) OF EXECUTION

Figure 23 illustrates an embodiment of a processor executing a flow to process a matrix (tile) floating-point multiply-accumulate (TILEFPFMA) instruction. As shown, a processor is to perform an instruction 2301 specifying an opcode and locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix.

In some embodiments, a processor is to perform flow 2300. At 2303, the processor is to decode, using decode circuitry, an instruction specifying locations of an M by K first source matrix, a K by N second source matrix, and an M by N destination matrix. The instruction further specifies an opcode indicating that execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles.
Each instance of the pipeline is shown at 2305, showing that each instance of the pipeline includes a first, MULTIPLY stage, during which a product is generated of FP element (M, K) of the first source matrix and FP element (K, N) of the second source matrix. Concurrently, in an EXPDIFF stage, the processor is to determine an exponent difference between the product and a previous FP value of the element (M, N) of the destination matrix. The pipeline further includes a second, ADD-BYPASS stage, during which the product is accumulated with the previous FP value and stored in the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be aligned with the previous FP value by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, the processor is to bypass the accumulated sum for use by a subsequent instance of the pipeline.

In some embodiments, the processor at 2307 is to schedule execution of the instruction. Operation 2307 is optional, as indicated by its dashed border, insofar as it may occur at a different time, or not at all.

At 2309, the processor is to execute, using execution circuitry, the instruction as per the opcode.

In some embodiments, the processor at 2311 is to retire and commit the execution results. Operation 2311 is optional, as indicated by its dashed border, insofar as it may occur at a different time, or not at all.

Execution circuitry is further illustrated and described with respect to Figures 3-14. In some embodiments, execution circuitry is a matrix operations accelerator, such as that illustrated and described as accelerator 307 (Figure 3). In some embodiments, execution circuitry is a matrix operations circuit, such as matrix operations circuitry 405 (Figure 4), 505 (Figure 5), 1213 (Figure 12), or 1327 (Figure 13).

EXEMPLARY INSTRUCTION FORMAT(S)

Figure 24 is a block diagram illustrating a format of a TILEFPFMA instruction, according to some embodiments. As shown, TILEFPFMA instruction 2400 includes fields for specifying an opcode 2402, a destination location 2404, a first source matrix (tile) location 2406, a second source matrix (tile) location 2408, and a third source matrix (tile) location 2410. In some embodiments, the specified third source matrix (tile) location also serves as the destination location, which is therefore an optional field, as indicated by its dashed border.

TILEFPFMA instruction 2400 further includes several optional parameters to control the processor's behavior, including M 2412, K 2414, and N 2416; element size 2418 (crumb, nibble, byte, word, doubleword, or quadword); element format 2420 (packed or scalar single- or double-precision floating-point data and packed or scalar integer data); and mask 2422 (a multi-bit value with one bit per destination element, the bit controlling whether the destination element is to be updated or masked from being updated; masked destination elements are either zeroed or merged, as controlled by another instruction field or by a control register programmed by software).
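The fields of Figure 24 can be summarized schematically as follows; this is a descriptive sketch (the class name, field names, and defaults are illustrative, not an encoding):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TileFpFmaFields:
    """Fields of TILEFPFMA instruction 2400 (Figure 24), schematically.

    Optional modifiers left as None are resolved from implicit state
    (e.g., a tile configuration register) rather than from the encoding.
    """
    opcode: str                           # 2402, possibly with prefixes/suffixes
    first_source: str                     # 2406
    second_source: str                    # 2408
    third_source: Optional[str] = None    # 2410; may double as the destination
    destination: Optional[str] = None     # 2404; optional when 2410 serves
    m: Optional[int] = None               # 2412
    k: Optional[int] = None               # 2414
    n: Optional[int] = None               # 2416
    element_size: Optional[str] = None    # 2418: crumb/nibble/byte/word/...
    element_format: Optional[str] = None  # 2420
    mask: Optional[int] = None            # 2422: one bit per destination element

insn = TileFpFmaFields(opcode="TILEFPFMA*", first_source="tmm1",
                       second_source="tmm2", third_source="tmm0")
```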
Opcode 2402 is shown including an asterisk, which is to convey that additional prefixes and/or suffixes may be added to specify instruction behavior. One or more of instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 may be specified using prefixes or suffixes to opcode 2402.

In some embodiments, one or more of optional instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 are encoded in an immediate field (not shown) optionally included with the instruction 2400. In some embodiments, one or more of optional instruction modifiers 2412, 2414, 2416, 2418, 2420, and 2422 is specified via a configuration/status register (e.g., XTILECONFIG). In other words, when any one or more of optional modifiers 2412, 2414, 2416, 2418, 2420, and 2422 are not specified by the instruction, they sometimes use implicit parameters that are inherited from other parts of the tile architecture.

DETAILED EXEMPLARY SYSTEMS, PROCESSORS, AND EMULATION

Detailed herein are examples of hardware, software, etc. to execute the above described instructions. For example, what is described below details aspects of instruction execution, including various pipeline stages such as fetch, decode, schedule, execute, retire, etc.

INSTRUCTION SETS

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and Intel® Advanced Vector Extensions Programming Reference, October 2014).

EXEMPLARY INSTRUCTION FORMATS

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations).
While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments. Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments, while Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments. Specifically, a generic vector friendly instruction format 2500 is shown, for which are defined class A and class B instruction templates, both of which include no memory access 2505 instruction templates and memory access 2520 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

The class A instruction templates in Figure 25A include: 1) within the no memory access 2505 instruction templates there is shown a no memory access, full round control type operation 2510 instruction template and a no memory access, data transform type operation 2515 instruction template; and 2) within the memory access 2520 instruction templates there is shown a memory access, temporal 2525 instruction template and a memory access, non-temporal 2530 instruction template. The class B instruction templates in Figure 25B include: 1) within the no memory access 2505 instruction templates there is shown a no memory access, write mask control, partial round control type operation 2512 instruction template and a no memory access, write mask control, vsize type operation 2517 instruction template; and 2) within the memory access 2520 instruction templates there is shown a memory access, write mask control 2527 instruction template.

The generic vector friendly instruction format 2500 includes the following fields, listed below in the order illustrated in Figures 25A-25B.

Format field 2540 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams.
As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 2542 - its content distinguishes different base operations.

Register index field 2544 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, or may support up to two sources and one destination).

Modifier field 2546 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 2505 instruction templates and memory access 2520 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 2550 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 2568, an alpha field 2552, and a beta field 2554. The augmentation operation field 2550 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 2560 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 2562A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 2562B (note that the juxtaposition of displacement field 2562A directly over displacement factor field 2562B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored; hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 2574 (described later herein) and the data manipulation field 2554C.
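Putting the address-generation expressions above together, a minimal sketch of the effective-address computation, including the disp8*N scaling of the displacement factor field, might look like this (all values are illustrative):

```python
def effective_address(base, index, scale, disp8, n_bytes):
    """Compute base + 2**scale * index + disp8 * N (the disp8*N scheme).

    disp8 is the sign-extended 8-bit displacement factor; n_bytes (N) is
    the memory access size determined from the opcode at runtime. The
    hardware scales the factor to obtain the byte-wise address offset.
    """
    return base + (index << scale) + disp8 * n_bytes

# A disp8 of -2 with a 64-byte access reaches -128 bytes from base + index*4.
addr = effective_address(base=0x1000, index=4, scale=2, disp8=-2, n_bytes=64)
print(hex(addr))  # 0xf90
```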
The displacement field 2562A and the displacement factor field 2562B are optional in the sense that they are not used for the no memory access 2505 instruction templates, and/or different embodiments may implement only one or neither of the two.

Data element width field 2564 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 2570 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 2570 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the write mask field's 2570 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2570 content indirectly identifies the masking to be performed), alternative embodiments instead or in addition allow the write mask field's 2570 content to directly specify the masking to be performed.
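A minimal sketch of merging versus zeroing write masking over plain Python lists (which stand in for vector registers) follows; it is illustrative only:

```python
def apply_writemask(dest, result, mask_bits, zeroing):
    """Per-element write masking over a destination vector.

    Where the mask bit is 1, the result element is written; where it is 0,
    the destination element is either preserved (merging) or set to zero
    (zeroing), matching the two behaviors described above.
    """
    out = []
    for d, r, bit in zip(dest, result, mask_bits):
        if bit:
            out.append(r)
        else:
            out.append(0 if zeroing else d)
    return out

dest, result, k = [9, 9, 9, 9], [1, 2, 3, 4], [1, 0, 1, 0]
print(apply_writemask(dest, result, k, zeroing=False))  # [1, 9, 3, 9]
print(apply_writemask(dest, result, k, zeroing=True))   # [1, 0, 3, 0]
```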
Immediate field 2572 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate, and it is not present in instructions that do not use an immediate.

Class field 2568 - its content distinguishes between different classes of instructions. With reference to Figures 25A-B, the contents of this field select between class A and class B instructions. In Figures 25A-B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 2568A and class B 2568B for the class field 2568, respectively, in Figures 25A-B).

INSTRUCTION TEMPLATES OF CLASS A

In the case of the non-memory access 2505 instruction templates of class A, the alpha field 2552 is interpreted as an RS field 2552A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2552A.1 and data transform 2552A.2 are respectively specified for the no memory access, round type operation 2510 and the no memory access, data transform type operation 2515 instruction templates), while the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

NO MEMORY ACCESS INSTRUCTION TEMPLATES - FULL ROUND CONTROL TYPE OPERATION

In the no memory access full round control type operation 2510 instruction template, the beta field 2554 is interpreted as a round control field 2554A, whose content(s) provide static rounding. While in the described embodiments the round control field 2554A includes a suppress all floating-point exceptions (SAE) field 2556 and a round operation control field 2558, alternative embodiments may encode both these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 2558).

SAE field 2556 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 2556 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 2558 - its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2558 allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2558 content overrides that register value.

NO MEMORY ACCESS INSTRUCTION TEMPLATES - DATA TRANSFORM TYPE OPERATION

In the no memory access data transform type operation 2515 instruction template, the beta field 2554 is interpreted as a data transform field 2554B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 2520 instruction template of class A, the alpha field 2552 is interpreted as an eviction hint field 2552B, whose content distinguishes which one of the eviction hints is to be used (in Figure 25A, temporal 2552B.1 and non-temporal 2552B.2 are respectively specified for the memory access, temporal 2525 instruction template and the memory access, non-temporal 2530 instruction template), while the beta field 2554 is interpreted as a data manipulation field 2554C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination).
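Returning to the round operation control field 2558, the four rounding operations it selects among can be sketched as follows; rounding to integers here stands in for rounding to a given mantissa width:

```python
import math

def round_fp(x, mode):
    """The four per-instruction rounding operations, schematically:
    round-up, round-down, round-towards-zero, and round-to-nearest (even)."""
    if mode == "up":
        return math.ceil(x)
    if mode == "down":
        return math.floor(x)
    if mode == "toward-zero":
        return math.trunc(x)
    if mode == "nearest":
        return round(x)              # Python rounds exact halves to even
    raise ValueError(mode)

print([round_fp(2.5, m) for m in ("up", "down", "toward-zero", "nearest")])
# [3, 2, 2, 2]
```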
The memory access 2520 instruction templates include the scale field 2560 and, optionally, the displacement field 2562A or the displacement factor field 2562B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

MEMORY ACCESS INSTRUCTION TEMPLATES - TEMPORAL

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

MEMORY ACCESS INSTRUCTION TEMPLATES - NON-TEMPORAL

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

INSTRUCTION TEMPLATES OF CLASS B

In the case of the instruction templates of class B, the alpha field 2552 is interpreted as a write mask control (Z) field 2552C, whose content distinguishes whether the write masking controlled by the write mask field 2570 should be a merging or a zeroing.

In the case of the non-memory access 2505 instruction templates of class B, part of the beta field 2554 is interpreted as an RL field 2557A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2557A.1 and vector length (VSIZE) 2557A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2512 instruction template and the no memory access, write mask control, VSIZE type operation 2517 instruction template), while the rest of the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement factor field 2562B are not present.

In the no memory access, write mask control, partial round control type operation 2512 instruction template, the rest of the beta field 2554 is interpreted as a round operation field 2559A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 2559A - just as with round operation control field 2558, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2559A allows for the changing of the rounding mode on a per instruction basis.
In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2559A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 2517 instruction template, the rest of the beta field 2554 is interpreted as a vector length field 2559B, whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bytes).

In the case of a memory access 2520 instruction template of class B, part of the beta field 2554 is interpreted as a broadcast field 2557B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2554 is interpreted as the vector length field 2559B. The memory access 2520 instruction templates include the scale field 2560 and, optionally, the displacement field 2562A or the displacement factor field 2562B.

With regard to the generic vector friendly instruction format 2500, a full opcode field 2574 is shown including the format field 2540, the base operation field 2542, and the data element width field 2564. While one embodiment is shown where the full opcode field 2574 includes all of these fields, the full opcode field 2574 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 2574 provides the operation code (opcode).

The augmentation operation field 2550, the data element width field 2564, and the write mask field 2570 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high performance general purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments.
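One common software counterpart to such mixed class support, elaborated in the next paragraph, is control flow that probes the executing processor and dispatches to a matching routine; the probe and routine names below are hypothetical:

```python
def processor_supports_class_b():
    """Hypothetical feature probe; real code would query the processor
    (e.g., via CPUID-style feature enumeration) for the supported class."""
    return True

def kernel_class_a(data):
    return sum(data)   # stand-in for a routine built from class A instructions

def kernel_class_b(data):
    return sum(data)   # stand-in for a routine built from class B instructions

def run_kernel(data):
    # Control flow code selects the routine based on what the executing
    # processor supports, per the deployment options described below.
    if processor_supports_class_b():
        return kernel_class_b(data)
    return kernel_class_a(data)
```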
Programs written in a high level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

EXEMPLARY SPECIFIC VECTOR FRIENDLY INSTRUCTION FORMAT

Figure 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments. Figure 26A shows a specific vector friendly instruction format 2600 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 2600 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figures 25A-B into which the fields from Figure 26A map are illustrated.

It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 2600 in the context of the generic vector friendly instruction format 2500 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 2600 except where claimed. For example, the generic vector friendly instruction format 2500 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 2600 is shown as having fields of specific sizes. By way of specific example, while the data element width field 2564 is illustrated as a one-bit field in the specific vector friendly instruction format 2600, the invention is not so limited (that is, the generic vector friendly instruction format 2500 contemplates other sizes of the data element width field 2564).

The specific vector friendly instruction format 2600 includes the following fields, listed below in the order illustrated in Figure 26A.

EVEX Prefix 2602 (Bytes 0-3) - is encoded in a four-byte form.

Format Field 2540 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 2540, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment).

The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 2605 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B.
Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 2610 - this is the first part of the REX' field 2610 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other rrr from other fields.

Opcode map field 2615 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 2564 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 2620 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 2620 encodes the 4 low-order bits of the first source register specifier, stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 2568 Class field (EVEX Byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.
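Returning to the register-index composition described above (R'Rrrr), a minimal sketch of forming a 5-bit index from the inverted EVEX.R' and EVEX.R bits plus the 3-bit rrr field follows; the bit layout shown is an illustration of the scheme, not an encoder:

```python
def reg_index(evex_r_prime, evex_r, rrr):
    """Form a 5-bit register index R'Rrrr from EVEX bits plus MODRM.reg.

    EVEX.R' and EVEX.R are stored inverted (1s complement) in the prefix,
    so they are flipped before being concatenated above the 3-bit rrr.
    """
    return ((evex_r_prime ^ 1) << 4) | ((evex_r ^ 1) << 3) | rrr

# zmm0: both inverted bits encoded as 1, rrr = 000 -> index 0
print(reg_index(1, 1, 0b000))   # 0
# zmm25: inverted bits encoded as 0 select the upper bank, rrr = 001 -> 25
print(reg_index(0, 0, 0b001))   # 25
```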
Prefix encoding field 2625 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 2552 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 2554 (EVEX Byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 2610 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 2570 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment, the specific value EVEX.kkk = 000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 2630 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 2640 (Byte 5) includes MOD field 2642, Reg field 2644, and R/M field 2646. As previously described, the MOD field's 2642 content distinguishes between memory access and non-memory access operations. The role of Reg field 2644 can be summarized in two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 2646 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - as previously described, the content of SIB 2650 is used for memory address generation. SIB.xxx 2654 and SIB.bbb 2656 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 2562A (Bytes 7-10) - when MOD field 2642 contains 10, bytes 7-10 are the displacement field 2562A, and it works the same as the legacy 32-bit displacement (disp32), working at byte granularity.

Displacement factor field 2562B (Byte 7) - when MOD field 2642 contains 01, byte 7 is the displacement factor field 2562B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values: -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2562B is a reinterpretation of disp8; when using displacement factor field 2562B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N.
This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such a compressed displacement assumes that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2562B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2562B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 2572 operates as previously described.

FULL OPCODE FIELD

Figure 26B is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the full opcode field 2574 according to one embodiment. Specifically, the full opcode field 2574 includes the format field 2540, the base operation field 2542, and the data element width (W) field 2564. The base operation field 2542 includes the prefix encoding field 2625, the opcode map field 2615, and the real opcode field 2630.

Register Index Field

Figure 26C is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the register index field 2544 according to one embodiment. Specifically, the register index field 2544 includes the REX 2605 field, the REX' 2610 field, the MODR/M.reg field 2644, the MODR/M.r/m field 2646, the VVVV field 2620, the xxx field 2654, and the bbb field 2656.

AUGMENTATION OPERATION FIELD

Figure 26D is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the augmentation operation field 2550 according to one embodiment. When the class (U) field 2568 contains 0, it signifies EVEX.U0 (class A 2568A); when it contains 1, it signifies EVEX.U1 (class B 2568B). When U=0 and the MOD field 2642 contains 11 (signifying a no memory access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the rs field 2552A. When the rs field 2552A contains a 1 (round 2552A.1), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the round control field 2554A. The round control field 2554A includes a one-bit SAE field 2556 and a two-bit round operation field 2558. When the rs field 2552A contains a 0 (data transform 2552A.2), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2554B. When U=0 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2552B, and the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2554C.

When U=1, the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 2552C.
When U=1 and the MOD field 2642 contains 11 (signifying a no memory access operation), part of the beta field 2554 (EVEX Byte 3, bit [4] - S0) is interpreted as the RL field 2557A; when it contains a 1 (round 2557A.1), the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 2559A, while when the RL field 2557A contains a 0 (VSIZE 2557A.2), the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0) and the broadcast field 2557B (EVEX Byte 3, bit [4] - B).

EXEMPLARY REGISTER ARCHITECTURE

Figure 27 is a block diagram of a register architecture 2700 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 2710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 2600 operates on these overlaid register files as illustrated in the table below.

| Adjustable Vector Length | Class | Operations | Registers |
|---|---|---|---|
| Instruction templates that do not include the vector length field 2559B | A (Figure 25A; U=0) | 2510, 2515, 2525, 2530 | zmm registers (the vector length is 64 bytes) |
| Instruction templates that do not include the vector length field 2559B | B (Figure 25B; U=1) | 2512 | zmm registers (the vector length is 64 bytes) |
| Instruction templates that do include the vector length field 2559B | B (Figure 25B; U=1) | 2517, 2527 | zmm, ymm, or xmm registers (the vector length is 64, 32, or 16 bytes) depending on the vector length field 2559B |

In other words, the vector length field 2559B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 2559B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 2600 operate on packed or scalar single/double-precision floating-point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 2715 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 2715 are 16 bits in size. As previously described, in one embodiment, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 2725 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands.
These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating-point stack register file (x87 stack) 2745, on which is aliased the MMX packed integer flat register file 2750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

EXEMPLARY CORE ARCHITECTURES

IN-ORDER AND OUT-OF-ORDER CORE BLOCK DIAGRAM

Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments. Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments. The solid lined boxes in Figures 28A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 28A, a processor pipeline 2800 includes a fetch stage 2802, a length decode stage 2804, a decode stage 2806, an allocation stage 2808, a renaming stage 2810, a scheduling (also known as a dispatch or issue) stage 2812, a register read/memory read stage 2814, an execute stage 2816, a write back/memory write stage 2818, an exception handling stage 2822, and a commit stage 2824.

Figure 28B shows processor core 2890 including a front-end unit 2830 coupled to an execution engine unit 2850, both of which are coupled to a memory unit 2870. The core 2890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 2890 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front-end unit 2830 includes a branch prediction unit 2832 coupled to an instruction cache unit 2834, which is coupled to an instruction translation lookaside buffer (TLB) 2836, which is coupled to an instruction fetch unit 2838, which is coupled to a decode unit 2840. The decode unit 2840 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 2890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 2840 or otherwise within the front-end unit 2830). The decode unit 2840 is coupled to a rename/allocator unit 2852 in the execution engine unit 2850.

The execution engine unit 2850 includes the rename/allocator unit 2852 coupled to a retirement unit 2854 and a set of one or more scheduler unit(s) 2856. The scheduler unit(s) 2856 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 2856 is coupled to the physical register file(s) unit(s) 2858. Each of the physical register file(s) units 2858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 2858 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers.
The physical register file(s) unit(s) 2858 is overlapped by the retirement unit 2854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using a register maps and a pool of registers; etc.). The retirement unit 2854 and the physical register file(s) unit(s) 2858 are coupled to the execution cluster(s) 2860. The execution cluster(s) 2860 includes a set of one or more execution units 2862 and a set of one or more memory access units 2864. The execution units 2862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 2856, physical register file(s) unit(s) 2858, and execution cluster(s) 2860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating-point/packed integer/packed floating-point/vector integer/vector floating-point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.The set of memory access units 2864 is coupled to the memory unit 2870, which includes a data TLB unit 2872 coupled to a data cache unit 2874 coupled to a level 2 (L2) cache unit 2876. In one exemplary embodiment, the memory access units 2864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2872 in the memory unit 2870. The instruction cache unit 2834 is further coupled to a level 2 (L2) cache unit 2876 in the memory unit 2870. 
The L2 cache unit 2876 is coupled to one or more other levels of cache and eventually to a main memory.By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2800 as follows: 1) the instruction fetch 2838 performs the fetch and length decoding stages 2802 and 2804; 2) the decode unit 2840 performs the decode stage 2806; 3) the rename/allocator unit 2852 performs the allocation stage 2808 and renaming stage 2810; 4) the scheduler unit(s) 2856 performs the schedule stage 2812; 5) the physical register file(s) unit(s) 2858 and the memory unit 2870 perform the register read/memory read stage 2814; the execution cluster 2860 perform the execute stage 2816; 6) the memory unit 2870 and the physical register file(s) unit(s) 2858 perform the write back/memory write stage 2818; 7) various units may be involved in the exception handling stage 2822; and 8) the retirement unit 2854 and the physical register file(s) unit(s) 2858 perform the commit stage 2824.The core 2890 may support one or more instructions sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 2890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2834/2874 and a shared L2 cache unit 2876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.SPECIFIC EXEMPLARY IN-ORDER CORE ARCHITECTUREFigures 29A-Billustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. 
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2902 and with its local subset of the Level 2 (L2) cache 2904, according to embodiments. In one embodiment, an instruction decoder 2900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2906 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 2908 and a vector unit 2910 use separate register sets (respectively, scalar registers 2912 and vector registers 2914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 2906, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allow data to be transferred between the two register files without being written and read back).The local subset of the L2 cache 2904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 2904. Data read by a processor core is stored in its L2 cache subset 2904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.Figure 29B is an expanded view of part of the processor core in Figure 29A according to embodiments. Figure 29B includes an L1 data cache 2906A part of the L1 cache 2904, as well as more detail regarding the vector unit 2910 and the vector registers 2914. Specifically, the vector unit 2910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2928), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 2920, numeric conversion with numeric convert units 2922A-B, and replication with replication unit 2924 on the memory input. Write mask registers 2926 allow predicating resulting vector writes.Figure 30 is a block diagram of a processor 3000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments. 
The solid lined boxes in Figure 30 illustrate a processor 3000 with a single core 3002A, a system agent 3010, a set of one or more bus controller units 3016, while the optional addition of the dashed lined boxes illustrates an alternative processor 3000 with multiple cores 3002A-N, a set of one or more integrated memory controller unit(s) 3014 in the system agent unit 3010, and special purpose logic 3008.Thus, different implementations of the processor 3000 may include: 1) a CPU with the special purpose logic 3008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 3002A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 3002A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 3002A-N being a large number of general purpose in-order cores. Thus, the processor 3000 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 3000 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.The memory hierarchy includes one or more levels of cache within the cores, a set or one or more shared cache units 3006, and external memory (not shown) coupled to the set of integrated memory controller units 3014. The set of shared cache units 3006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 3012 interconnects the special purpose logic 3008 (integrated graphics logic is an example of and is also referred to herein as special purpose logic), the set of shared cache units 3006, and the system agent unit 3010/integrated memory controller unit(s) 3014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3006 and cores 3002A-N.In some embodiments, one or more of the cores 3002A-N are capable of multithreading. The system agent 3010 includes those components coordinating and operating cores 3002A-N. The system agent unit 3010 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 3002A-N and the special purpose logic 3008. The display unit is for driving one or more externally connected displays.The cores 3002A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 3002A-N may be capable of execution the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.EXEMPLARY COMPUTER ARCHITECTURESFigures 31-34 are block diagrams of exemplary computer architectures. 
Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.Referring now to Figure 31, shown is a block diagram of a system 3100 in accordance with one embodiment of the present invention. The system 3100 may include one or more processors 3110, 3115, which are coupled to a controller hub 3120. In one embodiment the controller hub 3120 includes a graphics memory controller hub (GMCH) 3190 and an Input/Output Hub (IOH) 3150 (which may be on separate chips); the GMCH 3190 includes memory and graphics controllers to which are coupled memory 3140 and a coprocessor 3145; the IOH 3150 couples input/output (I/O) devices 3160 to the GMCH 3190. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3140 and the coprocessor 3145 are coupled directly to the processor 3110, and the controller hub 3120 in a single chip with the IOH 3150.The optional nature of additional processors 3115 is denoted in Figure 31 with broken lines. Each processor 3110, 3115 may include one or more of the processing cores described herein and may be some version of the processor 3000.The memory 3140 may be, for example, dynamic random-access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3120 communicates with the processor(s) 3110, 3115 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 3195.In one embodiment, the coprocessor 3145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 3120 may include an integrated graphics accelerator.There can be a variety of differences between the physical resources 3110, 3115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.In one embodiment, the processor 3110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3145. Accordingly, the processor 3110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 3145. Coprocessor(s) 3145 accept and execute the received coprocessor instructions.Referring now to Figure 32, shown is a block diagram of a first more specific exemplary system 3200 in accordance with an embodiment of the present invention. 
As shown in Figure 32, multiprocessor system 3200 is a point-to-point interconnect system, and includes a first processor 3270 and a second processor 3280 coupled via a point-to-point interconnect 3250. Each of processors 3270 and 3280 may be some version of the processor 3000. In one embodiment, processors 3270 and 3280 are respectively processors 3110 and 3115, while coprocessor 3238 is coprocessor 3145. In another embodiment, processors 3270 and 3280 are respectively processor 3110 coprocessor 3145.Processors 3270 and 3280 are shown including integrated memory controller (IMC) units 3272 and 3282, respectively. Processor 3270 also includes as part of its bus controller units point-to-point (P-P) interfaces 3276 and 3278; similarly, second processor 3280 includes P-P interfaces 3286 and 3288. Processors 3270, 3280 may exchange information via a point-to-point (P-P) interface 3250 using P-P interface circuits 3278, 3288. As shown in Figure 32 , IMCs 3272 and 3282 couple the processors to respective memories, namely a memory 3232 and a memory 3234, which may be portions of main memory locally attached to the respective processors.Processors 3270, 3280 may each exchange information with a chipset 3290 via individual P-P interfaces 3252, 3254 using point to point interface circuits 3276, 3294, 3286, 3298. Chipset 3290 may optionally exchange information with the coprocessor 3238 via a high-performance interface 3292. In one embodiment, the coprocessor 3238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.A shared cache (not shown) may be included in either processor or outside of both processors yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.Chipset 3290 may be coupled to a first bus 3216 via an interface 3296. In one embodiment, first bus 3216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.As shown in Figure 32, various I/O devices 3214 may be coupled to first bus 3216, along with a bus bridge 3218 which couples first bus 3216 to a second bus 3220. In one embodiment, one or more additional processor(s) 3215, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 3216. In one embodiment, second bus 3220 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 3220 including, for example, a keyboard and/or mouse 3222, communication devices 3227 and a storage unit 3228 such as a disk drive or other mass storage device which may include instructions/code and data 3230, in one embodiment. Further, an audio I/O 3224 may be coupled to the second bus 3220. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 32, a system may implement a multi-drop bus or other such architecture.Referring now to Figure 33, shown is a block diagram of a second more specific exemplary system 3300 in accordance with an embodiment of the present invention. 
Like elements in Figures 32and33 bear like reference numerals, and certain aspects of Figure 32 have been omitted from Figure 33 in order to avoid obscuring other aspects of Figure 33.Figure 33 illustrates that the processors 3270, 3280 may include integrated memory and I/O control logic ("CL") 3372 and 3382, respectively. Thus, the CL 3372, 3382 include integrated memory controller units and include I/O control logic. Figure 33 illustrates that not only are the memories 3232, 3234 coupled to the CL 3372, 3382, but also that I/O devices 3314 are also coupled to the control logic 3372, 3382. Legacy I/O devices 3315 are coupled to the chipset 3290.Referring now to Figure 34, shown is a block diagram of a SoC 3400 in accordance with an embodiment of the present invention. Similar elements in Figure 30 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 34, an interconnect unit(s) 3402 is coupled to: an application processor 3410 which includes a set of one or more cores 3002A-N, which include cache units 3004A-N, and shared cache unit(s) 3006; a system agent unit 3010; a bus controller unit(s) 3016; an integrated memory controller unit(s) 3014; a set or one or more coprocessors 3420 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; an static random access memory (SRAM) unit 3430; a direct memory access (DMA) unit 3432; and a display unit 3440 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3420 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.Program code, such as code 3230 illustrated in Figure 32, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example; a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.The program code may be implemented in a high level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. 
Such representations, known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritable's (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.EMULATION (INCLUDING BINARY TRANSLATION, CODE MORPHING, ETC.)In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 35 shows a program in a high-level language 3502 may be compiled using an x86 compiler 3504 to generate x86 binary code 3506 that may be natively executed by a processor with at least one x86 instruction set core 3516. The processor with at least one x86 instruction set core 3516 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. 
The x86 compiler 3504 represents a compiler that is operable to generate x86 binary code 3506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 3516. Similarly, Figure 35 shows the program in the high level language 3502 may be compiled using an alternative instruction set compiler 3508 to generate alternative instruction set binary code 3510 that may be natively executed by a processor without at least one x86 instruction set core 3514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 3512 is used to convert the x86 binary code 3506 into code that may be natively executed by the processor without an x86 instruction set core 3514. This converted code is not likely to be the same as the alternative instruction set binary code 3510 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3506.FURTHER EXAMPLESExample 1 provides an exemplary processor including: decode circuitry to decode an instruction specifying locations of a M by K first source matrix, a K by N second source matrix, a M by N destination matrix, and an opcode indicating execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K pipeline instances over K cycles, each pipeline instance including: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and element (K, N) of the second source matrix, concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix, in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, and, if rounding is determined to be required, causing a next pipeline instance to add a one; wherein the product, before the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum to a subsequent instance of the pipeline; and execution circuitry to execute the decoded instruction as per the opcode.Example 2 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry is to complete execution of the K instances of the pipeline over K-plus-one cycles.Example 3 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry, during the MULTIPLY stage, is to perform rounding of the generated product, as necessary.Example 4 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry, during the ADD-BYPASS stage, is to perform saturation, as necessary, on the accumulated sum.Example 5 includes the substance of the exemplary processor of Example 1, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, 
and K is one of 1, 2, 3, 4, 8, and 16.Example 6 includes the substance of the exemplary processor of Example 1, wherein the first source, second source, and destination matrices are each located in one of a collection of vector registers of a register file, a collection of tile registers, and a plurality of memory locations representing a matrix.Example 7 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry saves a state after performing the K pipeline instances on each element (M,N) of the destination matrix, and, in the case of a fault, uses the saved state after recovering from the fault to continue execution.Example 8 includes the substance of the exemplary processor of Example 1, wherein the EXPDIFF and ADD-BYPASS pipeline stages of the first executed instance of the pipeline receive the previous FP value of the element (M, N) of the destination matrix from its location as specified by the instruction, and the EXPDIFF and ADD-BYPASS pipeline stages of subsequent executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the ADD-BYPASS stage of an immediately preceding instance of the pipeline.Example 9 includes the substance of the exemplary processor of Example 1, wherein the instruction further specifies a multibit writemask, each bit of which is to mask or otherwise to allow writing of a corresponding element (M, N) of the destination matrix.Example 10 includes the substance of the exemplary processor of Example 9, wherein each of the masked elements is to be either zeroed or merged.Example 11 provides an exemplary method to be performed by a processor, the method including: decoding, using decode circuitry, an instruction specifying locations of a M by K first source matrix, a K by N second source matrix, a M by N destination matrix, and an opcode indicating execution circuitry, for each floating-point (FP) element (M, N) of the destination matrix, is to launch K instances of a pipeline over K cycles, and executing, using execution circuitry, the decoded instruction as per the opcode; and wherein each instance of the pipeline includes: in a first, MULTIPLY stage, generating a product of FP element (M, K) of the first source matrix and a corresponding FP element (K, N) of the second source matrix, concurrently, in an EXPDIFF stage, determining an exponent difference between the product and a previous FP value of element (M, N) of the destination matrix, in a second, ADD-BYPASS stage, accumulating the product with the previous FP value and storing the accumulated sum to the element (M, N) of the destination matrix, wherein the product, before performing the accumulation, is to be brought into alignment by shifting its mantissa by the exponent difference; and concurrently, in the ADD-BYPASS stage, bypassing the accumulated sum for use by a subsequent instance of the pipeline.Example 12 includes the substance of the exemplary method of Example 11, wherein the execution circuitry is to complete execution of the K instances of the pipeline over K-plus-one cycles.Example 13 includes the substance of the exemplary method of Example 11, wherein the execution circuitry, during the MULTIPLY stage, is to perform rounding of the generated product, as necessary.Example 14 includes the substance of the exemplary method of Example 11, wherein the execution circuitry, during the ADD-BYPASS stage, is to perform saturation, as necessary, on the accumulated sum.Example 15 includes the 
substance of the exemplary method of Example 11, wherein M is one of 1, 2, 3, 4, 8, and 16, N is one of 1, 2, 3, 4, 8, and 16, and K is one of 1, 2, 3, 4, 8, and 116.Example 16 includes the substance of the exemplary method of Example 11, wherein the first source, second source, and destination matrices are each located in one of a collection of vector registers of a register file, a collection of tile registers, and a plurality of memory locations representing a matrix.Example 17 includes the substance of the exemplary method of Example 11, wherein the execution circuitry saves a state after performing the K pipeline instances on each element (M,N) of the destination matrix, and, in the case of a fault, uses the saved state after recovering from the fault to continue execution.Example 18 includes the substance of the exemplary method of Example 11, wherein the EXPDIFF and ADD-BYPASS pipeline stages of the first executed instance of the pipeline receive the previous FP value of the element (M, N) of the destination matrix from its location as specified by the instruction, and the EXPDIFF and ADD-BYPASS pipeline stages of subsequent executed instances of the pipeline receive the previous FP value of the element (M, N) of the destination matrix as a bypass from the ADD-BYPASS stage of an immediately preceding instance of the pipeline.Example 19 includes the substance of the exemplary method of Example 11, wherein the instruction further specifies a multibit writemask, each bit of which is to mask or otherwise to allow writing of a corresponding element (M, N) of the destination matrix.Example 20 includes the substance of the exemplary method of Example 19, wherein each of the masked elements is to be either zeroed or merged. |
A method used in forming a memory array comprising strings of memory cells comprises forming a stack comprising vertically-alternating first tiers and second tiers. The stack comprises laterally-spaced memory-block regions. Simultaneously, (a), (b), and (c) are formed, where (a): horizontally-elongated trenches into the stack laterally-between immediately-laterally-adjacent of the memory-block regions; (b): channel openings into the stack laterally-between the horizontally-elongated trenches; and (c): through-array-via (TAV) openings into the stack in a stair-step region. Intervening material is formed in the horizontally-elongated trenches, a channel-material string in individual of the channel openings, and conductive material in the TAV openings. Other aspects, including structure independent of method, are disclosed. |
24CLAIMS:1. A method used in forming a memory array comprising strings of memory cells, comprising: forming a stack comprising vertically-alternating first tiers and second tiers, the stack comprising laterally-spaced memory-block regions; simultaneously forming (a), (b), and (c), where,(a): horizontally-elongated trenches into the stack laterally-between immediately-laterally-adjacent of the memory-block regions;(b): channel openings into the stack laterally-between the horizontally-elongated trenches; and(c): through-array-via (TAV) openings into the stack in a stair-step region; and forming intervening material in the horizontally-elongated trenches, a channel-material string in individual of the channel openings, and conductive material in the TAV openings.2. The method of claim 1 comprising forming the intervening material, the channel-material strings, and the conductive material at different times relative one another; the channel-material strings being formed before the forming of the intervening material and before the forming of the conductive material.3. The method of claim 2 comprising forming the conductive material before forming the intervening material.4. The method of claim 2 comprising forming the conductive material after forming the intervening material.5. The method of claim 1 comprising forming the intervening material, the channel-material strings, and the conductive material at different times relative one another; the conductive material being formed before the forming of the intervening material and before the forming of the channel-material strings.6. The method of claim 5 comprising forming the channelmaterial strings before forming the intervening material.7. The method of claim 5 comprising forming the channelmaterial strings after forming the intervening material.8. The method of claim 1 comprising forming the intervening material, the channel-material strings, and the conductive material at different times relative one another; the intervening material being formed before the forming of the conductive material and before the forming of the channel-material strings.9. The method of claim 8 comprising forming the channelmaterial strings before forming the conductive material.10. The method of claim 8 comprising forming the channelmaterial strings after forming the conductive material.11. The method of claim 1 wherein the conductive material in the TAV openings comprises TAV structures extending through the first tiers and the second tiers, individual of the TAV structures comprising an upper portion above and joined with a lower portion, the individual TAV structures comprising at least one external jog surface in a vertical cross-section where the upper and lower portions join.12. The method of claim 11 wherein the individual TAV structures have external sidewall surfaces that are straight through multiple of the first tiers and multiple of the second tiers in the vertical cross-section above and below the at least one external jog surface.13. A method used in forming a memory array comprising strings of memory cells, comprising: forming a lower stack comprising vertically-alternating lower first tiers and lower second tiers, the lower stack comprising laterally-spaced memory-block regions; simultaneously forming (a), (b), and (c), where,
(a): horizontally-elongated lower trenches into the lower stack laterally-between immediately-laterally-adjacent of the memory-block regions;(b): lower channel openings into the lower stack laterally-between the horizontally-elongated lower trenches; and(c): lower through-array-via (TAV) openings into the lower stack in a stair-step region; forming first sacrificial material in that which was formed by the (a), the (b), and the (c); forming an upper stack directly above the lower stack and the first sacrificial material, the upper stack comprising vertically-alternating upper first tiers and upper second tiers, the upper stack comprising the laterally-spaced memory-block regions; forming (d), (e), and (f), where,(d): horizontally-elongated upper trenches into the upper stack laterally-between immediately-laterally-adjacent of the memory-block regions, individual of the horizontally- elongated upper trenches extending to the first sacrificial material in individual of the horizontally-elongated lower trenches;(e): upper channel openings into the upper stack laterally-between the horizontally-elongated upper trenches, individual of the upper channel openings extending to the first sacrificial material in individual of the lower channel openings; and(f): upper TAV openings into the upper stack in the stair-step region, individual of the upper TAV openings extending to the first sacrificial material in individual of the lower TAV openings; forming second sacrificial material in that which was formed by the (d), the (e), and the (f);
27 removing the first and second sacrificial materials to form upwardly- open vertically-extended trenches, upwardly-open vertically-extended channel openings, and upwardly-open vertically-extended TAV openings; and forming intervening material in the upwardly-open vertically- extended horizontally-elongated trenches, a channel-material string in individual of the upwardly-open vertically-extended channel openings, and conductive material in the upwardly-open vertically-extended TAV openings.14. The method of claim 13 wherein the (d), the (e), and the (f) are formed simultaneously.15. The method of claim 13 wherein the (d), the (e), and the (f) are not formed simultaneously.16. The method of claim 15 comprising forming the (e) before the forming of the (d) and the (f).17. The method of claim 16 comprising forming the (f) before the forming of the (d).18. The method of claim 1 wherein the conductive material in the extended TAV openings comprises TAV structures extending through the upper first tiers, the upper second tiers, the lower first tiers and the lower second tiers; individual of the TAV structures comprising at least one external jog surface in a vertical cross-section where the upper and lower stacks join.19. The method of claim 18, wherein the individual TAV structures have external sidewall surfaces that are straight through multiple of the upper first tiers and multiple of the upper second tiers in the vertical cross-section above the at least one external jog surface;
28 forming the channel-material strings to comprise part of channel- material-string structures that extending through the first tiers and the second insulative tiers, individual of the channel-material-string structures comprising an upper portion above and joined with a lower portion, the individual channel-material-string structures comprising at least one external jog surface in the vertical cross-section where the upper and lower portions of the individual channel-material-string structures join, the individual channel-material-string structures have external sidewall surfaces that are straight through multiple of the first tiers and multiple of the second tiers in the vertical cross-section above and below its at least one external jog surface; and horizontally-elongated walls being laterally-between immediately- laterally-adjacent of the memory-block regions, individual of the horizontally-elongated walls comprising an upper portion above and joined with a lower portion, the individual walls comprising at least one external jog surface in the vertical cross-section where the upper and lower portions of the horizontally-elongated walls join, the individual horizontally- elongated walls have external sidewall surfaces that are straight through multiple of the first tiers and multiple of the second tiers in the vertical cross-section above and below its at least one external jog surface.20. A memory array comprising strings of memory cells, comprising: laterally-spaced memory blocks individually comprising a vertical stack comprising alternating insulative tiers and conductive tiers, channel- material-string structures of memory cells extending through the insulative tiers and the conductive tiers; and through-array-via (TAV) structures extending through the insulative tiers and the conductive tiers, individual of the TAV structures comprising an upper portion above and joined with a lower portion, the individual TAV structures comprising at least one external jog surface in a vertical crosssection where the upper and lower portions join.
2921. The memory array of claim 20 wherein the individual TAV structures have external sidewall surfaces that are straight through multiple of the insulative tiers and multiple of the conductive tiers in the vertical cross-section above and below the at least one external jog surface.22. The memory array of claim 20 wherein the at least one jog surface includes a part that is horizontal.23. The memory array of claim 22 wherein the part is exactly horizontal.24. The memory array of claim 22 wherein the individual TAV structures have external sidewall surfaces that are straight through multiple of the insulative tiers and multiple of the conductive tiers in the vertical cross-section above and below the at least one external jog surface.25. The memory array of claim 24 wherein the part is exactly horizontal.26. The memory array of claim 20 comprising NAND.27. A memory array comprising strings of memory cells, comprising: laterally-spaced memory blocks individually comprising a vertical stack comprising alternating insulative tiers and conductive tiers, channel- material-string structures of memory cells extending through the insulative tiers and the conductive tiers; through-array-via (TAV) structures extending through the insulative tiers and the conductive tiers, individual of the TAV structures comprising an upper portion above and joined with a lower portion, the individual TAV structures comprising at least one external jog surface in a vertical crosssection where the upper and lower portions of the individual TAV structures join;
30 channel-material-string structures extending through the insulative tiers and the conductive tiers, individual of the channel-material-string structures comprising an upper portion above and joined with a lower portion, the individual channel-material-string structures comprising at least one external jog surface in the vertical cross-section where the upper and lower portions of the individual channel-material-string structures join; and horizontally-elongated walls laterally-between immediately-laterally- adjacent of the memory blocks, individual of the horizontally-elongated walls comprising an upper portion above and joined with a lower portion, the individual walls comprising at least one external jog surface in the vertical cross-section where the upper and lower portions of the horizontally-elongated walls join.28. The memory array of claim 27 wherein, the individual TAV structures have external sidewall surfaces that are straight through multiple of the insulative tiers and multiple of the conductive tiers in the vertical cross-section above and below its at least one external jog surface; the individual channel-material-string structures have external sidewall surfaces that are straight through multiple of the insulative tiers and multiple of the conductive tiers in the vertical cross-section above and below its at least one external jog surface; and the individual horizontally-elongated walls have external sidewall surfaces that are straight through multiple of the insulative tiers and multiple of the conductive tiers in the vertical cross-section above and below its at least one external jog surface.29. The memory array of claim 27 comprising NAND. |
MEMORY ARRAY COMPRISING STRINGS OF MEMORY CELLS AND METHOD USED IN FORMING A MEMORY ARRAY COMPRISING STRINGS OF MEMORY CELLSTECHNICAL FIELDEmbodiments disclosed herein pertain to memory arrays comprising strings of memory cells and to methods used in forming a memory array comprising strings of memory cells.BACKGROUNDMemory is one type of integrated circuitry and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digitlines (which may also be referred to as bitlines, data lines, or sense lines) and access lines (which may also be referred to as wordlines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line.Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a “0” or a “1 ”. In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between. A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator. Application of a suitable voltage to the gate allows
current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region as part of the gate construction between the gate insulator and the conductive gate.Flash memory is one type of memory and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features.NAND may be a basic architecture of integrated flash memory. A NAND cell unit comprises at least one selecting device coupled in series to a serial combination of memory cells (with the serial combination commonly being referred to as a NAND string). NAND architecture may be configured in a three-dimensional arrangement comprising vertically-stacked memory cells individually comprising a reversibly programmable vertical transistor. Control or other circuitry may be formed below the vertically-stacked memory cells. Other volatile or non-volatile memory array architectures may also comprise vertically-stacked memory cells that individually comprise a transistor.Memory arrays may be arranged in memory pages, memory blocks and partial blocks (e.g., sub-blocks), and memory planes, for example as shown and described in any of U.S. Patent Application Publication Nos. 2015/0228651 , 2016/0267984, and 2017/0140833. The memory blocks may at least in part define longitudinal outlines of individual wordlines in individual wordline tiers of vertically-stacked memory cells. Connections to these wordlines may occur in a so-called “stair-step structure” at an end or edge of an array of the vertically-stacked memory cells. The stair-step structure includes individual “stairs” (alternately termed “steps” or “stairsteps”) that define contact regions of the individual wordlines upon which
elevationally-extending conductive vias contact to provide electrical access to the wordlines.BRIEF DESCRIPTION OF THE DRAWINGSFig. 1 is a diagrammatic cross-sectional view of a portion of a substrate in process in accordance with an embodiment of the invention and is taken through line 1 - 1 in Fig. 2.Fig. 2 is a diagrammatic cross-sectional view taken through line 2-2 in Figs. 1 and 3.Fig. 3 is a diagrammatic cross-sectional view taken through line 3-3 in Fig. 2 and 4.Fig. 4 is a diagrammatic cross-sectional view taken through line 4-4 in Fig. 1 and 3Figs. 5-38 are diagrammatic sequential sectional, expanded, enlarged, and/or partial views of the construction of Figs. 1 -4, or portions thereof, in process in accordance with some embodiments of the invention.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTSEmbodiments of the invention encompass methods used in forming a memory array, for example an array of NAND or other memory cells that may have at least some peripheral control circuitry under the array (e.g., CMOS-under-array). Embodiments of the invention encompass so-called “gate-last” or “replacement-gate” processing, so-called “gate-first” processing, and other processing whether existing or future-developed independent of when transistor gates are formed. Embodiments of the invention also encompass a memory array (e.g., NAND architecture) independent of method of manufacture. First example method embodiments are described with reference to Figs. 1-38 which may be considered as a “gate-last” or “replacement-gate” process, and starting with Figs. 1 -4.Figs. 1 -4 show a construction 10 having an array or array area 12 in which elevationally-extending strings of transistors and/or memory cells will be formed. Construction 10 comprises a base substrate 11 having any one or more of conductive/conductor/conducting, semiconductive/semiconductor/semiconducting, or insulative/insulator/insulating (i.e., electrically herein) materials. Various
materials have been formed elevationally over base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Figs. 1 - 4-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within an array (e.g., array 12) of elevationally- extending strings of memory cells may also be fabricated and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another. In this document, a “sub-array” may also be considered as an array.In one embodiment, construction 10 comprises a stair-step region 15 in which a stair-step structure (not shown) may be formed, for example that may be in an end area of array 12 and away from area of array 12 in which the elevationally-extending strings of transistors and/or memory cells will be formed. Stair-step region 15 may or may not be considered as part of array 12. By way of example only, example stair-step region 15 is diagrammatically shown as having islands 80 and circumferentially about which insulator material 81 (e.g., HfOx, A1OX) is received. Islands 80 may be formed in one or more areas in which steps, stair-step-flight crests, and/or stair-step-flight landings will be formed (none of such being shown).In some embodiments and as shown, a conductor tier 16 comprising conductor material 17 has been formed above substrate 11. As an example, conductor material 17 comprises upper conductor material 43 (e.g., n-type or p-type conductively-doped polysilicon) directly above (e.g., directly against) lower conductor material 44 (e.g., WSix) of different composition from upper conductor material 43. Conductor tier 16 may comprise part of control circuitry (e.g., peripheral-under-array circuitry and/or a common source line or plate) used to control read and write access to the transistors and/or memory cells that will be formed within array 12.In some embodiments, conductor tier 16 may be considered as being part of a lower stack 18L comprising vertically-alternating lower insulative tiers 20L and lower conductive tiers 22L. Example lower stack 18L comprises laterally-spaced memory-block regions 58 that will comprise laterally-spaced memory blocks 58 in a finished circuitry construction. In
this document, “block” is generic to include “sub-block”. Memory-block regions 58 and resultant memory blocks 58 (not yet shown) may be considered as being longitudinally elongated horizontally, for example along a direction 55. Memory-block regions 58 may not be discernable prior to the processing shown by Figs. 1-4.Example thickness for each of lower tiers 20L and 22L is 22 to 60 nanometers. Only a small number of lower tiers 20L and 22L is shown, with more likely lower stack 18L comprising dozens, a hundred or more, etc. of lower tiers 20L and 22L. Other circuitry that may or may not be part of peripheral and/or control circuitry may be between conductor tier 16 and lower stack 18L. For example, multiple vertically-alternating tiers of conductive material and insulative material of such circuitry may be below a lowest of lower conductive tiers 22L and/or above an uppermost of lower conductive tiers 22L. For example, one or more select gate tiers (not shown) or dummy tiers (not shown) may be between conductor tier 16 and the lowest conductive tier 22E and one or more select gate tiers (not shown) or dummy tiers (not shown) may be above an uppermost of lower conductive tiers 22E. Alternately or additionally, at least one of the depicted lowest conductive tiers 22E may be a select gate tier. Regardless, lower conductive tiers 22E (alternately referred to as lower first tiers) may not comprise conducting material and lower insulative tiers 20E (alternately referred to as lower second tiers) may not comprise insulative material or be insulative at this point in processing in conjunction with the hereby initially-described example method embodiment which is “gate-last” or “replacement-gate”. Example lower conductive tiers 22E comprise first material 26 (e.g., silicon nitride) which may be wholly or partially sacrificial. Example lower insulative tiers 20L comprise second material 24 (e.g., silicon dioxide) that is of different composition from that of first material 26 and which may be wholly or partially sacrificial.In one embodiment and as shown, a lowest lower second tier 20Lz of lower stack 18L is directly above (e.g., directly against) conductor material 17. Tier 20Lz may be sacrificial. A lowest lower first tier 22Lz of lower stack 18L is directly above (e.g., directly against) tier 20Lz and comprises sacrificial material 77. Example sacrificial materials 77 include silicon nitride and doped or undoped polysilicon. In this document, “undoped
polysilicon" is polysilicon having from 0 atoms/cm³ to 1 × 10¹² atoms/cm³ of atoms of conductivity-increasing impurity. "Doped polysilicon" is polysilicon that has more than 1 × 10¹² atoms/cm³ of atoms of conductivity-increasing impurity, and "conductively-doped polysilicon" is polysilicon that has at least 1 × 10¹⁸ atoms/cm³ of atoms of conductivity-increasing impurity.
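Purely as an editorial illustration, and not part of the patent disclosure, the dopant-concentration thresholds just defined can be expressed as a small classifier; the function name, and the choice to return the most specific label, are assumptions:

    # Illustrative sketch only. Thresholds (atoms of conductivity-increasing
    # impurity per cm^3) are taken from the definitions above; treating
    # "conductively-doped" as the most specific label is an editorial assumption.
    def classify_polysilicon(dopant_atoms_per_cm3: float) -> str:
        if dopant_atoms_per_cm3 >= 1e18:
            return "conductively-doped polysilicon"  # at least 1e18 atoms/cm^3
        if dopant_atoms_per_cm3 > 1e12:
            return "doped polysilicon"               # more than 1e12 atoms/cm^3
        return "undoped polysilicon"                 # 0 to 1e12 atoms/cm^3

    assert classify_polysilicon(5e11) == "undoped polysilicon"
    assert classify_polysilicon(1e15) == "doped polysilicon"
    assert classify_polysilicon(2e18) == "conductively-doped polysilicon"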
In one embodiment, a next-lowest lower second tier 20Lx is directly above tier 20Lz, and a conducting-material tier 21 comprising conducting material 47 (e.g., conductively-doped polysilicon) is directly above tier 20Lx.

Processing with respect to (a), (b), and (c) has occurred simultaneously, where,

(a): forming horizontally-elongated lower trenches 40L into lower stack 18L laterally-between immediately-laterally-adjacent memory-block regions 58;

(b): forming lower channel openings 25L into lower stack 18L laterally-between horizontally-elongated lower trenches 40L; and

(c): forming lower through-array-via (TAV) openings 31L into the lower stack 18L in stair-step region 15.

Such may occur, for example, using photolithographic patterning and etch, and may include pitch multiplication. Sacrificial horizontally-elongated lines 13 may have been previously formed in conducting-material tier 21 (and in one or more tiers there-below, or not). Example sacrificial lines 13 are individually between immediately-laterally-adjacent memory-block regions 58, and to which horizontally-elongated lower trenches 40L have been formed. Sacrificial pillars 60 may also be formed and to which lower channel openings 25L have been formed. By way of example and for brevity only, pillars 60 and lower channel openings 25L are shown as being arranged in groups or columns of staggered rows of four and five per row. In one embodiment, pillars 60 and lines 13 comprise second sacrificial material 75.

Referring to Figs. 5 and 6, first sacrificial material 33L has been formed in that which was formed by the (a), the (b), and the (c) (e.g., 40L, 25L, and 31L). Optional lines 13 (not shown) and pillars 60 (not shown) have been removed prior to forming first sacrificial material 33L. First sacrificial material 33L may be of any composition and is ideally of a composition that may be etched selectively relative to materials 24, 26, and 81.

Referring to Figs. 7-9, an upper stack 18U comprising vertically-alternating upper insulative tiers 20U (alternately referred to as upper second tiers) and upper conductive tiers 22U (alternately referred to as upper first tiers) has been formed directly above lower stack 18L, with upper and lower stacks 18U and 18L collectively comprising memory-block regions 58. Upper insulative tiers 20U and upper conductive tiers 22U may have any of the attributes described above with respect to lower insulative tiers 20L and lower conductive tiers 22L. Example upper insulative tiers 20U are shown as comprising second material 24 and upper conductive tiers 22U are shown as comprising first material 26, although other compositions may of course be used and not necessarily of the same composition as in lower stack 18L.

Processing with respect to (d), (e), and (f) has occurred, where,

(d): forming horizontally-elongated upper trenches 40U into upper stack 18U laterally-between immediately-laterally-adjacent memory-block regions 58 (individual horizontally-elongated upper trenches 40U extending to first sacrificial material 33L in individual horizontally-elongated lower trenches 40L);

(e): forming upper channel openings 25U into upper stack 18U laterally-between horizontally-elongated upper trenches 40U (individual upper channel openings 25U extending to first sacrificial material 33L in individual lower channel openings 25L); and

(f): forming upper TAV openings 31U into upper stack 18U in stair-step region 15 (individual upper TAV openings 31U extending
to first sacrificial material 33L in individual lower TAV openings 31L).

In one embodiment, the (d), the (e), and the (f) are formed simultaneously. In another embodiment, the (d), the (e), and the (f) are not formed simultaneously. In one such another embodiment, the (e) is formed before the forming of the (d) and the (f), and in one such latter embodiment the (f) is formed before the forming of the (d). Regardless, and thereafter, second sacrificial material 33U has been formed in that which was formed by the (d), the (e), and the (f) (e.g., 40U, 25U, and 31U). Second sacrificial material 33U may be of any composition and is ideally of a composition that may be etched selectively relative to materials 24 and 26. First sacrificial material 33L and second sacrificial material 33U may be of the same composition or of different compositions relative one another.

The first and second sacrificial materials are removed to form upwardly-open vertically-extended trenches, upwardly-open vertically-extended channel openings, and upwardly-open vertically-extended TAV openings. Intervening material is formed in the upwardly-open vertically-extended horizontally-elongated trenches, a channel-material string in individual of the upwardly-open vertically-extended channel openings, and conductive material in the upwardly-open vertically-extended TAV openings. An example embodiment of doing so is next described with reference to Figs. 10-38.

Referring to Figs. 10 and 11, horizontally-elongated upper trenches 40U and upper TAV openings 31U (and second sacrificial material 33U therein) have been masked (e.g., with masking material 59 [e.g., silicon dioxide]). First sacrificial material 33L and second sacrificial material 33U have thereafter been removed from lower channel openings 25L and from upper channel openings 25U (33U and 33L thereby not being shown therein) to form upwardly-open vertically-extended channel openings 25U/25L.

Referring to Figs. 12-16, individual channel-material strings 53 have been formed in individual of upwardly-open vertically-extended channel openings 25U/25L. For example, one embodiment is shown where charge-blocking material 30, storage material 32, charge-passage material 34, and channel material 36 (forming channel-material strings 53) have been
formed in extended channel openings 25U/25L elevationally along insulative tiers 20U/20L and conductive tiers 22U/22L. Transistor materials 30, 32, and 34 (e.g., memory-cell materials) and channel material 36 may be formed by, for example, deposition of respective thin layers thereof over upper stack 18U and within individual extended channel openings 25U/25L, followed by planarizing such back at least to a top surface of upper stack 18U. Remaining masking material 59 may be removed by such processing or subsequently (material 59 thereby not being shown in Fig. 13). Materials 30, 32, 34, and 36 are collectively shown as and only designated as material 37 in Figs. 12 and 13 due to scale.

Example thickness for each of materials 30, 32, 34, and 36 is 25 to 100 Angstroms. Punch etching may be conducted to remove materials 30, 32, and 34 from the bases of lower channel openings 25L and trenches 40L (not shown) to expose conductor tier 16 such that channel material 36 is directly against conductor material 17 of conductor tier 16. Such punch etching may occur separately with respect to each of materials 30, 32, and 34 (not shown) or may occur with respect to only some (not shown). Alternately, and by way of example only, no punch etching may be conducted (none being shown) and channel material 36 may be directly electrically coupled to conductor material 17 of conductor tier 16 only by a separate conductive interconnect (not yet shown). Extended channel openings 25U/25L are shown as comprising a radially/longitudinally-central solid dielectric material 38 (e.g., spin-on dielectric, silicon dioxide, and/or silicon nitride). Alternately, and by way of example only, the radially-central portion within extended channel openings 25U/25L may include void space(s) (not shown) and/or be devoid of solid material (not shown). A conductive plug (e.g., conductively-doped polysilicon and/or metal material, and not shown) may be radially inside of an uppermost portion of channel material 36 and atop dielectric material 38 there-below.

Referring to Figs. 17-19, horizontally-elongated upper trenches 40U and extended channel openings 25U/25L (and materials 37, 38, and 33U therein) have been masked (e.g., with masking material 59). First sacrificial material 33L and second sacrificial material 33U have thereafter been removed from lower TAV openings 31L and from upper TAV openings 31U (thereby not being shown therein) to form upwardly-open
vertically-extended TAV openings 31U/31L, with conductive material 61 thereafter having been formed in individual of upwardly-open vertically-extended TAV openings 31U/31L. An insulative liner 62 (e.g., silicon dioxide) may be formed as shown prior to forming conductive material 61.

Referring to Figs. 20-22, extended TAV openings 31U/31L and extended channel openings 25U/25L (and materials 61, 62, 37, and 38 therein) have been masked (e.g., with masking material 59). First sacrificial material 33L and second sacrificial material 33U have thereafter been removed from lower horizontally-elongated trench 40L and upper horizontally-elongated trench 40U (33L and 33U thereby not being shown therein), respectively, to form upwardly-open vertically-extended horizontally-elongated trenches 40U/40L. A thin sacrificial liner 78 (e.g., hafnium oxide, aluminum oxide, etc.), in one embodiment, may then be formed, followed by punch-etching there-through to expose sacrificial material 77, and then removal (not shown) of masking material 59.

As stated above, in some embodiments, the forming of horizontally-elongated upper trenches 40U, upper channel openings 25U, and upper TAV openings 31U does not occur simultaneously. As an example, and in one embodiment, upper channel openings 25U may be formed while regions where upper trenches 40U and upper TAV openings 31U will be are masked. Then, sacrificial material 33L can be removed from lower channel openings 25L. Extended channel openings 25U/25L resulting therefrom can then be filled with materials 30, 32, 34, 36, and 38. Analogous or other processing may then occur with respect to upper TAV openings 31U and upper trenches 40U, simultaneously or separately.

Referring to Figs. 23-25, exposed sacrificial material 77 (not shown) has been isotropically etched (e.g., using H3PO4 where such comprises silicon nitride, and using tetramethylammonium hydroxide where such comprises polysilicon) from lowest first tier 22Lz through trenches 40U/40L.

Conductive material is formed in the lowest first tier that directly electrically couples together the channel material of the individual channel-material strings and the conductor material of the conductor tier. In one embodiment, such conductive material is formed directly against a bottom of the conducting material of the conducting tier and directly against a top of the conductor material of the conductor tier. For example, and first
referring to Figs. 26 and 27, such show example subsequent processing wherein, in one embodiment, material 30 (e.g., silicon dioxide), material 32 (e.g., silicon nitride), and material 34 (e.g., silicon dioxide or a combination of silicon dioxide and silicon nitride) have been etched in tier 20Lz to expose a sidewall 41 of channel material 36 of channel-material strings 53 in lowest first tier 22Lz. Any of materials 30, 32, and 34 in tier 22Lz may be considered as being sacrificial material therein. As an example, consider an embodiment where liner 78 is one or more insulative oxides (other than silicon dioxide) and memory-cell materials 30, 32, and 34 individually are one or more of silicon dioxide and silicon nitride layers. In such example, the depicted construction can result by using modified or different chemistries for sequentially etching silicon dioxide and silicon nitride selectively relative to the other. As examples, a solution of 100:1 (by volume) water to HF will etch silicon dioxide selectively relative to silicon nitride, whereas a solution of 1000:1 (by volume) water to HF will etch silicon nitride selectively relative to silicon dioxide. Accordingly, and in such example, such etching chemistries can be used in an alternating manner where it is desired to achieve the example construction shown by Figs. 26 and 27. The artisan is capable of selecting other chemistries for etching other different materials where a construction as shown in Figs. 26 and 27 is desired. Some or all of insulative material (e.g., 24) from tiers 20Lx and 20Lz (when present, and material 24 not shown as having been removed) may be removed when removing other materials, may be removed separately, or may partially or wholly remain (not shown).
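As an editorial aid only, and not process guidance, the alternating selective-etch sequencing described above can be sketched as simple bookkeeping. The only facts taken from the text are the two water-to-HF dilutions and their stated selectivities; the Python names and the example stack are assumptions:

    # Illustrative sketch, assuming Python. Chemistry strings reflect only what
    # the text above states (100:1 water:HF etches SiO2 selectively relative to
    # Si3N4; 1000:1 water:HF etches Si3N4 selectively relative to SiO2).
    ETCH_CHEMISTRY = {
        "SiO2":  "100:1 (by volume) water:HF",
        "Si3N4": "1000:1 (by volume) water:HF",
    }

    def etch_sequence(layer_stack):
        """Return (layer, chemistry) pairs, outermost layer first."""
        return [(layer, ETCH_CHEMISTRY[layer]) for layer in layer_stack]

    # Example: oxide (e.g., material 30), nitride (e.g., material 32), then oxide.
    for layer, chemistry in etch_sequence(["SiO2", "Si3N4", "SiO2"]):
        print(f"etch {layer} with {chemistry}")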
Referring to Figs. 28-30, conductively-doped semiconductive material 42 (e.g., conductively-doped polysilicon) has been formed in lowest first tier 22Lz. Conductively-doped semiconductive material 42 thereby directly electrically couples together channel material 36 of individual channel-material strings 53 and conductor material 17 of conductor tier 16. Subsequently, and by way of example, conductive material 42 has been removed from trenches 40U/40L, as has sacrificial liner 78 (not shown). Sacrificial liner 78 may be removed before forming conductive material 42 (not shown).

Referring to Figs. 31-38, material 26 (not shown) of conductive tiers 22U/22L has been removed, for example by being isotropically etched away through trenches 40U/40L, ideally selectively relative to the other exposed materials (e.g., using liquid or vapor H3PO4 as a primary etchant where material 26 is silicon nitride and other materials comprise one or more oxides or polysilicon). Material 26 (not shown) in conductive tiers 22U/22L in the example embodiment is sacrificial and has been replaced with conducting material 48, which has thereafter been removed from trenches 40U/40L, thus forming individual conductive lines 29 (e.g., wordlines) and elevationally-extending strings 49 of individual transistors and/or memory cells 56.

A thin insulative liner (e.g., Al2O3, and not shown) may be formed before forming conducting material 48. Approximate locations of transistors and/or memory cells 56 are indicated with a bracket in Fig. 35 and some with dashed outlines in Figs. 31, 32, and 34, with transistors and/or memory cells 56 being essentially ring-like or annular in the depicted example. Alternately, transistors and/or memory cells 56 may not be completely encircling relative to individual channel openings 25U/25L, such that each channel opening 25U/25L may have two or more elevationally-extending strings 49 (e.g., multiple transistors and/or memory cells about individual channel openings in individual conductive tiers, with perhaps multiple wordlines per channel opening in individual conductive tiers, and not shown). Conducting material 48 may be considered as having terminal ends 50 (Fig. 35) corresponding to control-gate regions 52 of individual transistors and/or memory cells 56. Control-gate regions 52 in the depicted embodiment comprise individual portions of individual conductive lines 29. Materials 30, 32, and 34 may be considered as a memory structure 65 that is laterally between control-gate region 52 and channel material 36. In one embodiment and as shown with respect to the example "gate-last" processing, conducting material 48 of conductive tiers 22U/22L is formed after forming channel openings 25U/25L and/or trenches 40U/40L. Alternately, the conducting material of the conductive tiers may be formed before forming channel openings 25U/25L and/or trenches 40U/40L (not shown), for example with respect to "gate-first" processing.

A charge-blocking region (e.g., charge-blocking material 30) is between storage material 32 and individual control-gate regions 52. A charge block may have the following functions in a memory cell: In a
program mode, the charge block may prevent charge carriers from passing out of the storage material (e.g., floating-gate material, charge-trapping material, etc.) toward the control gate, and in an erase mode the charge block may prevent charge carriers from flowing into the storage material from the control gate. Accordingly, a charge block may function to block charge migration between the control-gate region and the storage material of individual memory cells. An example charge-blocking region as shown comprises insulator material 30. By way of further examples, a charge-blocking region may comprise a laterally (e.g., radially) outer portion of the storage material (e.g., material 32) where such storage material is insulative (e.g., in the absence of any different-composition material between an insulative storage material 32 and conducting material 48). Regardless, as an additional example, an interface of a storage material and conductive material of a control gate may be sufficient to function as a charge-blocking region in the absence of any separate-composition insulator material 30. Further, an interface of conducting material 48 with material 30 (when present) in combination with insulator material 30 may together function as a charge-blocking region, as alternately or additionally may a laterally-outer region of an insulative storage material (e.g., a silicon nitride material 32). An example material 30 is one or more of silicon hafnium oxide and silicon dioxide. Example channel materials 36 include appropriately-doped crystalline semiconductor material, such as one or more of silicon, germanium, and so-called III/V semiconductor materials (e.g., GaAs, InP, GaP, and GaN).

Intervening material 57 has been formed in extended trenches 40U/40L, and thereby laterally-between and longitudinally-along immediately-laterally-adjacent memory-block regions 58. Intervening material 57 may provide lateral electrical isolation (insulation) between immediately-laterally-adjacent memory blocks. Such may include one or more of insulative, semiconductive, and conducting materials and, regardless, may prevent conductive tiers 22U/22L from shorting relative one another in a finished circuitry construction. Example insulative materials are one or more of SiO2, Si3N4, Al2O3, and undoped polysilicon. Intervening material 57 may include TAVs.
Subsequent processing may occur that is not material to aspects of the inventions disclosed herein.

In one embodiment, conductive material 61 in extended TAV openings 31U/31L (with liner 62 therein, when present) comprises TAV structures 45 extending through first tiers 22* and second tiers 20* (an * being used as a suffix to be inclusive of all such same-numerically-designated components that may or may not have other suffixes). Individual TAV structures 45 comprise an upper portion (e.g., that in upper stack 18U) above and joined with a lower portion (e.g., that in lower stack 18L), with individual TAV structures comprising at least one external jog surface 63 (Fig. 37) in a vertical cross-section (e.g., that of Figs. 33 and 37) where the upper and lower portions join (e.g., two jog surfaces 63 being shown in the vertical cross-section). In this document, a "jog surface" is characterized or defined by an abrupt change in direction [at least 15°] in comparison to surfaces that are immediately-above and immediately-below the jog surface. In one such embodiment and as shown, individual TAV structures 45 have external sidewall surfaces 64 (Fig. 37) that are straight through multiple of the first tiers 22* and multiple of the second tiers 20* in the vertical cross-section above and below the at least one external jog surface 63. Regardless, in one embodiment, the at least one jog surface 63 includes a part 66 that is horizontal, and in one such embodiment as shown is exactly horizontal.
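A minimal editorial sketch of the jog-surface test just defined, assuming the directions of the three surfaces are available as angles in degrees in the vertical cross-section; the function name and the angle convention are assumptions, not part of the disclosure:

    # Illustrative only: a surface is a "jog surface" per the definition above
    # if its direction changes abruptly (at least 15 degrees) relative to the
    # surfaces immediately above and immediately below it.
    def is_jog_surface(angle_above_deg: float, angle_jog_deg: float,
                       angle_below_deg: float, threshold_deg: float = 15.0) -> bool:
        return (abs(angle_jog_deg - angle_above_deg) >= threshold_deg and
                abs(angle_jog_deg - angle_below_deg) >= threshold_deg)

    # A straight vertical sidewall (90 deg) interrupted by an exactly horizontal
    # part (0 deg) qualifies; a 5-degree waviness does not.
    assert is_jog_surface(90.0, 0.0, 90.0) is True
    assert is_jog_surface(90.0, 85.0, 90.0) is False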
In one embodiment, channel-material strings 53 comprise part of channel-material-string structures 46 that extend through insulative tiers 20* and conductive tiers 22*. Channel-material-string structures 46 individually comprise an upper portion (e.g., that in upper stack 18U) above and joined with a lower portion (e.g., that in lower stack 18L), with individual channel-material-string structures 46 comprising at least one external jog surface 67 in a vertical cross-section (e.g., that of Figs. 33 and 36) where the upper and lower portions of individual channel-material-string structures 46 join (e.g., two jog surfaces 67 being shown in the vertical cross-section). In one such embodiment and as shown, individual channel-material-string structures 46 have external sidewall surfaces 68 that are straight through multiple of the second tiers 20* and multiple of the first tiers 22* in the vertical cross-section above and below its at least one external jog surface 67. Regardless, in one embodiment, the at least one jog surface 67 includes a part 73 that is horizontal, and in one such embodiment as shown is exactly horizontal.

In one embodiment, horizontally-elongated walls 70 (e.g., comprising intervening material 57) are laterally-between immediately-laterally-adjacent memory-block regions 58. Individual horizontally-elongated walls 70 comprise an upper portion (e.g., that in upper stack 18U) above and joined with a lower portion (e.g., that in lower stack 18L), with individual walls 70 comprising at least one external jog surface 71 in a vertical cross-section (e.g., that of Figs. 33 and 38) where the upper and lower portions of individual horizontally-elongated walls 70 join (e.g., two jog surfaces 71 being shown in the vertical cross-section). In one such embodiment and as shown, individual horizontally-elongated walls 70 have external sidewall surfaces 72 that are straight through multiple of the second tiers 20* and multiple of the first tiers 22* in the vertical cross-section above and below its at least one external jog surface 71. Regardless, in one embodiment, the at least one jog surface 71 includes a part 74 that is horizontal, and in one such embodiment as shown is exactly horizontal.

The above-described example processing forms intervening material 57, channel-material strings 53, and conductive material 61 at different times relative one another. Channel-material strings 53 have been formed before the forming of intervening material 57 and before the forming of conductive material 61, with conductive material 61 being formed before forming intervening material 57. Alternately, conductive material 61 may be formed after forming intervening material 57. Further alternately, conductive material 61 may be formed before the forming of intervening material 57 and before the forming of channel-material strings 53, with the forming of channel-material strings 53 occurring before or after forming intervening material 57. Still further alternately, intervening material 57 may be formed before the forming of conductive material 61 and before the forming of channel-material strings 53, with the forming of channel-material strings 53 occurring before or after forming conductive material 61.

Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used in the embodiments shown and described with reference to the above embodiments.
In some embodiments, a method used in forming a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises forming a stack (e.g., 18*) comprising vertically-alternating first tiers (e.g., 22*) and second tiers (e.g., 20*). The stack comprises laterally-spaced memory-block regions (e.g., 58). Processing with respect to (a), (b), and (c) occurs simultaneously, where,

(a): forming horizontally-elongated trenches (e.g., 40U/40L) into the stack laterally-between immediately-laterally-adjacent of the memory-block regions;

(b): forming channel openings (e.g., 25U/25L) into the stack laterally-between the horizontally-elongated trenches; and

(c): forming through-array-via (TAV) openings (e.g., 31U/31L) into the stack in a stair-step region (e.g., 15).

Intervening material (e.g., 57) is formed in the horizontally-elongated trenches. A channel-material string (e.g., 53) is formed in individual of the channel openings. Conductive material (e.g., 61) is formed in the TAV openings. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

Alternate embodiment constructions may result from method embodiments described above, or otherwise. Regardless, embodiments of the invention encompass memory arrays independent of method of manufacture. Nevertheless, such memory arrays may have any of the attributes as described herein in method embodiments. Likewise, the above-described method embodiments may incorporate, form, and/or have any of the attributes described with respect to device embodiments.

In one embodiment, a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises laterally-spaced memory blocks (e.g., 58) individually comprising a vertical stack (e.g., 18*) comprising alternating insulative tiers (e.g., 20*) and conductive tiers (e.g., 22*). Channel-material-string structures (e.g., 46) of memory cells (e.g., 56) extend through the insulative tiers and the conductive tiers. Through-
array-via (TAV) structures (e.g., 45) extend through the insulative tiers and the conductive tiers. Individual of the TAV structures comprise an upper portion above and joined with a lower portion. The individual TAV structures comprise at least one external jog surface (e.g., 63) in a vertical cross-section where the upper and lower portions join. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

In one embodiment, a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises laterally-spaced memory blocks (e.g., 58) individually comprising a vertical stack (e.g., 18*) comprising alternating insulative tiers (e.g., 20*) and conductive tiers (e.g., 22*). Channel-material-string structures (e.g., 46) of memory cells (e.g., 56) extend through the insulative tiers and the conductive tiers. Through-array-via (TAV) structures (e.g., 45) extend through the insulative tiers and the conductive tiers. Individual of the TAV structures comprise an upper portion above and joined with a lower portion. The individual TAV structures comprise at least one external jog surface (e.g., 63) in a vertical cross-section where the upper and lower portions of the individual TAV structures join. Individual of the channel-material-string structures comprise an upper portion above and joined with a lower portion. The individual channel-material-string structures comprise at least one external jog surface (e.g., 67) in the vertical cross-section where the upper and lower portions of the individual channel-material-string structures join. Horizontally-elongated walls (e.g., 70) are laterally-between immediately-laterally-adjacent of the memory blocks. Individual of the horizontally-elongated walls comprise an upper portion above and joined with a lower portion. The individual walls comprise at least one external jog surface (e.g., 71) in the vertical cross-section where the upper and lower portions of the horizontally-elongated walls join. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

Processing as described above may result in reduction of one or more masking steps and of the deep etching associated therewith.
The above processing(s) or construction(s) may be considered as being relative to an array of components formed as or within two stacks or two decks of such components above or as part of an underlying base substrate (albeit the two stacks/decks may each have multiple tiers). Control and/or other peripheral circuitry for operating or accessing such components within an array may also be formed anywhere as part of the finished construction, and in some embodiments may be under the array (e.g., CMOS under-array). Regardless, one or more additional such stack(s)/deck(s) may be provided or fabricated above and/or below that shown in the figures or described above. Further, the array(s) of components may be the same or different relative one another in different stacks/decks, and different stacks/decks may be of the same thickness or of different thicknesses relative one another. Intervening structure may be provided between immediately-vertically-adjacent stacks/decks (e.g., additional circuitry and/or dielectric layers). Also, different stacks/decks may be electrically coupled relative one another. The multiple stacks/decks may be fabricated separately and sequentially (e.g., one atop another), or two or more stacks/decks may be fabricated at essentially the same time. Alternately, the processing(s) or construction(s) may be with respect to a single stack or single deck above or part of an underlying base substrate.

The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

In this document, unless otherwise indicated, "elevational", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" are generally with reference to the vertical direction. "Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally
orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., no degrees there-from) and may be relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative one another and independent of orientation of the substrate in three-dimensional space. Additionally, "elevationally-extending" and "extend(ing) elevationally" refer to a direction that is angled away by at least 45° from exactly horizontal. Further, "extend(ing) elevationally", "elevationally-extending", "extend(ing) horizontally", "horizontally-extending", and the like with respect to a field effect transistor are with reference to orientation of the transistor's channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, "extend(ing) elevationally", "elevationally-extending", "extend(ing) horizontally", "horizontally-extending", and the like are with reference to orientation of the base length along which current flows in operation between the emitter and collector. In some embodiments, any component, feature, and/or region that extends elevationally extends vertically or within 10° of vertical.

Further, "directly above", "directly below", and "directly under" require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of "below" and "under" not preceded by "directly" only requires that some portion of the stated region/material/component that is below/under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Where one or more example composition(s) is/are provided for any material, that material may comprise, consist essentially of, or consist of such one or more composition(s). Further, unless otherwise stated, each material may be
formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

Additionally, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as constructions where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.

Herein, regions-materials-components are "electrically coupled" relative one another if in normal operation electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor,
resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials-components.

Any use of "row" and "column" in this document is for convenience in distinguishing one series or orientation of features from another series or orientation of features and along which components have been or may be formed. "Row" and "column" are used synonymously with respect to any series of regions, components, and/or features independent of function. Regardless, the rows may be straight and/or curved and/or parallel and/or not parallel relative one another, as may be the columns. Further, the rows and columns may intersect relative one another at 90° or at one or more other angles (i.e., other than the straight angle).

The composition of any of the conductive/conductor/conducting materials herein may be metal material and/or conductively-doped semiconductive/semiconductor/semiconducting material. "Metal material" is any one or combination of an elemental metal, any mixture or alloy of two or more elemental metals, and any one or more conductive metal compound(s).

Herein, any use of "selective" as to etch, etching, removing, removal, depositing, forming, and/or formation is such an act of one stated material relative to another stated material(s) so acted upon at a rate of at least 2:1 by volume. Further, any use of selectively depositing, selectively growing, or selectively forming is depositing, growing, or forming one material relative to another stated material or materials at a rate of at least 2:1 by volume for at least the first 75 Angstroms of depositing, growing, or forming.
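As a hedged editorial illustration of the 2:1-by-volume threshold just defined (the function name and the idea of comparing volumetric rates numerically are assumptions):

    # Illustrative only: "selective" per the definition above means acting on
    # one stated material at a rate of at least 2:1 by volume relative to the
    # other stated material.
    def is_selective(rate_acted_upon: float, rate_other: float) -> bool:
        return rate_acted_upon >= 2.0 * rate_other

    assert is_selective(10.0, 4.0)      # 2.5:1 by volume, so selective
    assert not is_selective(10.0, 6.0)  # about 1.67:1 by volume, so not selective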
Unless otherwise indicated, use of "or" herein encompasses either and both.

CONCLUSION

In some embodiments, a method used in forming a memory array comprising strings of memory cells comprises forming a stack comprising vertically-alternating first tiers and second tiers. The stack comprises laterally-spaced memory-block regions. Simultaneously, (a), (b), and (c) are formed, where (a): horizontally-elongated trenches into the stack laterally-between immediately-laterally-adjacent of the memory-block regions; (b): channel openings into the stack laterally-between the horizontally-elongated trenches; and (c): through-array-via (TAV) openings into the stack in a stair-step region. Intervening material is formed in the horizontally-elongated trenches, a channel-material string in individual of the channel openings, and conductive material in the TAV openings.

In some embodiments, a method used in forming a memory array comprising strings of memory cells comprises forming a lower stack comprising vertically-alternating lower first tiers and lower second tiers. The lower stack comprises laterally-spaced memory-block regions. Simultaneously, (a), (b), and (c) are formed, where (a): horizontally-elongated lower trenches into the lower stack laterally-between immediately-laterally-adjacent of the memory-block regions; (b): lower channel openings into the lower stack laterally-between the horizontally-elongated lower trenches; and (c): lower through-array-via (TAV) openings into the lower stack in a stair-step region. First sacrificial material is formed in that which was formed by the (a), the (b), and the (c). An upper stack is formed directly above the lower stack and the first sacrificial material. The upper stack comprises vertically-alternating upper first tiers and upper second tiers. The upper stack comprises the laterally-spaced memory-block regions. (d), (e), and (f) are formed, where (d): horizontally-elongated upper trenches into the upper stack laterally-between immediately-laterally-adjacent of the memory-block regions, individual of the horizontally-elongated upper trenches extending to the first sacrificial material in individual of the horizontally-elongated lower trenches; (e): upper channel openings into the upper stack laterally-between the horizontally-elongated upper trenches, individual of the upper channel openings extending to the first sacrificial material in individual of the lower channel openings; and (f): upper TAV openings into the upper stack in the stair-step region, individual of the upper TAV openings extending to the first sacrificial material in individual of the lower TAV openings. Second sacrificial material is formed in that which was formed by the (d), the (e), and the (f). The first and second sacrificial materials are removed to form upwardly-open vertically-extended trenches, upwardly-open vertically-extended channel openings, and upwardly-open vertically-extended TAV openings. Intervening material is formed in the upwardly-open vertically-extended horizontally-elongated trenches, a channel-material string in
individual of the upwardly-open vertically-extended channel openings, and conductive material in the upwardly-open vertically-extended TAV openings.

In some embodiments, a memory array comprising strings of memory cells comprises laterally-spaced memory blocks individually comprising a vertical stack comprising alternating insulative tiers and conductive tiers. Channel-material-string structures of memory cells extend through the insulative tiers and the conductive tiers. Through-array-via (TAV) structures extend through the insulative tiers and the conductive tiers. Individual of the TAV structures comprise an upper portion above and joined with a lower portion. The individual TAV structures comprise at least one external jog surface in a vertical cross-section where the upper and lower portions join.

In some embodiments, a memory array comprising strings of memory cells comprises laterally-spaced memory blocks individually comprising a vertical stack comprising alternating insulative tiers and conductive tiers. Channel-material-string structures of memory cells extend through the insulative tiers and the conductive tiers. Through-array-via (TAV) structures extend through the insulative tiers and the conductive tiers. Individual of the TAV structures comprise an upper portion above and joined with a lower portion. The individual TAV structures comprise at least one external jog surface in a vertical cross-section where the upper and lower portions of the individual TAV structures join. Individual of the channel-material-string structures comprise an upper portion above and joined with a lower portion. The individual channel-material-string structures comprise at least one external jog surface in the vertical cross-section where the upper and lower portions of the individual channel-material-string structures join. Horizontally-elongated walls are laterally-between immediately-laterally-adjacent of the memory blocks. Individual of the horizontally-elongated walls comprise an upper portion above and joined with a lower portion. The individual walls comprise at least one external jog surface in the vertical cross-section where the upper and lower portions of the horizontally-elongated walls join.
One or more neural networks for generating complete depictions of objects based on their partial depiction are disclosed. An image 114 of a whole object is generated, based on an image 106 of a portion of the object, using an encoder 108 trained using training data 102 produced from the output of a decoder 112. The neural network may comprise a generative model framework, which can be a variational autoencoder, a generative adversarial network (GAN), or a normalizing flow. The decoder can be trained on a dataset comprising images of complete objects and excluding images of partial entities. The decoder may output a complete version of an incomplete picture input into the decoder. The decoder parameters may remain unvaried while training the encoder (Fig. 6). Two images may be entered into the encoder, with the resulting output being the first image partially occluded by features from the second picture. An associated training technique is also described.
CLAIMS

1. A processor, comprising: one or more circuits to use one or more neural networks to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.

2. The processor of claim 1, wherein the one or more neural networks comprises a generative model framework to generate a plurality of complete images based on an input image.

3. The processor of claim 1 or claim 2, wherein the decoder is trained based, at least in part, on a dataset comprising images of complete objects and excluding images of portions of objects.

4. The processor of any preceding claim, wherein parameters of the decoder are not adjusted while training the encoder using the training data.

5. The processor of claim 4, wherein output of the trained encoder causes the decoder to generate a plurality of images of a complete object based on input, to the one or more neural networks, of an image of a portion of an object.

6. The processor of any preceding claim, the one or more circuits to generate the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.

7. The processor of any preceding claim, the one or more circuits to refine training of the encoder based, at least in part, on a plurality of real images of portions of objects after training the encoder with the generated training data.

8. The processor of any preceding claim, wherein one or more of the one or more neural networks is trained to spatially transform output of the decoder.

9. A system, comprising: one or more processors to train one or more neural networks to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.

10. The system of claim 9, wherein the one or more neural networks comprise a generative model framework to generate a plurality of complete images based on an input image, wherein the generative model framework comprises at least one of a variational autoencoder, a generative adversarial network, or a normalizing flow.

11. The system of claim 9 or claim 10, the one or more processors to train the decoder, using a dataset comprising images of complete objects, to generate variations of images of complete objects.

12. The system of any of claims 9-11, wherein parameters of the decoder are frozen while training the encoder using the training data generated based, at least in part, on output of the decoder.

13. The system of claim 12, wherein output of the trained encoder causes the decoder to generate a plurality of images of a complete object and a plurality of probabilities corresponding to the plurality of images, based on input, to the one or more neural networks, of an image of a portion of an object.

14. The system of any of claims 9-13, the one or more processors to generate the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.

15.
The system of any of claims 9-14, the one or more processors to train one or more of the one or more neural networks to spatially transform output of the decoder.

16. A method, comprising: training one or more neural networks to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.

17. The method of claim 16, wherein the one or more neural networks comprise at least one of a variational autoencoder, generative adversarial network, or normalizing flow.

18. The method of claim 16 or claim 17, further comprising: training the decoder, using a dataset comprising images of complete objects, to generate variations of images of complete objects.

19. The method of any of claims 16-18, further comprising: freezing the parameters of the decoder while training the encoder using the training data generated based, at least in part, on output of the decoder.

20. The method of claim 19, wherein output of the trained encoder causes the decoder to generate a plurality of images of a complete object and a plurality of probabilities corresponding to the plurality of images, based on input, to the one or more neural networks, of an image of a portion of an object.

21. The method of any of claims 16-20, further comprising: generating the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.

22. The method of any of claims 16-21, further comprising: training one or more of the one or more neural networks to spatially transform output of the decoder.

23. A machine-readable medium having stored thereon instructions, which if performed by one or more processors, cause the one or more processors to at least: alter an image to incorporate a depiction of a complete object that is generated, by one or more neural networks, from a depiction of a portion of the object in the image, the one or more neural networks trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.

24. The machine-readable medium of claim 23, having stored thereon further instructions, which if performed by one or more processors, cause the one or more processors to at least: generate a plurality of variations of a complete object, based at least in part on the image of the portion of the object.

25. The machine-readable medium of claim 23 or claim 24, wherein the parameters of the decoder are frozen after training the decoder using images of complete objects.

26. The machine-readable medium of any of claims 23-25, wherein the training data comprises an image depicting a plurality of objects generated based on the output of the decoder.

27. The machine-readable medium of claim 26, wherein a first object of the plurality of objects overlaps a second object of the plurality of objects.

28. The machine-readable medium of any of claims 23-27, wherein the alteration of the image comprises replacing a depiction of a partial object in the image with a depiction of the complete object.

29. The machine-readable medium of any of claims 23-28, wherein the alteration of the image comprises removing an object in the image that occludes the portion of the object.
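Editorial note: as a hedged sketch of the training-data generation recited in claims 6, 14, 21, 26, and 27 (combining two complete-object depictions so that the first object is at least partially occluded by the second), the following assumes Python with NumPy, binary masks, and hypothetical names; it is not the claimed implementation:

    import numpy as np

    def make_occluded_sample(mask_a: np.ndarray, mask_b: np.ndarray):
        """mask_a, mask_b: HxW arrays, truthy where each object is present.

        Returns (partial_a, complete_a): the occluded mask of the first object,
        usable as an encoder input, and its complete mask, usable as the target.
        """
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        partial_a = a & ~b  # pixels of object A not covered by occluder B
        return partial_a, a

    # Tiny usage example with synthetic masks.
    a = np.zeros((4, 4), dtype=bool); a[1:3, 0:3] = True  # complete object A
    b = np.zeros((4, 4), dtype=bool); b[:, 2:] = True     # occluder B
    partial, complete = make_occluded_sample(a, b)
    assert partial.sum() < complete.sum()  # A is partially occluded by B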
OBJECT IMAGE COMPLETION

TECHNICAL FIELD

[0001] At least one embodiment pertains to artificial intelligence. For example, at least one embodiment pertains to processors or computing systems used to train neural networks to perform a task related to image processing.

BACKGROUND

[0002] Training neural networks to perform image processing tasks such as object completion can use significant amounts of time and resources, and can produce poor results. Techniques for performing object completion can be improved.

SUMMARY OF THE INVENTION

[0003] Aspects and embodiments of the present invention are set out in the appended claims. These and other aspects and embodiments of the invention are also described herein.

[0004] According to various aspects described herein, there may be provided apparatuses, systems, and techniques to generate complete depictions of objects based on a partial depiction of the object. In at least one embodiment, an image of a complete object may be generated by one or more neural networks, based on an image of a portion of the object, using an encoder of the one or more neural networks trained using training data generated from output of a decoder of the one or more neural networks.

[0005] The disclosure extends to any novel aspects or features described and/or illustrated herein.

[0006] Further features of the disclosure are characterized by the independent and dependent claims.

[0007] Any feature in one aspect of the disclosure may be applied to other aspects of the disclosure, in any appropriate combination. In particular, method aspects may be applied to apparatus or system aspects, and vice versa.

[0008] Furthermore, features implemented in hardware may be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.

[0009] Any system or apparatus feature as described herein may also be provided as a method feature, and vice versa.
System and/or apparatus aspects described functionally (including means-plus-function features) may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.

[0010] It should also be appreciated that particular combinations of the various features described and defined in any aspects of the disclosure can be implemented and/or supplied and/or used independently.

[0011] The disclosure also provides computer programs and computer program products comprising software code adapted, when executed on a data processing apparatus, to perform any of the methods and/or for embodying any of the apparatus and system features described herein, including any or all of the component steps of any method.

[0012] The disclosure also provides a computer or computing system (including networked or distributed systems) having an operating system which supports a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus or system features described herein.

[0013] The disclosure also provides computer-readable media having stored thereon any one or more of the computer programs aforesaid.

[0014] The disclosure also provides a signal carrying any one or more of the computer programs aforesaid.

[0015] The disclosure extends to methods and/or apparatus and/or systems as herein described with reference to the accompanying drawings.

[0016] Aspects and embodiments of the disclosure will now be described purely by way of example, with reference to the accompanying drawings.

BRIEF DESCRIPTION OF DRAWINGS

[0017] FIG. 1 illustrates an example of one or more neural networks for generating an image of a complete object based on an image of a portion of an object, according to at least one embodiment;

[0018] FIG. 2 illustrates an example of a process for training one or more neural networks to generate an image of a complete object based on an image of a portion of an object, according to at least one embodiment;

[0019] FIG. 3 illustrates examples of training samples generated using output of a decoder portion of one or more neural networks, according to at least one embodiment;

[0020] FIG. 4 illustrates examples of images of complete objects generated by one or more neural networks based on images of partial objects, according to at least one embodiment;

[0021] FIG. 5 illustrates example applications of images of complete objects generated by one or more neural networks based on an image of a portion of an object, according to at least one embodiment;

[0022] FIG. 6 illustrates an example of a process for training one or more neural networks to generate images of complete objects based on images of portions of objects, according to at least one embodiment;

[0023] FIG. 7A illustrates inference and/or training logic, according to at least one embodiment;

[0024] FIG. 7B illustrates inference and/or training logic, according to at least one embodiment;

[0025] FIG. 8 illustrates training and deployment of a neural network, according to at least one embodiment;

[0026] FIG. 9 illustrates an example data center system, according to at least one embodiment;

[0027] FIG. 10A illustrates an example of an autonomous vehicle, according to at least one embodiment;

[0028] FIG. 10B illustrates an example of camera locations and fields of view for the autonomous vehicle of FIG. 10A, according to at least one embodiment;

[0029] FIG. 10C is a block diagram illustrating an example system architecture for the autonomous vehicle of FIG. 10A, according to at least one embodiment;
[0030] FIG. 10D is a diagram illustrating a system for communication between cloud-based server(s) and the autonomous vehicle of FIG. 10A, according to at least one embodiment;

[0031] FIG. 11 is a block diagram illustrating a computer system, according to at least one embodiment;

[0032] FIG. 12 is a block diagram illustrating a computer system, according to at least one embodiment;

[0033] FIG. 13 illustrates a computer system, according to at least one embodiment;

[0034] FIG. 14 illustrates a computer system, according to at least one embodiment;

[0035] FIG. 15A illustrates a computer system, according to at least one embodiment;

[0036] FIG. 15B illustrates a computer system, according to at least one embodiment;

[0037] FIG. 15C illustrates a computer system, according to at least one embodiment;

[0038] FIG. 15D illustrates a computer system, according to at least one embodiment;

[0039] FIGS. 15E and 15F illustrate a shared programming model, according to at least one embodiment;

[0040] FIG. 16 illustrates exemplary integrated circuits and associated graphics processors, according to at least one embodiment;

[0041] FIGS. 17A and 17B illustrate exemplary integrated circuits and associated graphics processors, according to at least one embodiment;

[0042] FIGS. 18A and 18B illustrate additional exemplary graphics processor logic, according to at least one embodiment;

[0043] FIG. 19 illustrates a computer system, according to at least one embodiment;

[0044] FIG. 20A illustrates a parallel processor, according to at least one embodiment;

[0045] FIG. 20B illustrates a partition unit, according to at least one embodiment;

[0046] FIG. 20C illustrates a processing cluster, according to at least one embodiment;

[0047] FIG. 20D illustrates a graphics multiprocessor, according to at least one embodiment;

[0048] FIG. 21 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment;

[0049] FIG. 22 illustrates a graphics processor, according to at least one embodiment;

[0050] FIG. 23 is a block diagram illustrating a processor micro-architecture for a processor, according to at least one embodiment;

[0051] FIG. 24 illustrates a deep learning application processor, according to at least one embodiment;

[0052] FIG. 25 is a block diagram illustrating an example neuromorphic processor, according to at least one embodiment;

[0053] FIG. 26 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0054] FIG. 27 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0055] FIG. 28 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0056] FIG. 29 is a block diagram of a graphics processing engine of a graphics processor, in accordance with at least one embodiment;

[0057] FIG. 30 is a block diagram of at least portions of a graphics processor core, according to at least one embodiment;

[0058] FIGS. 31A and 31B illustrate thread execution logic including an array of processing elements of a graphics processor core, according to at least one embodiment;

[0059] FIG. 32 illustrates a parallel processing unit ("PPU"), according to at least one embodiment;

[0060] FIG. 33 illustrates a general processing cluster ("GPC"), according to at least one embodiment;

[0061] FIG. 34 illustrates a memory partition unit of a parallel processing unit ("PPU"), according to at least one embodiment;
34 illustrates a memory partition unit of a parallel processing unit ("PPU"), according to at least one embodiment; [0062] FIG. 35 illustrates a streaming multi-processor, according to at least one embodiment.[0063] FIG. 36 is an example data flow diagram for an advanced computing pipeline, in accordance with at least one embodiment; [0064] FIG. 37 is a system diagram for an example system for training, adapting, instantiating and deploying machine learning models in an advanced computing pipeline, in accordance with at least one embodiment; [0065] FIG. 38 includes an example illustration of an advanced computing pipeline 3710A for processing imaging data, in accordance with at least one embodiment; [0066] FIG. 39A includes an example data flow diagram of a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment; [0067] FIG. 39B includes an example data flow diagram of a virtual instrument supporting a CT scanner, in accordance with at least one embodiment; [0068] FIG. 40A illustrates a data flow diagram for a process to train a machine learning model, in accordance with at least one embodiment; and [0069] FIG. 40B is an example illustration of a client-server architecture to enhance annotation tools with pre-trained annotation models, in accordance with at least one embodiment.DETAILED DESCRIPTION[0070] FIG. 1 illustrates an example of one or more neural networks for generating an image of a complete object based on an image of a portion of this object, according to at least one embodiment.[0071] In at least one embodiment, one or more neural networks 100 are capable of reasoning about occluded objects. In at least one embodiment, said reasoning pertains to characteristics of an object that is occluded in a scene, such as an object's shape, texture, or color in a region that is occluded from view. In at least one embodiment, said reasoning is referred to as object completion or instance completion, because it involves taking an image of a partial object and from that, inferring an image of a complete object, or instance.[0072] In at least one embodiment, input to one or more neural networks 100 comprises a scene in which one or more objects are occluded. In at least one embodiment, such occlusion makes perception tasks such as object detection and tracking, or robotic control tasks such as planning, challenging. In at least one embodiment, a variational generative framework for amodal object completion is used for these tasks. In at least one embodiment, this framework does not require amodal labels at training time.[0073] In at least one embodiment, one or more neural networks 100 may be used in relation to a task of scene editing, in which a user is provided with interactive tools to complete and erase objects in photographs. These may include, in at least one embodiment, complex scenes which include many objects.[0074] In at least one embodiment, one or more neural networks 100 are capable of rapidly recognizing objects and understanding their spatial extent in complex visual scenes, even when objects are barely visible due to occlusion. In at least one embodiment, this capability is used in conjunction with fields, such as robotics, that could leverage these capabilities to more accurately anticipate what can happen a few moments into the future, and plan or react accordingly. In at least one embodiment, these capabilities are leveraged in semantic image editing tasks.
For example, in at least one embodiment, these capabilities might be leveraged to enable a user of an image editing application to delete or manipulate an object that is partially hidden due to being occluded by another object. In at least one embodiment, one or more neural networks 100 are able to "complete" occluded objects in an image, by reasoning about their spatial extent. In at least one embodiment, this comprises completion of their masks. In at least one embodiment, a mask comprises data indicative of an object's spatial bounds or shape. In at least one embodiment, one or more neural networks 100 are able to complete occluded objects in an image, including aspects related to appearance, such as color, texture, and so forth.[0075] In at least one embodiment, a mask comprises a representation of a shape of an object. In at least one embodiment, a mask comprises a bitmap. In at least one embodiment, a vector representation is used. In at least one embodiment, a mask comprises data about an object's interior, in addition to information about said object's boundaries or spatial extent. In at least one embodiment, an example mask comprises a bitmap in which non-zero values represent space occupied by an object, and zero values represent space not occupied, or not known to be occupied, by said object. In at least one embodiment, a partial mask represents an extent of an object that is at least partially occluded, such that a partial mask is an incomplete representation of an object's shape or extent. In at least one embodiment, a complete mask represents a shape or extent of an object that is not occluded.[0076] In at least one embodiment, one or more neural networks 100 are capable of amodal perception of objects. In at least one embodiment, modal perception of an object involves segmenting visible pixels of said object, and there are large-scale annotated datasets available to assist in such training, but for amodal segmentation there is a lack of labeled data that may be due to difficulty and ambiguity of annotation for amodal tasks. For example, in cases where objects are highly occluded there may be multiple valid hypotheses for a plausible completion. In at least one embodiment, annotation for a training set for amodal tasks involves a human labeler drawing an imagined contour, rather than tracing a visible contour in an image as might be done to label data for modal tasks.[0077] In at least one embodiment, system 100 implements a variational generative framework for amodal instance completion, which may be referred to as amodal-VAE. In at least one embodiment, said framework does not require amodal labels at training time, and instead exploits instance masks of visible parts of objects taken from available datasets. In at least one embodiment, one or more neural networks 100 learn to reconstruct full objects from partial masks by training a variational autoencoder, designated as amodal-vae 104 in FIG. 1, in stages that allow modelling of a complete mask with a low-dimensional latent representation. In at least one embodiment, a probabilistic framework incorporates ambiguity in a mask completion task, and by doing so is able to produce multiple plausible completions.[0078] In at least one embodiment, amodal instance segmentation comprises segmenting both visible and occluded parts of an object instance, rather than merely segmenting only visible pixels of an object.[0079] In at least one embodiment, a training dataset 102 is used to train amodal-vae 104.
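Purely by way of non-limiting illustration, and not as a description of any claimed embodiment, the bitmap mask representation described above might be sketched as follows; all identifiers (complete_mask, occluder_mask, partial_mask) are hypothetical and do not correspond to elements of the figures:

```python
# Illustrative sketch only: a binary bitmap mask, and a partial mask
# obtained by removing pixels hidden behind an occluder.
import numpy as np

# A complete mask: non-zero values mark space occupied by an object.
complete_mask = np.zeros((8, 8), dtype=np.uint8)
complete_mask[2:6, 2:7] = 1

# An occluder placed in front of the object.
occluder_mask = np.zeros((8, 8), dtype=np.uint8)
occluder_mask[0:8, 4:8] = 1

# The partial mask keeps only visible (non-occluded) pixels, making it
# an incomplete representation of the object's spatial extent.
partial_mask = complete_mask * (1 - occluder_mask)
```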
In at least one embodiment, given a dataset $\mathcal{D} = \{y_i\}_{i=1}^{N}$, an amodal-vae 104 learns a latent variable generative model $p(y, z) = p_{w_1}(y|z)\,p(z)$, where $p(z)$ is a prior distribution over latent variables and $p_{w_1}(y|z)$ is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters $w_1$. In at least one embodiment, a true posterior distribution $p(z|y)$ may be intractable, and amodal-vae 104 instead employs an auxiliary approximate posterior distribution or encoder $q_{w_2}(z|y)$, parametrized by another neural network with parameters $w_2$. In at least one embodiment, when additional information about training data is available, such as a sample's class or category $c$, amodal-vae 104 is extended to be a conditional variational autoencoder, in which an encoder portion and decoder portion are conditioned on this class information.[0080] In at least one embodiment, amodal-vae 104 is trained via variational inference, maximizing an evidence lower bound ("ELBO"). In at least one embodiment, a case is considered in which only an encoder of amodal-vae 104 is conditioned on additional class information $c$ that is available for samples in a dataset $\mathcal{D}$. In said embodiment, an ELBO equation may then be represented as:

$$\mathcal{L}_{\mathrm{VAE}}(w_1, w_2) = \mathbb{E}_{y,c \sim \mathcal{D}}\Big[\mathbb{E}_{z \sim q_{w_2}(z|y,c)}\big[\log p_{w_1}(y|z)\big] - \lambda\, D_{\mathrm{KL}}\big(q_{w_2}(z|y,c)\,\|\,p(z)\big)\Big] \quad \text{(Eq. 1)}$$

[0081] In at least one embodiment, when calculating gradients during training, expectation over data is estimated using mini batches and expectation over latent variables $z$ is usually calculated using a single sample from an approximate posterior. In at least one embodiment, parameter updates are done with stochastic gradient descent, employing a re-parameterization trick. In at least one embodiment, due to KL-regularization, amodal-vae 104 is taught to encode data $y$ in an efficient low-dimensional latent representation $z$. In at least one embodiment, although rigorous variational inference could correspond to $\lambda = 1$, different values of $\lambda$ may allow careful control of balance between KL and reconstruction terms.[0082] In at least one embodiment, a dataset $\hat{\mathcal{D}} = \{\hat{y}_i\}_{i=1}^{N}$ is a dataset of "partial" instance object masks $\hat{y}_i \in \hat{\mathcal{Y}}$ 106 in images, and an amodal mask completion method comprises a mapping $f: \hat{\mathcal{Y}} \to \mathcal{Y}$ with completed masks $y \in \mathcal{Y}$. In at least one embodiment, an amodal instance completion task recovers an occluded part of a particular object from a partially occluded instance mask. In at least one embodiment, additional information, if available, can be used in function $f$, such as an image's RGB pixel values or instance classes $c_i$.[0083] In at least one embodiment, a solution to an amodal mask completion task comprises collecting a training dataset $\mathcal{D}_{\mathrm{train}} = \{\hat{y}_i, y_i\}_{i=1}^{N}$ comprising paired partial masks $\hat{y}_i$ and corresponding complete masks $y_i$ (and potentially additional information, such as instance classes $c_i$). In at least one embodiment, a parametric model, such as a neural network, could be fitted to it by image segmentation. However, this approach to annotating an amodal dataset is challenging, time-consuming, expensive, and sometimes ambiguous, as objects resulting from occlusions may in some cases not be well-defined.[0084] In at least one embodiment, a weakly-supervised approach is used instead. In at least one embodiment, this is done where there is access to data with only partially visible masks ($\hat{\mathcal{Y}}$) and separate data with only full masks ($\mathcal{Y}$). In at least one embodiment, as depicted in FIG.
1, partially visible masks $\hat{y}$ are first encoded into a smooth latent space, and then resulting latent codes $z$ are decoded into full masks $y$.[0085] In at least one embodiment, an advantage associated with one or more neural networks is that they are trained using an approach that naturally captures ambiguity, when completing partial masks, in a posterior distribution, as depicted in FIG. 4. In at least one embodiment, one or more neural networks 100 also deal gracefully with inputs that said neural networks are uncertain about. In at least one embodiment, said one or more neural networks 100 are trained such that points under a prior distribution map to realistic completed masks, and consequently slightly erroneous latent code predictions may still decode into well-defined outputs.[0086] In at least one embodiment, training of amodal-vae 104 is based on an assumption of a factorial normal prior distribution $p(z) = \mathcal{N}(0, I)$, and factorial normal approximate posteriors $q_{w_2}(z|y, c)$ and $q_{w_3}(z|\hat{y}, c)$ with means and standard deviations parametrized via convolutional neural networks that also see object categories $c$. In at least one embodiment, a decoder 112, which may be stated as $p_{w_1}(y|z)$, comprises a factorial Bernoulli distribution, predicts binary masks, and is parametrized using a deconvolutional neural network. In at least one embodiment, to leverage datasets $\mathcal{Y}$ with fully visible masks and $\hat{\mathcal{Y}}$ with partially visible masks, amodal-VAE 104 is trained in three stages.[0087] In at least one embodiment, a first stage of training is based on full masks. In at least one embodiment, this is done to generate only realistic full masks, even when provided with partial masks that are significantly occluded as input. In at least one embodiment, during said first stage a generative component $p_{w_1}(y|z)\,p(z)$ of a model is learned. In at least one embodiment, this is done by training amodal-vae 104 based only on full masks. In at least one embodiment, amodal-vae 104 is trained using an ELBO, e.g. as defined in Eq. 1, on $\mathcal{Y}$. In at least one embodiment, amodal-vae 104 is taught low-dimensional representations of complete masks of real objects in its continuous latent space.[0088] In at least one embodiment, a second stage of training comprises simulated partial-to-full-mask training. In at least one embodiment, after said first stage, points in latent space map to a realistic completed mask. In at least one embodiment, based on full mask data, various occlusions are simulated to generate a synthetic dataset of paired partial and complete masks of form $\mathcal{D}_{\mathrm{train}} = \{\hat{y}_i, y_i\}_{i=1}^{N}$. Freezing previously learnt decoder 112, stated as $p_{w_1}(y|z)$, a new encoder 108 may be learned. In at least one embodiment, encoder 108 is stated as $q_{w_3}(z|\hat{y}, c)$ with parameters $w_3$ that maps partial masks $\hat{y}$ to points in latent space $z$ that decode to correct completed masks $y$.[0089] In at least one embodiment, a synthetic dataset is constructed by random sampling of instances $y_{\mathrm{foreground}}$ and $y_{\mathrm{instance}}$ from $\mathcal{Y}$ and masking out $y_{\mathrm{instance}}$ by randomly positioning $y_{\mathrm{foreground}}$ in front of it.[0090] In at least one embodiment, an ELBO objective as follows is maximized:

$$\mathcal{L}_{\mathrm{Amodal\text{-}VAE}}(w_3) = \mathbb{E}_{\hat{y},y,c \sim \mathcal{D}_{\mathrm{train}}}\Big[\mathbb{E}_{z \sim q_{w_3}(z|\hat{y},c)}\big[\log p_{w_1}(y|z)\big] - \lambda\, D_{\mathrm{KL}}\big(q_{w_3}(z|\hat{y},c)\,\|\,p(z)\big)\Big] \quad \text{(Eq. 2)}$$

In this equation, $\hat{y}$ represents simulated partial masks, $y$ are full masks, and $c$ is additional object class information.
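Purely by way of non-limiting illustration, a single optimization step for this stage-two objective might be sketched as follows in PyTorch-style code; the encoder/decoder interfaces, tensor shapes, and hyperparameters are hypothetical assumptions, not elements of the figures:

```python
# Minimal sketch of the stage-two loss of Eq. 2 (negated, so that
# minimizing it maximizes the ELBO). The decoder (parameters w1) is
# frozen; only the new encoder (parameters w3) receives gradients.
import torch
import torch.nn.functional as F

def stage_two_loss(encoder, frozen_decoder, partial_mask, full_mask,
                   class_id, lam=1.0):
    # Encoder predicts a factorial normal posterior q_w3(z | partial, c).
    mu, log_var = encoder(partial_mask, class_id)
    # Reparameterization trick: single sample from the approximate posterior.
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    # Frozen decoder predicts pixelwise Bernoulli logits of the FULL mask.
    recon_logits = frozen_decoder(z)
    recon = F.binary_cross_entropy_with_logits(
        recon_logits, full_mask.float(), reduction="sum")
    # Closed-form KL divergence to the factorial normal prior N(0, I).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + lam * kl
```

In such a sketch the freeze described above would correspond to, e.g., disabling gradients on the decoder (`for p in frozen_decoder.parameters(): p.requires_grad_(False)`) before constructing an optimizer over the encoder's parameters only.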
In at least one embodiment, only new encoder parameters $w_3$ are optimized and RGB image information is not used.[0091] In at least one embodiment, composition of new encoder 108 with frozen decoder 112 forms an amodal instance completion mapping, which can be expressed as $f(\hat{y}, c) = \bar{p}_{w_1}(\bar{q}_{w_3}(\hat{y}, c))$, where a deterministic function $\bar{q}_{w_3}(\hat{y}, c)$ is defined as a mean of $q_{w_3}(z|\hat{y}, c)$, and $\bar{p}_{w_1}(z)$ is defined as a binary output mask calculated from pixelwise Bernoulli probabilities $p_{w_1}(y|z)$ with threshold $t$. In at least one embodiment, a first term of Eq. 2, as seen above, is a reconstruction loss that guides encoder 108 to find an appropriate position in a low dimensional Gaussian manifold which is decoded to $y$. In at least one embodiment, a second term of Eq. 2 represents KL loss, that regularizes new approximate posterior $q_{w_3}(z|\hat{y}, c)$ to generate encodings that fall under prior distribution $p(z)$. In at least one embodiment, because of step one of training and keeping decoder 112 frozen, all such encodings $z$ map to complete masks.[0092] In at least one embodiment, to aid encoder 108 to more easily search latent space 110, an additional latent code distance loss is exploited. In at least one embodiment, encodings are pulled from complete and corresponding partial masks close to each other, since they both need to decode into the same full masks. Specifically, this loss term is minimized for paired $\hat{y}$ and $y$:

$$\mathcal{L}_{\mathrm{LatentCode}}(w_3) = \mathbb{E}_{\hat{z} \sim q_{w_3}(z|\hat{y},c),\; z \sim q_{w_2}(z|y,c)}\big[(\hat{z} - z)^2\big]$$

In at least one embodiment, training comprises approximating expectation using single samples from approximate posteriors. In at least one embodiment, adding this loss to an ELBO objective may increase performance.[0093] In at least one embodiment, a third stage of training comprises fine-tuning using partial-mask-only data. In at least one embodiment, amodal-vae 104 is fine-tuned by training its encoder 108 using various techniques associated with variational autoencoders, but using only partial masks from $\hat{\mathcal{Y}}$, masking out all non-visible pixels. In at least one embodiment, fine-tuning amodal-vae 104 in this way helps it to deal with complex realistic occlusions that may not occur during occlusion simulation in stage two. For example, in stage two training may be done using only single foreground instances to create simulated occlusions, but complex realistic occlusions may comprise more complex cases. In at least one embodiment, decoder 112 remains frozen during stage three. In at least one embodiment, for a partially visible mask $\hat{y}$, its visible pixels are defined as $\hat{y}_{\mathrm{vis}}$, and an ELBO is defined as:

$$\mathcal{L}_{\mathrm{FineTuning}}(w_3) = \mathbb{E}_{\hat{y},c \sim \hat{\mathcal{Y}}}\Big[\mathbb{E}_{z \sim q_{w_3}(z|\hat{y},c)}\big[\log p_{w_1}(\hat{y}_{\mathrm{vis}}|z)\big] - \lambda\, D_{\mathrm{KL}}\big(q_{w_3}(z|\hat{y},c)\,\|\,p(z)\big)\Big]$$

where, in at least one embodiment, consideration of reconstruction loss is limited to visible pixels.[0094] In at least one embodiment, in training stages two and three, a spatial transformer network is applied to output of amodal-vae 104 that learns to generate a resized image 116 of completed masks such that they can be pasted back into a scene. In at least one embodiment, amodal-vae 104 generates images of complete objects 114, including shape variations 118. In at least one embodiment, these are generated with indicators of probability, corresponding to a variation's likelihood of being an accurate representation of a completed shape.
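Purely by way of non-limiting illustration, drawing several such shape variations together with a confidence indicator might be sketched as follows; all identifiers, and the particular confidence heuristic, are hypothetical assumptions rather than elements of the figures:

```python
# Illustrative sketch: sample multiple plausible completions from the
# approximate posterior and attach a simple per-sample confidence score.
import torch

@torch.no_grad()
def sample_completions(encoder, frozen_decoder, partial_mask, class_id,
                       num_samples=3, threshold=0.5):
    mu, log_var = encoder(partial_mask, class_id)
    completions = []
    for _ in range(num_samples):
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        probs = torch.sigmoid(frozen_decoder(z))   # pixelwise Bernoulli probs
        mask = (probs > threshold).float()         # thresholded binary mask
        # A simple illustrative confidence indicator: mean probability of
        # pixels the model marked as occupied.
        if mask.sum() > 0:
            confidence = probs[mask.bool()].mean().item()
        else:
            confidence = 0.0
        completions.append((mask, confidence))
    return completions
```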
In at least one embodiment, these may also be resized by spatial transformation.[0095] In at least one embodiment, separate training stages one and two convey various advantages, in that when learning an amodal completion model in stage two, an approximate posterior can see different partial masks, which can look entirely different due to different simulated occlusions, but that nevertheless map to similar completed masks. Alternatively, in at least one embodiment, similar partial masks may correspond to very different completed masks. In at least one embodiment, while training on such data constitutes a difficult and ambiguous learning problem, that is addressed at least in part by said separate training stages. If a generative component, i.e. decoder 112, were also trained like this, it would result in a weaker model encoding less information in latent space. In at least one embodiment, it is therefore beneficial to first separately train said generative component in robust VAE fashion with full masks only, and then freeze it. In at least one embodiment, difficulty associated with learning a high quality generative component is separated from difficulty associated with learning to map many different partial masks to similar completed masks and vice versa.[0096] In at least one embodiment, a spatial transformer, to generate resized image 116, is also trained in stage two. In at least one embodiment, this permits first learning decoder 112 on full masks only and then separately learning said spatial transformer on top of a trained decoder 112, instead of training both simultaneously.[0097] In at least one embodiment, as noted, completed masks may be resized using a spatial transformer. In at least one embodiment, input and output of amodal-vae 104 are tightly cropped 2D instance masks, separately resized or squeezed to conform to fixed input and output dimensions. In at least one embodiment, completed output masks are therefore not similarly scaled with respect to partial input masks. In at least one embodiment, simple resizing of a completed mask is not sufficient to allow it to be pasted back into an original image. In at least one embodiment, this hurdle is overcome by learning an affine transformation that shifts and scales a completed output mask to correct for size discrepancy. In at least one embodiment, this resized image can then be pasted back into a full original image using resized image 116.[0098] In at least one embodiment, with an instance's partial mask $\hat{y}$ and completed mask $y$, generated by Amodal-VAE's decoder in a VAE's fixed output dimensions, a spatial transformation function $g_\theta(y, \hat{y}) \to y'$ is learned, such that transformed $y'$ is a completed mask of a same scale and at a same position as input mask $\hat{y}$. In at least one embodiment, transformation parameters are predicted as:

$$(t_x, t_y, s_x, s_y) = g_\theta(y, \hat{y}), \qquad A_\theta = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \end{bmatrix}$$

where $g_\theta$ is a neural network and $A_\theta$ is a 2D affine transformation matrix that is applied to each pixel in $y$ and used to do differentiable image sampling. In at least one embodiment, a transformation defined through $g_\theta$ and $A_\theta$ is end-to-end differentiable and can be trained by backpropagation together with amodal-vae 104. In at least one embodiment, said spatial transformer function, operating on output of amodal-vae 104, is trained during training stages two and three, whereas in training stage one training is based on complete masks only.[0099] FIG.
2 illustrates an example of a process for training one or more neural networks to generate an image of a complete object based on an image of a portion of an object, according to at least one embodiment.[0100] Although example process 200 is depicted as a sequence of operations, it will be appreciated that, in embodiments, operations depicted in FIG. 2 may be altered in various ways, and some operations may be omitted, reordered, or performed in parallel with other operations, except where an order is explicitly stated or logically implied, such as when input from one operation depends upon output of another operation.[0101] Operations depicted by FIG. 2 may be performed by any one or more of a variety of systems, including computing systems, circuits, and other devices depicted in various figures disclosed herein. In at least one embodiment, for example, operations described in relation to FIG. 2 are performed by a system comprising at least one processor and a memory comprising instructions executable by said at least one processor, such that execution of said instructions causes said system to at least perform said operations.[0102] At 202, said system trains a decoder based on full masks of complete objects. In at least one embodiment, full masks are images of complete objects, not occluded. In at least one embodiment, said full masks are obtained from images of complete objects. In at least one embodiment, pre-processing is done on an image of a complete object, for example to eliminate image information not related to said object, infer visible boundaries of a complete object, conform to an input format for a mask, and so forth. In at least one embodiment, images in a training set are analyzed to determine if they include occluded objects, and if so those images are excluded from use during this stage of training.[0103] In at least one embodiment, training of said decoder is done in conformance with various embodiments described herein, such as regarding stage one of a training process described herein in relation to FIG. 1. In at least one embodiment, said decoder corresponds to decoder 112 as depicted in FIG. 1. In at least one embodiment, said decoder is paired with an encoder, such as encoder 108 depicted in FIG. 1. In at least one embodiment, said encoder and decoder are components of one or more neural networks comprising aspects of a variational auto-encoder. In at least one embodiment, said encoder and decoder are components of an amodal autoencoder as described herein, such as amodal-vae 104 depicted in FIG. 1.[0104] At 204, said system freezes said decoder. In at least one embodiment, said freezing comprises halting changes to various weights or parameters of said decoder, while training of other portions of one or more neural networks associated with said decoder continues.[0105] At 206, said system generates a synthetic dataset of occluded objects, using pairs of objects in which one of each pair is occluded. In at least one embodiment, this is done using output of a decoder, such as decoder 112 depicted in FIG. 1. In at least one embodiment, images of complete objects, which may be described as complete masks, are selected from variational output of decoder 112. In at least one embodiment, two of such complete masks are combined into a single image and made to overlap based on some randomized process, thereby generating an image in which one object is fully visible and another object is at least partially occluded.
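Purely by way of non-limiting illustration, such randomized compositing of two complete masks might be sketched as follows; the function, its wrap-around placement, and its identifiers are hypothetical simplifications, not elements of the figures:

```python
# Illustrative sketch: simulate an occlusion by randomly positioning a
# foreground mask in front of an instance mask, producing a paired
# (simulated partial mask, full target mask) training sample.
import numpy as np

def make_training_pair(instance_mask, foreground_mask, rng):
    h, w = instance_mask.shape
    dy = rng.integers(-h // 2, h // 2)
    dx = rng.integers(-w // 2, w // 2)
    # Wrap-around placement via np.roll is a simplification for brevity.
    occluder = np.roll(foreground_mask, shift=(dy, dx), axis=(0, 1))
    # Keep only pixels of the instance not covered by the occluder.
    partial = instance_mask * (1 - occluder)
    return partial, instance_mask

rng = np.random.default_rng(0)
```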
In at least one embodiment, this image is then used to obtain an image of a portion of said occluded object, which may be described as an image of a partial object or as a partial mask.[0106] At 208, said system learns a new encoder based on data from a synthetic dataset generated in relation to operation 206. In at least one embodiment, said system trains said new encoder while its corresponding decoder, trained at 202, is frozen. In at least one embodiment, said system trains said new encoder according to stage two as described in relation to FIG. 1. In at least one embodiment, said "new" encoder is described as such because it is retrained or refined from its original parameters. In at least one embodiment, these parameters are learned during a first training stage, when trained along with said decoder, but adjusted during subsequent stages. In at least one embodiment, said "new" encoder is re-initialized to a starting state prior to commencing stage two of training.[0107] At 210, said system learns spatial transformations to apply to completed masks that are to be inferred from partial masks. In at least one embodiment, this is done by learning an affine transformation that shifts and scales a completed output mask to correct for size discrepancy, as described above in relation to FIG. 1. In at least one embodiment, spatial transformations are learned simultaneously with training of a new encoder.[0108] At 212, said system fine-tunes said encoder using additional partial masks. In at least one embodiment, said system trains said new encoder according to stage three as described in relation to FIG. 1.[0109] FIG. 3 illustrates examples 300 of training samples generated using output of a decoder portion of one or more neural networks, according to at least one embodiment. In at least one embodiment, as described above in relation to FIG. 1, a second stage of training an amodal-vae uses simulated occluded objects. In at least one embodiment, these are generated using output of a trained decoder of an amodal-vae, such as is described in relation to FIG. 1. In at least one embodiment, after training this decoder, points in a corresponding latent space (such as latent space 110 in FIG. 1) map to completed masks. In at least one embodiment, output of this decoder is therefore used to generate a variety of complete masks. In at least one embodiment, as depicted in FIG. 3, these completed masks can be used to generate an image in which one mask overlaps another. For example, in at least one embodiment, two complete masks have been positioned at random in an image to form a first generated training image 302, comprising complete mask 306 in a foreground position, and a partial mask 308 in a background position, where partial mask 308 was a complete mask until another mask was positioned over it. In at least one embodiment, random perturbations in positions of these masks can produce a variety of training images, such as depicted training image "N" 304.[0110] FIG. 4 illustrates examples 400 of images of complete objects generated by one or more neural networks based on images of partial objects, according to at least one embodiment. In at least one embodiment, one or more neural networks, such as amodal-vae 104 depicted in FIG. 1, may be trained as described herein. In at least one embodiment, an image 402 may be obtained and used as a basis of input to said one or more neural networks.
In at least one embodiment, image 402 is from an image editing program or other application that might benefit from capabilities related to object completion.[0111] In at least one embodiment, a partial mask, which may be referred to as an image of a partial object 404, is obtained from input image 402. In at least one embodiment, said partial mask comprises data indicating known outlines of an occluded object depicted in input image 402.[0112] In at least one embodiment, said partial masks are provided to an amodal-vae trained as described according to various embodiments herein. In at least one embodiment, an amodal-vae 104 as depicted in FIG. 1 is used. In at least one embodiment, said partial masks are scaled to a defined input size, and input to an amodal-vae which then infers one or more possible complete masks 406-410. In at least one embodiment, an amodal-vae outputs, with each generated complete mask 406-410, a probability value indicating how confident said amodal-vae is regarding accuracy of a respective one of output complete masks 406-410.[0113] FIG. 5 illustrates example uses of images of complete objects generated by one or more neural networks based on an image of a portion of an object, according to at least one embodiment. In at least one embodiment, an example 500 is associated with an image editing application. In at least one embodiment, an image editing application comprises computer software and/or hardware to facilitate alteration and refinement of images. In at least one embodiment, an image editing application relates to alterations of photographic images. In at least one embodiment, an image editing application relates to alterations of computer-generated images. In at least one embodiment, said computer-generated images comprise rendered scenes. In at least one embodiment, said computer-generated images comprise artistic images, generated by a user who manipulates various tools such as virtual pens, virtual paint brushes, and so forth. In at least one embodiment, an image editing application comprises tools to select objects in an image, including partial objects. In at least one embodiment, an image editing application comprises facilities to generate complete masks or partial masks. In at least one embodiment, said image editing application can convert partial masks to complete masks using an embodiment of one or more neural networks as described herein, such as amodal-vae 104 depicted in FIG. 1.[0114] In at least one embodiment, an original image 502 is processed by one or more neural networks, such as an embodiment of an amodal-vae 104, to produce a completed object 510. This completed object 510 is an image of a complete object that has been generated based on inferences obtained from an image of a partial object 512, from original image 502.[0115] In at least one embodiment, said image editing program generates completed object 510 and inserts it into an image in which completed object 510 is in front of, rather than behind, other objects. In at least one embodiment, this results in an image with swapped objects 504.[0116] In at least one embodiment, said image editing program generates completed object 510, resizes it, and inserts it at a new position. In at least one embodiment, other objects in original image 502 are also manipulated. In at least one embodiment, this results in an image with moved and scaled objects 506.[0117] In at least one embodiment, said image editing program generates completed object 510 and changes its orientation or pose.
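Purely by way of non-limiting illustration, applying a predicted 2D affine transform of the kind described above in relation to FIG. 1 (scales $s_x, s_y$ and shifts $t_x, t_y$) with differentiable sampling might be sketched as follows; the function and parameter names are hypothetical assumptions:

```python
# Illustrative sketch: shift and scale a completed mask via a 2D affine
# matrix and differentiable grid sampling, so the transform remains
# trainable end-to-end by backpropagation.
import torch
import torch.nn.functional as F

def paste_back(completed_mask, sx, sy, tx, ty):
    # completed_mask: (N, 1, H, W) tensor in the fixed output dimensions.
    n = completed_mask.shape[0]
    theta = torch.zeros(n, 2, 3)
    theta[:, 0, 0] = sx  # horizontal scale
    theta[:, 1, 1] = sy  # vertical scale
    theta[:, 0, 2] = tx  # horizontal shift
    theta[:, 1, 2] = ty  # vertical shift
    grid = F.affine_grid(theta, completed_mask.shape, align_corners=False)
    return F.grid_sample(completed_mask, grid, align_corners=False)
```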
In at least one embodiment, orientation refers to a facing of completed object 510 in two or three dimensions. In at least one embodiment, for three-dimensional changes of orientation, inferences to generate a complete mask are three-dimensional, in that some determination is made regarding an object's extent in three dimensions, rather than only in two dimensions. In at least one embodiment, examples of orientation changes 508 include rotating or tilting an object along an axis. In at least one embodiment, pose refers to an object's configuration or way of standing. For example, in at least one embodiment, a completed object 510 is of a human subject, and a change to pose comprises alterations to said subject's arms, legs, hands, or feet.[0118] FIG. 6 illustrates an example of a process for training one or more neural networks to generate images of complete objects based on images of portions of objects, according to at least one embodiment.[0119] Although example process 600 is depicted as a sequence of operations, it will be appreciated that, in embodiments, operations depicted in FIG. 6 may be altered in various ways, and some operations may be omitted, reordered, or performed in parallel with other operations, except where an order is explicitly stated or logically implied, such as when input from one operation depends upon output of another operation.[0120] Operations depicted by FIG. 6 may be performed by any one or more of a variety of systems, including computing systems, circuits, and other devices depicted in various figures disclosed herein. In at least one embodiment, for example, operations described in relation to FIG. 6 are performed by a system comprising at least one processor and a memory comprising instructions executable by said at least one processor, such that execution of said instructions causes said system to at least perform said operations.[0121] At 602, in at least one embodiment, a decoder is trained using images of complete objects. In at least one embodiment, this is performed according to a first training stage as described herein in relation to FIG. 1. In at least one embodiment, said training data is generated according to embodiments described in relation to FIG. 2.[0122] At 604, in at least one embodiment, a decoder trained at 602 is frozen. In at least one embodiment, frozen refers to non-adjustment of parameters, weights, or other configuration of said decoder based on training of one or more neural networks comprising said decoder, while training of said one or more neural networks continues.[0123] At 606, in at least one embodiment, training data is generated using output from a decoder frozen at 604. In at least one embodiment, said training data is generated according to embodiments described in relation to stage two of training as described above in relation to FIG. 1. In at least one embodiment, said training data is generated according to embodiments described in relation to FIGS. 2-3.[0124] At 608, in at least one embodiment, an encoder is trained using training data generated at 606. In at least one embodiment, said encoder is trained to generate images of complete objects based on images of partial objects. In at least one embodiment, said training data is generated according to embodiments described in relation to stage two of training as described above in relation to FIG. 1. In at least one embodiment, said training data is generated according to embodiments described in relation to FIGS. 2-3.INFERENCE AND TRAINING LOGIC[0125] FIG.
7A illustrates inference and/or training logic 715 used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided below in conjunction with FIGS. 7A and/or 7B.[0126] In at least one embodiment, inference and/or training logic 715 may include, without limitation, code and/or data storage 701 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 715 may include, or be coupled to, code and/or data storage 701 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 701 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 701 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.[0127] In at least one embodiment, any portion of code and/or data storage 701 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 701 may be cache memory, dynamic randomly addressable memory ("DRAM"), static randomly addressable memory ("SRAM"), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 701 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0128] In at least one embodiment, inference and/or training logic 715 may include, without limitation, a code and/or data storage 705 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 705 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
In at least one embodiment, training logic 715 may include, or be coupled to, code and/or data storage 705 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)).[0129] In at least one embodiment, code, such as graph code, causes loading of weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, any portion of code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 705 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 705 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 705 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0130] In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be separate storage structures. In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be a combined storage structure. In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be partially combined and partially separate. In at least one embodiment, any portion of code and/or data storage 701 and code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.[0131] In at least one embodiment, inference and/or training logic 715 may include, without limitation, one or more arithmetic logic unit(s) ("ALU(s)") 710, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 720 that are functions of input/output and/or weight parameter data stored in code and/or data storage 701 and/or code and/or data storage 705.
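Purely as a non-limiting software analogue of the data path just described, and not as a description of any hardware embodiment, the relationship between stored weights, arithmetic operations, and stored activations might be sketched as follows; all identifiers are hypothetical:

```python
# Illustrative software analogue: weights read from a data store are
# combined with input/output data by arithmetic operations, and the
# resulting activations are placed in an activation store.
import numpy as np

weight_storage = {"layer0_w": np.random.randn(4, 3),
                  "layer0_b": np.zeros(4)}
activation_storage = {}

x = np.random.randn(3)  # input/output data for one layer
# Linear-algebraic math analogous to the ALU operations described above,
# here a matrix-vector product, bias add, and ReLU.
activation_storage["layer0"] = np.maximum(
    weight_storage["layer0_w"] @ x + weight_storage["layer0_b"], 0.0)
```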
In at least one embodiment, activations stored in activation storage 720 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 710 in response to performing instructions or other code, wherein weight values stored in code and/or data storage 705 and/or code and/or data storage 701 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 705 or code and/or data storage 701 or another storage on or off-chip.[0132] In at least one embodiment, ALU(s) 710 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 710 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a coprocessor). In at least one embodiment, ALUs 710 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, code and/or data storage 701, code and/or data storage 705, and activation storage 720 may share a processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 720 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits.[0133] In at least one embodiment, activation storage 720 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage 720 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, a choice of whether activation storage 720 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0134] In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7A may be used in conjunction with an application-specific integrated circuit ("ASIC"), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7A may be used in conjunction with central processing unit ("CPU") hardware, graphics processing unit ("GPU") hardware or other hardware, such as field programmable gate arrays ("FPGAs").[0135] FIG. 7B illustrates inference and/or training logic 715, according to at least one embodiment.
In at least one embodiment, inference and/or training logic 715 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 715 includes, without limitation, code and/or data storage 701 and code and/or data storage 705, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 7B, each of code and/or data storage 701 and code and/or data storage 705 is associated with a dedicated computational resource, such as computational hardware 702 and computational hardware 706, respectively. In at least one embodiment, each of computational hardware 702 and computational hardware 706 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 701 and code and/or data storage 705, respectively, a result of which is stored in activation storage 720.[0136] In at least one embodiment, each of code and/or data storage 701 and 705 and corresponding computational hardware 702 and 706, respectively, correspond to different layers of a neural network, such that resulting activation from one storage/computational pair 701/702 of code and/or data storage 701 and computational hardware 702 is provided as an input to a next storage/computational pair 705/706 of code and/or data storage 705 and computational hardware 706, in order to mirror a conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 701/702 and 705/706 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 701/702 and 705/706 may be included in inference and/or training logic 715.NEURAL NETWORK TRAINING AND DEPLOYMENT[0137] FIG. 8 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 806 is trained using a training dataset 802. In at least one embodiment, training framework 804 is a PyTorch framework, whereas in other embodiments, training framework 804 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 804 trains an untrained neural network 806 and enables it to be trained using processing resources described herein to generate a trained neural network 808.
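Purely by way of non-limiting illustration, a training framework of the kind described above might drive supervised training with stochastic gradient descent roughly as follows; the model, data, and hyperparameters are illustrative assumptions, not elements of FIG. 8:

```python
# Minimal sketch of a supervised training loop: errors are propagated
# back through the network and a framework adjusts the weights that
# control the network, using SGD as the adjustment algorithm.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 16)           # stand-in training dataset
targets = torch.randint(0, 2, (64,))   # paired desired outputs

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()    # propagate errors back through the network
    optimizer.step()   # adjust weights to refine the network's output
```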
In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.[0138] In at least one embodiment, untrained neural network 806 is trained using supervised learning, wherein training dataset 802 includes an input paired with a desired output for an input, or where training dataset 802 includes input having a known output and an output of neural network 806 is manually graded. In at least one embodiment, untrained neural network 806 is trained in a supervised manner and processes inputs from training dataset 802 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 806. In at least one embodiment, training framework 804 adjusts weights that control untrained neural network 806. In at least one embodiment, training framework 804 includes tools to monitor how well untrained neural network 806 is converging towards a model, such as trained neural network 808, suitable to generating correct answers, such as in result 814, based on input data such as a new dataset 812. In at least one embodiment, training framework 804 trains untrained neural network 806 repeatedly while adjusting weights to refine an output of untrained neural network 806 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy. In at least one embodiment, trained neural network 808 can then be deployed to implement any number of machine learning operations.[0139] In at least one embodiment, untrained neural network 806 is trained using unsupervised learning, wherein untrained neural network 806 attempts to train itself using unlabeled data. In at least one embodiment, unsupervised learning training dataset 802 will include input data without any associated output data or "ground truth" data. In at least one embodiment, untrained neural network 806 can learn groupings within training dataset 802 and can determine how individual inputs are related to training dataset 802. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 808 capable of performing operations useful in reducing dimensionality of new dataset 812. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new dataset 812 that deviate from normal patterns of new dataset 812.[0140] In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 802 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 804 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 808 to adapt to new dataset 812 without forgetting knowledge instilled within trained neural network 808 during initial training.DATA CENTER[0141] FIG. 9 illustrates an example data center 900, in which at least one embodiment may be used.
In at least one embodiment, data center 900 includes a data center infrastructure layer 910, a framework layer 920, a software layer 930 and an application layer 940.[0142] In at least one embodiment, as shown in FIG. 9, data center infrastructure layer 910 may include a resource orchestrator 912, grouped computing resources 914, and node computing resources ("node C.R.s") 916(1)-916(N), where "N" represents a positive integer (which may be a different integer "N" than used in other figures). In at least one embodiment, node C.R.s 916(1)-916(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 918(1)-918(N) (e.g., dynamic read-only memory, solid state storage or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 916(1)-916(N) may be a server having one or more of above-mentioned computing resources.[0143] In at least one embodiment, grouped computing resources 914 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 914 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.[0144] In at least one embodiment, resource orchestrator 912 may configure or otherwise control one or more node C.R.s 916(1)-916(N) and/or grouped computing resources 914. In at least one embodiment, resource orchestrator 912 may include a software design infrastructure ("SDI") management entity for data center 900. In at least one embodiment, resource orchestrator 912 may include hardware, software or some combination thereof.[0145] In at least one embodiment, as shown in FIG. 9, framework layer 920 includes a job scheduler 922, a configuration manager 924, a resource manager 926 and a distributed file system 928. In at least one embodiment, framework layer 920 may include a framework to support software 932 of software layer 930 and/or one or more application(s) 942 of application layer 940. In at least one embodiment, software 932 or application(s) 942 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 920 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter "Spark") that may utilize distributed file system 928 for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler 922 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 900.
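Purely by way of non-limiting illustration, a workload of the kind such a Spark driver might schedule over distributed file system 928 could look roughly as follows; the paths and column names are hypothetical assumptions:

```python
# Illustrative PySpark sketch of a large-scale data processing job of the
# sort a Spark driver could schedule across grouped computing resources.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-job").getOrCreate()
# Read a dataset from a distributed file system (path is hypothetical).
df = spark.read.parquet("hdfs:///datasets/events")
# A simple aggregation, distributed across the cluster by the framework.
counts = df.groupBy("event_type").count()
counts.write.mode("overwrite").parquet("hdfs:///datasets/event_counts")
spark.stop()
```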
In at least one embodiment, configuration manager 924 may be capable of configuring different layers such as software layer 930 and framework layer 920 including Spark and distributed file system 928 for supporting large-scale data processing. In at least one embodiment, resource manager 926 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 928 and job scheduler 922. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources 914 at data center infrastructure layer 910. In at least one embodiment, resource manager 926 may coordinate with resource orchestrator 912 to manage these mapped or allocated computing resources.[0146] In at least one embodiment, software 932 included in software layer 930 may include software used by at least portions of node C.R.s 916(1)-916(N), grouped computing resources 914, and/or distributed file system 928 of framework layer 920. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.[0147] In at least one embodiment, application(s) 942 included in application layer 940 may include one or more types of applications used by at least portions of node C.R.s 916(1)-916(N), grouped computing resources 914, and/or distributed file system 928 of framework layer 920. In at least one embodiment, one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute application, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.[0148] In at least one embodiment, any of configuration manager 924, resource manager 926, and resource orchestrator 912 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 900 from making possibly bad configuration decisions and possibly avoiding underutilized and/or poor performing portions of a data center.[0149] In at least one embodiment, data center 900 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 900. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 900 by using weight parameters calculated through one or more training techniques described herein.[0150] In at least one embodiment, data center 900 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources.
Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.

[0151] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 9 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0152] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

AUTONOMOUS VEHICLE

[0153] FIG. 10A illustrates an example of an autonomous vehicle 1000, according to at least one embodiment. In at least one embodiment, autonomous vehicle 1000 (alternatively referred to herein as "vehicle 1000") may be, without limitation, a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, vehicle 1000 may be a semi-tractor-trailer truck used for hauling cargo. In at least one embodiment, vehicle 1000 may be an airplane, robotic vehicle, or other kind of vehicle.

[0154] Autonomous vehicles may be described in terms of automation levels, defined by the National Highway Traffic Safety Administration ("NHTSA"), a division of the US Department of Transportation, and the Society of Automotive Engineers ("SAE") "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" (e.g., Standard No. J3016-201806, published on June 15, 2018, Standard No. J3016-201609, published on September 30, 2016, and previous and future versions of this standard). In at least one embodiment, vehicle 1000 may be capable of functionality in accordance with one or more of Level 1 through Level 5 of autonomous driving levels. For example, in at least one embodiment, vehicle 1000 may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on embodiment.

[0155] In at least one embodiment, vehicle 1000 may include, without limitation, components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. In at least one embodiment, vehicle 1000 may include, without limitation, a propulsion system 1050, such as an internal combustion engine, hybrid electric power plant, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system 1050 may be connected to a drive train of vehicle 1000, which may include, without limitation, a transmission, to enable propulsion of vehicle 1000.
In at least one embodiment, propulsion system 1050 may be controlled in response to receiving signals from a throttle/accelerator(s) 1052.

[0156] In at least one embodiment, a steering system 1054, which may include, without limitation, a steering wheel, is used to steer vehicle 1000 (e.g., along a desired path or route) when propulsion system 1050 is operating (e.g., when vehicle 1000 is in motion). In at least one embodiment, steering system 1054 may receive signals from steering actuator(s) 1056. In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality. In at least one embodiment, a brake sensor system 1046 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 1048 and/or brake sensors.

[0157] In at least one embodiment, controller(s) 1036, which may include, without limitation, one or more system on chips ("SoCs") (not shown in FIG. 10A) and/or graphics processing unit(s) ("GPU(s)"), provide signals (e.g., representative of commands) to one or more components and/or systems of vehicle 1000. For instance, in at least one embodiment, controller(s) 1036 may send signals to operate vehicle brakes via brake actuator(s) 1048, to operate steering system 1054 via steering actuator(s) 1056, and to operate propulsion system 1050 via throttle/accelerator(s) 1052. In at least one embodiment, controller(s) 1036 may include one or more onboard (e.g., integrated) computing devices that process sensor signals and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving vehicle 1000. In at least one embodiment, controller(s) 1036 may include a first controller for autonomous driving functions, a second controller for functional safety functions, a third controller for artificial intelligence functionality (e.g., computer vision), a fourth controller for infotainment functionality, a fifth controller for redundancy in emergency conditions, and/or other controllers. In at least one embodiment, a single controller may handle two or more of the above functionalities, two or more controllers may handle a single functionality, and/or any combination thereof.

[0158] In at least one embodiment, controller(s) 1036 provide signals for controlling one or more components and/or systems of vehicle 1000 in response to sensor data received from one or more sensors (e.g., sensor inputs). In at least one embodiment, sensor data may be received from, for example and without limitation, global navigation satellite systems ("GNSS") sensor(s) 1058 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1060, ultrasonic sensor(s) 1062, LIDAR sensor(s) 1064, inertial measurement unit ("IMU") sensor(s) 1066 (e.g., accelerometer(s), gyroscope(s), a magnetic compass or magnetic compasses, magnetometer(s), etc.), microphone(s) 1096, stereo camera(s) 1068, wide-view camera(s) 1070 (e.g., fisheye cameras), infrared camera(s) 1072, surround camera(s) 1074 (e.g., 360 degree cameras), long-range cameras (not shown in FIG. 10A), mid-range camera(s) (not shown in FIG. 10A),
speed sensor(s) 1044 (e.g., for measuring speed of vehicle 1000), vibration sensor(s) 1042, steering sensor(s) 1040, brake sensor(s) (e.g., as part of brake sensor system 1046), and/or other sensor types.

[0159] In at least one embodiment, one or more of controller(s) 1036 may receive inputs (e.g., represented by input data) from an instrument cluster 1032 of vehicle 1000 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface ("HMI") display 1034, an audible annunciator, a loudspeaker, and/or via other components of vehicle 1000. In at least one embodiment, outputs may include information such as vehicle velocity, speed, time, map data (e.g., a High Definition map (not shown in FIG. 10A)), location data (e.g., the location of vehicle 1000, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by controller(s) 1036, etc. For example, in at least one embodiment, HMI display 1034 may display information about presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers the vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.).

[0160] In at least one embodiment, vehicle 1000 further includes a network interface 1024 which may use wireless antenna(s) 1026 and/or modem(s) to communicate over one or more networks. For example, in at least one embodiment, network interface 1024 may be capable of communication over Long-Term Evolution ("LTE"), Wideband Code Division Multiple Access ("WCDMA"), Universal Mobile Telecommunications System ("UMTS"), Global System for Mobile communication ("GSM"), IMT-CDMA Multi-Carrier ("CDMA2000") networks, etc. In at least one embodiment, wireless antenna(s) 1026 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth Low Energy ("LE"), Z-Wave, ZigBee, etc., and/or low power wide-area network(s) ("LPWANs"), such as LoRaWAN, SigFox, etc. protocols.

[0161] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10A for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0162] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
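The training arrangement of paragraphs [0152] and [0162] can be sketched as follows; this is a hedged illustration only, assuming PyTorch, with the layer sizes, loss, and use of random codes as stand-ins chosen for brevity rather than taken from this disclosure. The point is only the data flow: output of a decoder is used to generate training data against which an encoder is trained:

```python
# Illustrative sketch: decoder output generates training data for an encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
decoder = nn.Sequential(nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)  # encoder is trained

partial_images = torch.rand(8, 1, 64, 64)  # placeholder images of object portions

with torch.no_grad():
    # Decoder output (here driven by random codes) generates the training targets.
    generated_targets = decoder(torch.randn(8, 128))

for _ in range(10):
    optimizer.zero_grad()
    codes = encoder(partial_images)
    reconstructions = decoder(codes)
    loss = nn.functional.mse_loss(reconstructions, generated_targets)
    loss.backward()
    optimizer.step()  # only the encoder's weight parameters are updated
```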
[0163] FIG. 10B illustrates an example of camera locations and fields of view for autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, cameras and respective fields of view are one example embodiment and are not intended to be limiting. For instance, in at least one embodiment, additional and/or alternative cameras may be included and/or cameras may be located at different locations on vehicle 1000.

[0164] In at least one embodiment, camera types for cameras may include, but are not limited to, digital cameras that may be adapted for use with components and/or systems of vehicle 1000. In at least one embodiment, camera(s) may operate at automotive safety integrity level ("ASIL") B and/or at another ASIL. In at least one embodiment, camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on embodiment. In at least one embodiment, cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In at least one embodiment, a color filter array may include a red clear clear clear ("RCCC") color filter array, a red clear clear blue ("RCCB") color filter array, a red blue green clear ("RBGC") color filter array, a Foveon X3 color filter array, a Bayer sensors ("RGGB") color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity.

[0165] In at least one embodiment, one or more of camera(s) may be used to perform advanced driver assistance systems ("ADAS") functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a Multi-Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control. In at least one embodiment, one or more of camera(s) (e.g., all cameras) may record and provide image data (e.g., video) simultaneously.

[0166] In at least one embodiment, one or more cameras may be mounted in a mounting assembly, such as a custom designed (three-dimensional ("3D") printed) assembly, in order to cut out stray light and reflections from within vehicle 1000 (e.g., reflections from dashboard reflected in windshield mirrors) which may interfere with camera image data capture abilities. With reference to wing-mirror mounting assemblies, in at least one embodiment, wing-mirror assemblies may be custom 3D printed so that a camera mounting plate matches a shape of a wing-mirror. In at least one embodiment, camera(s) may be integrated into wing-mirrors. In at least one embodiment, for side-view cameras, camera(s) may also be integrated within four pillars at each corner of a cabin.

[0167] In at least one embodiment, cameras with a field of view that include portions of an environment in front of vehicle 1000 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well as aid in, with help of one or more of controller(s) 1036 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining preferred vehicle paths. In at least one embodiment, front-facing cameras may be used to perform many similar ADAS functions as LIDAR, including, without limitation, emergency braking, pedestrian detection, and collision avoidance.
In at least one embodiment, front-facing cameras may also be used for ADAS functions and systems including, without limitation, Lane Departure Warnings ("LDW"), Autonomous Cruise Control ("ACC"), and/or other functions such as traffic sign recognition.

[0168] In at least one embodiment, a variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a CMOS ("complementary metal oxide semiconductor") color imager. In at least one embodiment, a wide-view camera 1070 may be used to perceive objects coming into view from a periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera 1070 is illustrated in FIG. 10B, in other embodiments, there may be any number (including zero) of wide-view cameras on vehicle 1000. In at least one embodiment, any number of long-range camera(s) 1098 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. In at least one embodiment, long-range camera(s) 1098 may also be used for object detection and classification, as well as basic object tracking.

[0169] In at least one embodiment, any number of stereo camera(s) 1068 may also be included in a front-facing configuration. In at least one embodiment, one or more of stereo camera(s) 1068 may include an integrated control unit comprising a scalable processing unit, which may provide programmable logic ("FPGA") and a multi-core micro-processor with an integrated Controller Area Network ("CAN") or Ethernet interface on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of an environment of vehicle 1000, including a distance estimate for all points in an image. In at least one embodiment, one or more of stereo camera(s) 1068 may include, without limitation, compact stereo vision sensor(s) that may include, without limitation, two camera lenses (one each on left and right) and an image processing chip that may measure distance from vehicle 1000 to a target object and use generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo camera(s) 1068 may be used in addition to, or alternatively from, those described herein.

[0170] In at least one embodiment, cameras with a field of view that include portions of an environment to sides of vehicle 1000 (e.g., side-view cameras) may be used for surround view, providing information used to create and update an occupancy grid, as well as to generate side impact collision warnings. For example, in at least one embodiment, surround camera(s) 1074 (e.g., four surround cameras as illustrated in FIG. 10B) could be positioned on vehicle 1000. In at least one embodiment, surround camera(s) 1074 may include, without limitation, any number and combination of wide-view cameras, fisheye camera(s), 360 degree camera(s), and/or similar cameras. For instance, in at least one embodiment, four fisheye cameras may be positioned on a front, a rear, and sides of vehicle 1000.
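The per-pixel distance estimation that paragraph [0169] attributes to stereo camera(s) 1068 (and the semi-global matching mentioned later, in paragraph [0198]) can be sketched in software as follows; this assumes OpenCV, and the image sizes, focal length, and baseline are hypothetical values chosen only for illustration:

```python
# Hedged sketch of stereo depth: disparity via semi-global block matching,
# then depth = focal_length_px * baseline_m / disparity_px.
import cv2
import numpy as np

# Stand-ins for a rectified left/right frame pair from a stereo camera.
left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

FOCAL_PX = 700.0   # hypothetical focal length in pixels
BASELINE_M = 0.12  # hypothetical spacing between the two lenses

with np.errstate(divide="ignore", invalid="ignore"):
    depth_m = FOCAL_PX * BASELINE_M / disparity  # distance estimate per pixel
```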
In at least one embodiment, vehicle 1000 may use three surround camera(s) 1074 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround-view camera.

[0171] In at least one embodiment, cameras with a field of view that include portions of an environment behind vehicle 1000 (e.g., rear-view cameras) may be used for parking assistance, surround view, rear collision warnings, and creating and updating an occupancy grid. In at least one embodiment, a wide variety of cameras may be used including, but not limited to, cameras that are also suitable as front-facing camera(s) (e.g., long-range cameras 1098 and/or mid-range camera(s) 1076, stereo camera(s) 1068, infrared camera(s) 1072, etc.), as described herein.

[0172] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0173] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0174] FIG. 10C is a block diagram illustrating an example system architecture for autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, each of the components, features, and systems of vehicle 1000 in FIG. 10C is illustrated as being connected via a bus 1002. In at least one embodiment, bus 1002 may include, without limitation, a CAN data interface (alternatively referred to herein as a "CAN bus"). In at least one embodiment, a CAN may be a network inside vehicle 1000 used to aid in control of various features and functionality of vehicle 1000, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. In at least one embodiment, bus 1002 may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). In at least one embodiment, bus 1002 may be read to find steering wheel angle, ground speed, engine revolutions per minute ("RPMs"), button positions, and/or other vehicle status indicators. In at least one embodiment, bus 1002 may be a CAN bus that is ASIL B compliant.

[0175] In at least one embodiment, in addition to, or alternatively from, CAN, FlexRay and/or Ethernet protocols may be used. In at least one embodiment, there may be any number of busses forming bus 1002, which may include, without limitation, zero or more CAN busses, zero or more FlexRay busses, zero or more Ethernet busses, and/or zero or more other types of busses using different protocols.
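Reading vehicle status indicators off a CAN bus, as paragraph [0174] describes for bus 1002, might look like the following sketch; it assumes the python-can package and a Linux SocketCAN interface named "can0", and the CAN IDs and scaling factors are hypothetical, not taken from this disclosure:

```python
# Hedged sketch of reading status indicators (hypothetical IDs and scaling).
import can

STEERING_ANGLE_ID = 0x025  # hypothetical CAN ID for steering wheel angle
GROUND_SPEED_ID = 0x0B4    # hypothetical CAN ID for ground speed

with can.interface.Bus(channel="can0", interface="socketcan") as bus:
    for _ in range(10):
        msg = bus.recv(timeout=1.0)  # each frame carries its node's unique CAN ID
        if msg is None:
            continue
        if msg.arbitration_id == STEERING_ANGLE_ID:
            angle = int.from_bytes(msg.data[:2], "big", signed=True) * 0.1
            print(f"steering wheel angle: {angle:.1f} deg")
        elif msg.arbitration_id == GROUND_SPEED_ID:
            speed = int.from_bytes(msg.data[:2], "big") * 0.01
            print(f"ground speed: {speed:.2f} m/s")
```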
In at least one embodiment, two or more busses may be used to perform different functions, and/or may be used for redundancy. For example, a first bus may be used for collision avoidance functionality and a second bus may be used for actuation control. In at least one embodiment, each bus of bus 1002 may communicate with any of the components of vehicle 1000, and two or more busses of bus 1002 may communicate with corresponding components. In at least one embodiment, each of any number of system(s) on chip(s) ("SoC(s)") 1004 (such as SoC 1004(A) and SoC 1004(B)), each of controller(s) 1036, and/or each computer within the vehicle may have access to the same input data (e.g., inputs from sensors of vehicle 1000), and may be connected to a common bus, such as a CAN bus.

[0176] In at least one embodiment, vehicle 1000 may include one or more controller(s) 1036, such as those described herein with respect to FIG. 10A. In at least one embodiment, controller(s) 1036 may be used for a variety of functions. In at least one embodiment, controller(s) 1036 may be coupled to any of various other components and systems of vehicle 1000, and may be used for control of vehicle 1000, artificial intelligence of vehicle 1000, infotainment for vehicle 1000, and/or other functions.

[0177] In at least one embodiment, vehicle 1000 may include any number of SoCs 1004. In at least one embodiment, each of SoCs 1004 may include, without limitation, central processing units ("CPU(s)") 1006, graphics processing units ("GPU(s)") 1008, processor(s) 1010, cache(s) 1012, accelerator(s) 1014, data store(s) 1016, and/or other components and features not illustrated. In at least one embodiment, SoC(s) 1004 may be used to control vehicle 1000 in a variety of platforms and systems. For example, in at least one embodiment, SoC(s) 1004 may be combined in a system (e.g., a system of vehicle 1000) with a High Definition ("HD") map 1022 which may obtain map refreshes and/or updates via network interface 1024 from one or more servers (not shown in FIG. 10C).

[0178] In at least one embodiment, CPU(s) 1006 may include a CPU cluster or CPU complex (alternatively referred to herein as a "CCPLEX"). In at least one embodiment, CPU(s) 1006 may include multiple cores and/or level two ("L2") caches. For instance, in at least one embodiment, CPU(s) 1006 may include eight cores in a coherent multi-processor configuration. In at least one embodiment, CPU(s) 1006 may include four dual-core clusters where each cluster has a dedicated L2 cache (e.g., a 2 megabyte (MB) L2 cache). In at least one embodiment, CPU(s) 1006 (e.g., CCPLEX) may be configured to support simultaneous cluster operations enabling any combination of clusters of CPU(s) 1006 to be active at any given time.

[0179] In at least one embodiment, one or more of CPU(s) 1006 may implement power management capabilities that include, without limitation, one or more of the following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when such core is not actively executing instructions due to execution of Wait for Interrupt ("WFI")/Wait for Event ("WFE") instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated.
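A toy sketch of this kind of policy, together with the wakeup-time-aware state selection described immediately below, follows; the states, power figures, and wakeup times are hypothetical, chosen only to show the selection rule (enter the lowest-power allowed state whose expected wakeup time fits the anticipated idle window):

```python
# Toy sketch of power-state selection (hypothetical states and numbers).
from dataclasses import dataclass

@dataclass
class PowerState:
    name: str
    power_mw: float   # power drawn while in this state
    wakeup_us: float  # expected wakeup time

ALLOWED_STATES = [
    PowerState("active", 1000.0, 0.0),
    PowerState("clock-gated", 300.0, 5.0),
    PowerState("power-gated", 10.0, 150.0),
]

def best_power_state(expected_idle_us: float) -> PowerState:
    # Deepest (lowest-power) state that can still wake within the idle window.
    eligible = [s for s in ALLOWED_STATES if s.wakeup_us <= expected_idle_us]
    return min(eligible, key=lambda s: s.power_mw)

print(best_power_state(expected_idle_us=20.0).name)  # -> "clock-gated"
```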
In at least one embodiment, CPU(s) 1006 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and hardware/microcode determines the best power state to enter for a core, a cluster, and the CCPLEX. In at least one embodiment, processing cores may support simplified power state entry sequences in software with work offloaded to microcode.

[0180] In at least one embodiment, GPU(s) 1008 may include an integrated GPU (alternatively referred to herein as an "iGPU"). In at least one embodiment, GPU(s) 1008 may be programmable and may be efficient for parallel workloads. In at least one embodiment, GPU(s) 1008 may use an enhanced tensor instruction set. In at least one embodiment, GPU(s) 1008 may include one or more streaming microprocessors, where each streaming microprocessor may include a level one ("L1") cache (e.g., an L1 cache with at least 96 KB storage capacity), and two or more streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity). In at least one embodiment, GPU(s) 1008 may include at least eight streaming microprocessors. In at least one embodiment, GPU(s) 1008 may use compute application programming interface(s) (API(s)). In at least one embodiment, GPU(s) 1008 may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA's CUDA model).

[0181] In at least one embodiment, one or more of GPU(s) 1008 may be power-optimized for best performance in automotive and embedded use cases. For example, in at least one embodiment, GPU(s) 1008 could be fabricated on Fin field-effect transistor ("FinFET") circuitry. In at least one embodiment, each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 FP32 cores and 32 FP64 cores could be partitioned into four processing blocks. In at least one embodiment, each processing block could be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA Tensor cores for deep learning matrix arithmetic, a level zero ("L0") instruction cache, a warp scheduler, a dispatch unit, and/or a 64 KB register file. In at least one embodiment, streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. In at least one embodiment, streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads. In at least one embodiment, streaming microprocessors may include a combined L1 data cache and shared memory unit in order to improve performance while simplifying programming.

[0182] In at least one embodiment, one or more of GPU(s) 1008 may include a high bandwidth memory ("HBM") and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth. In at least one embodiment, in addition to, or alternatively from, HBM memory, a synchronous graphics random-access memory ("SGRAM") may be used, such as a graphics double data rate type five synchronous random-access memory ("GDDR5").

[0183] In at least one embodiment, GPU(s) 1008 may include unified memory technology. In at least one embodiment, address translation services ("ATS") support may be used to allow GPU(s) 1008 to access CPU(s) 1006 page tables directly.
In at least one embodiment, when a memory management unit ("MMU") of a GPU of GPU(s) 1008 experiences a miss, an address translation request may be transmitted to CPU(s) 1006. In response, a CPU of CPU(s) 1006 may look in its page tables for a virtual-to-physical mapping for the address and transmit the translation back to GPU(s) 1008, in at least one embodiment. In at least one embodiment, unified memory technology may allow a single unified virtual address space for memory of both CPU(s) 1006 and GPU(s) 1008, thereby simplifying GPU(s) 1008 programming and porting of applications to GPU(s) 1008.

[0184] In at least one embodiment, GPU(s) 1008 may include any number of access counters that may keep track of frequency of access of GPU(s) 1008 to memory of other processors. In at least one embodiment, access counter(s) may help ensure that memory pages are moved to physical memory of a processor that is accessing pages most frequently, thereby improving efficiency for memory ranges shared between processors.

[0185] In at least one embodiment, one or more of SoC(s) 1004 may include any number of cache(s) 1012, including those described herein. For example, in at least one embodiment, cache(s) 1012 could include a level three ("L3") cache that is available to both CPU(s) 1006 and GPU(s) 1008 (e.g., that is connected to CPU(s) 1006 and GPU(s) 1008). In at least one embodiment, cache(s) 1012 may include a write-back cache that may keep track of states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.). In at least one embodiment, an L3 cache may include 4 MB of memory or more, depending on embodiment, although smaller cache sizes may be used.

[0186] In at least one embodiment, one or more of SoC(s) 1004 may include one or more accelerator(s) 1014 (e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, SoC(s) 1004 may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4 MB of SRAM) may enable a hardware acceleration cluster to accelerate neural networks and other calculations. In at least one embodiment, a hardware acceleration cluster may be used to complement GPU(s) 1008 and to off-load some of the tasks of GPU(s) 1008 (e.g., to free up more cycles of GPU(s) 1008 for performing other tasks). In at least one embodiment, accelerator(s) 1014 could be used for targeted workloads (e.g., perception, convolutional neural networks ("CNNs"), recurrent neural networks ("RNNs"), etc.) that are stable enough to be amenable to acceleration. In at least one embodiment, a CNN may include a region-based or regional convolutional neural network ("RCNN") and Fast RCNN (e.g., as used for object detection) or other type of CNN.

[0187] In at least one embodiment, accelerator(s) 1014 (e.g., hardware acceleration cluster) may include one or more deep learning accelerator ("DLA"). In at least one embodiment, DLA(s) may include, without limitation, one or more Tensor processing units ("TPUs") that may be configured to provide an additional ten trillion operations per second for deep learning applications and inferencing. In at least one embodiment, TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.).
In at least one embodiment, DLA(s) may further be optimized for a specific set of neural network types and floating point operations, as well as inferencing. In at least one embodiment, the design of DLA(s) may provide more performance per millimeter than a typical general-purpose GPU, and typically vastly exceeds the performance of a CPU. In at least one embodiment, TPU(s) may perform several functions, including a single-instance convolution function, supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.

[0188] In at least one embodiment, DLA(s) may perform any function of GPU(s) 1008, and by using an inference accelerator, for example, a designer may target either DLA(s) or GPU(s) 1008 for any function. For example, in at least one embodiment, a designer may focus processing of CNNs and floating point operations on DLA(s) and leave other functions to GPU(s) 1008 and/or accelerator(s) 1014.

[0189] In at least one embodiment, accelerator(s) 1014 may include programmable vision accelerator ("PVA"), which may alternatively be referred to herein as a computer vision accelerator. In at least one embodiment, PVA may be designed and configured to accelerate computer vision algorithms for advanced driver assistance system ("ADAS") 1038, autonomous driving, augmented reality ("AR") applications, and/or virtual reality ("VR") applications. In at least one embodiment, PVA may provide a balance between performance and flexibility. For example, in at least one embodiment, each PVA may include, for example and without limitation, any number of reduced instruction set computer ("RISC") cores, direct memory access ("DMA"), and/or any number of vector processors.

[0190] In at least one embodiment, RISC cores may interact with image sensors (e.g., image sensors of any cameras described herein), image signal processor(s), etc. In at least one embodiment, each RISC core may include any amount of memory. In at least one embodiment, RISC cores may use any of a number of protocols, depending on embodiment. In at least one embodiment, RISC cores may execute a real-time operating system ("RTOS"). In at least one embodiment, RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits ("ASICs"), and/or memory devices. For example, in at least one embodiment, RISC cores could include an instruction cache and/or a tightly coupled RAM.

[0191] In at least one embodiment, DMA may enable components of PVA to access system memory independently of CPU(s) 1006. In at least one embodiment, DMA may support any number of features used to provide optimization to a PVA including, but not limited to, supporting multi-dimensional addressing and/or circular addressing.
In at least one embodiment, DMA may support up to six or more dimensions of addressing, which may include, without limitation, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.

[0192] In at least one embodiment, vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. In at least one embodiment, a PVA may include a PVA core and two vector processing subsystem partitions. In at least one embodiment, a PVA core may include a processor subsystem, DMA engine(s) (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, a vector processing subsystem may operate as a primary processing engine of a PVA, and may include a vector processing unit ("VPU"), an instruction cache, and/or vector memory (e.g., "VMEM"). In at least one embodiment, a VPU core may include a digital signal processor such as, for example, a single instruction, multiple data ("SIMD"), very long instruction word ("VLIW") digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may enhance throughput and speed.

[0193] In at least one embodiment, each of the vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in at least one embodiment, each of the vector processors may be configured to execute independently of other vector processors. In at least one embodiment, vector processors that are included in a particular PVA may be configured to employ data parallelism. For instance, in at least one embodiment, a plurality of vector processors included in a single PVA may execute a common computer vision algorithm, but on different regions of an image. In at least one embodiment, vector processors included in a particular PVA may simultaneously execute different computer vision algorithms on one image, or even execute different algorithms on sequential images or portions of an image. In at least one embodiment, among other things, any number of PVAs may be included in a hardware acceleration cluster and any number of vector processors may be included in each PVA. In at least one embodiment, PVA may include additional error correcting code ("ECC") memory, to enhance overall system safety.

[0194] In at least one embodiment, accelerator(s) 1014 may include a computer vision network on-chip and static random-access memory ("SRAM"), for providing a high-bandwidth, low latency SRAM for accelerator(s) 1014. In at least one embodiment, on-chip memory may include at least 4 MB SRAM, comprising, for example and without limitation, eight field-configurable memory blocks, that may be accessible by both a PVA and a DLA. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus ("APB") interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, a PVA and a DLA may access memory via a backbone that provides a PVA and a DLA with high-speed access to memory. In at least one embodiment, a backbone may include a computer vision network on-chip that interconnects a PVA and a DLA to memory (e.g., using APB).

[0195] In at least one embodiment, a computer vision network on-chip may include an interface that determines, before transmission of any control signal/address/data, that both a PVA and a DLA provide ready and valid signals.
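The handshake in paragraph [0195] can be modeled in a few lines; this toy sketch (all names hypothetical) shows the rule the interface enforces: a transfer occurs only when the producer asserts valid and the consumer asserts ready in the same cycle:

```python
# Toy model of a ready/valid handshake on an on-chip interface.
def transfer(producer_valid: bool, consumer_ready: bool, payload):
    if producer_valid and consumer_ready:
        return payload  # control signal/address/data moves this cycle
    return None         # otherwise the transaction stalls until both assert

assert transfer(True, True, 0xAB) == 0xAB   # both ready and valid: transfer
assert transfer(True, False, 0xAB) is None  # consumer not ready: no transfer
assert transfer(False, True, 0xAB) is None  # producer not valid: no transfer
```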
In at least one embodiment, an interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. In at least one embodiment, an interface may comply with International Organization for Standardization ("ISO") 26262 or International Electrotechnical Commission ("IEC") 61508 standards, although other standards and protocols may be used.

[0196] In at least one embodiment, one or more of SoC(s) 1004 may include a real-time ray-tracing hardware accelerator. In at least one embodiment, the real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses.

[0197] In at least one embodiment, accelerator(s) 1014 can have a wide array of uses for autonomous driving. In at least one embodiment, a PVA may be used for key processing stages in ADAS and autonomous vehicles. In at least one embodiment, a PVA's capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, a PVA performs well on semi-dense or dense regular computation, even on small data sets, which might require predictable run-times with low latency and low power. In at least one embodiment, such as in vehicle 1000, PVAs might be designed to run classic computer vision algorithms, as they can be efficient at object detection and operating on integer math.

[0198] For example, according to at least one embodiment of the technology, a PVA is used to perform computer stereo vision. In at least one embodiment, a semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for Level 3-5 autonomous driving use motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). In at least one embodiment, a PVA may perform computer stereo vision functions on inputs from two monocular cameras.

[0199] In at least one embodiment, a PVA may be used to perform dense optical flow. For example, in at least one embodiment, a PVA could process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide processed RADAR data. In at least one embodiment, a PVA is used for time of flight depth processing, by processing raw time of flight data to provide processed time of flight data, for example.

[0200] In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including, for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative "weight" of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections.
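The decision this confidence measure enables, including the threshold test elaborated in the sentences that follow, can be sketched as below; the threshold value and detections are hypothetical placeholders, not values from this disclosure:

```python
# Minimal sketch of confidence gating for detections (hypothetical values).
AEB_CONFIDENCE_THRESHOLD = 0.9  # hypothetical threshold

detections = [
    {"object": "pedestrian", "confidence": 0.97},
    {"object": "shadow", "confidence": 0.42},
]

# Only detections exceeding the threshold are treated as true positives;
# the low-confidence detection never triggers automatic emergency braking.
true_positives = [d for d in detections
                  if d["confidence"] > AEB_CONFIDENCE_THRESHOLD]
```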
In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding the threshold value as true positive detections. In an embodiment in which an automatic emergency braking ("AEB") system is used, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. In at least one embodiment, a DLA may run a neural network for regressing the confidence value. In at least one embodiment, the neural network may take as its input at least some subset of parameters, such as bounding box dimensions, a ground plane estimate obtained (e.g., from another subsystem), output from IMU sensor(s) 1066 that correlates with vehicle 1000 orientation, distance, and 3D location estimates of the object obtained from the neural network and/or other sensors (e.g., LIDAR sensor(s) 1064 or RADAR sensor(s) 1060), among others.

[0201] In at least one embodiment, one or more of SoC(s) 1004 may include data store(s) 1016 (e.g., memory). In at least one embodiment, data store(s) 1016 may be on-chip memory of SoC(s) 1004, which may store neural networks to be executed on GPU(s) 1008 and/or a DLA. In at least one embodiment, data store(s) 1016 may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. In at least one embodiment, data store(s) 1016 may comprise L2 or L3 cache(s).

[0202] In at least one embodiment, one or more of SoC(s) 1004 may include any number of processor(s) 1010 (e.g., embedded processors). In at least one embodiment, processor(s) 1010 may include a boot and power management processor that may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. In at least one embodiment, a boot and power management processor may be a part of a boot sequence of SoC(s) 1004 and may provide runtime power management services. In at least one embodiment, a boot and power management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC(s) 1004 thermals and temperature sensors, and/or management of SoC(s) 1004 power states. In at least one embodiment, each temperature sensor may be implemented as a ring-oscillator whose output frequency is proportional to temperature, and SoC(s) 1004 may use ring-oscillators to detect temperatures of CPU(s) 1006, GPU(s) 1008, and/or accelerator(s) 1014. In at least one embodiment, if temperatures are determined to exceed a threshold, then a boot and power management processor may enter a temperature fault routine and put SoC(s) 1004 into a lower power state and/or put vehicle 1000 into a chauffeur to safe stop mode (e.g., bring vehicle 1000 to a safe stop).

[0203] In at least one embodiment, processor(s) 1010 may further include a set of embedded processors that may serve as an audio processing engine, which may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, and a broad and flexible range of audio I/O interfaces. In at least one embodiment, an audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM.

[0204] In at least one embodiment, processor(s) 1010 may further include an always-on processor engine that may provide necessary hardware features to support low power sensor management and wake use cases.
In at least one embodiment, an always-on processor engine may include, without limitation, a processor core, a tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic.

[0205] In at least one embodiment, processor(s) 1010 may further include a safety cluster engine that includes, without limitation, a dedicated processor subsystem to handle safety management for automotive applications. In at least one embodiment, a safety cluster engine may include, without limitation, two or more processor cores, a tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic. In a safety mode, two or more cores may operate, in at least one embodiment, in a lockstep mode and function as a single core with comparison logic to detect any differences between their operations. In at least one embodiment, processor(s) 1010 may further include a real-time camera engine that may include, without limitation, a dedicated processor subsystem for handling real-time camera management. In at least one embodiment, processor(s) 1010 may further include a high-dynamic range signal processor that may include, without limitation, an image signal processor that is a hardware engine that is part of a camera processing pipeline.

[0206] In at least one embodiment, processor(s) 1010 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce a final image for a player window. In at least one embodiment, a video image compositor may perform lens distortion correction on wide-view camera(s) 1070, surround camera(s) 1074, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC 1004, configured to identify in-cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change a vehicle's destination, activate or change a vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to a driver when a vehicle is operating in an autonomous mode and are disabled otherwise.

[0207] In at least one embodiment, a video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. For example, in at least one embodiment, where motion occurs in a video, noise reduction weights spatial information appropriately, decreasing weights of information provided by adjacent frames. In at least one embodiment, where an image or portion of an image does not include motion, temporal noise reduction performed by the video image compositor may use information from a previous image to reduce noise in a current image.

[0208] In at least one embodiment, a video image compositor may also be configured to perform stereo rectification on input stereo lens frames. In at least one embodiment, a video image compositor may further be used for user interface composition when an operating system desktop is in use, and GPU(s) 1008 are not required to continuously render new surfaces.
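A toy sketch of the motion-adaptive blending in paragraph [0207] follows; the weighting scheme and values are hypothetical stand-ins for the compositor's hardware implementation: where the scene is static, the previous frame contributes, and where motion occurs, the current frame dominates:

```python
# Toy sketch of motion-adaptive temporal noise reduction (hypothetical weights).
import numpy as np

def temporal_denoise(current, previous, motion_mask, alpha_static=0.6):
    """motion_mask is 1.0 where motion occurs and 0.0 where the scene is static."""
    blend = alpha_static * (1.0 - motion_mask)   # previous-frame weight per pixel
    return (1.0 - blend) * current + blend * previous

frame_now = np.random.rand(4, 4)
frame_prev = np.random.rand(4, 4)
static_mask = np.zeros((4, 4))                   # no motion anywhere
denoised = temporal_denoise(frame_now, frame_prev, static_mask)
```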
In at least one embodiment, when GPU(s) 1008 are powered on and actively doing 3D rendering, a video image compositor may be used to offload GPU(s) 1008 to improve performance and responsiveness.

[0209] In at least one embodiment, one or more SoC of SoC(s) 1004 may further include a mobile industry processor interface ("MIPI") camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for a camera and related pixel input functions. In at least one embodiment, one or more of SoC(s) 1004 may further include an input/output controller(s) that may be controlled by software and may be used for receiving I/O signals that are uncommitted to a specific role.

[0210] In at least one embodiment, one or more SoC of SoC(s) 1004 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio encoders/decoders ("codecs"), power management, and/or other devices. In at least one embodiment, SoC(s) 1004 may be used to process data from cameras (e.g., connected over Gigabit Multimedia Serial Link and Ethernet channels), sensors (e.g., LIDAR sensor(s) 1064, RADAR sensor(s) 1060, etc. that may be connected over Ethernet channels), data from bus 1002 (e.g., speed of vehicle 1000, steering wheel position, etc.), data from GNSS sensor(s) 1058 (e.g., connected over an Ethernet bus or a CAN bus), etc. In at least one embodiment, one or more SoC of SoC(s) 1004 may further include dedicated high-performance mass storage controllers that may include their own DMA engines, and that may be used to free CPU(s) 1006 from routine data management tasks.

[0211] In at least one embodiment, SoC(s) 1004 may be an end-to-end platform with a flexible architecture that spans automation Levels 3-5, thereby providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, and provides a platform for a flexible, reliable driving software stack, along with deep learning tools. In at least one embodiment, SoC(s) 1004 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, in at least one embodiment, accelerator(s) 1014, when combined with CPU(s) 1006, GPU(s) 1008, and data store(s) 1016, may provide for a fast, efficient platform for Level 3-5 autonomous vehicles.

[0212] In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using a high-level programming language, such as C, to execute a wide variety of processing algorithms across a wide variety of visual data. However, in at least one embodiment, CPUs are oftentimes unable to meet performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In at least one embodiment, many CPUs are unable to execute complex object detection algorithms in real-time, which is used in in-vehicle ADAS applications and in practical Level 3-5 autonomous vehicles.

[0213] Embodiments described herein allow for multiple neural networks to be performed simultaneously and/or sequentially, and for results to be combined together to enable Level 3-5 autonomous driving functionality.
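A schematic sketch of such combination follows, foreshadowing the three-network warning-sign example in paragraph [0214] below; the functions are hypothetical stand-ins for deployed networks, not this disclosure's networks:

```python
# Schematic sketch: several networks run concurrently; results are combined.
from concurrent.futures import ThreadPoolExecutor

def sign_detector(frame):      # stand-in for a first deployed neural network
    return {"sign": "warning"}

def text_reader(frame):        # stand-in for a second deployed neural network
    return {"text": "flashing lights indicate icy conditions"}

def light_classifier(frames):  # stand-in for a third network over multiple frames
    return {"flashing_lights": True}

def perceive(frame, recent_frames):
    with ThreadPoolExecutor() as pool:
        sign = pool.submit(sign_detector, frame)
        text = pool.submit(text_reader, frame)
        light = pool.submit(light_classifier, recent_frames)
        # The combined result informs path planning, e.g., slow down when a
        # warning sign reports icy conditions and flashing lights are present.
        return {**sign.result(), **text.result(), **light.result()}
```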
For example, in at least one embodiment, a CNN executing on a DLA or a discrete GPU (e.g., GPU(s) 1020) may include text and word recognition, allowing reading and understanding of traffic signs, including signs for which a neural network has not been specifically trained. In at least one embodiment, a DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of a sign, and to pass that semantic understanding to path planning modules running on a CPU Complex.

[0214] In at least one embodiment, multiple neural networks may be run simultaneously, as for Level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign stating "Caution: flashing lights indicate icy conditions," along with an electric light, may be independently or collectively interpreted by several neural networks. In at least one embodiment, such a warning sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), and text "flashing lights indicate icy conditions" may be interpreted by a second deployed neural network, which informs a vehicle's path planning software (preferably executing on a CPU Complex) that when flashing lights are detected, icy conditions exist. In at least one embodiment, a flashing light may be identified by operating a third deployed neural network over multiple frames, informing a vehicle's path-planning software of a presence (or an absence) of flashing lights. In at least one embodiment, all three neural networks may run simultaneously, such as within a DLA and/or on GPU(s) 1008.

[0215] In at least one embodiment, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of vehicle 1000. In at least one embodiment, an always-on sensor processing engine may be used to unlock a vehicle when an owner approaches a driver door and turns on lights, and, in a security mode, to disable such vehicle when an owner leaves such vehicle. In this way, SoC(s) 1004 provide for security against theft and/or carjacking.

[0216] In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from microphones 1096 to detect and identify emergency vehicle sirens. In at least one embodiment, SoC(s) 1004 use a CNN for classifying environmental and urban sounds, as well as classifying visual data. In at least one embodiment, a CNN running on a DLA is trained to identify a relative closing speed of an emergency vehicle (e.g., by using a Doppler effect). In at least one embodiment, a CNN may also be trained to identify emergency vehicles specific to a local area in which a vehicle is operating, as identified by GNSS sensor(s) 1058. In at least one embodiment, when operating in Europe, a CNN will seek to detect European sirens, and when in North America, a CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing a vehicle, pulling over to a side of a road, parking a vehicle, and/or idling a vehicle, with assistance of ultrasonic sensor(s) 1062, until emergency vehicles pass.

[0217] In at least one embodiment, vehicle 1000 may include CPU(s) 1018 (e.g., discrete CPU(s), or dCPU(s)), that may be coupled to SoC(s) 1004 via a high-speed interconnect (e.g., PCIe).
In at least one embodiment, CPU(s) 1018 may include an X86 processor, for example. CPU(s) 1018 may be used to perform any of a variety of functions, including arbitrating potentially inconsistent results between ADAS sensors and SoC(s) 1004, and/or monitoring status and health of controller(s) 1036 and/or an infotainment system on a chip ("infotainment SoC") 1030, for example.

[0218] In at least one embodiment, vehicle 1000 may include GPU(s) 1020 (e.g., discrete GPU(s), or dGPU(s)), that may be coupled to SoC(s) 1004 via a high-speed interconnect (e.g., NVIDIA's NVLINK channel). In at least one embodiment, GPU(s) 1020 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on input (e.g., sensor data) from sensors of vehicle 1000.

[0219] In at least one embodiment, vehicle 1000 may further include network interface 1024 which may include, without limitation, wireless antenna(s) 1026 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, network interface 1024 may be used to enable wireless connectivity to Internet cloud services (e.g., with server(s) and/or other network devices), with other vehicles, and/or with computing devices (e.g., client devices of passengers). In at least one embodiment, to communicate with other vehicles, a direct link may be established between vehicle 1000 and another vehicle and/or an indirect link may be established (e.g., across networks and over the Internet). In at least one embodiment, direct links may be provided using a vehicle-to-vehicle communication link. In at least one embodiment, a vehicle-to-vehicle communication link may provide vehicle 1000 information about vehicles in proximity to vehicle 1000 (e.g., vehicles in front of, on a side of, and/or behind vehicle 1000). In at least one embodiment, such aforementioned functionality may be part of a cooperative adaptive cruise control functionality of vehicle 1000.

[0220] In at least one embodiment, network interface 1024 may include an SoC that provides modulation and demodulation functionality and enables controller(s) 1036 to communicate over wireless networks. In at least one embodiment, network interface 1024 may include a radio frequency front-end for up-conversion from baseband to radio frequency, and down-conversion from radio frequency to baseband. In at least one embodiment, frequency conversions may be performed in any technically feasible fashion. For example, frequency conversions could be performed through well-known processes, and/or using super-heterodyne processes. In at least one embodiment, radio frequency front end functionality may be provided by a separate chip. In at least one embodiment, network interfaces may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.

[0221] In at least one embodiment, vehicle 1000 may further include data store(s) 1028 which may include, without limitation, off-chip (e.g., off SoC(s) 1004) storage.
In at least one embodiment, data store(s) 1028 may include, without limitation, one or more storage elements including RAM, SRAM, dynamic random-access memory ("DRAM"), video random-access memory ("VRAM"), flash memory, hard disks, and/or other components and/or devices that may store at least one bit of data.

[0222] In at least one embodiment, vehicle 1000 may further include GNSS sensor(s) 1058 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions. In at least one embodiment, any number of GNSS sensor(s) 1058 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet-to-Serial (e.g., RS-232) bridge.

[0223] In at least one embodiment, vehicle 1000 may further include RADAR sensor(s) 1060. In at least one embodiment, RADAR sensor(s) 1060 may be used by vehicle 1000 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, RADAR functional safety levels may be ASIL B. In at least one embodiment, RADAR sensor(s) 1060 may use a CAN bus and/or bus 1002 (e.g., to transmit data generated by RADAR sensor(s) 1060) for control and to access object tracking data, with access to Ethernet channels to access raw data in some examples. In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, RADAR sensor(s) 1060 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more sensor of RADAR sensor(s) 1060 is a Pulse Doppler RADAR sensor.

[0224] In at least one embodiment, RADAR sensor(s) 1060 may include different configurations, such as long-range with narrow field of view, short-range with wide field of view, short-range side coverage, etc. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250 m (meter) range. In at least one embodiment, RADAR sensor(s) 1060 may help in distinguishing between static and moving objects, and may be used by ADAS system 1038 for emergency brake assist and forward collision warning. In at least one embodiment, sensor(s) 1060 included in a long-range RADAR system may include, without limitation, monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennae and a high-speed CAN and FlexRay interface. In at least one embodiment, with six antennae, a central four antennae may create a focused beam pattern, designed to record vehicle 1000's surroundings at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, another two antennae may expand field of view, making it possible to quickly detect vehicles entering or leaving a lane of vehicle 1000.

[0225] In at least one embodiment, mid-range RADAR systems may include, as an example, a range of up to 160 m (front) or 80 m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, short-range RADAR systems may include, without limitation, any number of RADAR sensor(s) 1060 designed to be installed at both ends of a rear bumper. When installed at both ends of a rear bumper, in at least one embodiment, a RADAR sensor system may create two beams that constantly monitor blind spots in a rear direction and next to a vehicle. In at least one embodiment, short-range RADAR systems may be used in ADAS system 1038 for blind spot detection and/or lane change assist.
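The range and field-of-view figures in [0224] and [0225] lend themselves to a small configuration table. The sketch below encodes them as data; the field names and the short-range and long-range values not stated in the text are assumptions, flagged in comments.

```python
# Illustrative encoding of the RADAR configurations from [0224]-[0225].
# Field names are hypothetical; values not given in the text are flagged.
from dataclasses import dataclass

@dataclass(frozen=True)
class RadarConfig:
    name: str
    max_range_m: float
    fov_deg: float

RADAR_CONFIGS = (
    RadarConfig("long_range", 250.0, 42.0),          # FoV assumed; text says
                                                     # "broad", via 2+ scans
    RadarConfig("mid_range_front", 160.0, 42.0),
    RadarConfig("mid_range_rear", 80.0, 150.0),
    RadarConfig("short_range_rear_corner", 30.0, 150.0),  # both values assumed
)

def pick_config(required_range_m):
    """Return the smallest configuration whose range covers the requirement."""
    usable = [c for c in RADAR_CONFIGS if c.max_range_m >= required_range_m]
    return min(usable, key=lambda c: c.max_range_m) if usable else None
```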
[0226] In at least one embodiment, vehicle 1000 may further include ultrasonic sensor(s) 1062. In at least one embodiment, ultrasonic sensor(s) 1062, which may be positioned at a front, a back, and/or side location of vehicle 1000, may be used for parking assist and/or to create and update an occupancy grid. In at least one embodiment, a wide variety of ultrasonic sensor(s) 1062 may be used, and different ultrasonic sensor(s) 1062 may be used for different ranges of detection (e.g., 2.5 m, 4 m). In at least one embodiment, ultrasonic sensor(s) 1062 may operate at functional safety levels of ASIL B.

[0227] In at least one embodiment, vehicle 1000 may include LIDAR sensor(s) 1064. In at least one embodiment, LIDAR sensor(s) 1064 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. In at least one embodiment, LIDAR sensor(s) 1064 may operate at functional safety level ASIL B. In at least one embodiment, vehicle 1000 may include multiple LIDAR sensors 1064 (e.g., two, four, six, etc.) that may use an Ethernet channel (e.g., to provide data to a Gigabit Ethernet switch).

[0228] In at least one embodiment, LIDAR sensor(s) 1064 may be capable of providing a list of objects and their distances for a 360-degree field of view. In at least one embodiment, commercially available LIDAR sensor(s) 1064 may have an advertised range of approximately 100 m, with an accuracy of 2 cm to 3 cm, and with support for a 100 Mbps Ethernet connection, for example. In at least one embodiment, one or more non-protruding LIDAR sensors may be used. In such an embodiment, LIDAR sensor(s) 1064 may include a small device that may be embedded into a front, a rear, a side, and/or a corner location of vehicle 1000. In at least one embodiment, LIDAR sensor(s) 1064, in such an embodiment, may provide up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200 m range even for low-reflectivity objects. In at least one embodiment, front-mounted LIDAR sensor(s) 1064 may be configured for a horizontal field of view between 45 degrees and 135 degrees.

[0229] In at least one embodiment, LIDAR technologies, such as 3D flash LIDAR, may also be used. In at least one embodiment, 3D flash LIDAR uses a flash of a laser as a transmission source, to illuminate surroundings of vehicle 1000 up to approximately 200 m. In at least one embodiment, a flash LIDAR unit includes, without limitation, a receptor, which records laser pulse transit time and reflected light on each pixel, which in turn corresponds to a range from vehicle 1000 to objects. In at least one embodiment, flash LIDAR may allow for highly accurate and distortion-free images of surroundings to be generated with every laser flash. In at least one embodiment, four flash LIDAR sensors may be deployed, one at each side of vehicle 1000. In at least one embodiment, 3D flash LIDAR systems include, without limitation, a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, a flash LIDAR device may use a 5 nanosecond class 1 (eye-safe) laser pulse per frame and may capture reflected laser light as a 3D range point cloud and co-registered intensity data.
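Since the receptor described in [0229] recovers range from pulse transit time, a one-line calculation shows how the numbers relate; range is half the round-trip distance traveled at the speed of light. The 1.33 microsecond figure below is simply back-computed from the approximate 200 m range mentioned in the text.

```python
# Back-of-the-envelope range calculation behind the flash-LIDAR receptor
# described in [0229].
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(transit_time_s):
    """Half the round-trip distance traveled at the speed of light."""
    return C * transit_time_s / 2.0

# A roughly 1.33 microsecond round trip corresponds to about 200 m:
assert abs(lidar_range_m(1.334e-6) - 200.0) < 0.5
```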
[0230] In at least one embodiment, vehicle 1000 may further include IMU sensor(s) 1066. In at least one embodiment, IMU sensor(s) 1066 may be located at a center of a rear axle of vehicle 1000. In at least one embodiment, IMU sensor(s) 1066 may include, for example and without limitation, accelerometer(s), magnetometer(s), gyroscope(s), magnetic compass(es), and/or other sensor types. In at least one embodiment, such as in six-axis applications, IMU sensor(s) 1066 may include, without limitation, accelerometers and gyroscopes. In at least one embodiment, such as in nine-axis applications, IMU sensor(s) 1066 may include, without limitation, accelerometers, gyroscopes, and magnetometers.

[0231] In at least one embodiment, IMU sensor(s) 1066 may be implemented as a miniature, high performance GPS-Aided Inertial Navigation System ("GPS/INS") that combines micro-electro-mechanical systems ("MEMS") inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, IMU sensor(s) 1066 may enable vehicle 1000 to estimate its heading without requiring input from a magnetic sensor by directly observing and correlating changes in velocity from a GPS to IMU sensor(s) 1066. In at least one embodiment, IMU sensor(s) 1066 and GNSS sensor(s) 1058 may be combined in a single integrated unit.

[0232] In at least one embodiment, vehicle 1000 may include microphone(s) 1096 placed in and/or around vehicle 1000. In at least one embodiment, microphone(s) 1096 may be used for emergency vehicle detection and identification, among other things.

[0233] In at least one embodiment, vehicle 1000 may further include any number of camera types, including stereo camera(s) 1068, wide-view camera(s) 1070, infrared camera(s) 1072, surround camera(s) 1074, long-range camera(s) 1098, mid-range camera(s) 1076, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of vehicle 1000. In at least one embodiment, the types of cameras used may depend on vehicle 1000. In at least one embodiment, any combination of camera types may be used to provide necessary coverage around vehicle 1000. In at least one embodiment, a number of cameras deployed may differ depending on embodiment. For example, in at least one embodiment, vehicle 1000 could include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link ("GMSL") and/or Gigabit Ethernet communications. In at least one embodiment, each camera might be as described with more detail previously herein with respect to FIG. 10A and FIG. 10B.

[0234] In at least one embodiment, vehicle 1000 may further include vibration sensor(s) 1042. In at least one embodiment, vibration sensor(s) 1042 may measure vibrations of components of vehicle 1000, such as axle(s). For example, in at least one embodiment, changes in vibrations may indicate a change in road surfaces. In at least one embodiment, when two or more vibration sensors 1042 are used, differences between vibrations may be used to determine friction or slippage of a road surface (e.g., when a difference in vibration between a power-driven axle and a freely rotating axle is observed).
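To make the axle-comparison heuristic of [0234] concrete, here is a minimal sketch; the RMS vibration metric and the 25% threshold are assumptions chosen for illustration, not values from the text.

```python
# Hedged sketch of the slip heuristic in [0234]: compare vibration energy
# on a power-driven axle with a freely rotating axle.
import math

def rms(samples):
    """Root-mean-square amplitude of a (non-empty) sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def wheel_slip_suspected(driven_axle, free_axle, threshold=0.25):
    """True when driven-axle vibration exceeds the free axle's by more
    than `threshold` (relative), suggesting reduced traction."""
    d, f = rms(driven_axle), rms(free_axle)
    return (d - f) / max(f, 1e-9) > threshold
```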
[0235] In at least one embodiment, vehicle 1000 may include ADAS system 1038. In at least one embodiment, ADAS system 1038 may include, without limitation, an SoC, in some examples. In at least one embodiment, ADAS system 1038 may include, without limitation, any number and combination of an autonomous/adaptive/automatic cruise control ("ACC") system, a cooperative adaptive cruise control ("CACC") system, a forward crash warning ("FCW") system, an automatic emergency braking ("AEB") system, a lane departure warning ("LDW") system, a lane keep assist ("LKA") system, a blind spot warning ("BSW") system, a rear cross-traffic warning ("RCTW") system, a collision warning ("CW") system, a lane centering ("LC") system, and/or other systems, features, and/or functionality.

[0236] In at least one embodiment, ACC system may use RADAR sensor(s) 1060, LIDAR sensor(s) 1064, and/or any number of camera(s). In at least one embodiment, ACC system may include a longitudinal ACC system and/or a lateral ACC system. In at least one embodiment, a longitudinal ACC system monitors and controls distance to another vehicle immediately ahead of vehicle 1000 and automatically adjusts speed of vehicle 1000 to maintain a safe distance from vehicles ahead. In at least one embodiment, a lateral ACC system performs distance keeping, and advises vehicle 1000 to change lanes when necessary. In at least one embodiment, a lateral ACC is related to other ADAS applications, such as LC and CW.

[0237] In at least one embodiment, a CACC system uses information from other vehicles that may be received via network interface 1024 and/or wireless antenna(s) 1026 from other vehicles via a wireless link, or indirectly, over a network connection (e.g., over the Internet). In at least one embodiment, direct links may be provided by a vehicle-to-vehicle ("V2V") communication link, while indirect links may be provided by an infrastructure-to-vehicle ("I2V") communication link. In general, V2V communication provides information about immediately preceding vehicles (e.g., vehicles immediately ahead of and in same lane as vehicle 1000), while I2V communication provides information about traffic further ahead. In at least one embodiment, a CACC system may include either or both I2V and V2V information sources. In at least one embodiment, given information of vehicles ahead of vehicle 1000, a CACC system may be more reliable and has the potential to improve traffic flow smoothness and reduce congestion on a road.

[0238] In at least one embodiment, an FCW system is designed to alert a driver to a hazard, so that such driver may take corrective action. In at least one embodiment, an FCW system uses a front-facing camera and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an FCW system may provide a warning, such as in form of a sound, visual warning, vibration and/or a quick brake pulse.

[0239] In at least one embodiment, an AEB system detects an impending forward collision with another vehicle or other object, and may automatically apply brakes if a driver does not take corrective action within a specified time or distance parameter. In at least one embodiment, AEB system may use front-facing camera(s) and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when an AEB system detects a hazard, it will typically first alert a driver to take corrective action to avoid collision and, if that driver does not take corrective action, that AEB system may automatically apply brakes in an effort to prevent, or at least mitigate, an impact of a predicted collision. In at least one embodiment, an AEB system may include techniques such as dynamic brake support and/or crash imminent braking.
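The FCW/AEB behavior of [0238] and [0239] amounts to gating on a time-to-collision estimate. The sketch below shows one plausible shape of that gate; the thresholds are placeholder assumptions, and a production system would derive them from speed, braking capability, and driver-reaction models.

```python
# Hedged sketch of time-to-collision gating for FCW/AEB ([0238]-[0239]).
def ttc_seconds(range_m, closing_speed_mps):
    """Time to collision; infinite when the gap is opening."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def fcw_aeb_decision(range_m, closing_speed_mps, warn_ttc=2.7, brake_ttc=1.2):
    ttc = ttc_seconds(range_m, closing_speed_mps)
    if ttc < brake_ttc:
        return "apply_brakes"   # AEB: driver failed to act in time
    if ttc < warn_ttc:
        return "warn_driver"    # FCW: sound/visual/haptic alert
    return "no_action"
```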
[0240] In at least one embodiment, an LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert a driver when vehicle 1000 crosses lane markings. In at least one embodiment, an LDW system does not activate when a driver indicates an intentional lane departure, such as by activating a turn signal. In at least one embodiment, an LDW system may use front-side facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an LKA system is a variation of an LDW system. In at least one embodiment, an LKA system provides steering input or braking to correct vehicle 1000 if vehicle 1000 starts to exit its lane.

[0241] In at least one embodiment, a BSW system detects and warns a driver of vehicles in an automobile's blind spot. In at least one embodiment, a BSW system may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. In at least one embodiment, a BSW system may provide an additional warning when a driver uses a turn signal. In at least one embodiment, a BSW system may use rear-side facing camera(s) and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.

[0242] In at least one embodiment, an RCTW system may provide visual, audible, and/or tactile notification when an object is detected outside a rear-camera range when vehicle 1000 is backing up. In at least one embodiment, an RCTW system includes an AEB system to ensure that vehicle brakes are applied to avoid a crash. In at least one embodiment, an RCTW system may use one or more rear-facing RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.

[0243] In at least one embodiment, conventional ADAS systems may be prone to false positive results which may be annoying and distracting to a driver, but typically are not catastrophic, because conventional ADAS systems alert a driver and allow that driver to decide whether a safety condition truly exists and act accordingly. In at least one embodiment, vehicle 1000 itself decides, in case of conflicting results, whether to heed a result from a primary computer or a secondary computer (e.g., a first controller or a second controller of controllers 1036). For example, in at least one embodiment, ADAS system 1038 may be a backup and/or secondary computer for providing perception information to a backup computer rationality module. In at least one embodiment, a backup computer rationality monitor may run redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks. In at least one embodiment, outputs from ADAS system 1038 may be provided to a supervisory MCU.
In at least one embodiment, if outputs from a primary computer and outputs from a secondary computer conflict, a supervisory MCU determines how to reconcile conflict to ensure safe operation.

[0244] In at least one embodiment, a primary computer may be configured to provide a supervisory MCU with a confidence score, indicating that primary computer's confidence in a chosen result. In at least one embodiment, if that confidence score exceeds a threshold, that supervisory MCU may follow that primary computer's direction, regardless of whether that secondary computer provides a conflicting or inconsistent result. In at least one embodiment, where a confidence score does not meet a threshold, and where primary and secondary computers indicate different results (e.g., a conflict), a supervisory MCU may arbitrate between computers to determine an appropriate outcome.

[0245] In at least one embodiment, a supervisory MCU may be configured to run a neural network(s) that is trained and configured to determine, based at least in part on outputs from a primary computer and outputs from a secondary computer, conditions under which that secondary computer provides false alarms. In at least one embodiment, neural network(s) in a supervisory MCU may learn when a secondary computer's output may be trusted, and when it cannot. For example, in at least one embodiment, when that secondary computer is a RADAR-based FCW system, a neural network(s) in that supervisory MCU may learn when an FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. In at least one embodiment, when a secondary computer is a camera-based LDW system, a neural network in a supervisory MCU may learn to override LDW when bicyclists or pedestrians are present and a lane departure is, in fact, a safest maneuver. In at least one embodiment, a supervisory MCU may include at least one of a DLA or a GPU suitable for running neural network(s) with associated memory. In at least one embodiment, a supervisory MCU may comprise and/or be included as a component of SoC(s) 1004.

[0246] In at least one embodiment, ADAS system 1038 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision. In at least one embodiment, that secondary computer may use classic computer vision rules (if-then), and presence of a neural network(s) in a supervisory MCU may improve reliability, safety, and performance. For example, in at least one embodiment, diverse implementation and intentional non-identity make an overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality. For example, in at least one embodiment, if there is a software bug or error in software running on a primary computer, and non-identical software code running on a secondary computer provides a consistent overall result, then a supervisory MCU may have greater confidence that an overall result is correct, and a bug in software or hardware on that primary computer is not causing a material error.
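A minimal sketch of the confidence-score arbitration described in [0244] follows; the scoring scale, the threshold, and the resolve_conflict helper are all hypothetical stand-ins, not the supervisory MCU's actual interface.

```python
# Sketch of the confidence-score arbitration in [0244].
def arbitrate(primary_result, primary_confidence, secondary_result,
              threshold=0.9):
    """Follow the primary computer when it is confident; otherwise, if the
    two computers disagree, escalate to explicit reconciliation."""
    if primary_confidence >= threshold:
        return primary_result
    if primary_result == secondary_result:
        return primary_result
    return resolve_conflict(primary_result, secondary_result)

def resolve_conflict(a, b):
    # Placeholder for MCU-specific reconciliation, e.g., a trained network
    # that has learned when the secondary computer can be trusted ([0245]).
    return a
```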
[0247] In at least one embodiment, an output of ADAS system 1038 may be fed into a primary computer's perception block and/or a primary computer's dynamic driving task block. For example, in at least one embodiment, if ADAS system 1038 indicates a forward crash warning due to an object immediately ahead, a perception block may use this information when identifying objects. In at least one embodiment, a secondary computer may have its own neural network that is trained and thus reduces a risk of false positives, as described herein.

[0248] In at least one embodiment, vehicle 1000 may further include infotainment SoC 1030 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as an SoC, infotainment system SoC 1030, in at least one embodiment, may not be an SoC, and may include, without limitation, two or more discrete components. In at least one embodiment, infotainment SoC 1030 may include, without limitation, a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), phone (e.g., hands-free calling), network connectivity (e.g., LTE, WiFi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, a radio data system, vehicle related information such as fuel level, total distance covered, brake fluid level, oil level, door open/close, air filter information, etc.) to vehicle 1000. For example, infotainment SoC 1030 could include radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, carputers, in-car entertainment, WiFi, steering wheel audio controls, hands free voice control, a heads-up display ("HUD"), HMI display 1034, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. In at least one embodiment, infotainment SoC 1030 may further be used to provide information (e.g., visual and/or audible) to user(s) of vehicle 1000, such as information from ADAS system 1038, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information.

[0249] In at least one embodiment, infotainment SoC 1030 may include any amount and type of GPU functionality. In at least one embodiment, infotainment SoC 1030 may communicate over bus 1002 with other devices, systems, and/or components of vehicle 1000. In at least one embodiment, infotainment SoC 1030 may be coupled to a supervisory MCU such that a GPU of an infotainment system may perform some self-driving functions in event that primary controller(s) 1036 (e.g., primary and/or backup computers of vehicle 1000) fail. In at least one embodiment, infotainment SoC 1030 may put vehicle 1000 into a chauffeur to safe stop mode, as described herein.

[0250] In at least one embodiment, vehicle 1000 may further include instrument cluster 1032 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). In at least one embodiment, instrument cluster 1032 may include, without limitation, a controller and/or supercomputer (e.g., a discrete controller or supercomputer). In at least one embodiment, instrument cluster 1032 may include, without limitation, any number and combination of a set of instrumentation such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gearshift position indicator, seat belt warning light(s), parking-brake warning light(s), engine-malfunction light(s), supplemental restraint system (e.g., airbag) information, lighting controls, safety system controls, navigation information, etc.
In some examples, information may be displayed and/or shared among infotainment SoC 1030 and instrument cluster 1032. In at least one embodiment, instrument cluster 1032 may be included as part of infotainment SoC 1030, or vice versa.

[0251] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system FIG. 10C for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0252] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0253] FIG. 10D is a diagram of a system 1076 for communication between cloud-based server(s) and autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, system 1076 may include, without limitation, server(s) 1078, network(s) 1090, and any number and type of vehicles, including vehicle 1000. In at least one embodiment, server(s) 1078 may include, without limitation, a plurality of GPUs 1084(A)-1084(H) (collectively referred to herein as GPUs 1084), PCIe switches 1082(A)-1082(D) (collectively referred to herein as PCIe switches 1082), and/or CPUs 1080(A)-1080(B) (collectively referred to herein as CPUs 1080). In at least one embodiment, GPUs 1084, CPUs 1080, and PCIe switches 1082 may be interconnected with high-speed interconnects such as, for example and without limitation, NVLink interfaces 1088 developed by NVIDIA and/or PCIe connections 1086. In at least one embodiment, GPUs 1084 are connected via an NVLink and/or NVSwitch SoC and GPUs 1084 and PCIe switches 1082 are connected via PCIe interconnects. Although eight GPUs 1084, two CPUs 1080, and four PCIe switches 1082 are illustrated, this is not intended to be limiting. In at least one embodiment, each of server(s) 1078 may include, without limitation, any number of GPUs 1084, CPUs 1080, and/or PCIe switches 1082, in any combination. For example, in at least one embodiment, server(s) 1078 could each include eight, sixteen, thirty-two, and/or more GPUs 1084.

[0254] In at least one embodiment, server(s) 1078 may receive, over network(s) 1090 and from vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced road-work. In at least one embodiment, server(s) 1078 may transmit, over network(s) 1090 and to vehicles, neural networks 1092, updated or otherwise, and/or map information 1094, including, without limitation, information regarding traffic and road conditions.
In at least one embodiment, updates to map information 1094 may include, without limitation, updates for HD map 1022, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In at least one embodiment, neural networks 1092, and/or map information 1094 may have resulted from new training and/or experiences represented in data received from any number of vehicles in an environment, and/or based at least in part on training performed at a data center (e.g., using server(s) 1078 and/or other servers).

[0255] In at least one embodiment, server(s) 1078 may be used to train machine learning models (e.g., neural networks) based at least in part on training data. In at least one embodiment, training data may be generated by vehicles, and/or may be generated in a simulation (e.g., using a game engine). In at least one embodiment, any amount of training data is tagged (e.g., where associated neural network benefits from supervised learning) and/or undergoes other preprocessing. In at least one embodiment, any amount of training data is not tagged and/or preprocessed (e.g., where associated neural network does not require supervised learning). In at least one embodiment, once machine learning models are trained, machine learning models may be used by vehicles (e.g., transmitted to vehicles over network(s) 1090), and/or machine learning models may be used by server(s) 1078 to remotely monitor vehicles.

[0256] In at least one embodiment, server(s) 1078 may receive data from vehicles and apply data to up-to-date real-time neural networks for real-time intelligent inferencing. In at least one embodiment, server(s) 1078 may include deep-learning supercomputers and/or dedicated AI computers powered by GPU(s) 1084, such as DGX and DGX Station machines developed by NVIDIA. However, in at least one embodiment, server(s) 1078 may include deep learning infrastructure that uses CPU-powered data centers.

[0257] In at least one embodiment, deep-learning infrastructure of server(s) 1078 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify health of processors, software, and/or associated hardware in vehicle 1000. For example, in at least one embodiment, deep-learning infrastructure may receive periodic updates from vehicle 1000, such as a sequence of images and/or objects that vehicle 1000 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). In at least one embodiment, deep-learning infrastructure may run its own neural network to identify objects and compare them with objects identified by vehicle 1000 and, if results do not match and deep-learning infrastructure concludes that AI in vehicle 1000 is malfunctioning, then server(s) 1078 may transmit a signal to vehicle 1000 instructing a fail-safe computer of vehicle 1000 to assume control, notify passengers, and complete a safe parking maneuver.

[0258] In at least one embodiment, server(s) 1078 may include GPU(s) 1084 and one or more programmable inference accelerators (e.g., NVIDIA's TensorRT 3 devices). In at least one embodiment, a combination of GPU-powered servers and inference acceleration may make real-time responsiveness possible. In at least one embodiment, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing.
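The fleet health check of [0257] is essentially a compare-and-escalate loop: re-run detection server-side, compare object sets, and hand control to a fail-safe computer on persistent mismatch. A hedged sketch follows; the Jaccard overlap metric, the 0.5 threshold, and the majority-mismatch rule are illustrative assumptions.

```python
# Hedged sketch of the health check in [0257]. server_model stands in for
# the infrastructure's own network; interfaces are hypothetical.
def objects_agree(vehicle_objects, server_objects, min_overlap=0.5):
    a, b = set(vehicle_objects), set(server_objects)
    if not a and not b:
        return True
    return len(a & b) / len(a | b) >= min_overlap  # Jaccard overlap

def health_check(frames, vehicle_detections, server_model):
    """Compare the vehicle's detections against server-side re-detections;
    escalate when they disagree on a majority of uploaded frames."""
    mismatches = sum(
        not objects_agree(vehicle_detections[i], server_model(f))
        for i, f in enumerate(frames)
    )
    return "assume_failsafe_control" if mismatches > len(frames) // 2 else "ok"
```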
In at least one embodiment, hardware structure(s) 715 are used to perform one or more embodiments. Details regarding hardware structure(s) 715 are provided herein in conjunction with FIGS. 7A and/or 7B.

COMPUTER SYSTEMS

[0259] FIG. 11 is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system-on-a-chip (SOC) or some combination thereof formed with a processor that may include execution units to execute an instruction, according to at least one embodiment. In at least one embodiment, a computer system 1100 may include, without limitation, a component, such as a processor 1102 to employ execution units including logic to perform algorithms for processing data, in accordance with present disclosure, such as in embodiments described herein. In at least one embodiment, computer system 1100 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In at least one embodiment, computer system 1100 may execute a version of WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces, may also be used.

[0260] Embodiments may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor ("DSP"), system on a chip, network computers ("NetPCs"), set-top boxes, network hubs, wide area network ("WAN") switches, or any other system that may perform one or more instructions in accordance with at least one embodiment.

[0261] In at least one embodiment, computer system 1100 may include, without limitation, processor 1102 that may include, without limitation, one or more execution units 1108 to perform machine learning model training and/or inferencing according to techniques described herein. In at least one embodiment, computer system 1100 is a single processor desktop or server system, but in another embodiment, computer system 1100 may be a multiprocessor system. In at least one embodiment, processor 1102 may include, without limitation, a complex instruction set computer ("CISC") microprocessor, a reduced instruction set computing ("RISC") microprocessor, a very long instruction word ("VLIW") microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In at least one embodiment, processor 1102 may be coupled to a processor bus 1110 that may transmit data signals between processor 1102 and other components in computer system 1100.

[0262] In at least one embodiment, processor 1102 may include, without limitation, a Level 1 ("L1") internal cache memory ("cache") 1104. In at least one embodiment, processor 1102 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 1102. Other embodiments may also include a combination of both internal and external caches depending on particular implementation and needs.
In at least one embodiment, a register file 1106 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and an instruction pointer register.

[0263] In at least one embodiment, execution unit 1108, including, without limitation, logic to perform integer and floating point operations, also resides in processor 1102. In at least one embodiment, processor 1102 may also include a microcode ("ucode") read only memory ("ROM") that stores microcode for certain macro instructions. In at least one embodiment, execution unit 1108 may include logic to handle a packed instruction set 1109. In at least one embodiment, by including packed instruction set 1109 in an instruction set of a general-purpose processor, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in processor 1102. In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using a full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across that processor's data bus to perform one or more operations one data element at a time.
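A toy model of the packed-data idea in [0263]: several narrow elements travel in one wide word, so a single operation updates all lanes at once. The 16-bit lane layout below is purely illustrative and is not a description of packed instruction set 1109; real SIMD hardware performs the lane-wise add in one instruction.

```python
# Toy illustration of packed data: four 16-bit lanes in one 64-bit word.
MASK16 = 0xFFFF

def pack4(a, b, c, d):
    """Pack four 16-bit values into a single 64-bit word."""
    return a | (b << 16) | (c << 32) | (d << 48)

def packed_add(x, y):
    """Lane-wise 16-bit add of two packed words; carries between lanes
    are suppressed, as SIMD hardware would do."""
    return sum((((x >> s) + (y >> s)) & MASK16) << s for s in (0, 16, 32, 48))

assert packed_add(pack4(1, 2, 3, 4), pack4(10, 20, 30, 40)) == pack4(11, 22, 33, 44)
```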
[0264] In at least one embodiment, execution unit 1108 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 1100 may include, without limitation, a memory 1120. In at least one embodiment, memory 1120 may be a Dynamic Random Access Memory ("DRAM") device, a Static Random Access Memory ("SRAM") device, a flash memory device, or another memory device. In at least one embodiment, memory 1120 may store instruction(s) 1119 and/or data 1121 represented by data signals that may be executed by processor 1102.

[0265] In at least one embodiment, a system logic chip may be coupled to processor bus 1110 and memory 1120. In at least one embodiment, a system logic chip may include, without limitation, a memory controller hub ("MCH") 1116, and processor 1102 may communicate with MCH 1116 via processor bus 1110. In at least one embodiment, MCH 1116 may provide a high bandwidth memory path 1118 to memory 1120 for instruction and data storage and for storage of graphics commands, data and textures. In at least one embodiment, MCH 1116 may direct data signals between processor 1102, memory 1120, and other components in computer system 1100 and to bridge data signals between processor bus 1110, memory 1120, and a system I/O interface 1122. In at least one embodiment, a system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH 1116 may be coupled to memory 1120 through high bandwidth memory path 1118 and a graphics/video card 1112 may be coupled to MCH 1116 through an Accelerated Graphics Port ("AGP") interconnect 1114.

[0266] In at least one embodiment, computer system 1100 may use system I/O interface 1122 as a proprietary hub interface bus to couple MCH 1116 to an I/O controller hub ("ICH") 1130. In at least one embodiment, ICH 1130 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, a local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 1120, a chipset, and processor 1102. Examples may include, without limitation, an audio controller 1129, a firmware hub ("flash BIOS") 1128, a wireless transceiver 1126, a data storage 1124, a legacy I/O controller 1123 containing user input and keyboard interfaces 1125, a serial expansion port 1127, such as a Universal Serial Bus ("USB") port, and a network controller 1134. In at least one embodiment, data storage 1124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

[0267] In at least one embodiment, FIG. 11 illustrates a system, which includes interconnected hardware devices or "chips", whereas in other embodiments, FIG. 11 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in FIG. 11 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of computer system 1100 are interconnected using compute express link (CXL) interconnects.

[0268] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system FIG. 11 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0269] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0270] FIG. 12 is a block diagram illustrating an electronic device 1200 for utilizing a processor 1210, according to at least one embodiment. In at least one embodiment, electronic device 1200 may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.

[0271] In at least one embodiment, electronic device 1200 may include, without limitation, processor 1210 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 1210 is coupled using a bus or interface, such as an I2C bus, a System Management Bus ("SMBus"), a Low Pin Count (LPC) bus, a Serial Peripheral Interface ("SPI"), a High Definition Audio ("HDA") bus, a Serial Advance Technology Attachment ("SATA") bus, a Universal Serial Bus ("USB") (versions 1, 2, 3, etc.), or a Universal Asynchronous Receiver/Transmitter ("UART") bus. In at least one embodiment, FIG. 12 illustrates a system, which includes interconnected hardware devices or "chips", whereas in other embodiments, FIG. 12 may illustrate an exemplary SoC.
In at least one embodiment, devices illustrated in FIG. 12 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of FIG. 12 are interconnected using compute express link (CXL) interconnects.

[0272] In at least one embodiment, FIG. 12 may include a display 1224, a touch screen 1225, a touch pad 1230, a Near Field Communications unit ("NFC") 1245, a sensor hub 1240, a thermal sensor 1246, an Express Chipset ("EC") 1235, a Trusted Platform Module ("TPM") 1238, BIOS/firmware/flash memory ("BIOS, FW Flash") 1222, a DSP 1260, a drive 1220 such as a Solid State Disk ("SSD") or a Hard Disk Drive ("HDD"), a wireless local area network unit ("WLAN") 1250, a Bluetooth unit 1252, a Wireless Wide Area Network unit ("WWAN") 1256, a Global Positioning System (GPS) unit 1255, a camera ("USB 3.0 camera") 1254 such as a USB 3.0 camera, and/or a Low Power Double Data Rate ("LPDDR") memory unit ("LPDDR3") 1215 implemented in, for example, an LPDDR3 standard. These components may each be implemented in any suitable manner.

[0273] In at least one embodiment, other components may be communicatively coupled to processor 1210 through components described herein. In at least one embodiment, an accelerometer 1241, an ambient light sensor ("ALS") 1242, a compass 1243, and a gyroscope 1244 may be communicatively coupled to sensor hub 1240. In at least one embodiment, a thermal sensor 1239, a fan 1237, a keyboard 1236, and touch pad 1230 may be communicatively coupled to EC 1235. In at least one embodiment, speakers 1263, headphones 1264, and a microphone ("mic") 1265 may be communicatively coupled to an audio unit ("audio codec and class D amp") 1262, which may in turn be communicatively coupled to DSP 1260. In at least one embodiment, audio unit 1262 may include, for example and without limitation, an audio coder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 1257 may be communicatively coupled to WWAN unit 1256. In at least one embodiment, components such as WLAN unit 1250 and Bluetooth unit 1252, as well as WWAN unit 1256 may be implemented in a Next Generation Form Factor ("NGFF").

[0274] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system FIG. 12 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0275] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0276] FIG. 13 illustrates a computer system 1300, according to at least one embodiment.
In at least one embodiment, computer system 1300 is configured to implement various processes and methods described throughout this disclosure.

[0277] In at least one embodiment, computer system 1300 comprises, without limitation, at least one central processing unit ("CPU") 1302 that is connected to a communication bus 1310 implemented using any suitable protocol, such as PCI ("Peripheral Component Interconnect"), peripheral component interconnect express ("PCI-Express"), AGP ("Accelerated Graphics Port"), HyperTransport, or any other bus or point-to-point communication protocol(s). In at least one embodiment, computer system 1300 includes, without limitation, a main memory 1304 and control logic (e.g., implemented as hardware, software, or a combination thereof) and data are stored in main memory 1304, which may take form of random access memory ("RAM"). In at least one embodiment, a network interface subsystem ("network interface") 1322 provides an interface to other computing devices and networks for receiving data from and transmitting data to other systems with computer system 1300.

[0278] In at least one embodiment, computer system 1300 includes, without limitation, input devices 1308, a parallel processing system 1312, and display devices 1306 that can be implemented using a conventional cathode ray tube ("CRT"), a liquid crystal display ("LCD"), a light emitting diode ("LED") display, a plasma display, or other suitable display technologies. In at least one embodiment, user input is received from input devices 1308 such as keyboard, mouse, touchpad, microphone, etc. In at least one embodiment, each module described herein can be situated on a single semiconductor platform to form a processing system.

[0279] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system FIG. 13 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0280] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0281] FIG. 14 illustrates a computer system 1400, according to at least one embodiment. In at least one embodiment, computer system 1400 includes, without limitation, a computer 1410 and a USB stick 1420. In at least one embodiment, computer 1410 may include, without limitation, any number and type of processor(s) (not shown) and a memory (not shown).
In at least one embodiment, computer 1410 includes, without limitation, a server, a cloud instance, a laptop, and a desktop computer.

[0282] In at least one embodiment, USB stick 1420 includes, without limitation, a processing unit 1430, a USB interface 1440, and USB interface logic 1450. In at least one embodiment, processing unit 1430 may be any instruction execution system, apparatus, or device capable of executing instructions. In at least one embodiment, processing unit 1430 may include, without limitation, any number and type of processing cores (not shown). In at least one embodiment, processing unit 1430 comprises an application specific integrated circuit ("ASIC") that is optimized to perform any amount and type of operations associated with machine learning. For instance, in at least one embodiment, processing unit 1430 is a tensor processing unit ("TPU") that is optimized to perform machine learning inference operations. In at least one embodiment, processing unit 1430 is a vision processing unit ("VPU") that is optimized to perform machine vision and machine learning inference operations.

[0283] In at least one embodiment, USB interface 1440 may be any type of USB connector or USB socket. For instance, in at least one embodiment, USB interface 1440 is a USB 3.0 Type-C socket for data and power. In at least one embodiment, USB interface 1440 is a USB 3.0 Type-A connector. In at least one embodiment, USB interface logic 1450 may include any amount and type of logic that enables processing unit 1430 to interface with devices (e.g., computer 1410) via USB connector 1440.

[0284] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system FIG. 14 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0285] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0286] FIG. 15A illustrates an exemplary architecture in which a plurality of GPUs 1510(1)-1510(N) is communicatively coupled to a plurality of multi-core processors 1505(1)-1505(M) over high-speed links 1540(1)-1540(N) (e.g., buses, point-to-point interconnects, etc.). In at least one embodiment, high-speed links 1540(1)-1540(N) support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s or higher. In at least one embodiment, various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0.
In various figures, "N" and "M" represent positive integers, values of which may be different from figure to figure.

[0287] In addition, and in at least one embodiment, two or more of GPUs 1510 are interconnected over high-speed links 1529(1)-1529(2), which may be implemented using similar or different protocols/links than those used for high-speed links 1540(1)-1540(N). Similarly, two or more of multi-core processors 1505 may be connected over a high-speed link 1528, which may be a symmetric multi-processor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s or higher. Alternatively, all communication between various system components shown in FIG. 15A may be accomplished using similar protocols/links (e.g., over a common interconnection fabric).

[0288] In at least one embodiment, each multi-core processor 1505 is communicatively coupled to a processor memory 1501(1)-1501(M), via memory interconnects 1526(1)-1526(M), respectively, and each GPU 1510(1)-1510(N) is communicatively coupled to GPU memory 1520(1)-1520(N) over GPU memory interconnects 1550(1)-1550(N), respectively. In at least one embodiment, memory interconnects 1526 and 1550 may utilize similar or different memory access technologies. By way of example, and not limitation, processor memories 1501(1)-1501(M) and GPU memories 1520 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In at least one embodiment, some portion of processor memories 1501 may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).

[0289] As described herein, although various multi-core processors 1505 and GPUs 1510 may be physically coupled to a particular memory 1501, 1520, respectively, a unified memory architecture may be implemented in which a virtual system address space (also referred to as "effective address" space) is distributed among various physical memories. For example, processor memories 1501(1)-1501(M) may each comprise 64 GB of system memory address space and GPU memories 1520(1)-1520(N) may each comprise 32 GB of system memory address space resulting in a total of 256 GB addressable memory when M=2 and N=4. Other values for N and M are possible.
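The total in [0289] checks out: two 64 GB processor memories plus four 32 GB GPU memories give 256 GB of unified address space. The sketch below is just that arithmetic, parameterized over M and N.

```python
# Verify the address-space arithmetic in [0289].
def unified_space_gb(m_cpus, n_gpus, cpu_gb=64, gpu_gb=32):
    return m_cpus * cpu_gb + n_gpus * gpu_gb

assert unified_space_gb(2, 4) == 256  # the M=2, N=4 example in the text
```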
[0290] FIG. 15B illustrates additional details for an interconnection between a multi-core processor 1507 and a graphics acceleration module 1546 in accordance with one exemplary embodiment. In at least one embodiment, graphics acceleration module 1546 may include one or more GPU chips integrated on a line card which is coupled to processor 1507 via high-speed link 1540 (e.g., a PCIe bus, NVLink, etc.). In at least one embodiment, graphics acceleration module 1546 may alternatively be integrated on a package or chip with processor 1507.

[0291] In at least one embodiment, processor 1507 includes a plurality of cores 1560A-1560D, each with a translation lookaside buffer ("TLB") 1561A-1561D and one or more caches 1562A-1562D. In at least one embodiment, cores 1560A-1560D may include various other components for executing instructions and processing data that are not illustrated. In at least one embodiment, caches 1562A-1562D may comprise Level 1 (L1) and Level 2 (L2) caches. In addition, one or more shared caches 1556 may be included in caches 1562A-1562D and shared by sets of cores 1560A-1560D. For example, one embodiment of processor 1507 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one or more L2 and L3 caches are shared by two adjacent cores. In at least one embodiment, processor 1507 and graphics acceleration module 1546 connect with system memory 1514, which may include processor memories 1501(1)-1501(M) of FIG. 15A.

[0292] In at least one embodiment, coherency is maintained for data and instructions stored in various caches 1562A-1562D, 1556 and system memory 1514 via inter-core communication over a coherence bus 1564. In at least one embodiment, for example, each cache may have cache coherency logic/circuitry associated therewith to communicate over coherence bus 1564 in response to detected reads or writes to particular cache lines. In at least one embodiment, a cache snooping protocol is implemented over coherence bus 1564 to snoop cache accesses.

[0293] In at least one embodiment, a proxy circuit 1525 communicatively couples graphics acceleration module 1546 to coherence bus 1564, allowing graphics acceleration module 1546 to participate in a cache coherence protocol as a peer of cores 1560A-1560D. In particular, in at least one embodiment, an interface 1535 provides connectivity to proxy circuit 1525 over high-speed link 1540 and an interface 1537 connects graphics acceleration module 1546 to high-speed link 1540.

[0294] In at least one embodiment, an accelerator integration circuit 1536 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 1531(1)-1531(N) of graphics acceleration module 1546. In at least one embodiment, graphics processing engines 1531(1)-1531(N) may each comprise a separate graphics processing unit (GPU). In at least one embodiment, graphics processing engines 1531(1)-1531(N) alternatively may comprise different types of graphics processing engines within a GPU, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, graphics acceleration module 1546 may be a GPU with a plurality of graphics processing engines 1531(1)-1531(N) or graphics processing engines 1531(1)-1531(N) may be individual GPUs integrated on a common package, line card, or chip.

[0295] In at least one embodiment, accelerator integration circuit 1536 includes a memory management unit (MMU) 1539 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 1514. In at least one embodiment, MMU 1539 may also include a translation lookaside buffer (TLB) (not shown) for caching virtual/effective to physical/real address translations. In at least one embodiment, a cache 1538 can store commands and data for efficient access by graphics processing engines 1531(1)-1531(N). In at least one embodiment, data stored in cache 1538 and graphics memories 1533(1)-1533(M) is kept coherent with core caches 1562A-1562D, 1556 and system memory 1514, possibly using a fetch unit 1544.
As mentioned, this may be accomplished via proxy circuit 1525 on behalf of cache 1538 and memories 1533(1)-1533(M) (e.g., sending updates to cache 1538 related to modifications/accesses of cache lines on processor caches 1562A-1562D, 1556 and receiving updates from cache 1538).

[0296] In at least one embodiment, a set of registers 1545 store context data for threads executed by graphics processing engines 1531(1)-1531(N) and a context management circuit 1548 manages thread contexts. For example, context management circuit 1548 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1548 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In at least one embodiment, an interrupt management circuit 1547 receives and processes interrupts received from system devices.

[0297] In at least one embodiment, virtual/effective addresses from a graphics processing engine 1531 are translated to real/physical addresses in system memory 1514 by MMU 1539. In at least one embodiment, accelerator integration circuit 1536 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 1546 and/or other accelerator devices. In at least one embodiment, graphics accelerator module 1546 may be dedicated to a single application executed on processor 1507 or may be shared between multiple applications. In at least one embodiment, a virtualized graphics execution environment is presented in which resources of graphics processing engines 1531(1)-1531(N) are shared with multiple applications or virtual machines (VMs). In at least one embodiment, resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on processing requirements and priorities associated with VMs and/or applications.

[0298] In at least one embodiment, accelerator integration circuit 1536 performs as a bridge to a system for graphics acceleration module 1546 and provides address translation and system memory cache services. In addition, in at least one embodiment, accelerator integration circuit 1536 may provide virtualization facilities for a host processor to manage virtualization of graphics processing engines 1531(1)-1531(N), interrupts, and memory management.

[0299] In at least one embodiment, because hardware resources of graphics processing engines 1531(1)-1531(N) are mapped explicitly to a real address space seen by host processor 1507, any host processor can address these resources directly using an effective address value. In at least one embodiment, one function of accelerator integration circuit 1536 is physical separation of graphics processing engines 1531(1)-1531(N) so that they appear to a system as independent units.

[0300] In at least one embodiment, one or more graphics memories 1533(1)-1533(M) are coupled to each of graphics processing engines 1531(1)-1531(N), respectively, and N=M. In at least one embodiment, graphics memories 1533(1)-1533(M) store instructions and data being processed by each of graphics processing engines 1531(1)-1531(N).
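The save/restore behavior attributed to context management circuit 1548 in paragraph [0296] might be sketched, purely for illustration, as follows; the structure layout, register count, and function names are invented and are not defined by this description:

    /* Hypothetical sketch of context save/restore per paragraph [0296].
     * NUM_CTX_REGS and all names are assumptions for illustration only. */
    #include <string.h>

    #define NUM_CTX_REGS 64

    struct thread_context {
        /* designated save region in memory, located via a context pointer */
        unsigned long regs[NUM_CTX_REGS];
    };

    /* On a context switch, current register values are stored to the
     * designated region identified by the context pointer. */
    static void context_save(struct thread_context *ctx,
                             const unsigned long *live_regs)
    {
        memcpy(ctx->regs, live_regs, sizeof ctx->regs);
    }

    /* When returning to a context, register values are restored. */
    static void context_restore(const struct thread_context *ctx,
                                unsigned long *live_regs)
    {
        memcpy(live_regs, ctx->regs, sizeof ctx->regs);
    }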
In at least one embodiment, graphics memories 1533(1)-1533(M) may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.

[0301] In at least one embodiment, to reduce data traffic over high-speed link 1540, biasing techniques can be used to ensure that data stored in graphics memories 1533(1)-1533(M) is data that will be used most frequently by graphics processing engines 1531(1)-1531(N) and preferably not used by cores 1560A-1560D (at least not frequently). Similarly, in at least one embodiment, a biasing mechanism attempts to keep data needed by cores (and preferably not graphics processing engines 1531(1)-1531(N)) within caches 1562A-1562D, 1556 and system memory 1514.

[0302] FIG. 15C illustrates another exemplary embodiment in which accelerator integration circuit 1536 is integrated within processor 1507. In this embodiment, graphics processing engines 1531(1)-1531(N) communicate directly over high-speed link 1540 to accelerator integration circuit 1536 via interface 1537 and interface 1535 (which, again, may be any form of bus or interface protocol). In at least one embodiment, accelerator integration circuit 1536 may perform similar operations as those described with respect to FIG. 15B, but potentially at a higher throughput given its close proximity to coherence bus 1564 and caches 1562A-1562D, 1556. In at least one embodiment, an accelerator integration circuit supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization), which may include programming models which are controlled by accelerator integration circuit 1536 and programming models which are controlled by graphics acceleration module 1546.

[0303] In at least one embodiment, graphics processing engines 1531(1)-1531(N) are dedicated to a single application or process under a single operating system. In at least one embodiment, a single application can funnel other application requests to graphics processing engines 1531(1)-1531(N), providing virtualization within a VM/partition.

[0304] In at least one embodiment, graphics processing engines 1531(1)-1531(N) may be shared by multiple VM/application partitions. In at least one embodiment, shared models may use a system hypervisor to virtualize graphics processing engines 1531(1)-1531(N) to allow access by each operating system. In at least one embodiment, for single-partition systems without a hypervisor, graphics processing engines 1531(1)-1531(N) are owned by an operating system. In at least one embodiment, an operating system can virtualize graphics processing engines 1531(1)-1531(N) to provide access to each process or application.

[0305] In at least one embodiment, graphics acceleration module 1546 or an individual graphics processing engine 1531(1)-1531(N) selects a process element using a process handle. In at least one embodiment, process elements are stored in system memory 1514 and are addressable using an effective address to real address translation technique described herein. In at least one embodiment, a process handle may be an implementation-specific value provided to a host process when registering its context with graphics processing engine 1531(1)-1531(N) (that is, calling system software to add a process element to a process element linked list).
In at least one embodiment, a lower 16 bits of a process handle may be an offset of a process element within a process element linked list.

[0306] FIG. 15D illustrates an exemplary accelerator integration slice 1590. In at least one embodiment, a "slice" comprises a specified portion of processing resources of accelerator integration circuit 1536. In at least one embodiment, an application's effective address space 1582 within system memory 1514 stores process elements 1583. In at least one embodiment, process elements 1583 are stored in response to GPU invocations 1581 from applications 1580 executed on processor 1507. In at least one embodiment, a process element 1583 contains process state for corresponding application 1580. In at least one embodiment, a work descriptor (WD) 1584 contained in process element 1583 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1584 is a pointer to a job request queue in an application's effective address space 1582.

[0307] In at least one embodiment, graphics acceleration module 1546 and/or individual graphics processing engines 1531(1)-1531(N) can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process states and sending a WD 1584 to a graphics acceleration module 1546 to start a job in a virtualized environment may be included.

[0308] In at least one embodiment, a dedicated-process programming model is implementation-specific. In at least one embodiment, in this model, a single process owns graphics acceleration module 1546 or an individual graphics processing engine 1531. In at least one embodiment, when graphics acceleration module 1546 is owned by a single process, a hypervisor initializes accelerator integration circuit 1536 for an owning partition and an operating system initializes accelerator integration circuit 1536 for an owning process when graphics acceleration module 1546 is assigned.

[0309] In at least one embodiment, in operation, a WD fetch unit 1591 in accelerator integration slice 1590 fetches next WD 1584, which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 1546. In at least one embodiment, data from WD 1584 may be stored in registers 1545 and used by MMU 1539, interrupt management circuit 1547 and/or context management circuit 1548 as illustrated. For example, one embodiment of MMU 1539 includes segment/page walk circuitry for accessing segment/page tables 1586 within an OS virtual address space 1585. In at least one embodiment, interrupt management circuit 1547 may process interrupt events 1592 received from graphics acceleration module 1546. In at least one embodiment, when performing graphics operations, an effective address 1593 generated by a graphics processing engine 1531(1)-1531(N) is translated to a real address by MMU 1539.

[0310] In at least one embodiment, registers 1545 are duplicated for each graphics processing engine 1531(1)-1531(N) and/or graphics acceleration module 1546 and may be initialized by a hypervisor or an operating system. In at least one embodiment, each of these duplicated registers may be included in an accelerator integration slice 1590.
Exemplary registers that may be initialized by a hypervisor are shown in Table 1.

Table 1 - Hypervisor Initialized Registers
  Register #   Description
  1            Slice Control Register
  2            Real Address (RA) Scheduled Processes Area Pointer
  3            Authority Mask Override Register
  4            Interrupt Vector Table Entry Offset
  5            Interrupt Vector Table Entry Limit
  6            State Register
  7            Logical Partition ID
  8            Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
  9            Storage Description Register

[0311] Exemplary registers that may be initialized by an operating system are shown in Table 2.

Table 2 - Operating System Initialized Registers
  Register #   Description
  1            Process and Thread Identification
  2            Effective Address (EA) Context Save/Restore Pointer
  3            Virtual Address (VA) Accelerator Utilization Record Pointer
  4            Virtual Address (VA) Storage Segment Table Pointer
  5            Authority Mask
  6            Work Descriptor

[0312] In at least one embodiment, each WD 1584 is specific to a particular graphics acceleration module 1546 and/or graphics processing engines 1531(1)-1531(N). In at least one embodiment, it contains all information required by a graphics processing engine 1531(1)-1531(N) to do work, or it can be a pointer to a memory location where an application has set up a command queue of work to be completed.

[0313] FIG. 15E illustrates additional details for one exemplary embodiment of a shared model. This embodiment includes a hypervisor real address space 1598 in which a process element list 1599 is stored. In at least one embodiment, hypervisor real address space 1598 is accessible via a hypervisor 1596 which virtualizes graphics acceleration module engines for operating system 1595.

[0314] In at least one embodiment, shared programming models allow for all or a subset of processes from all or a subset of partitions in a system to use a graphics acceleration module 1546. In at least one embodiment, there are two programming models where graphics acceleration module 1546 is shared by multiple processes and partitions, namely time-sliced shared and graphics-directed shared.

[0315] In at least one embodiment, in this model, system hypervisor 1596 owns graphics acceleration module 1546 and makes its function available to all operating systems 1595. In at least one embodiment, for a graphics acceleration module 1546 to support virtualization by system hypervisor 1596, graphics acceleration module 1546 may adhere to certain requirements, such as: (1) an application's job request must be autonomous (that is, state does not need to be maintained between jobs), or graphics acceleration module 1546 must provide a context save and restore mechanism; (2) an application's job request is guaranteed by graphics acceleration module 1546 to complete in a specified amount of time, including any translation faults, or graphics acceleration module 1546 provides an ability to preempt processing of a job; and (3) graphics acceleration module 1546 must be guaranteed fairness between processes when operating in a directed shared programming model.

[0316] In at least one embodiment, application 1580 is required to make an operating system 1595 system call with a graphics acceleration module type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). In at least one embodiment, graphics acceleration module type describes a targeted acceleration function for a system call. In at least one embodiment, graphics acceleration module type may be a system-specific value.
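A hypothetical C view of the system-call parameters listed in paragraph [0316] is sketched below; field names and widths are assumptions made for illustration, not definitions from this description:

    /* Hypothetical grouping of the application-to-OS call parameters of
     * paragraph [0316]; all names and field widths are invented. */
    #include <stdint.h>

    struct gfx_accel_syscall_params {
        uint32_t module_type; /* targeted acceleration function (system-specific) */
        uint64_t wd;          /* work descriptor: a command or pointer to a job queue */
        uint64_t amr;         /* authority mask register (AMR) value */
        uint64_t csrp;        /* effective address of context save/restore area */
    };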
In at least one embodiment, WD is formatted specifically for graphics acceleration module 1546 and can be in a form of a graphics acceleration module 1546 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe work to be done by graphics acceleration module 1546.

[0317] In at least one embodiment, an AMR value is an AMR state to use for a current process. In at least one embodiment, a value passed to an operating system is similar to an application setting an AMR. In at least one embodiment, if accelerator integration circuit 1536 (not shown) and graphics acceleration module 1546 implementations do not support a User Authority Mask Override Register (UAMOR), an operating system may apply a current UAMOR value to an AMR value before passing an AMR in a hypervisor call. In at least one embodiment, hypervisor 1596 may optionally apply a current Authority Mask Override Register (AMOR) value before placing an AMR into process element 1583. In at least one embodiment, CSRP is one of registers 1545 containing an effective address of an area in an application's effective address space 1582 for graphics acceleration module 1546 to save and restore context state. In at least one embodiment, this pointer is optional if no state is required to be saved between jobs or when a job is preempted. In at least one embodiment, context save/restore area may be pinned system memory.

[0318] Upon receiving a system call, operating system 1595 may verify that application 1580 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, operating system 1595 then calls hypervisor 1596 with information shown in Table 3.

Table 3 - OS to Hypervisor Call Parameters
  Parameter #   Description
  1             A work descriptor (WD)
  2             An Authority Mask Register (AMR) value (potentially masked)
  3             An effective address (EA) Context Save/Restore Area Pointer (CSRP)
  4             A process ID (PID) and optional thread ID (TID)
  5             A virtual address (VA) accelerator utilization record pointer (AURP)
  6             Virtual address of storage segment table pointer (SSTP)
  7             A logical interrupt service number (LISN)

[0319] In at least one embodiment, upon receiving a hypervisor call, hypervisor 1596 verifies that operating system 1595 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, hypervisor 1596 then puts process element 1583 into a process element linked list for a corresponding graphics acceleration module 1546 type. In at least one embodiment, a process element may include information shown in Table 4.

Table 4 - Process Element Information
  Element #   Description
  1           A work descriptor (WD)
  2           An Authority Mask Register (AMR) value (potentially masked)
  3           An effective address (EA) Context Save/Restore Area Pointer (CSRP)
  4           A process ID (PID) and optional thread ID (TID)
  5           A virtual address (VA) accelerator utilization record pointer (AURP)
  6           Virtual address of storage segment table pointer (SSTP)
  7           A logical interrupt service number (LISN)
  8           Interrupt vector table, derived from hypervisor call parameters
  9           A state register (SR) value
  10          A logical partition ID (LPID)
  11          A real address (RA) hypervisor accelerator utilization record pointer
  12          Storage Descriptor Register (SDR)

[0320] In at least one embodiment, hypervisor initializes a plurality of accelerator integration slice 1590 registers 1545.

[0321] As illustrated in FIG.
15F, in at least one embodiment, a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1501(1)-1501(M) and GPU memories 1520(1)-1520(N). In this implementation, operations executed on GPUs 1510(1)-1510(N) utilize a same virtual/effective memory address space to access processor memories 1501(1)-1501(M) and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1501(1), a second portion to second processor memory 1501(N), a third portion to GPU memory 1520(1), and so on. In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1501 and GPU memories 1520, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

[0322] In at least one embodiment, bias/coherence management circuitry 1594A-1594E within one or more of MMUs 1539A-1539E ensures cache coherence between caches of one or more host processors (e.g., 1505) and GPUs 1510 and implements biasing techniques indicating physical memories in which certain types of data should be stored. In at least one embodiment, while multiple instances of bias/coherence management circuitry 1594A-1594E are illustrated in FIG. 15F, bias/coherence circuitry may be implemented within an MMU of one or more host processors 1505 and/or within accelerator integration circuit 1536.

[0323] One embodiment allows GPU memories 1520 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering performance drawbacks associated with full system cache coherence. In at least one embodiment, an ability for GPU memories 1520 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. In at least one embodiment, this arrangement allows software of host processor 1505 to set up operands and access computation results, without overhead of traditional I/O DMA data copies. In at least one embodiment, such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. In at least one embodiment, an ability to access GPU memories 1520 without cache coherence overheads can be critical to execution time of an offloaded computation. In at least one embodiment, in cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce an effective write bandwidth seen by a GPU 1510. In at least one embodiment, efficiency of operand setup, efficiency of results access, and efficiency of GPU computation may play a role in determining effectiveness of a GPU offload.

[0324] In at least one embodiment, selection of GPU bias and host processor bias is driven by a bias tracker data structure. In at least one embodiment, a bias table may be used, for example, which may be a page-granular structure (e.g., controlled at a granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. In at least one embodiment, a bias table may be implemented in a stolen memory range of one or more GPU memories 1520, with or without a bias cache in a GPU 1510 (e.g., to cache frequently/recently used entries of a bias table).
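A minimal sketch of such a page-granular bias table, assuming one bit per page and a 4 KiB page size (both assumptions; paragraph [0324] permits 1 or 2 bits and does not fix a page size), is given below in illustrative C; the routing rule follows the description in paragraph [0325] below:

    /* Illustrative bias-table lookup: one bit per GPU-attached memory page,
     * 1 = GPU bias, 0 = host bias. Sizes and names are assumptions only. */
    #include <stdint.h>

    #define PAGE_SHIFT 12               /* assume 4 KiB pages */

    static uint8_t bias_table[1 << 20]; /* bitmap covering 8M pages (32 GiB) */

    static int page_in_gpu_bias(uint64_t addr)
    {
        uint64_t page = addr >> PAGE_SHIFT;
        return (bias_table[page >> 3] >> (page & 7)) & 1;
    }

    enum route { TO_GPU_MEMORY, TO_HOST_PROCESSOR };

    /* Local GPU requests that find their page in GPU bias go directly to
     * the corresponding GPU memory; otherwise they go to the host. */
    static enum route route_local_gpu_request(uint64_t addr)
    {
        return page_in_gpu_bias(addr) ? TO_GPU_MEMORY : TO_HOST_PROCESSOR;
    }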
Alternatively, in at least one embodiment, an entire bias table may be maintained within a GPU.

[0325] In at least one embodiment, a bias table entry associated with each access to a GPU-attached memory 1520 is accessed prior to actual access to a GPU memory, causing the following operations. In at least one embodiment, local requests from a GPU 1510 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 1520. In at least one embodiment, local requests from a GPU that find their page in host bias are forwarded to processor 1505 (e.g., over a high-speed link as described herein). In at least one embodiment, requests from processor 1505 that find a requested page in host processor bias complete a request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to a GPU 1510. In at least one embodiment, a GPU may then transition a page to a host processor bias if it is not currently using a page. In at least one embodiment, a bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

[0326] In at least one embodiment, one mechanism for changing bias state employs an API call (e.g., OpenCL), which, in turn, calls a GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to a GPU directing it to change a bias state and, for some transitions, perform a cache flushing operation in a host. In at least one embodiment, a cache flushing operation is used for a transition from host processor 1505 bias to GPU bias, but is not for an opposite transition.

[0327] In at least one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by host processor 1505. In at least one embodiment, to access these pages, processor 1505 may request access from GPU 1510, which may or may not grant access right away. In at least one embodiment, thus, to reduce communication between processor 1505 and GPU 1510 it is beneficial to ensure that GPU-biased pages are those which are required by a GPU but not host processor 1505, and vice versa.

[0328] Hardware structure(s) 715 are used to perform one or more embodiments. Details regarding a hardware structure(s) 715 may be provided herein in conjunction with FIGS. 7A and/or 7B.

[0329] FIG. 16 illustrates exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

[0330] FIG. 16 is a block diagram illustrating an exemplary system on a chip integrated circuit 1600 that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, integrated circuit 1600 includes one or more application processor(s) 1605 (e.g., CPUs), at least one graphics processor 1610, and may additionally include an image processor 1615 and/or a video processor 1620, any of which may be a modular IP core. In at least one embodiment, integrated circuit 1600 includes peripheral or bus logic including a USB controller 1625, a UART controller 1630, an SPI/SDIO controller 1635, and an I2S/I2C controller 1640.
In at least one embodiment, integrated circuit 1600 can include a display device 1645 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1650 and a mobile industry processor interface (MIPI) display interface 1655. In at least one embodiment, storage may be provided by a flash memory subsystem 1660 including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 1665 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 1670.

[0331] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in integrated circuit 1600 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0332] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0333] FIGS. 17A and 17B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

[0334] FIGS. 17A and 17B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 17A illustrates an exemplary graphics processor 1710 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. FIG. 17B illustrates an additional exemplary graphics processor 1740 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, graphics processor 1710 of FIG. 17A is a low power graphics processor core. In at least one embodiment, graphics processor 1740 of FIG. 17B is a higher performance graphics processor core. In at least one embodiment, each of graphics processors 1710, 1740 can be variants of graphics processor 1610 of FIG. 16.

[0335] In at least one embodiment, graphics processor 1710 includes a vertex processor 1705 and one or more fragment processor(s) 1715A-1715N (e.g., 1715A, 1715B, 1715C, 1715D, through 1715N-1, and 1715N).
In at least one embodiment, graphics processor 1710 can execute different shader programs via separate logic, such that vertex processor 1705 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 1715A-1715N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1705 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 1715A-1715N use primitive and vertex data generated by vertex processor 1705 to produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s) 1715A-1715N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API.

[0336] In at least one embodiment, graphics processor 1710 additionally includes one or more memory management units (MMUs) 1720A-1720B, cache(s) 1725A-1725B, and circuit interconnect(s) 1730A-1730B. In at least one embodiment, one or more MMU(s) 1720A-1720B provide for virtual to physical address mapping for graphics processor 1710, including for vertex processor 1705 and/or fragment processor(s) 1715A-1715N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 1725A-1725B. In at least one embodiment, one or more MMU(s) 1720A-1720B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s) 1605, image processors 1615, and/or video processors 1620 of FIG. 16, such that each processor 1605-1620 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 1730A-1730B enable graphics processor 1710 to interface with other IP cores within SoC, either via an internal bus of SoC or via a direct connection.

[0337] In at least one embodiment, graphics processor 1740 includes one or more shader core(s) 1755A-1755N (e.g., 1755A, 1755B, 1755C, 1755D, 1755E, 1755F, through 1755N-1, and 1755N) as shown in FIG. 17B, which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 1740 includes an inter-core task manager 1745, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1755A-1755N, and a tiling unit 1758 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.

[0338] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in the integrated circuits of FIGS. 17A and/or 17B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0339] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0340] FIGS. 18A-18B illustrate additional exemplary graphics processor logic according to embodiments described herein. FIG. 18A illustrates a graphics core 1800 that may be included within graphics processor 1610 of FIG. 16, in at least one embodiment, and may be a unified shader core 1755A-1755N as in FIG. 17B in at least one embodiment. FIG. 18B illustrates a highly-parallel general-purpose graphics processing unit ("GPGPU") 1830 suitable for deployment on a multi-chip module in at least one embodiment.

[0341] In at least one embodiment, graphics core 1800 includes a shared instruction cache 1802, a texture unit 1818, and a cache/shared memory 1820 that are common to execution resources within graphics core 1800. In at least one embodiment, graphics core 1800 can include multiple slices 1801A-1801N or a partition for each core, and a graphics processor can include multiple instances of graphics core 1800. In at least one embodiment, slices 1801A-1801N can include support logic including a local instruction cache 1804A-1804N, a thread scheduler 1806A-1806N, a thread dispatcher 1808A-1808N, and a set of registers 1810A-1810N. In at least one embodiment, slices 1801A-1801N can include a set of additional function units (AFUs 1812A-1812N), floating-point units (FPUs 1814A-1814N), integer arithmetic logic units (ALUs 1816A-1816N), address computational units (ACUs 1813A-1813N), double-precision floating-point units (DPFPUs 1815A-1815N), and matrix processing units (MPUs 1817A-1817N).

[0342] In at least one embodiment, FPUs 1814A-1814N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 1815A-1815N perform double precision (64-bit) floating point operations. In at least one embodiment, ALUs 1816A-1816N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs 1817A-1817N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs 1817A-1817N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM).
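As an illustration of the GEMM operation that paragraph [0342] says MPUs 1817A-1817N accelerate, a naive scalar reference in C is shown below; it conveys only the mathematical operation and bears no relation to the hardware implementation:

    /* Reference scalar GEMM: C = A x B, all matrices row-major.
     * Illustrative only; real MPUs operate on tiles in mixed precision. */
    void gemm(int m, int n, int k,
              const float *a,  /* m x k */
              const float *b,  /* k x n */
              float *c)        /* m x n */
    {
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) {
                float acc = 0.0f;
                for (int p = 0; p < k; p++)
                    acc += a[i * k + p] * b[p * n + j];
                c[i * n + j] = acc;
            }
    }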
In at least one embodiment, AFUs 1812A-1812N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., sine, cosine, etc.).

[0343] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics core 1800 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0344] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0345] FIG. 18B illustrates a general-purpose graphics processing unit (GPGPU) 1830 that can be configured to enable highly-parallel compute operations to be performed by an array of graphics processing units, in at least one embodiment. In at least one embodiment, GPGPU 1830 can be linked directly to other instances of GPGPU 1830 to create a multi-GPU cluster to improve training speed for deep neural networks. In at least one embodiment, GPGPU 1830 includes a host interface 1832 to enable a connection with a host processor. In at least one embodiment, host interface 1832 is a PCI Express interface. In at least one embodiment, host interface 1832 can be a vendor-specific communications interface or communications fabric. In at least one embodiment, GPGPU 1830 receives commands from a host processor and uses a global scheduler 1834 to distribute execution threads associated with those commands to a set of compute clusters 1836A-1836H. In at least one embodiment, compute clusters 1836A-1836H share a cache memory 1838. In at least one embodiment, cache memory 1838 can serve as a higher-level cache for cache memories within compute clusters 1836A-1836H.

[0346] In at least one embodiment, GPGPU 1830 includes memory 1844A-1844B coupled with compute clusters 1836A-1836H via a set of memory controllers 1842A-1842B. In at least one embodiment, memory 1844A-1844B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory.

[0347] In at least one embodiment, compute clusters 1836A-1836H each include a set of graphics cores, such as graphics core 1800 of FIG.
18A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, in at least one embodiment, at least a subset of floating point units in each of compute clusters 1836A-1836H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations.

[0348] In at least one embodiment, multiple instances of GPGPU 1830 can be configured to operate as a compute cluster. In at least one embodiment, communication used by compute clusters 1836A-1836H for synchronization and data exchange varies across embodiments. In at least one embodiment, multiple instances of GPGPU 1830 communicate over host interface 1832. In at least one embodiment, GPGPU 1830 includes an I/O hub 1839 that couples GPGPU 1830 with a GPU link 1840 that enables a direct connection to other instances of GPGPU 1830. In at least one embodiment, GPU link 1840 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU 1830. In at least one embodiment, GPU link 1840 couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In at least one embodiment, multiple instances of GPGPU 1830 are located in separate data processing systems and communicate via a network device that is accessible via host interface 1832. In at least one embodiment, GPU link 1840 can be configured to enable a connection to a host processor in addition to or as an alternative to host interface 1832.

[0349] In at least one embodiment, GPGPU 1830 can be configured to train neural networks. In at least one embodiment, GPGPU 1830 can be used within an inferencing platform. In at least one embodiment, in which GPGPU 1830 is used for inferencing, GPGPU 1830 may include fewer compute clusters 1836A-1836H relative to when GPGPU 1830 is used for training a neural network. In at least one embodiment, memory technology associated with memory 1844A-1844B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In at least one embodiment, an inferencing configuration of GPGPU 1830 can support inferencing-specific instructions. For example, in at least one embodiment, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which may be used during inferencing operations for deployed neural networks.

[0350] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
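A scalar C equivalent of the kind of 8-bit integer dot product instruction mentioned in paragraph [0349] is sketched below; the 32-bit accumulator and absence of saturation are assumptions for illustration, not hardware semantics:

    /* Illustrative scalar model of an int8 dot product with wide
     * accumulation, as commonly used for quantized neural network
     * inferencing. Not a description of any particular instruction. */
    #include <stdint.h>

    static int32_t dot_product_s8(const int8_t *x, const int8_t *y, int n)
    {
        int32_t acc = 0;  /* 32-bit accumulator avoids int8 overflow */
        for (int i = 0; i < n; i++)
            acc += (int32_t)x[i] * (int32_t)y[i];
        return acc;
    }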
In at least one embodiment, inference and/or training logic 715 may be used in GPGPU 1830 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0351] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

[0352] FIG. 19 is a block diagram illustrating a computing system 1900 according to at least one embodiment. In at least one embodiment, computing system 1900 includes a processing subsystem 1901 having one or more processor(s) 1902 and a system memory 1904 communicating via an interconnection path that may include a memory hub 1905. In at least one embodiment, memory hub 1905 may be a separate component within a chipset component or may be integrated within one or more processor(s) 1902. In at least one embodiment, memory hub 1905 couples with an I/O subsystem 1911 via a communication link 1906. In at least one embodiment, I/O subsystem 1911 includes an I/O hub 1907 that can enable computing system 1900 to receive input from one or more input device(s) 1908. In at least one embodiment, I/O hub 1907 can enable a display controller, which may be included in one or more processor(s) 1902, to provide outputs to one or more display device(s) 1910A. In at least one embodiment, one or more display device(s) 1910A coupled with I/O hub 1907 can include a local, internal, or embedded display device.

[0353] In at least one embodiment, processing subsystem 1901 includes one or more parallel processor(s) 1912 coupled to memory hub 1905 via a bus or other communication link 1913. In at least one embodiment, communication link 1913 may use one of any number of standards-based communication link technologies or protocols, such as, but not limited to, PCI Express, or may be a vendor-specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s) 1912 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many-integrated core (MIC) processor. In at least one embodiment, some or all of parallel processor(s) 1912 form a graphics processing subsystem that can output pixels to one of one or more display device(s) 1910A coupled via I/O hub 1907. In at least one embodiment, parallel processor(s) 1912 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 1910B.

[0354] In at least one embodiment, a system storage unit 1914 can connect to I/O hub 1907 to provide a storage mechanism for computing system 1900.
In at least one embodiment, an I/O switch 1916 can be used to provide an interface mechanism to enable connections between I/O hub 1907 and other components, such as a network adapter 1918 and/or a wireless network adapter 1919 that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s) 1920. In at least one embodiment, network adapter 1918 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 1919 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.

[0355] In at least one embodiment, computing system 1900 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, that may also be connected to I/O hub 1907. In at least one embodiment, communication paths interconnecting various components in FIG. 19 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or other bus or point-to-point communication interfaces and/or protocol(s), such as NV-Link high-speed interconnect, or interconnect protocols.

[0356] In at least one embodiment, parallel processor(s) 1912 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). In at least one embodiment, parallel processor(s) 1912 incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system 1900 may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, parallel processor(s) 1912, memory hub 1905, processor(s) 1902, and I/O hub 1907 can be integrated into a system on chip (SoC) integrated circuit. In at least one embodiment, components of computing system 1900 can be integrated into a single package to form a system in package (SIP) configuration. In at least one embodiment, at least a portion of components of computing system 1900 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.

[0357] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system 1900 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0358] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.

PROCESSORS

[0359] FIG.
20A illustrates a parallel processor 2000 according to at least one embodiment. In at least one embodiment, various components of parallel processor 2000 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). In at least one embodiment, illustrated parallel processor 2000 is a variant of one or more parallel processor(s) 1912 shown in FIG. 19 according to an exemplary embodiment.

[0360] In at least one embodiment, parallel processor 2000 includes a parallel processing unit 2002. In at least one embodiment, parallel processing unit 2002 includes an I/O unit 2004 that enables communication with other devices, including other instances of parallel processing unit 2002. In at least one embodiment, I/O unit 2004 may be directly connected to other devices. In at least one embodiment, I/O unit 2004 connects with other devices via use of a hub or switch interface, such as a memory hub 2005. In at least one embodiment, connections between memory hub 2005 and I/O unit 2004 form a communication link 2013. In at least one embodiment, I/O unit 2004 connects with a host interface 2006 and a memory crossbar 2016, where host interface 2006 receives commands directed to performing processing operations and memory crossbar 2016 receives commands directed to performing memory operations.

[0361] In at least one embodiment, when host interface 2006 receives a command buffer via I/O unit 2004, host interface 2006 can direct work operations to perform those commands to a front end 2008. In at least one embodiment, front end 2008 couples with a scheduler 2010, which is configured to distribute commands or other work items to a processing cluster array 2012. In at least one embodiment, scheduler 2010 ensures that processing cluster array 2012 is properly configured and in a valid state before tasks are distributed to a cluster of processing cluster array 2012. In at least one embodiment, scheduler 2010 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller-implemented scheduler 2010 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 2012. In at least one embodiment, host software can provide workloads for scheduling on processing cluster array 2012 via one of multiple graphics processing paths. In at least one embodiment, workloads can then be automatically distributed across processing cluster array 2012 by scheduler 2010 logic within a microcontroller including scheduler 2010.

[0362] In at least one embodiment, processing cluster array 2012 can include up to "N" processing clusters (e.g., cluster 2014A, cluster 2014B, through cluster 2014N), where "N" represents a positive integer (which may be a different integer "N" than used in other figures). In at least one embodiment, each cluster 2014A-2014N of processing cluster array 2012 can execute a large number of concurrent threads. In at least one embodiment, scheduler 2010 can allocate work to clusters 2014A-2014N of processing cluster array 2012 using various scheduling and/or work distribution algorithms, which may vary depending on workload arising for each type of program or computation.
In at least one embodiment, scheduling can be handled dynamically by scheduler 2010, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing cluster array 2012. In at least one embodiment, different clusters 2014A-2014N of processing cluster array 2012 can be allocated for processing different types of programs or for performing different types of computations.

[0363] In at least one embodiment, processing cluster array 2012 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing cluster array 2012 is configured to perform general-purpose parallel compute operations. For example, in at least one embodiment, processing cluster array 2012 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.

[0364] In at least one embodiment, processing cluster array 2012 is configured to perform parallel graphics processing operations. In at least one embodiment, processing cluster array 2012 can include additional logic to support execution of such graphics processing operations, including but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing cluster array 2012 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit 2002 can transfer data from system memory via I/O unit 2004 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., parallel processor memory 2022), then written back to system memory.

[0365] In at least one embodiment, when parallel processing unit 2002 is used to perform graphics processing, scheduler 2010 can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters 2014A-2014N of processing cluster array 2012. In at least one embodiment, portions of processing cluster array 2012 can be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 2014A-2014N may be stored in buffers to allow intermediate data to be transmitted between clusters 2014A-2014N for further processing.

[0366] In at least one embodiment, processing cluster array 2012 can receive processing tasks to be executed via scheduler 2010, which receives commands defining processing tasks from front end 2008. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed).
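A hypothetical descriptor matching the characterization of a processing task in paragraph [0366] might look like the following illustrative C structure; all names and field widths are invented for illustration:

    /* Hypothetical processing-task descriptor per paragraph [0366]:
     * indices of data to be processed plus state parameters and a
     * command selecting the program to execute. */
    #include <stdint.h>
    #include <stddef.h>

    struct processing_task {
        uint32_t        program_id;  /* which program is to be executed */
        uint32_t        state_flags; /* state parameters for processing */
        const uint32_t *indices;     /* indices of surface/primitive/vertex/pixel data */
        size_t          index_count; /* number of indices */
    };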
In at least one embodiment, scheduler 2010 may be configured to fetch indices corresponding to tasks or may receive indices from front end 2008. In at least one embodiment, front end 2008 can be configured to ensure processing cluster array 2012 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.

[0367] In at least one embodiment, each of one or more instances of parallel processing unit 2002 can couple with a parallel processor memory 2022. In at least one embodiment, parallel processor memory 2022 can be accessed via memory crossbar 2016, which can receive memory requests from processing cluster array 2012 as well as I/O unit 2004. In at least one embodiment, memory crossbar 2016 can access parallel processor memory 2022 via a memory interface 2018. In at least one embodiment, memory interface 2018 can include multiple partition units (e.g., partition unit 2020A, partition unit 2020B, through partition unit 2020N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 2022. In at least one embodiment, a number of partition units 2020A-2020N is configured to be equal to a number of memory units, such that a first partition unit 2020A has a corresponding first memory unit 2024A, a second partition unit 2020B has a corresponding memory unit 2024B, and an N-th partition unit 2020N has a corresponding N-th memory unit 2024N. In at least one embodiment, a number of partition units 2020A-2020N may not be equal to a number of memory units.

[0368] In at least one embodiment, memory units 2024A-2024N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In at least one embodiment, memory units 2024A-2024N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). In at least one embodiment, render targets, such as frame buffers or texture maps, may be stored across memory units 2024A-2024N, allowing partition units 2020A-2020N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory 2022. In at least one embodiment, a local instance of parallel processor memory 2022 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

[0369] In at least one embodiment, any one of clusters 2014A-2014N of processing cluster array 2012 can process data that will be written to any of memory units 2024A-2024N within parallel processor memory 2022. In at least one embodiment, memory crossbar 2016 can be configured to transfer an output of each cluster 2014A-2014N to any partition unit 2020A-2020N or to another cluster 2014A-2014N, which can perform additional processing operations on an output. In at least one embodiment, each cluster 2014A-2014N can communicate with memory interface 2018 through memory crossbar 2016 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 2016 has a connection to memory interface 2018 to communicate with I/O unit 2004, as well as a connection to a local instance of parallel processor memory 2022, enabling processing units within different processing clusters 2014A-2014N to communicate with system memory or other memory that is not local to parallel processing unit 2002.
In at least one embodiment, memory crossbar 2016 can use virtual channels to separate traffic streams between clusters 2014A-2014N and partition units 2020A-2020N.[0370] In at least one embodiment, multiple instances of parallel processing unit 2002 can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit 2002 can be configured to interoperate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit 2002 can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit 2002 or parallel processor 2000 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.[0371] FIG. 20B is a block diagram of a partition unit 2020 according to at least one embodiment. In at least one embodiment, partition unit 2020 is an instance of one of partition units 2020A-2020N of FIG. 20A. In at least one embodiment, partition unit 2020 includes an L2 cache 2021, a frame buffer interface 2025, and a ROP 2026 (raster operations unit). In at least one embodiment, L2 cache 2021 is a read/write cache that is configured to perform load and store operations received from memory crossbar 2016 and ROP 2026. In at least one embodiment, read misses and urgent write-back requests are output by L2 cache 2021 to frame buffer interface 2025 for processing. In at least one embodiment, updates can also be sent to a frame buffer via frame buffer interface 2025 for processing. In at least one embodiment, frame buffer interface 2025 interfaces with one of memory units in parallel processor memory, such as memory units 2024A-2024N of FIG. 20A (e.g., within parallel processor memory 2022).[0372] In at least one embodiment, ROP 2026 is a processing unit that performs raster operations such as stencil, z test, blending, etc. In at least one embodiment, ROP 2026 then outputs processed graphics data that is stored in graphics memory. In at least one embodiment, ROP 2026 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. In at least one embodiment, compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. In at least one embodiment, a type of compression that is performed by ROP 2026 can vary based on statistical characteristics of data to be compressed. For example, in at least one embodiment, delta color compression is performed on depth and color data on a per-tile basis.[0373] In at least one embodiment, ROP 2026 is included within each processing cluster (e.g., cluster 2014A-2014N of FIG. 20A) instead of within partition unit 2020. In at least one embodiment, read and write requests for pixel data are transmitted over memory crossbar 2016 instead of pixel fragment data. In at least one embodiment, processed graphics data may be displayed on a display device, such as one of one or more display device(s) 1910 of FIG.
19, routed for further processing by processor(s) 1902, or routed for further processing by one of processing entities within parallel processor 2000 of FIG. 20A.[0374] FIG. 20C is a block diagram of a processing cluster 2014 within a parallel processing unit according to at least one embodiment. In at least one embodiment, a processing cluster is an instance of one of processing clusters 2014A-2014N of FIG. 20A. In at least one embodiment, processing cluster 2014 can be configured to execute many threads in parallel, where "thread" refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of processing clusters.[0375] In at least one embodiment, operation of processing cluster 2014 can be controlled via a pipeline manager 2032 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 2032 receives instructions from scheduler 2010 of FIG. 20A and manages execution of those instructions via a graphics multiprocessor 2034 and/or a texture unit 2036. In at least one embodiment, graphics multiprocessor 2034 is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster 2014. In at least one embodiment, one or more instances of graphics multiprocessor 2034 can be included within a processing cluster 2014. In at least one embodiment, graphics multiprocessor 2034 can process data and a data crossbar 2040 can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 2032 can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar 2040.[0376] In at least one embodiment, each graphics multiprocessor 2034 within processing cluster 2014 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.[0377] In at least one embodiment, instructions transmitted to processing cluster 2014 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a common program on different input data.
In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 2034. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor 2034. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor 2034. In at least one embodiment, when a thread group includes more threads than number of processing engines within graphics multiprocessor 2034, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on a graphics multiprocessor 2034.[0378] In at least one embodiment, graphics multiprocessor 2034 includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor 2034 can forego an internal cache and use a cache memory (e.g., L1 cache 2048) within processing cluster 2014. In at least one embodiment, each graphics multiprocessor 2034 also has access to L2 caches within partition units (e.g., partition units 2020A-2020N of FIG. 20A) that are shared among all processing clusters 2014 and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor 2034 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 2002 may be used as global memory. In at least one embodiment, processing cluster 2014 includes multiple instances of graphics multiprocessor 2034 and can share common instructions and data, which may be stored in L1 cache 2048.[0379] In at least one embodiment, each processing cluster 2014 may include an MMU 2045 (memory management unit) that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 2045 may reside within memory interface 2018 of FIG. 20A. In at least one embodiment, MMU 2045 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU 2045 may include address translation lookaside buffers (TLB) or caches that may reside within graphics multiprocessor 2034 or L1 cache 2048 or processing cluster 2014. In at least one embodiment, a physical address is processed to distribute surface data access locally to allow for efficient request interleaving among partition units. In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss.[0380] In at least one embodiment, a processing cluster 2014 may be configured such that each graphics multiprocessor 2034 is coupled to a texture unit 2036 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 2034 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed.
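The thread group behavior described above maps directly onto how a CUDA-style kernel launch is expressed in software. The minimal sketch below (the kernel name scale and all sizes are illustrative assumptions, not part of any embodiment) launches thread groups of 256 threads, which hardware can execute as 32-thread subsets over consecutive clock cycles when a group holds more threads than there are processing engines:

#include <cuda_runtime.h>

__global__ void scale(float *data, float alpha, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (idx < n)              // trailing threads of last group may do no work
        data[idx] *= alpha;
}

int main() {
    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    // 256 threads per group: executed as 32-thread warps, over consecutive
    // cycles when a group exceeds the number of processing engines.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}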
In at least one embodiment, each graphics multiprocessor 2034 outputs processed tasks to data crossbar 2040 to provide processed task to another processing cluster 2014 for further processing or to store processed task in an L2 cache, local parallel processor memory, or system memory via memory crossbar 2016. In at least one embodiment, a preROP 2042 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 2034, and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 2020A-2020N of FIG. 20A). In at least one embodiment, preROP 2042 unit can perform optimizations for color blending, organizing pixel color data, and performing address translations.[0381] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics processing cluster 2014 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.[0382] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0383] FIG. 20D shows a graphics multiprocessor 2034 according to at least one embodiment. In at least one embodiment, graphics multiprocessor 2034 couples with pipeline manager 2032 of processing cluster 2014. In at least one embodiment, graphics multiprocessor 2034 has an execution pipeline including but not limited to an instruction cache 2052, an instruction unit 2054, an address mapping unit 2056, a register file 2058, one or more general purpose graphics processing unit (GPGPU) cores 2062, and one or more load/store units 2066. In at least one embodiment, GPGPU cores 2062 and load/store units 2066 are coupled with cache memory 2072 and shared memory 2070 via a memory and cache interconnect 2068.[0384] In at least one embodiment, instruction cache 2052 receives a stream of instructions to execute from pipeline manager 2032. In at least one embodiment, instructions are cached in instruction cache 2052 and dispatched for execution by an instruction unit 2054. In at least one embodiment, instruction unit 2054 can dispatch instructions as thread groups (e.g., warps), with each thread of thread group assigned to a different execution unit within GPGPU cores 2062. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. 
In at least one embodiment, address mapping unit 2056 can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by load/store units 2066.[0385] In at least one embodiment, register file 2058 provides a set of registers for functional units of graphics multiprocessor 2034. In at least one embodiment, register file 2058 provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores 2062, load/store units 2066) of graphics multiprocessor 2034. In at least one embodiment, register file 2058 is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file 2058. In at least one embodiment, register file 2058 is divided between different warps being executed by graphics multiprocessor 2034.[0386] In at least one embodiment, GPGPU cores 2062 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of graphics multiprocessor 2034. In at least one embodiment, GPGPU cores 2062 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 2062 include a single precision FPU and an integer ALU while a second portion of GPGPU cores include a double precision FPU. In at least one embodiment, FPUs can implement IEEE 754-2008 standard floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 2034 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores 2062 can also include fixed or special function logic.[0387] In at least one embodiment, GPGPU cores 2062 include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores 2062 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction. For example, in at least one embodiment, eight SIMT threads that perform same or similar operations can be executed in parallel via a single SIMD8 logic unit.[0388] In at least one embodiment, memory and cache interconnect 2068 is an interconnect network that connects each functional unit of graphics multiprocessor 2034 to register file 2058 and to shared memory 2070. In at least one embodiment, memory and cache interconnect 2068 is a crossbar interconnect that allows load/store unit 2066 to implement load and store operations between shared memory 2070 and register file 2058. In at least one embodiment, register file 2058 can operate at a same frequency as GPGPU cores 2062, thus data transfer between GPGPU cores 2062 and register file 2058 can have very low latency. In at least one embodiment, shared memory 2070 can be used to enable communication between threads that execute on functional units within graphics multiprocessor 2034.
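A small device-side sketch of the shared-memory communication just described: threads of one thread group cooperate on a reduction through program-managed shared memory. The kernel name blockSum and the assumption of a power-of-two group of 256 threads are illustrative only.

#include <cuda_runtime.h>

__global__ void blockSum(const float *in, float *out) {
    __shared__ float tile[256];                // program-managed shared memory
    int t = threadIdx.x;
    tile[t] = in[blockIdx.x * blockDim.x + t];
    __syncthreads();                           // publish stores to whole group
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (t < s) tile[t] += tile[t + s];     // pairwise partial sums
        __syncthreads();
    }
    if (t == 0) out[blockIdx.x] = tile[0];     // one result per thread group
}

// Launched, for example, as: blockSum<<<numBlocks, 256>>>(dIn, dOut);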
In at least one embodiment, cache memory 2072 can be used as a data cache, for example, to cache texture data communicated between functional units and texture unit 2036. In at least one embodiment, shared memory 2070 can also be used as a program managed cache. In at least one embodiment, threads executing on GPGPU cores 2062 can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory 2072.[0389] In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In at least one embodiment, a GPU may be integrated on a same package or chip as cores and communicatively coupled to cores over an internal processor bus/interconnect internal to a package or chip. In at least one embodiment, regardless of a manner in which a GPU is connected, processor cores may allocate work to such GPU in a form of sequences of commands/instructions contained in a work descriptor. In at least one embodiment, that GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.[0390] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics multiprocessor 2034 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.[0391] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0392] FIG. 21 illustrates a multi-GPU computing system 2100, according to at least one embodiment. In at least one embodiment, multi-GPU computing system 2100 can include a processor 2102 coupled to multiple general purpose graphics processing units (GPGPUs) 2106A-D via a host interface switch 2104. In at least one embodiment, host interface switch 2104 is a PCI express switch device that couples processor 2102 to a PCI express bus over which processor 2102 can communicate with GPGPUs 2106A-D. In at least one embodiment, GPGPUs 2106A-D can interconnect via a set of high-speed point-to-point GPU-to-GPU links 2116. In at least one embodiment, GPU-to-GPU links 2116 connect to each of GPGPUs 2106A-D via a dedicated GPU link.
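A hedged host-side sketch of how dedicated GPU-to-GPU links such as these are typically exercised through the CUDA runtime; the device ordinals 0 and 1 and the buffer size are illustrative assumptions, not part of any embodiment:

#include <cuda_runtime.h>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can device 0 reach device 1?
    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // map device 1 into device 0's space
    }
    float *d0 = nullptr, *d1 = nullptr;
    size_t bytes = 1 << 20;
    cudaSetDevice(0); cudaMalloc(&d0, bytes);
    cudaSetDevice(1); cudaMalloc(&d1, bytes);
    // With peer access enabled, this copy can travel over a direct GPU-to-GPU
    // link rather than through the host interface.
    cudaMemcpyPeer(d1, 1, d0, 0, bytes);
    cudaSetDevice(1); cudaFree(d1);
    cudaSetDevice(0); cudaFree(d0);
    return 0;
}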
In at least one embodiment, P2P GPU links 2116 enable direct communication between each of GPGPUs 2106A-D without requiring communication over host interface bus 2104 to which processor 2102 is connected. In at least one embodiment, with GPU-to-GPU traffic directed to P2P GPU links 2116, host interface bus 2104 remains available for system memory access or to communicate with other instances of multi-GPU computing system 2100, for example, via one or more network devices. While in at least one embodiment GPGPUs 2106A-D connect to processor 2102 via host interface switch 2104, in at least one embodiment processor 2102 includes direct support for P2P GPU links 2116 and can connect directly to GPGPUs 2106A-D.[0393] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in multi-GPU computing system 2100 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.[0394] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0395] FIG. 22 is a block diagram of a graphics processor 2200, according to at least one embodiment. In at least one embodiment, graphics processor 2200 includes a ring interconnect 2202, a pipeline front-end 2204, a media engine 2237, and graphics cores 2280A-2280N. In at least one embodiment, ring interconnect 2202 couples graphics processor 2200 to other processing units, including other graphics processors or one or more general-purpose processor cores. In at least one embodiment, graphics processor 2200 is one of many processors integrated within a multi-core processing system.[0396] In at least one embodiment, graphics processor 2200 receives batches of commands via ring interconnect 2202. In at least one embodiment, incoming commands are interpreted by a command streamer 2203 in pipeline front-end 2204. In at least one embodiment, graphics processor 2200 includes scalable execution logic to perform 3D geometry processing and media processing via graphics core(s) 2280A-2280N. In at least one embodiment, for 3D geometry processing commands, command streamer 2203 supplies commands to geometry pipeline 2236. In at least one embodiment, for at least some media processing commands, command streamer 2203 supplies commands to a video front end 2234, which couples with media engine 2237. In at least one embodiment, media engine 2237 includes a Video Quality Engine (VQE) 2230 for video and image post-processing and a multi-format encode/decode (MFX) 2233 engine to provide hardware-accelerated media data encoding and decoding.
In at least one embodiment, geometry pipeline 2236 and media engine 2237 each generate execution threads for thread execution resources provided by at least one graphics core 2280.[0397] In at least one embodiment, graphics processor 2200 includes scalable thread execution resources featuring graphics cores 2280A-2280N (which can be modular and are sometimes referred to as core slices), each having multiple sub-cores 2250A-2250N, 2260A-2260N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor 2200 can have any number of graphics cores 2280A. In at least one embodiment, graphics processor 2200 includes a graphics core 2280A having at least a first sub-core 2250A and a second sub-core 2260A. In at least one embodiment, graphics processor 2200 is a low power processor with a single sub-core (e.g., 2250A). In at least one embodiment, graphics processor 2200 includes multiple graphics cores 2280A-2280N, each including a set of first sub-cores 2250A-2250N and a set of second sub-cores 2260A-2260N. In at least one embodiment, each sub-core in first sub-cores 2250A-2250N includes at least a first set of execution units 2252A-2252N and media/texture samplers 2254A-2254N. In at least one embodiment, each sub-core in second sub-cores 2260A-2260N includes at least a second set of execution units 2262A-2262N and samplers 2264A-2264N. In at least one embodiment, each sub-core 2250A-2250N, 2260A-2260N shares a set of shared resources 2270A-2270N. In at least one embodiment, shared resources include shared cache memory and pixel operation logic.[0398] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics processor 2200 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.[0399] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0400] FIG. 23 is a block diagram illustrating micro-architecture for a processor 2300 that may include logic circuits to perform instructions, according to at least one embodiment. In at least one embodiment, processor 2300 may perform instructions, including x86 instructions, ARM instructions, specialized instructions for application-specific integrated circuits (ASICs), etc. In at least one embodiment, processor 2300 may include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif.
In at least one embodiment, MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany single instruction, multiple data ("SIMD") and streaming SIMD extensions ("SSE") instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as "SSEx") technology may hold such packed data operands. In at least one embodiment, processor 2300 may perform instructions to accelerate machine learning or deep learning algorithms, training, or inferencing.[0401] In at least one embodiment, processor 2300 includes an in-order front end ("front end") 2301 to fetch instructions to be executed and prepare instructions to be used later in a processor pipeline. In at least one embodiment, front end 2301 may include several units. In at least one embodiment, an instruction prefetcher 2326 fetches instructions from memory and feeds instructions to an instruction decoder 2328 which in turn decodes or interprets instructions. For example, in at least one embodiment, instruction decoder 2328 decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called "micro ops" or "uops") that a machine may execute. In at least one embodiment, instruction decoder 2328 parses an instruction into an opcode and corresponding data and control fields that may be used by micro-architecture to perform operations in accordance with at least one embodiment. In at least one embodiment, a trace cache 2330 may assemble decoded uops into program ordered sequences or traces in a uop queue 2334 for execution. In at least one embodiment, when trace cache 2330 encounters a complex instruction, a microcode ROM 2332 provides uops needed to complete an operation.[0402] In at least one embodiment, some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete full operation. In at least one embodiment, if more than four micro-ops are needed to complete an instruction, instruction decoder 2328 may access microcode ROM 2332 to perform that instruction. In at least one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder 2328. In at least one embodiment, an instruction may be stored within microcode ROM 2332 should a number of micro-ops be needed to accomplish such operation. In at least one embodiment, trace cache 2330 refers to an entry point programmable logic array ("PLA") to determine a correct micro-instruction pointer for reading microcode sequences to complete one or more instructions from microcode ROM 2332 in accordance with at least one embodiment. In at least one embodiment, after microcode ROM 2332 finishes sequencing micro-ops for an instruction, front end 2301 of a machine may resume fetching micro-ops from trace cache 2330.[0403] In at least one embodiment, out-of-order execution engine ("out of order engine") 2303 may prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and re-order flow of instructions to optimize performance as they go down a pipeline and get scheduled for execution.
In at least one embodiment, out-of-order execution engine 2303 includes, without limitation, an allocator/register renamer 2340, a memory uop queue 2342, an integer/floating point uop queue 2344, a memory scheduler 2346, a fast scheduler 2302, a slow/general floating point scheduler ("slow/general FP scheduler") 2304, and a simple floating point scheduler ("simple FP scheduler") 2306. In at least one embodiment, fast scheduler 2302, slow/general floating point scheduler 2304, and simple floating point scheduler 2306 are also collectively referred to herein as "uop schedulers 2302, 2304, 2306." In at least one embodiment, allocator/register renamer 2340 allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer 2340 renames logic registers onto entries in a register file. In at least one embodiment, allocator/register renamer 2340 also allocates an entry for each uop in one of two uop queues, memory uop queue 2342 for memory operations and integer/floating point uop queue 2344 for non-memory operations, in front of memory scheduler 2346 and uop schedulers 2302, 2304, 2306. In at least one embodiment, uop schedulers 2302, 2304, 2306 determine when a uop is ready to execute based on readiness of their dependent input register operand sources and availability of execution resources uops need to complete their operation. In at least one embodiment, fast scheduler 2302 may schedule on each half of a main clock cycle while slow/general floating point scheduler 2304 and simple floating point scheduler 2306 may schedule once per main processor clock cycle. In at least one embodiment, uop schedulers 2302, 2304, 2306 arbitrate for dispatch ports to schedule uops for execution.[0404] In at least one embodiment, execution block 2311 includes, without limitation, an integer register file/bypass network 2308, a floating point register file/bypass network ("FP register file/bypass network") 2310, address generation units ("AGUs") 2312 and 2314, fast Arithmetic Logic Units (ALUs) ("fast ALUs") 2316 and 2318, a slow Arithmetic Logic Unit ("slow ALU") 2320, a floating point ALU ("FP") 2322, and a floating point move unit ("FP move") 2324. In at least one embodiment, integer register file/bypass network 2308 and floating point register file/bypass network 2310 are also referred to herein as "register files 2308, 2310." In at least one embodiment, AGUs 2312 and 2314, fast ALUs 2316 and 2318, slow ALU 2320, floating point ALU 2322, and floating point move unit 2324 are also referred to herein as "execution units 2312, 2314, 2316, 2318, 2320, 2322, and 2324." In at least one embodiment, execution block 2311 may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination.[0405] In at least one embodiment, register networks 2308, 2310 may be arranged between uop schedulers 2302, 2304, 2306, and execution units 2312, 2314, 2316, 2318, 2320, 2322, and 2324. In at least one embodiment, integer register file/bypass network 2308 performs integer operations. In at least one embodiment, floating point register file/bypass network 2310 performs floating point operations. In at least one embodiment, each of register networks 2308, 2310 may include, without limitation, a bypass network that may bypass or forward just completed results that have not yet been written into a register file to new dependent uops.
In at least one embodiment, register networks 2308, 2310 may communicate data with each other. In at least one embodiment, integer register file/bypass network 2308 may include, without limitation, two separate register files, one register file for a low-order thirty-two bits of data and a second register file for a high order thirty-two bits of data. In at least one embodiment, floating point register file/bypass network 2310 may include, without limitation, 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.[0406] In at least one embodiment, execution units 2312, 2314, 2316, 2318, 2320, 2322, 2324 may execute instructions. In at least one embodiment, register networks 2308, 2310 store integer and floating point data operand values that micro-instructions need to execute. In at least one embodiment, processor 2300 may include, without limitation, any number and combination of execution units 2312, 2314, 2316, 2318, 2320, 2322, 2324. In at least one embodiment, floating point ALU 2322 and floating point move unit 2324, may execute floating point, MMX, SIMD, AVX and SSE, or other operations, including specialized machine learning instructions. In at least one embodiment, floating point ALU 2322 may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. In at least one embodiment, instructions involving a floating point value may be handled with floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs 2316, 2318. In at least one embodiment, fast ALUs 2316, 2318 may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU 2320 as slow ALU 2320 may include, without limitation, integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be executed by AGUs 2312, 2314. In at least one embodiment, fast ALU 2316, fast ALU 2318, and slow ALU 2320 may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU 2316, fast ALU 2318, and slow ALU 2320 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. In at least one embodiment, floating point ALU 2322 and floating point move unit 2324 may be implemented to support a range of operands having bits of various widths, such as 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.[0407] In at least one embodiment, uop schedulers 2302, 2304, 2306 dispatch dependent operations before a parent load has finished executing. In at least one embodiment, as uops may be speculatively scheduled and executed in processor 2300, processor 2300 may also include logic to handle memory misses. In at least one embodiment, if a data load misses in a data cache, there may be dependent operations in flight in a pipeline that have left a scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations might need to be replayed and independent ones may be allowed to complete.
In at least one embodiment, schedulers and a replay mechanism of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations.[0408] In at least one embodiment, "registers" may refer to on-board processor storage locations that may be used as part of instructions to identify operands. In at least one embodiment, registers may be those that may be usable from outside of a processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit. Rather, in at least one embodiment, a register may store data, provide data, and perform functions described herein. In at least one embodiment, registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In at least one embodiment, integer registers store 32-bit integer data. A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data.[0409] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into execution block 2311 and other memory or registers shown or not shown. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs illustrated in execution block 2311. Moreover, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of execution block 2311 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.[0410] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0411] FIG. 24 illustrates a deep learning application processor 2400, according to at least one embodiment. In at least one embodiment, deep learning application processor 2400 uses instructions that, if executed by deep learning application processor 2400, cause deep learning application processor 2400 to perform some or all of processes and techniques described throughout this disclosure. In at least one embodiment, deep learning application processor 2400 is an application-specific integrated circuit (ASIC). In at least one embodiment, application processor 2400 performs matrix multiply operations either "hard-wired" into hardware, as a result of performing one or more instructions, or both.
In at least one embodiment, deep learning application processor 2400 includes, without limitation, processing clusters 2410(1)-2410(12), Inter-Chip Links ("ICLs") 2420(1)-2420(12), Inter-Chip Controllers ("ICCs") 2430(1)-2430(2), high-bandwidth memory second generation ("HBM2") 2440(1)-2440(4), memory controllers ("Mem Ctrlrs") 2442(1)-2442(4), high bandwidth memory physical layer ("HBM PHY") 2444(1)-2444(4), a management-controller central processing unit ("management-controller CPU") 2450, a Serial Peripheral Interface, Inter-Integrated Circuit, and General Purpose Input/Output block ("SPI, I2C, GPIO") 2460, a peripheral component interconnect express controller and direct memory access block ("PCIe Controller and DMA") 2470, and a sixteen-lane peripheral component interconnect express port ("PCI Express x16") 2480.[0412] In at least one embodiment, processing clusters 2410 may perform deep learning operations, including inference or prediction operations based on weight parameters calculated using one or more training techniques, including those described herein. In at least one embodiment, each processing cluster 2410 may include, without limitation, any number and type of processors. In at least one embodiment, deep learning application processor 2400 may include any number and type of processing clusters 2410. In at least one embodiment, Inter-Chip Links 2420 are bi-directional. In at least one embodiment, Inter-Chip Links 2420 and Inter-Chip Controllers 2430 enable multiple deep learning application processors 2400 to exchange information, including activation information resulting from performing one or more machine learning algorithms embodied in one or more neural networks. In at least one embodiment, deep learning application processor 2400 may include any number (including zero) and type of ICLs 2420 and ICCs 2430.[0413] In at least one embodiment, HBM2s 2440 provide a total of 32 Gigabytes (GB) of memory. In at least one embodiment, HBM2 2440(i) is associated with both memory controller 2442(i) and HBM PHY 2444(i) where "i" is an arbitrary integer. In at least one embodiment, any number of HBM2s 2440 may provide any type and total amount of high bandwidth memory and may be associated with any number (including zero) and type of memory controllers 2442 and HBM PHYs 2444. In at least one embodiment, SPI, I2C, GPIO 2460, PCIe Controller and DMA 2470, and/or PCIe 2480 may be replaced with any number and type of blocks that enable any number and type of communication standards in any technically feasible fashion.[0414] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to deep learning application processor 2400. In at least one embodiment, deep learning application processor 2400 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by deep learning application processor 2400.
In at least one embodiment, processor 2400 may be used to perform one or more neural network use cases described herein.[0415] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0416] FIG. 25 is a block diagram of a neuromorphic processor 2500, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2500 may receive one or more inputs from sources external to neuromorphic processor 2500. In at least one embodiment, these inputs may be transmitted to one or more neurons 2502 within neuromorphic processor 2500. In at least one embodiment, neurons 2502 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2500 may include, without limitation, thousands or millions of instances of neurons 2502, but any suitable number of neurons 2502 may be used. In at least one embodiment, each instance of neuron 2502 may include a neuron input 2504 and a neuron output 2506. In at least one embodiment, neurons 2502 may generate outputs that may be transmitted to inputs of other instances of neurons 2502. For example, in at least one embodiment, neuron inputs 2504 and neuron outputs 2506 may be interconnected via synapses 2508.[0417] In at least one embodiment, neurons 2502 and synapses 2508 may be interconnected such that neuromorphic processor 2500 operates to process or analyze information received by neuromorphic processor 2500. In at least one embodiment, neurons 2502 may transmit an output pulse (or "fire" or "spike") when inputs received through neuron input 2504 exceed a threshold. In at least one embodiment, neurons 2502 may sum or integrate signals received at neuron inputs 2504. For example, in at least one embodiment, neurons 2502 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a "membrane potential") exceeds a threshold value, neuron 2502 may generate an output (or "fire") using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2504 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2504 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons 2502 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used.
Furthermore, in at least one embodiment, neurons 2502 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2506 when result of applying a transfer function to neuron input 2504 exceeds a threshold. In at least one embodiment, once neuron 2502 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron 2502 may resume normal operation after a suitable period of time (or refractory period).[0418] In at least one embodiment, neurons 2502 may be interconnected through synapses 2508. In at least one embodiment, synapses 2508 may operate to transmit signals from an output of a first neuron 2502 to an input of a second neuron 2502. In at least one embodiment, neurons 2502 may transmit information over more than one instance of synapse 2508. In at least one embodiment, one or more instances of neuron output 2506 may be connected, via an instance of synapse 2508, to an instance of neuron input 2504 in same neuron 2502. In at least one embodiment, an instance of neuron 2502 generating an output to be transmitted over an instance of synapse 2508 may be referred to as a "pre-synaptic neuron" with respect to that instance of synapse 2508. In at least one embodiment, an instance of neuron 2502 receiving an input transmitted over an instance of synapse 2508 may be referred to as a "post-synaptic neuron" with respect to that instance of synapse 2508. Because an instance of neuron 2502 may receive inputs from one or more instances of synapse 2508, and may also transmit outputs over one or more instances of synapse 2508, a single instance of neuron 2502 may therefore be both a "pre-synaptic neuron" and "post-synaptic neuron," with respect to various instances of synapses 2508, in at least one embodiment.[0419] In at least one embodiment, neurons 2502 may be organized into one or more layers. In at least one embodiment, each instance of neuron 2502 may have one neuron output 2506 that may fan out through one or more synapses 2508 to one or more neuron inputs 2504. In at least one embodiment, neuron outputs 2506 of neurons 2502 in a first layer 2510 may be connected to neuron inputs 2504 of neurons 2502 in a second layer 2512. In at least one embodiment, layer 2510 may be referred to as a "feed-forward layer." In at least one embodiment, each instance of neuron 2502 in an instance of first layer 2510 may fan out to each instance of neuron 2502 in second layer 2512. In at least one embodiment, first layer 2510 may be referred to as a "fully connected feed-forward layer." In at least one embodiment, each instance of neuron 2502 in an instance of second layer 2512 may fan out to fewer than all instances of neuron 2502 in a third layer 2514. In at least one embodiment, second layer 2512 may be referred to as a "sparsely connected feed-forward layer." In at least one embodiment, neurons 2502 in second layer 2512 may fan out to neurons 2502 in multiple other layers, including to neurons 2502 also in second layer 2512. In at least one embodiment, second layer 2512 may be referred to as a "recurrent layer." 
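Returning to the leaky integrate-and-fire behavior described above, the per-neuron update rule can be sketched with one CUDA thread per neuron; the kernel name lifStep, the parameters leak and threshold, and the reset-to-zero choice are illustrative assumptions, not part of any embodiment:

#include <cuda_runtime.h>

// One step of a leaky integrate-and-fire update for n neurons.
__global__ void lifStep(float *v, const float *input, int *spike,
                        int n, float leak, float threshold) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float m = v[i] * leak + input[i];  // integrate input, apply decay factor
    if (m > threshold) {               // membrane potential crossed threshold
        spike[i] = 1;                  // emit an output spike
        m = 0.0f;                      // reset membrane potential after firing
    } else {
        spike[i] = 0;
    }
    v[i] = m;
}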
In at least one embodiment, neuromorphic processor 2500 may include, without limitation, any suitable combination of recurrent layers and feed-forward layers, including, without limitation, both sparsely connected feed-forward layers and fully connected feed-forward layers.[0420] In at least one embodiment, neuromorphic processor 2500 may include, without limitation, a reconfigurable interconnect architecture or dedicated hard-wired interconnects to connect synapse 2508 to neurons 2502. In at least one embodiment, neuromorphic processor 2500 may include, without limitation, circuitry or logic that allows synapses to be allocated to different neurons 2502 as needed based on neural network topology and neuron fan-in/out. For example, in at least one embodiment, synapses 2508 may be connected to neurons 2502 using an interconnect fabric, such as network-on-chip, or with dedicated connections. In at least one embodiment, synapse interconnections and components thereof may be implemented using circuitry or logic.[0421] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0422] FIG. 26 is a block diagram of a processing system, according to at least one embodiment. In at least one embodiment, system 2600 includes one or more processors 2602 and one or more graphics processors 2608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2602 or processor cores 2607. In at least one embodiment, system 2600 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.[0423] In at least one embodiment, system 2600 can include, or be incorporated within a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, system 2600 is a mobile phone, a smart phone, a tablet computing device or a mobile Internet device. In at least one embodiment, processing system 2600 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, a smart eyewear device, an augmented reality device, or a virtual reality device. In at least one embodiment, processing system 2600 is a television or set top box device having one or more processors 2602 and a graphical interface generated by one or more graphics processors 2608.[0424] In at least one embodiment, one or more processors 2602 each include one or more processor cores 2607 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment each of one or more processor cores 2607 is configured to process a specific instruction sequence 2609. In at least one embodiment, instruction sequence 2609 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). 
In at least one embodiment, processor cores 2607 may each process a different instruction sequence 2609, which may include instructions to facilitate emulation of other instruction sequences. In at least one embodiment, processor core 2607 may also include other processing devices, such as a Digital Signal Processor (DSP).[0425] In at least one embodiment, processor 2602 includes a cache memory 2604. In at least one embodiment, processor 2602 can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 2602. In at least one embodiment, processor 2602 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 2607 using known cache coherency techniques. In at least one embodiment, a register file 2606 is additionally included in processor 2602, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file 2606 may include general-purpose registers or other registers.[0426] In at least one embodiment, one or more processor(s) 2602 are coupled with one or more interface bus(es) 2610 to transmit communication signals such as address, data, or control signals between processor 2602 and other components in system 2600. In at least one embodiment, interface bus 2610 can be a processor bus, such as a version of a Direct Media Interface (DMI) bus. In at least one embodiment, interface bus 2610 is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In at least one embodiment, processor(s) 2602 include an integrated memory controller 2616 and a platform controller hub 2630. In at least one embodiment, memory controller 2616 facilitates communication between a memory device and other components of system 2600, while platform controller hub (PCH) 2630 provides connections to I/O devices via a local I/O bus.[0427] In at least one embodiment, a memory device 2620 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In at least one embodiment, memory device 2620 can operate as system memory for system 2600, to store data 2622 and instructions 2621 for use when one or more processors 2602 executes an application or process. In at least one embodiment, memory controller 2616 also couples with an optional external graphics processor 2612, which may communicate with one or more graphics processors 2608 in processors 2602 to perform graphics and media operations. In at least one embodiment, a display device 2611 can connect to processor(s) 2602. In at least one embodiment, display device 2611 can include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.).
In at least one embodiment, display device 2611 can include a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.
[0428] In at least one embodiment, platform controller hub 2630 enables peripherals to connect to memory device 2620 and processor 2602 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 2646, a network controller 2634, a firmware interface 2628, a wireless transceiver 2626, touch sensors 2625, and a data storage device 2624 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 2624 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). In at least one embodiment, touch sensors 2625 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 2626 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. In at least one embodiment, firmware interface 2628 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). In at least one embodiment, network controller 2634 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 2610. In at least one embodiment, audio controller 2646 is a multi-channel high definition audio controller. In at least one embodiment, system 2600 includes an optional legacy I/O controller 2640 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to system 2600. In at least one embodiment, platform controller hub 2630 can also connect to one or more Universal Serial Bus (USB) controllers 2642 to connect input devices, such as keyboard and mouse 2643 combinations, a camera 2644, or other USB input devices.
[0429] In at least one embodiment, an instance of memory controller 2616 and platform controller hub 2630 may be integrated into a discrete external graphics processor, such as external graphics processor 2612. In at least one embodiment, platform controller hub 2630 and/or memory controller 2616 may be external to one or more processor(s) 2602. For example, in at least one embodiment, system 2600 can include an external memory controller 2616 and platform controller hub 2630, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s) 2602.
[0430] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2600. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B.
In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2600 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0431] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0432] FIG. 27 is a block diagram of a processor 2700 having one or more processor cores 2702A-2702N, an integrated memory controller 2714, and an integrated graphics processor 2708, according to at least one embodiment. In at least one embodiment, processor 2700 can include additional cores up to and including additional core 2702N represented by dashed lined boxes. In at least one embodiment, each of processor cores 2702A-2702N includes one or more internal cache units 2704A-2704N. In at least one embodiment, each processor core also has access to one or more shared cache units 2706.
[0433] In at least one embodiment, internal cache units 2704A-2704N and shared cache units 2706 represent a cache memory hierarchy within processor 2700. In at least one embodiment, cache memory units 2704A-2704N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where a highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherency logic maintains coherency between various cache units 2706 and 2704A-2704N.
[0434] In at least one embodiment, processor 2700 may also include a set of one or more bus controller units 2716 and a system agent core 2710. In at least one embodiment, bus controller units 2716 manage a set of peripheral buses, such as one or more PCI or PCI Express busses. In at least one embodiment, system agent core 2710 provides management functionality for various processor components. In at least one embodiment, system agent core 2710 includes one or more integrated memory controllers 2714 to manage access to various external memory devices (not shown).
[0435] In at least one embodiment, one or more of processor cores 2702A-2702N include support for simultaneous multi-threading. In at least one embodiment, system agent core 2710 includes components for coordinating and operating cores 2702A-2702N during multi-threaded processing. In at least one embodiment, system agent core 2710 may additionally include a power control unit (PCU), which includes logic and components to regulate one or more power states of processor cores 2702A-2702N and graphics processor 2708.
[0436] In at least one embodiment, processor 2700 additionally includes graphics processor 2708 to execute graphics processing operations. In at least one embodiment, graphics processor 2708 couples with shared cache units 2706, and system agent core 2710, including one or more integrated memory controllers 2714.
In at least one embodiment, system agent core 2710 also includes a display controller 2711 to drive graphics processor output to one or more coupled displays. In at least one embodiment, display controller 2711 may also be a separate module coupled with graphics processor 2708 via at least one interconnect, or may be integrated within graphics processor 2708.
[0437] In at least one embodiment, a ring-based interconnect unit 2712 is used to couple internal components of processor 2700. In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor 2708 couples with ring interconnect 2712 via an I/O link 2713.
[0438] In at least one embodiment, I/O link 2713 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 2718, such as an eDRAM module. In at least one embodiment, each of processor cores 2702A-2702N and graphics processor 2708 use embedded memory module 2718 as a shared Last Level Cache.
[0439] In at least one embodiment, processor cores 2702A-2702N are homogeneous cores executing a common instruction set architecture. In at least one embodiment, processor cores 2702A-2702N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 2702A-2702N execute a common instruction set, while one or more other cores of processor cores 2702A-2702N executes a subset of a common instruction set or a different instruction set. In at least one embodiment, processor cores 2702A-2702N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In at least one embodiment, processor 2700 can be implemented on one or more chips or as an SoC integrated circuit.
[0440] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2710. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline, graphics core(s) 2702, shared function logic, or other logic in FIG. 27. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B.
In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of processor 2700 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0441] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0442] FIG. 28 is a block diagram of a graphics processor 2800, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In at least one embodiment, graphics processor 2800 communicates via a memory mapped I/O interface to registers on graphics processor 2800 and with commands placed into memory. In at least one embodiment, graphics processor 2800 includes a memory interface 2814 to access memory. In at least one embodiment, memory interface 2814 is an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
[0443] In at least one embodiment, graphics processor 2800 also includes a display controller 2802 to drive display output data to a display device 2820. In at least one embodiment, display controller 2802 includes hardware for one or more overlay planes for display device 2820 and composition of multiple layers of video or user interface elements. In at least one embodiment, display device 2820 can be an internal or external display device. In at least one embodiment, display device 2820 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In at least one embodiment, graphics processor 2800 includes a video codec engine 2806 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).
[0444] In at least one embodiment, graphics processor 2800 includes a block image transfer (BLIT) engine 2804 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in at least one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 2810. In at least one embodiment, GPE 2810 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
[0445] In at least one embodiment, GPE 2810 includes a 3D pipeline 2812 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.).
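To make the 3D pipeline's per-vertex work concrete, the following CUDA sketch transforms triangle vertices by a 4x4 matrix, which is representative of the processing functions applied to 3D primitives; the structure layout, kernel name, and host scaffolding are our own illustrative assumptions, not elements defined by this disclosure.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical vertex layout; real pipelines use richer, driver-defined formats.
    struct Vertex { float x, y, z, w; };

    // Multiply each vertex by a row-major 4x4 transform matrix, the core
    // arithmetic applied to primitive vertices (e.g., of a triangle).
    __global__ void transformVertices(const Vertex* in, Vertex* out,
                                      const float* m, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        Vertex v = in[i], r;
        r.x = m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w;
        r.y = m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w;
        r.z = m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w;
        r.w = m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w;
        out[i] = r;
    }

    int main() {
        const int n = 3;                                        // one triangle
        Vertex h[n] = {{0,0,0,1}, {1,0,0,1}, {0,1,0,1}};
        float mtx[16] = {2,0,0,0, 0,2,0,0, 0,0,2,0, 0,0,0,1};   // uniform scale
        Vertex *dIn, *dOut; float* dM;
        cudaMalloc(&dIn, sizeof h); cudaMalloc(&dOut, sizeof h);
        cudaMalloc(&dM, sizeof mtx);
        cudaMemcpy(dIn, h, sizeof h, cudaMemcpyHostToDevice);
        cudaMemcpy(dM, mtx, sizeof mtx, cudaMemcpyHostToDevice);
        transformVertices<<<1, 32>>>(dIn, dOut, dM, n);
        cudaMemcpy(h, dOut, sizeof h, cudaMemcpyDeviceToHost);
        printf("v1 = (%g, %g, %g, %g)\n", h[1].x, h[1].y, h[1].z, h[1].w);
        cudaFree(dIn); cudaFree(dOut); cudaFree(dM);
        return 0;
    }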
In at least one embodiment, 3D pipeline 2812 includes programmable and fixed function elements that perform various tasks and/or spawn execution threads to a 3D/Media sub-system 2815. While 3D pipeline 2812 can be used to perform media operations, in at least one embodiment, GPE 2810 also includes a media pipeline 2816 that is used to perform media operations, such as video post-processing and image enhancement.
[0446] In at least one embodiment, media pipeline 2816 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 2806. In at least one embodiment, media pipeline 2816 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 2815. In at least one embodiment, spawned threads perform computations for media operations on one or more graphics execution units included in 3D/Media sub-system 2815.
[0447] In at least one embodiment, 3D/Media subsystem 2815 includes logic for executing threads spawned by 3D pipeline 2812 and media pipeline 2816. In at least one embodiment, 3D pipeline 2812 and media pipeline 2816 send thread execution requests to 3D/Media subsystem 2815, which includes thread dispatch logic for arbitrating and dispatching various requests to available thread execution resources. In at least one embodiment, execution resources include an array of graphics execution units to process 3D and media threads. In at least one embodiment, 3D/Media subsystem 2815 includes one or more internal caches for thread instructions and data. In at least one embodiment, subsystem 2815 also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.
[0448] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2800. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2812. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2800 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0449] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0450] FIG. 29 is a block diagram of a graphics processing engine 2910 of a graphics processor in accordance with at least one embodiment. In at least one embodiment, graphics processing engine (GPE) 2910 is a version of GPE 2810 shown in FIG. 28. In at least one embodiment, a media pipeline 2916 is optional and may not be explicitly included within GPE 2910. In at least one embodiment, a separate media and/or image processor is coupled to GPE 2910.
[0451] In at least one embodiment, GPE 2910 is coupled to or includes a command streamer 2903, which provides a command stream to a 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, command streamer 2903 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In at least one embodiment, command streamer 2903 receives commands from memory and sends commands to 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, commands are instructions, primitives, or micro-operations fetched from a ring buffer, which stores commands for 3D pipeline 2912 and media pipeline 2916. In at least one embodiment, a ring buffer can additionally include batch command buffers storing batches of multiple commands. In at least one embodiment, commands for 3D pipeline 2912 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for 3D pipeline 2912 and/or image data and memory objects for media pipeline 2916. In at least one embodiment, 3D pipeline 2912 and media pipeline 2916 process commands and data by performing operations or by dispatching one or more execution threads to a graphics core array 2914. In at least one embodiment, graphics core array 2914 includes one or more blocks of graphics cores (e.g., graphics core(s) 2915A, graphics core(s) 2915B), each block including one or more graphics cores. In at least one embodiment, each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic, including inference and/or training logic 715 in FIG. 7A and FIG. 7B.
[0452] In at least one embodiment, 3D pipeline 2912 includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing instructions and dispatching execution threads to graphics core array 2914. In at least one embodiment, graphics core array 2914 provides a unified block of execution resources for use in processing shader programs. In at least one embodiment, a multi-purpose execution logic (e.g., execution units) within graphics core(s) 2915A-2915B of graphics core array 2914 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.
[0453] In at least one embodiment, graphics core array 2914 also includes execution logic to perform media functions, such as video and/or image processing.
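The kind of media/image-processing work just mentioned can be pictured with a short CUDA kernel; the RGBA-to-luma conversion below (integer BT.601 weights scaled by 256) is our own illustrative example of per-pixel media work, not an operation specified by this disclosure.

    #include <cuda_runtime.h>

    // Convert packed 8-bit RGBA pixels to 8-bit luma with integer arithmetic,
    // one thread per pixel; typical of per-pixel media/image processing.
    __global__ void rgbaToLuma(const uchar4* in, unsigned char* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            uchar4 p = in[i];
            out[i] = (unsigned char)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
        }
    }

A launch such as rgbaToLuma<<<(n + 255) / 256, 256>>>(dIn, dOut, n) processes one pixel per thread; because the weights sum to 256, the shifted result stays within 8 bits.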
In at least one embodiment, execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations.
[0454] In at least one embodiment, output data generated by threads executing on graphics core array 2914 can be written to memory in a unified return buffer (URB) 2918. In at least one embodiment, URB 2918 can store data for multiple threads. In at least one embodiment, URB 2918 may be used to send data between different threads executing on graphics core array 2914. In at least one embodiment, URB 2918 may additionally be used for synchronization between threads on graphics core array 2914 and fixed function logic within shared function logic 2920.
[0455] In at least one embodiment, graphics core array 2914 is scalable, such that graphics core array 2914 includes a variable number of graphics cores, each having a variable number of execution units based on a target power and performance level of GPE 2910. In at least one embodiment, execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.
[0456] In at least one embodiment, graphics core array 2914 is coupled to shared function logic 2920 that includes multiple resources that are shared between graphics cores in graphics core array 2914. In at least one embodiment, shared functions performed by shared function logic 2920 are embodied in hardware logic units that provide specialized supplemental functionality to graphics core array 2914. In at least one embodiment, shared function logic 2920 includes, but is not limited to, a sampler unit 2921, a math unit 2922, and inter-thread communication (ITC) logic 2923. In at least one embodiment, one or more cache(s) 2925 are included in, or coupled to, shared function logic 2920.
[0457] In at least one embodiment, a shared function is used if demand for a specialized function is insufficient for inclusion within graphics core array 2914. In at least one embodiment, a single instantiation of a specialized function is used in shared function logic 2920 and shared among other execution resources within graphics core array 2914. In at least one embodiment, specific shared functions within shared function logic 2920 that are used extensively by graphics core array 2914 may be included within shared function logic 3216 within graphics core array 2914. In at least one embodiment, shared function logic 3216 within graphics core array 2914 can include some or all logic within shared function logic 2920. In at least one embodiment, all logic elements within shared function logic 2920 may be duplicated within shared function logic 2926 of graphics core array 2914. In at least one embodiment, shared function logic 2920 is excluded in favor of shared function logic 2926 within graphics core array 2914.
[0458] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2910. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2912, graphics core(s) 2915, shared function logic 2926, shared function logic 2920, or other logic in FIG. 29.
Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2910 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0459] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0460] FIG. 30 is a block diagram of hardware logic of a graphics processor core 3000, according to at least one embodiment described herein. In at least one embodiment, graphics processor core 3000 is included within a graphics core array. In at least one embodiment, graphics processor core 3000, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 3000 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core 3000 can include a fixed function block 3030 coupled with multiple sub-cores 3001A-3001F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.
[0461] In at least one embodiment, fixed function block 3030 includes a geometry and fixed function pipeline 3036 that can be shared by all sub-cores in graphics processor 3000, for example, in lower performance and/or lower power graphics processor implementations. In at least one embodiment, geometry and fixed function pipeline 3036 includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers.
[0462] In at least one embodiment, fixed function block 3030 also includes a graphics SoC interface 3037, a graphics microcontroller 3038, and a media pipeline 3039. In at least one embodiment, graphics SoC interface 3037 provides an interface between graphics core 3000 and other processor cores within a system on a chip integrated circuit. In at least one embodiment, graphics microcontroller 3038 is a programmable sub-processor that is configurable to manage various functions of graphics processor 3000, including thread dispatch, scheduling, and preemption. In at least one embodiment, media pipeline 3039 includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data.
In at least one embodiment, media pipeline 3039 implements media operations via requests to compute or sampling logic within sub-cores 3001A-3001F.
[0463] In at least one embodiment, SoC interface 3037 enables graphics core 3000 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface 3037 can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core 3000 and CPUs within an SoC. In at least one embodiment, graphics SoC interface 3037 can also implement power management controls for graphics processor core 3000 and enable an interface between a clock domain of graphics processor core 3000 and other clock domains within an SoC. In at least one embodiment, SoC interface 3037 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline 3039, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 3036, and/or a geometry and fixed function pipeline 3014) when graphics processing operations are to be performed.
[0464] In at least one embodiment, graphics microcontroller 3038 can be configured to perform various scheduling and management tasks for graphics core 3000. In at least one embodiment, graphics microcontroller 3038 can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays 3002A-3002F, 3004A-3004F within sub-cores 3001A-3001F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core 3000 can submit workloads to one of multiple graphic processor paths, which invokes a scheduling operation on an appropriate graphics engine. In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller 3038 can also facilitate low-power or idle states for graphics core 3000, providing graphics core 3000 with an ability to save and restore registers within graphics core 3000 across low-power state transitions independently from an operating system and/or graphics driver software on a system.
[0465] In at least one embodiment, graphics core 3000 may have more or fewer than the illustrated sub-cores 3001A-3001F, up to N modular sub-cores. For each set of N sub-cores, in at least one embodiment, graphics core 3000 can also include shared function logic 3010, shared and/or cache memory 3012, a geometry/fixed function pipeline 3014, as well as additional fixed function logic 3016 to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic 3010 can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within graphics core 3000.
In at least one embodiment, shared and/or cache memory 3012 can be a last-level cache for N sub-cores 3001A-3001F within graphics core 3000 and can also serve as shared memory that is accessible by multiple sub-cores. In at least one embodiment, geometry/fixed function pipeline 3014 can be included instead of geometry/fixed function pipeline 3036 within fixed function block 3030 and can include similar logic units.
[0466] In at least one embodiment, graphics core 3000 includes additional fixed function logic 3016 that can include various fixed function acceleration logic for use by graphics core 3000. In at least one embodiment, additional fixed function logic 3016 includes an additional geometry pipeline for use in position-only shading. In position-only shading, at least two geometry pipelines exist: a full geometry pipeline within geometry and fixed function pipelines 3014, 3036, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic 3016. In at least one embodiment, a cull pipeline is a trimmed down version of a full geometry pipeline. In at least one embodiment, a full pipeline and a cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in at least one embodiment, cull pipeline logic within additional fixed function logic 3016 can execute position shaders in parallel with a main application and generally generates critical results faster than a full pipeline, as a cull pipeline fetches and shades position attributes of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, a cull pipeline can use generated critical results to compute visibility information for all triangles without regard to whether those triangles are culled. In at least one embodiment, a full pipeline (which in this instance may be referred to as a replay pipeline) can consume visibility information to skip culled triangles to shade only visible triangles that are finally passed to a rasterization phase.
[0467] In at least one embodiment, additional fixed function logic 3016 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing.
[0468] In at least one embodiment, each graphics sub-core 3001A-3001F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. In at least one embodiment, graphics sub-cores 3001A-3001F include multiple EU arrays 3002A-3002F, 3004A-3004F, thread dispatch and inter-thread communication (TD/IC) logic 3003A-3003F, a 3D (e.g., texture) sampler 3005A-3005F, a media sampler 3006A-3006F, a shader processor 3007A-3007F, and shared local memory (SLM) 3008A-3008F. In at least one embodiment, EU arrays 3002A-3002F, 3004A-3004F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs.
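As a hedged sketch of the floating-point and integer/fixed-point mix such execution units service, the CUDA kernel below scales floating-point samples and converts them to Q16.16 fixed point; the kernel and its names are illustrative assumptions, not part of this disclosure.

    #include <cuda_runtime.h>

    // Floating-point ALU work (scaling) followed by integer/fixed-point work
    // (round-to-nearest conversion to Q16.16), one thread per element.
    __global__ void floatToQ16(const float* in, int* out, float gain, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float scaled = in[i] * gain * 65536.0f;  // 2^16 for Q16.16
            out[i] = __float2int_rn(scaled);         // integer conversion intrinsic
        }
    }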
In at least one embodiment, TD/IC logic 3003A-3003F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitates communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D samplers 3005A-3005F can read texture or other 3D graphics related data into memory. In at least one embodiment, 3D samplers can read texture data differently based on a configured sample state and texture format associated with a given texture. In at least one embodiment, media samplers 3006A-3006F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core 3001A-3001F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores 3001A-3001F can make use of shared local memory 3008A-3008F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.
[0469] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 3010. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline, graphics microcontroller 3038, geometry and fixed function pipelines 3014 and 3036, or other logic in FIG. 30. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 3000 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0470] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0471] FIGS. 31A and 31B illustrate thread execution logic 3100 including an array of processing elements of a graphics processor core according to at least one embodiment. FIG. 31A illustrates at least one embodiment, in which thread execution logic 3100 is used. FIG. 31B illustrates exemplary internal details of a graphics execution unit 3108, according to at least one embodiment.
[0472] As illustrated in FIG. 31A, in at least one embodiment, thread execution logic 3100 includes a shader processor 3102, a thread dispatcher 3104, an instruction cache 3106, a scalable execution unit array including a plurality of execution units 3107A-3107N and 3108A-3108N, a sampler 3110, a data cache 3112, and a data port 3114.
In at least one embodiment, a scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit 3108A-N or 3107A-N) based on computational requirements of a workload, for example. In at least one embodiment, scalable execution units are interconnected via an interconnect fabric that links to each execution unit. In at least one embodiment, thread execution logic 3100 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 3106, data port 3114, sampler 3110, and execution units 3107 or 3108. In at least one embodiment, each execution unit (e.g., 3107A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In at least one embodiment, array of execution units 3107 and/or 3108 is scalable to include any number of individual execution units.
[0473] In at least one embodiment, execution units 3107 and/or 3108 are primarily used to execute shader programs. In at least one embodiment, shader processor 3102 can process various shader programs and dispatch execution threads associated with shader programs via a thread dispatcher 3104. In at least one embodiment, thread dispatcher 3104 includes logic to arbitrate thread initiation requests from graphics and media pipelines and instantiate requested threads on one or more execution units in execution units 3107 and/or 3108. For example, in at least one embodiment, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to thread execution logic for processing. In at least one embodiment, thread dispatcher 3104 can also process runtime thread spawning requests from executing shader programs.
[0474] In at least one embodiment, execution units 3107 and/or 3108 support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. In at least one embodiment, execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, and/or vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). In at least one embodiment, each of execution units 3107 and/or 3108, which include one or more arithmetic logic units (ALUs), is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment despite higher latency memory accesses. In at least one embodiment, each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. In at least one embodiment, execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. In at least one embodiment, while waiting for data from memory or one of shared functions, dependency logic within execution units 3107 and/or 3108 causes a waiting thread to sleep until requested data has been returned. In at least one embodiment, while an awaiting thread is sleeping, hardware resources may be devoted to processing other threads.
For example, in at least one embodiment, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.
[0475] In at least one embodiment, each execution unit in execution units 3107 and/or 3108 operates on arrays of data elements. In at least one embodiment, a number of data elements is an "execution size," or number of channels for an instruction. In at least one embodiment, an execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. In at least one embodiment, a number of channels may be independent of a number of physical arithmetic logic units (ALUs) or floating point units (FPUs) for a particular graphics processor. In at least one embodiment, execution units 3107 and/or 3108 support integer and floating-point data types.
[0476] In at least one embodiment, an execution unit instruction set includes SIMD instructions. In at least one embodiment, various data elements can be stored as a packed data type in a register, and an execution unit will process various elements based on data size of elements. For example, in at least one embodiment, when operating on a 256-bit wide vector, 256 bits of a vector are stored in a register and an execution unit operates on a vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, in at least one embodiment, different vector widths and register sizes are possible.
[0477] In at least one embodiment, one or more execution units can be combined into a fused execution unit 3109A-3109N having thread control logic (3111A-3111N) that is common to fused EUs, such as execution unit 3107A fused with execution unit 3108A into fused execution unit 3109A. In at least one embodiment, multiple EUs can be fused into an EU group. In at least one embodiment, each EU in a fused EU group can be configured to execute a separate SIMD hardware thread, with a number of EUs in a fused EU group possibly varying according to various embodiments. In at least one embodiment, various SIMD widths can be performed per EU, including but not limited to SIMD8, SIMD16, and SIMD32. In at least one embodiment, each fused graphics execution unit 3109A-3109N includes at least two execution units. For example, in at least one embodiment, fused execution unit 3109A includes a first EU 3107A, a second EU 3108A, and thread control logic 3111A that is common to first EU 3107A and second EU 3108A. In at least one embodiment, thread control logic 3111A controls threads executed on fused graphics execution unit 3109A, allowing each EU within fused execution units 3109A-3109N to execute using a common instruction pointer register.
[0478] In at least one embodiment, one or more internal instruction caches (e.g., 3106) are included in thread execution logic 3100 to cache thread instructions for execution units. In at least one embodiment, one or more data caches (e.g., 3112) are included to cache thread data during thread execution. In at least one embodiment, sampler 3110 is included to provide texture sampling for 3D operations and media sampling for media operations.
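Dedicated sampler hardware of this sort is what a CUDA texture fetch ultimately exercises, so the hedged sketch below, with illustrative names and data, builds a texture object over a 2x2 array and reads it back with bilinear filtering; the addressing and filter modes set on the texture descriptor are the sampler state the kernel relies on.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Sample the texture at normalized coordinates; the tex2D fetch is
    // serviced by sampler hardware (addressing, filtering), not plain loads.
    __global__ void sampleCenter(cudaTextureObject_t tex, float* out) {
        *out = tex2D<float>(tex, 0.5f, 0.5f);
    }

    int main() {
        const int w = 2, h = 2;
        float host[w * h] = {0.f, 1.f, 2.f, 3.f};

        // Back the texture with a CUDA array in a texture-friendly layout.
        cudaChannelFormatDesc ch = cudaCreateChannelDesc<float>();
        cudaArray_t arr;
        cudaMallocArray(&arr, &ch, w, h);
        cudaMemcpy2DToArray(arr, 0, 0, host, w * sizeof(float),
                            w * sizeof(float), h, cudaMemcpyHostToDevice);

        cudaResourceDesc res = {};
        res.resType = cudaResourceTypeArray;
        res.res.array.array = arr;

        cudaTextureDesc td = {};
        td.addressMode[0] = cudaAddressModeClamp;   // sampler addressing state
        td.addressMode[1] = cudaAddressModeClamp;
        td.filterMode = cudaFilterModeLinear;       // bilinear filtering
        td.readMode = cudaReadModeElementType;
        td.normalizedCoords = 1;

        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &res, &td, nullptr);

        float* dOut; cudaMalloc(&dOut, sizeof(float));
        sampleCenter<<<1, 1>>>(tex, dOut);
        float result;
        cudaMemcpy(&result, dOut, sizeof(float), cudaMemcpyDeviceToHost);
        printf("filtered sample = %f\n", result);   // average of all four texels

        cudaDestroyTextureObject(tex);
        cudaFreeArray(arr); cudaFree(dOut);
        return 0;
    }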
In at least one embodiment, sampler 3110 includes specialized texture or media sampling functionality to process texture or media data during a sampling process before providing sampled data to an execution unit.
[0479] During execution, in at least one embodiment, graphics and media pipelines send thread initiation requests to thread execution logic 3100 via thread spawning and dispatch logic. In at least one embodiment, once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 3102 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In at least one embodiment, a pixel shader or a fragment shader calculates values of various vertex attributes that are to be interpolated across a rasterized object. In at least one embodiment, pixel processor logic within shader processor 3102 then executes an application programming interface (API)-supplied pixel or fragment shader program. In at least one embodiment, to execute a shader program, shader processor 3102 dispatches threads to an execution unit (e.g., 3108A) via thread dispatcher 3104. In at least one embodiment, shader processor 3102 uses texture sampling logic in sampler 3110 to access texture data in texture maps stored in memory. In at least one embodiment, arithmetic operations on texture data and input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
[0480] In at least one embodiment, data port 3114 provides a memory access mechanism for thread execution logic 3100 to output processed data to memory for further processing on a graphics processor output pipeline. In at least one embodiment, data port 3114 includes or couples to one or more cache memories (e.g., data cache 3112) to cache data for memory access via a data port.
[0481] As illustrated in FIG. 31B, in at least one embodiment, a graphics execution unit 3108 can include an instruction fetch unit 3137, a general register file array (GRF) 3124, an architectural register file array (ARF) 3126, a thread arbiter 3122, a send unit 3130, a branch unit 3132, a set of SIMD floating point units (FPUs) 3134, and a set of dedicated integer SIMD ALUs 3135. In at least one embodiment, GRF 3124 and ARF 3126 include a set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in graphics execution unit 3108. In at least one embodiment, per thread architectural state is maintained in ARF 3126, while data used during thread execution is stored in GRF 3124. In at least one embodiment, execution state of each thread, including instruction pointers for each thread, can be held in thread-specific registers in ARF 3126.
[0482] In at least one embodiment, graphics execution unit 3108 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT).
In at least one embodiment, architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads.
[0483] In at least one embodiment, graphics execution unit 3108 can co-issue multiple instructions, which may each be different instructions. In at least one embodiment, thread arbiter 3122 of graphics execution unit thread 3108 can dispatch instructions to one of send unit 3130, branch unit 3132, or SIMD FPU(s) 3134 for execution. In at least one embodiment, each execution thread can access 128 general-purpose registers within GRF 3124, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In at least one embodiment, each execution unit thread has access to 4 kilobytes within GRF 3124, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In at least one embodiment, up to seven threads can execute simultaneously, although a number of threads per execution unit can also vary according to embodiments. In at least one embodiment, in which seven threads may access 4 kilobytes, GRF 3124 can store a total of 28 kilobytes. In at least one embodiment, flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
[0484] In at least one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by message passing to send unit 3130. In at least one embodiment, branch instructions are dispatched to branch unit 3132 to facilitate SIMD divergence and eventual convergence.
[0485] In at least one embodiment, graphics execution unit 3108 includes one or more SIMD floating point units (FPU(s)) 3134 to perform floating-point operations. In at least one embodiment, FPU(s) 3134 also support integer computation. In at least one embodiment, FPU(s) 3134 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In at least one embodiment, at least one FPU provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In at least one embodiment, a set of 8-bit integer SIMD ALUs 3135 are also present, and may be specifically optimized to perform operations associated with machine learning computations.
[0486] In at least one embodiment, arrays of multiple instances of graphics execution unit 3108 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). In at least one embodiment, execution unit 3108 can execute instructions across a plurality of execution channels. In at least one embodiment, each thread executed on graphics execution unit 3108 is executed on a different channel.
[0487] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into thread execution logic 3100.
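The 2M 16-bit throughput mode described above for FPU(s) 3134 has a direct analogue in CUDA's packed half-precision arithmetic, where a single instruction operates on two 16-bit lanes at once; the kernel below is a minimal sketch under that analogy (names are ours; native FP16 hardware, e.g., compute capability 5.3 or later, is assumed).

    #include <cuda_fp16.h>

    // y[i] = a * x[i] + y[i] on packed half2 values: each __hfma2 applies a
    // fused multiply-add to two 16-bit lanes, doubling 16-bit throughput
    // relative to one 32-bit lane per instruction.
    __global__ void halfAxpy(const __half2* x, __half2* y, __half2 a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = __hfma2(a, x[i], y[i]);
        }
    }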
Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of thread execution logic 3100 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0488] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0489] FIG. 32 illustrates a parallel processing unit ("PPU") 3200, according to at least one embodiment. In at least one embodiment, PPU 3200 is configured with machine-readable code that, if executed by PPU 3200, causes PPU 3200 to perform some or all of processes and techniques described throughout this disclosure. In at least one embodiment, PPU 3200 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU 3200. In at least one embodiment, PPU 3200 is a graphics processing unit ("GPU") configured to implement a graphics rendering pipeline for processing three-dimensional ("3D") graphics data in order to generate two-dimensional ("2D") image data for display on a display device such as a liquid crystal display ("LCD") device. In at least one embodiment, PPU 3200 is utilized to perform computations such as linear algebra operations and machine-learning operations. FIG. 32 illustrates an example parallel processor for illustrative purposes only and should be construed as a non-limiting example of processor architectures contemplated within the scope of this disclosure, and any suitable processor may be employed to supplement and/or substitute for same.
[0490] In at least one embodiment, one or more PPUs 3200 are configured to accelerate High Performance Computing ("HPC"), data center, and machine learning applications.
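This use of multithreading as a latency-hiding technique is visible in how typical CUDA code is written: launching far more threads than there are execution resources lets the hardware switch to ready threads while others wait on memory. The grid-stride SAXPY below is a conventional sketch of that style (names are ours).

    #include <cuda_runtime.h>

    // Oversubscribe the machine: each thread strides across the array, and
    // the scheduler hides memory latency by running ready threads while
    // others stall on loads.
    __global__ void saxpyGridStride(int n, float a, const float* x, float* y) {
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += blockDim.x * gridDim.x) {
            y[i] = a * x[i] + y[i];
        }
    }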
In at least one embodiment, PPU 3200 is configured to accelerate deep learning systems and applications, including following non-limiting examples: autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, personalized user recommendations, and more.
[0491] In at least one embodiment, PPU 3200 includes, without limitation, an Input/Output ("I/O") unit 3206, a front-end unit 3210, a scheduler unit 3212, a work distribution unit 3214, a hub 3216, a crossbar ("XBar") 3220, one or more general processing clusters ("GPCs") 3218, and one or more partition units ("memory partition units") 3222. In at least one embodiment, PPU 3200 is connected to a host processor or other PPUs 3200 via one or more high-speed GPU interconnects ("GPU interconnects") 3208. In at least one embodiment, PPU 3200 is connected to a host processor or other peripheral devices via a system bus 3202. In at least one embodiment, PPU 3200 is connected to a local memory comprising one or more memory devices ("memory") 3204. In at least one embodiment, memory devices 3204 include, without limitation, one or more dynamic random access memory ("DRAM") devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as high-bandwidth memory ("HBM") subsystems, with multiple DRAM dies stacked within each device.
[0492] In at least one embodiment, high-speed GPU interconnect 3208 may refer to a wire-based multi-lane communications link that is used by systems to scale and include one or more PPUs 3200 combined with one or more central processing units ("CPUs"), supports cache coherence between PPUs 3200 and CPUs, and CPU mastering. In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect 3208 through hub 3216 to/from other units of PPU 3200 such as one or more copy engines, video encoders, video decoders, power management units, and other components which may not be explicitly illustrated in FIG. 32.
[0493] In at least one embodiment, I/O unit 3206 is configured to transmit and receive communications (e.g., commands, data) from a host processor (not illustrated in FIG. 32) over system bus 3202. In at least one embodiment, I/O unit 3206 communicates with host processor directly via system bus 3202 or through one or more intermediate devices such as a memory bridge. In at least one embodiment, I/O unit 3206 may communicate with one or more other processors, such as one or more of PPUs 3200, via system bus 3202. In at least one embodiment, I/O unit 3206 implements a Peripheral Component Interconnect Express ("PCIe") interface for communications over a PCIe bus. In at least one embodiment, I/O unit 3206 implements interfaces for communicating with external devices.
[0494] In at least one embodiment, I/O unit 3206 decodes packets received via system bus 3202. In at least one embodiment, at least some packets represent commands configured to cause PPU 3200 to perform various operations. In at least one embodiment, I/O unit 3206 transmits decoded commands to various other units of PPU 3200 as specified by commands.
In at least one embodiment, commands are transmitted to front-end unit 3210 and/or transmitted to hub 3216 or other units of PPU 3200 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly illustrated in FIG. 32). In at least one embodiment, I/O unit 3206 is configured to route communications between and among various logical units of PPU 3200.[0495] In at least one embodiment, a program executed by host processor encodes a command stream in a buffer that provides workloads to PPU 3200 for processing. In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. In at least one embodiment, a buffer is a region in a memory that is accessible (e.g., read/write) by both a host processor and PPU 3200; a host interface unit may be configured to access that buffer in a system memory connected to system bus 3202 via memory requests transmitted over system bus 3202 by I/O unit 3206. In at least one embodiment, a host processor writes a command stream to a buffer and then transmits a pointer to a start of a command stream to PPU 3200 such that front-end unit 3210 receives pointers to one or more command streams and manages one or more command streams, reading commands from command streams and forwarding commands to various units of PPU 3200.[0496] In at least one embodiment, front-end unit 3210 is coupled to scheduler unit 3212 that configures various GPCs 3218 to process tasks defined by one or more command streams. In at least one embodiment, scheduler unit 3212 is configured to track state information related to various tasks managed by scheduler unit 3212, where state information may indicate which of GPCs 3218 a task is assigned to, whether task is active or inactive, a priority level associated with task, and so forth. In at least one embodiment, scheduler unit 3212 manages execution of a plurality of tasks on one or more of GPCs 3218.[0497] In at least one embodiment, scheduler unit 3212 is coupled to work distribution unit 3214 that is configured to dispatch tasks for execution on GPCs 3218. In at least one embodiment, work distribution unit 3214 tracks a number of scheduled tasks received from scheduler unit 3212 and work distribution unit 3214 manages a pending task pool and an active task pool for each of GPCs 3218. In at least one embodiment, pending task pool comprises a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC 3218; an active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs 3218 such that as one of GPCs 3218 completes execution of a task, that task is evicted from that active task pool for GPC 3218 and another task from a pending task pool is selected and scheduled for execution on GPC 3218. In at least one embodiment, if an active task is idle on GPC 3218, such as while waiting for a data dependency to be resolved, then that active task is evicted from GPC 3218 and returned to that pending task pool while another task in that pending task pool is selected and scheduled for execution on GPC 3218.[0498] In at least one embodiment, work distribution unit 3214 communicates with one or more GPCs 3218 via XBar 3220. In at least one embodiment, XBar 3220 is an interconnect network that couples many of units of PPU 3200 to other units of PPU 3200 and can be configured to couple work distribution unit 3214 to a particular GPC 3218.
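The pending/active task pool bookkeeping described in the preceding paragraph can be sketched in host-side C++ as follows; the slot counts reuse the example values given above, while the type names and queue-based implementation are illustrative assumptions rather than a description of the actual work distribution hardware:

    // Illustrative host-side sketch of per-GPC pending/active task pools;
    // slot counts reuse the example values from the text, and all names and
    // the queue-based implementation are hypothetical.
    #include <cstddef>
    #include <deque>
    #include <vector>

    struct Task { int id; };

    struct GpcTaskPools {
        static constexpr std::size_t kPendingSlots = 32; // example slot count
        static constexpr std::size_t kActiveSlots  = 4;  // example slot count
        std::deque<Task> pending;
        std::vector<Task> active;

        // Work distribution enqueues a task assigned to this GPC.
        bool submit(const Task& t) {
            if (pending.size() >= kPendingSlots) return false; // pool full
            pending.push_back(t);
            return true;
        }

        // On completion or idling (e.g., an unresolved data dependency),
        // evict the task and promote the next pending one, if any.
        void evict(std::size_t activeIndex) {
            active.erase(active.begin() + static_cast<std::ptrdiff_t>(activeIndex));
            while (!pending.empty() && active.size() < kActiveSlots) {
                active.push_back(pending.front());
                pending.pop_front();
            }
        }
    };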
In at least one embodiment, one or more other units of PPU 3200 may also be connected to XBar 3220 via hub 3216.[0499] In at least one embodiment, tasks are managed by scheduler unit 3212 and dispatched to one of GPCs 3218 by work distribution unit 3214. In at least one embodiment, GPC 3218 is configured to process task and generate results. In at least one embodiment, results may be consumed by other tasks within GPC 3218, routed to a different GPC 3218 via XBar 3220, or stored in memory 3204. In at least one embodiment, results can be written to memory 3204 via partition units 3222, which implement a memory interface for reading and writing data to/from memory 3204. In at least one embodiment, results can be transmitted to another PPU 3200 or CPU via high-speed GPU interconnect 3208. In at least one embodiment, PPU 3200 includes, without limitation, a number U of partition units 3222 that is equal to a number of separate and distinct memory devices 3204 coupled to PPU 3200, as described in more detail herein in conjunction with FIG. 34.[0500] In at least one embodiment, a host processor executes a driver kernel that implements an application programming interface ("API") that enables one or more applications executing on a host processor to schedule operations for execution on PPU 3200. In at least one embodiment, multiple compute applications are simultaneously executed by PPU 3200 and PPU 3200 provides isolation, quality of service ("QoS"), and independent address spaces for multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in form of API calls) that cause a driver kernel to generate one or more tasks for execution by PPU 3200 and that driver kernel outputs tasks to one or more streams being processed by PPU 3200. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In at least one embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads can refer to a plurality of threads including instructions to perform task and that exchange data through shared memory. In at least one embodiment, threads and cooperating threads are described in more detail in conjunction with FIG. 34.[0501] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to PPU 3200. In at least one embodiment, PPU 3200 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by PPU 3200.
In at least one embodiment, PPU 3200 may be used to perform one or more neural network use cases described herein.[0502] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0503] FIG. 33 illustrates a general processing cluster ("GPC") 3300, according to at least one embodiment. In at least one embodiment, GPC 3300 is GPC 3218 of FIG. 32. In at least one embodiment, each GPC 3300 includes, without limitation, a number of hardware units for processing tasks and each GPC 3300 includes, without limitation, a pipeline manager 3302, a pre-raster operations unit ("preROP") 3304, a raster engine 3308, a work distribution crossbar ("WDX") 3316, a memory management unit ("MMU") 3318, one or more Data Processing Clusters ("DPCs") 3306, and any suitable combination of parts.[0504] In at least one embodiment, operation of GPC 3300 is controlled by pipeline manager 3302. In at least one embodiment, pipeline manager 3302 manages configuration of one or more DPCs 3306 for processing tasks allocated to GPC 3300. In at least one embodiment, pipeline manager 3302 configures at least one of one or more DPCs 3306 to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC 3306 is configured to execute a vertex shader program on a programmable streaming multi-processor ("SM") 3314. In at least one embodiment, pipeline manager 3302 is configured to route packets received from a work distribution unit to appropriate logical units within GPC 3300; some packets may be routed to fixed function hardware units in preROP 3304 and/or raster engine 3308 while other packets may be routed to DPCs 3306 for processing by a primitive engine 3312 or SM 3314. In at least one embodiment, pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline.[0505] In at least one embodiment, preROP unit 3304 is configured to route data generated by raster engine 3308 and DPCs 3306 to a Raster Operations ("ROP") unit in partition unit 3222, described in more detail above in conjunction with FIG. 32. In at least one embodiment, preROP unit 3304 is configured to perform optimizations for color blending, organize pixel data, perform address translations, and more.
In at least one embodiment, raster engine 3308 includes, without limitation, a number of fixed function hardware units configured to perform various raster operations; raster engine 3308 includes, without limitation, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, setup engine receives transformed vertices and generates plane equations associated with geometric primitive defined by vertices; plane equations are transmitted to a coarse raster engine to generate coverage information (e.g., an x, y coverage mask for a tile) for primitive; output of a coarse raster engine is transmitted to a culling engine where fragments associated with a primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to a fine raster engine to generate attributes for pixel fragments based on plane equations generated by a setup engine. In at least one embodiment, an output of raster engine 3308 comprises fragments to be processed by any suitable entity, such as by a fragment shader implemented within DPC 3306.[0506] In at least one embodiment, each DPC 3306 included in GPC 3300 comprises, without limitation, an M-Pipe Controller ("MPC") 3310; primitive engine 3312; one or more SMs 3314; and any suitable combination thereof. In at least one embodiment, MPC 3310 controls operation of DPC 3306, routing packets received from pipeline manager 3302 to appropriate units in DPC 3306. In at least one embodiment, packets associated with a vertex are routed to primitive engine 3312, which is configured to fetch vertex attributes associated with a vertex from memory; in contrast, packets associated with a shader program may be transmitted to SM 3314.[0507] In at least one embodiment, SM 3314 comprises, without limitation, a programmable streaming processor that is configured to process tasks represented by a number of threads. In at least one embodiment, SM 3314 is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently and implements a Single-Instruction, Multiple-Data ("SIMD") architecture where each thread in a group of threads (e.g., a warp) is configured to process a different set of data based on same set of instructions; in at least one embodiment, all threads in a group of threads execute a common set of instructions. In at least one embodiment, SM 3314 implements a Single-Instruction, Multiple Thread ("SIMT") architecture wherein each thread in a group of threads is configured to process a different set of data based on that common set of instructions, but where individual threads in a group of threads are allowed to diverge during execution. In at least one embodiment, a program counter, call stack, and execution state is maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within a warp diverge. In another embodiment, a program counter, call stack, and execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps.
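As a concrete illustration of warp divergence under the SIMT model described above, the following toy CUDA kernel sends even and odd lanes of a warp down different branches and then reconverges; the kernel and its launch configuration are illustrative only:

    // Toy CUDA kernel illustrating SIMT divergence: even and odd lanes of
    // a warp take different branches and then reconverge.
    __global__ void divergeExample(int* out) {
        int lane = threadIdx.x % 32;   // lane index within a warp
        int value;
        if (lane % 2 == 0) {
            value = lane * 2;          // path taken by even lanes
        } else {
            value = lane + 100;        // path taken by odd lanes
        }
        __syncwarp();                  // explicit warp-level reconvergence point
        out[threadIdx.x] = value;
    }

    // Example launch with a single warp: divergeExample<<<1, 32>>>(d_out);

With per-warp state, the two branches serialize within the warp; with per-thread state (independent thread scheduling), individual threads may interleave until the explicit reconvergence point.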
In at least one embodiment, execution state is maintained for each individual thread and threads executing common instructions may be converged and executed in parallel for better efficiency. At least one embodiment of SM 3314 is described in more detail herein.[0508] In at least one embodiment, MMU 3318 provides an interface between GPC 3300 and a memory partition unit (e.g., partition unit 3222 of FIG. 32) and MMU 3318 provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In at least one embodiment, MMU 3318 provides one or more translation lookaside buffers ("TLBs") for performing translation of virtual addresses into physical addresses in memory.[0509] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to GPC 3300. In at least one embodiment, GPC 3300 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by GPC 3300. In at least one embodiment, GPC 3300 may be used to perform one or more neural network use cases described herein.[0510] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0511] FIG. 34 illustrates a memory partition unit 3400 of a parallel processing unit ("PPU"), in accordance with at least one embodiment. In at least one embodiment, memory partition unit 3400 includes, without limitation, a Raster Operations ("ROP") unit 3402, a level two ("L2") cache 3404, a memory interface 3406, and any suitable combination thereof. In at least one embodiment, memory interface 3406 is coupled to memory. In at least one embodiment, memory interface 3406 may implement 32, 64, 128, 1024-bit data buses, or like, for high-speed data transfer. In at least one embodiment, PPU incorporates U memory interfaces 3406 where U is a positive integer, with one memory interface 3406 per pair of partition units 3400, where each pair of partition units 3400 is connected to a corresponding memory device. For example, in at least one embodiment, PPU may be connected to up to Y memory devices, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory ("GDDR5 SDRAM").[0512] In at least one embodiment, memory interface 3406 implements a high bandwidth memory second generation ("HBM2") memory interface and Y equals half of U. In at least one embodiment, HBM2 memory stacks are located on a physical package with a PPU, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems.
In at least one embodiment, each HBM2 stack includes, without limitation, four memory dies with Y=4, with each HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits. In at least one embodiment, that memory supports Single-Error Correcting Double-Error Detecting ("SECDED") Error Correction Code ("ECC") to protect data. In at least one embodiment, ECC can provide higher reliability for compute applications that are sensitive to data corruption.[0513] In at least one embodiment, PPU implements a multi-level memory hierarchy. In at least one embodiment, memory partition unit 3400 supports a unified memory to provide a single unified virtual address space for central processing unit ("CPU") and PPU memory, enabling data sharing between virtual memory systems. In at least one embodiment, frequency of accesses by a PPU to a memory located on other processors is traced to ensure that memory pages are moved to physical memory of PPU that is accessing pages more frequently. In at least one embodiment, high-speed GPU interconnect 3208 supports address translation services allowing PPU to directly access a CPU's page tables and providing full access to CPU memory by a PPU.[0514] In at least one embodiment, copy engines transfer data between multiple PPUs or between PPUs and CPUs. In at least one embodiment, copy engines can generate page faults for addresses that are not mapped into page tables and memory partition unit 3400 then services page faults, mapping addresses into page table, after which copy engine performs a transfer. In at least one embodiment, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing available memory. In at least one embodiment, with hardware page faulting, addresses can be passed to copy engines without regard as to whether memory pages are resident, and a copy process is transparent.[0515] Data from memory 3204 of FIG. 32 or other system memory is fetched by memory partition unit 3400 and stored in L2 cache 3404, which is located on-chip and is shared between various GPCs, in accordance with at least one embodiment. Each memory partition unit 3400, in at least one embodiment, includes, without limitation, at least a portion of L2 cache associated with a corresponding memory device. In at least one embodiment, lower level caches are implemented in various units within GPCs. In at least one embodiment, each of SMs 3314 in FIG. 33 may implement a Level 1 ("L1") cache wherein that L1 cache is private memory that is dedicated to a particular SM 3314 and data from L2 cache 3404 is fetched and stored in each L1 cache for processing in functional units of SMs 3314. In at least one embodiment, L2 cache 3404 is coupled to memory interface 3406 and XBar 3220 shown in FIG. 32.[0516] ROP unit 3402 performs graphics raster operations related to pixel color, such as color compression, pixel blending, and more, in at least one embodiment. ROP unit 3402, in at least one embodiment, implements depth testing in conjunction with raster engine 3308, receiving a depth for a sample location associated with a pixel fragment from a culling engine of raster engine 3308. In at least one embodiment, depth is tested against a corresponding depth in a depth buffer for a sample location associated with a fragment.
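This depth test, together with the conditional update completed in the next sentence, can be summarized by a small sketch; the less-than comparison is one common depth function, and the actual ROP hardware also handles configurable comparisons, sample masks, and depth-buffer compression:

    // Toy sketch of the ROP depth test and conditional update; less-than
    // is one common depth function, and all names here are illustrative.
    bool depthTestAndUpdate(float fragmentDepth, float* depthBuffer, int sampleIndex) {
        if (fragmentDepth < depthBuffer[sampleIndex]) { // closer fragment passes
            depthBuffer[sampleIndex] = fragmentDepth;   // update depth buffer on pass
            return true;   // result reported back to the raster engine
        }
        return false;      // fragment fails the test and is discarded
    }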
In at least one embodiment, if that fragment passes that depth test for that sample location, then ROP unit 3402 updates depth buffer and transmits a result of that depth test to raster engine 3308. It will be appreciated that a number of partition units 3400 may be different than a number of GPCs and, therefore, each ROP unit 3402 can, in at least one embodiment, be coupled to each GPC. In at least one embodiment, ROP unit 3402 tracks packets received from different GPCs and determines whether a result generated by ROP unit 3402 is to be routed through XBar 3220.[0517] FIG. 35 illustrates a streaming multi-processor ("SM") 3500, according to at least one embodiment. In at least one embodiment, SM 3500 is SM 3314 of FIG. 33. In at least one embodiment, SM 3500 includes, without limitation, an instruction cache 3502, one or more scheduler units 3504, a register file 3508, one or more processing cores ("cores") 3510, one or more special function units ("SFUs") 3512, one or more load/store units ("LSUs") 3514, an interconnect network 3516, a shared memory/level one ("L1") cache 3518, and/or any suitable combination thereof.[0518] In at least one embodiment, a work distribution unit dispatches tasks for execution on general processing clusters ("GPCs") of parallel processing units ("PPUs") and each task is allocated to a particular Data Processing Cluster ("DPC") within a GPC and, if a task is associated with a shader program, that task is allocated to one of SMs 3500. In at least one embodiment, scheduler unit 3504 receives tasks from a work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM 3500. In at least one embodiment, scheduler unit 3504 schedules thread blocks for execution as warps of parallel threads, wherein each thread block is allocated at least one warp. In at least one embodiment, each warp executes threads. In at least one embodiment, scheduler unit 3504 manages a plurality of different thread blocks, allocating warps to different thread blocks and then dispatching instructions from plurality of different cooperative groups to various functional units (e.g., processing cores 3510, SFUs 3512, and LSUs 3514) during each clock cycle.[0519] In at least one embodiment, Cooperative Groups may refer to a programming model for organizing groups of communicating threads that allows developers to express granularity at which threads are communicating, enabling expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization amongst thread blocks for execution of parallel algorithms. In at least one embodiment, applications of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., __syncthreads() function). However, in at least one embodiment, programmers may define groups of threads at smaller than thread block granularities and synchronize within defined groups to enable greater performance, design flexibility, and software reuse in form of collective group-wide function interfaces. In at least one embodiment, Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on threads in a cooperative group.
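A brief CUDA sketch of this sub-block granularity follows; it partitions a thread block into 16-thread tiles and reduces and synchronizes within each tile. It assumes a single block for brevity and illustrates Cooperative Groups primitives rather than any particular use case:

    // Illustrative CUDA Cooperative Groups sketch: reduce and synchronize
    // at a sub-block (16-thread tile) granularity instead of a whole block;
    // assumes a single block for brevity.
    #include <cooperative_groups.h>
    namespace cg = cooperative_groups;

    __global__ void tileSum(const int* in, int* out) {
        cg::thread_block block = cg::this_thread_block();
        cg::thread_block_tile<16> tile = cg::tiled_partition<16>(block);

        int v = in[block.thread_rank()];
        // Tile-wide reduction via register shuffles within the 16-thread group.
        for (unsigned offset = tile.size() / 2; offset > 0; offset /= 2)
            v += tile.shfl_down(v, offset);

        tile.sync(); // synchronization scoped to the tile, not the block
        if (tile.thread_rank() == 0)
            out[block.thread_rank() / 16] = v; // one partial sum per tile
    }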
In at least one embodiment, that programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. In at least one embodiment, Cooperative Groups primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks.[0520] In at least one embodiment, a dispatch unit 3506 is configured to transmit instructions to one or more functional units and scheduler unit 3504 includes, without limitation, two dispatch units 3506 that enable two different instructions from a common warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit 3504 includes a single dispatch unit 3506 or additional dispatch units 3506.[0521] In at least one embodiment, each SM 3500 includes, without limitation, register file 3508 that provides a set of registers for functional units of SM 3500. In at least one embodiment, register file 3508 is divided between each functional unit such that each functional unit is allocated a dedicated portion of register file 3508. In at least one embodiment, register file 3508 is divided between different warps being executed by SM 3500 and register file 3508 provides temporary storage for operands connected to data paths of functional units. In at least one embodiment, each SM 3500 comprises, without limitation, a plurality of L processing cores 3510, where L is a positive integer. In at least one embodiment, SM 3500 includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores 3510. In at least one embodiment, each processing core 3510 includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes, without limitation, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, floating point arithmetic logic units implement IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores 3510 include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores.[0522] Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores 3510. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4x4 matrix and performs a matrix multiply and accumulate operation, D = A×B + C, where A, B, C, and D are 4x4 matrices.[0523] In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices. In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation.
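This D = A×B + C operation with 16-bit inputs and 32-bit accumulation is exposed at warp level through the WMMA interface discussed in the following paragraph; a minimal sketch, with a zero C matrix and a fixed 16x16x16 tile shape assumed for brevity, might look like:

    // Minimal CUDA WMMA sketch of the warp-level tensor core operation
    // D = A x B + C with fp16 inputs and fp32 accumulation; a zero C
    // matrix and a single 16x16x16 tile are assumed for brevity.
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void wmmaTile(const half* a, const half* b, float* d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);    // C assumed zero in this sketch
        wmma::load_matrix_sync(fa, a, 16); // leading dimension 16
        wmma::load_matrix_sync(fb, b, 16);
        wmma::mma_sync(acc, fa, fb, acc);  // D = A x B + C on tensor cores
        wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
    }

    // Launched with at least one full warp, e.g.: wmmaTile<<<1, 32>>>(a, b, d);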
In at least one embodiment, 16-bit floating point multiply uses 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with other intermediate products for a 4x4x4 matrix multiply. Tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements, in at least one embodiment. In at least one embodiment, an API, such as a CUDA 9 C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at a CUDA level, a warp-level interface assumes 16x16 size matrices spanning all 32 threads of warp.[0524] In at least one embodiment, each SM 3500 comprises, without limitation, M SFUs 3512 that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In at least one embodiment, SFUs 3512 include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs 3512 include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and sample texture maps to produce sampled texture values for use in shader programs executed by SM 3500. In at least one embodiment, texture maps are stored in shared memory/L1 cache 3518. In at least one embodiment, texture units implement texture operations such as filtering operations using mip-maps (e.g., texture maps of varying levels of detail), in accordance with at least one embodiment. In at least one embodiment, each SM 3500 includes, without limitation, two texture units.[0525] Each SM 3500 comprises, without limitation, N LSUs 3514 that implement load and store operations between shared memory/L1 cache 3518 and register file 3508, in at least one embodiment. Interconnect network 3516 connects each functional unit to register file 3508 and LSU 3514 to register file 3508 and shared memory/L1 cache 3518 in at least one embodiment. In at least one embodiment, interconnect network 3516 is a crossbar that can be configured to connect any functional units to any registers in register file 3508 and connect LSUs 3514 to register file 3508 and memory locations in shared memory/L1 cache 3518.[0526] In at least one embodiment, shared memory/L1 cache 3518 is an array of on-chip memory that allows for data storage and communication between SM 3500 and primitive engine and between threads in SM 3500. In at least one embodiment, shared memory/L1 cache 3518 comprises, without limitation, 128 KB of storage capacity and is in a path from SM 3500 to a partition unit. In at least one embodiment, shared memory/L1 cache 3518 is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache 3518, L2 cache, and memory are backing stores.[0527] Combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses, in at least one embodiment. In at least one embodiment, capacity is used or is usable as a cache by programs that do not use shared memory, such as if shared memory is configured to use half of a capacity, and texture and load/store operations can use remaining capacity.
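As a small illustration of block-level communication through shared memory/L1 cache 3518, the following hedged CUDA sketch stages values in shared memory, synchronizes across a block, and then has threads read one another's entries; sizes and names are illustrative:

    // Small CUDA sketch of block-level communication through shared memory:
    // each thread stages one value, the block synchronizes, then each thread
    // reads a different thread's entry. Sizes and names are illustrative.
    __global__ void reverseInBlock(const float* in, float* out) {
        __shared__ float stage[256];   // backed by shared memory/L1 storage
        int t = threadIdx.x;
        int base = blockIdx.x * blockDim.x;
        stage[t] = in[base + t];
        __syncthreads();               // barrier across all threads of the block
        out[base + t] = stage[blockDim.x - 1 - t];
    }

    // Launch with blockDim.x <= 256, e.g.: reverseInBlock<<<grid, 256>>>(in, out);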
Integration within shared memory/L1 cache 3518 enables shared memory/L1 cache 3518 to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data, in accordance with at least one embodiment. In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed function graphics processing units are bypassed, creating a much simpler programming model. In a general purpose parallel computation configuration, a work distribution unit assigns and distributes blocks of threads directly to DPCs, in at least one embodiment. In at least one embodiment, threads in a block execute a common program, using a unique thread ID in calculation to ensure each thread generates unique results, using SM 3500 to execute program and perform calculations, shared memory/L1 cache 3518 to communicate between threads, and LSU 3514 to read and write global memory through shared memory/L1 cache 3518 and memory partition unit. In at least one embodiment, when configured for general purpose parallel computation, SM 3500 writes commands that scheduler unit 3504 can use to launch new work on DPCs.[0528] In at least one embodiment, a PPU is included in or coupled to a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant ("PDA"), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and more. In at least one embodiment, a PPU is embodied on a single semiconductor substrate. In at least one embodiment, a PPU is included in a system-on-a-chip ("SoC") along with one or more other devices such as additional PPUs, memory, a reduced instruction set computer ("RISC") CPU, a memory management unit ("MMU"), a digital-to-analog converter ("DAC"), and the like.[0529] In at least one embodiment, a PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, that graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In at least one embodiment, that PPU may be an integrated graphics processing unit ("iGPU") included in chipset of a motherboard.[0530] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to SM 3500. In at least one embodiment, SM 3500 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by SM 3500.
In at least one embodiment, SM 3500 may be used to perform one or more neural network use cases described herein.[0531] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0532] Embodiments are disclosed related to a virtualized computing platform for advanced computing, such as image inferencing and image processing in medical applications. Without limitation, embodiments may include radiography, magnetic resonance imaging (MRI), nuclear medicine, ultrasound, sonography, elastography, photoacoustic imaging, tomography, echocardiography, functional near-infrared spectroscopy, and magnetic particle imaging, or a combination thereof. In at least one embodiment, a virtualized computing platform and associated processes described herein may additionally or alternatively be used, without limitation, in forensic science analysis, sub-surface detection and imaging (e.g., oil exploration, archaeology, paleontology, etc.), topography, oceanography, geology, osteology, meteorology, intelligent area or object tracking and monitoring, sensor data processing (e.g., RADAR, SONAR, LIDAR, etc.), and/or genomics and gene sequencing.[0533] With reference to FIG. 36, FIG. 36 is an example data flow diagram for a process 3600 of generating and deploying an image processing and inferencing pipeline, in accordance with at least one embodiment. In at least one embodiment, process 3600 may be deployed for use with imaging devices, processing devices, genomics devices, gene sequencing devices, radiology devices, and/or other device types at one or more facilities 3602, such as medical facilities, hospitals, healthcare institutes, clinics, research or diagnostic labs, etc. In at least one embodiment, process 3600 may be deployed to perform genomics analysis and inferencing on sequencing data. Examples of genomic analyses that may be performed using systems and processes described herein include, without limitation, variant calling, mutation detection, and gene expression quantification.[0534] In at least one embodiment, process 3600 may be executed within a training system 3604 and/or a deployment system 3606. In at least one embodiment, training system 3604 may be used to perform training, deployment, and implementation of machine learning models (e.g., neural networks, object detection algorithms, computer vision algorithms, etc.) for use in deployment system 3606. In at least one embodiment, deployment system 3606 may be configured to offload processing and compute resources among a distributed computing environment to reduce infrastructure requirements at facility 3602. In at least one embodiment, deployment system 3606 may provide a streamlined platform for selecting, customizing, and implementing virtual instruments for use with imaging devices (e.g., MRI, CT scan, X-Ray, Ultrasound, etc.) or sequencing devices at facility 3602.
In at least one embodiment, virtual instruments may include software-defined applications for performing one or more processing operations with respect to imaging data generated by imaging devices, sequencing devices, radiology devices, and/or other device types. In at least one embodiment, one or more applications in a pipeline may use or call upon services (e.g., inference, visualization, compute, AI, etc.) of deployment system 3606 during execution of applications.[0535] In at least one embodiment, some of applications used in advanced processing and inferencing pipelines may use machine learning models or other AI to perform one or more processing steps. In at least one embodiment, machine learning models may be trained at facility 3602 using data 3608 (such as imaging data) generated at facility 3602 (and stored on one or more picture archiving and communication system (PACS) servers at facility 3602), may be trained using imaging or sequencing data 3608 from another facility or facilities (e.g., a different hospital, lab, clinic, etc.), or a combination thereof. In at least one embodiment, training system 3604 may be used to provide applications, services, and/or other resources for generating working, deployable machine learning models for deployment system 3606.[0536] In at least one embodiment, a model registry 3624 may be backed by object storage that may support versioning and object metadata. In at least one embodiment, object storage may be accessible through, for example, a cloud storage (e.g., a cloud 3726 of FIG. 37) compatible application programming interface (API) from within a cloud platform. In at least one embodiment, machine learning models within model registry 3624 may be uploaded, listed, modified, or deleted by developers or partners of a system interacting with an API. In at least one embodiment, an API may provide access to methods that allow users with appropriate credentials to associate models with applications, such that models may be executed as part of execution of containerized instantiations of applications.[0537] In at least one embodiment, a training pipeline 3704 (FIG. 37) may include a scenario where facility 3602 is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3608 generated by imaging device(s), sequencing devices, and/or other device types may be received. In at least one embodiment, once imaging data 3608 is received, AI-assisted annotation 3610 may be used to aid in generating annotations corresponding to imaging data 3608 to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation 3610 may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data 3608 (e.g., from certain devices) and/or certain types of anomalies in imaging data 3608. In at least one embodiment, AI-assisted annotations 3610 may then be used directly, or may be adjusted or fine-tuned using an annotation tool (e.g., by a researcher, a clinician, a doctor, a scientist, etc.), to generate ground truth data. In at least one embodiment, in some examples, labeled clinic data 3612 (e.g., annotations provided by a clinician, doctor, scientist, technician, etc.) may be used as ground truth data for training a machine learning model.
In at least one embodiment, AI-assisted annotations 3610, labeled clinic data 3612, or a combination thereof may be used as ground truth data for training a machine learning model. In at least one embodiment, a trained machine learning model may be referred to as an output model 3616, and may be used by deployment system 3606, as described herein.[0538] In at least one embodiment, training pipeline 3704 (FIG. 37) may include a scenario where facility 3602 needs a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system 3606, but facility 3602 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, an existing machine learning model may be selected from model registry 3624. In at least one embodiment, model registry 3624 may include machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, machine learning models in model registry 3624 may have been trained on imaging data from different facilities than facility 3602 (e.g., facilities remotely located). In at least one embodiment, machine learning models may have been trained on imaging data from one location, two locations, or any number of locations. In at least one embodiment, when being trained on imaging data from a specific location, training may take place at that location, or at least in a manner that protects confidentiality of imaging data or restricts imaging data from being transferred off-premises (e.g., to comply with HIPAA regulations, privacy regulations, etc.). In at least one embodiment, once a model is trained, or partially trained, at one location, a machine learning model may be added to model registry 3624. In at least one embodiment, a machine learning model may then be retrained, or updated, at any number of other facilities, and a retrained or updated model may be made available in model registry 3624. In at least one embodiment, a machine learning model may then be selected from model registry 3624, referred to as output model 3616, and may be used in deployment system 3606 to perform one or more processing tasks for one or more applications of a deployment system.[0539] In at least one embodiment, training pipeline 3704 (FIG. 37) may be used in a scenario that includes facility 3602 requiring a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system 3606, but facility 3602 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, a machine learning model selected from model registry 3624 might not be fine-tuned or optimized for imaging data 3608 generated at facility 3602 because of differences in populations, genetic variations, robustness of training data used to train a machine learning model, diversity in anomalies of training data, and/or other issues with training data. In at least one embodiment, AI-assisted annotation 3610 may be used to aid in generating annotations corresponding to imaging data 3608 to be used as ground truth data for retraining or updating a machine learning model. In at least one embodiment, labeled clinic data 3612 (e.g., annotations provided by a clinician, doctor, scientist, etc.) may be used as ground truth data for training a machine learning model.
In at least one embodiment, retraining or updating a machine learning model may be referred to as model training 3614. In at least one embodiment, data used in model training 3614 (e.g., AI-assisted annotations 3610, labeled clinic data 3612, or a combination thereof) may be used as ground truth data for retraining or updating a machine learning model.[0540] In at least one embodiment, deployment system 3606 may include software 3618, services 3620, hardware 3622, and/or other components, features, and functionality. In at least one embodiment, deployment system 3606 may include a software "stack," such that software 3618 may be built on top of services 3620 and may use services 3620 to perform some or all of processing tasks, and services 3620 and software 3618 may be built on top of hardware 3622 and use hardware 3622 to execute processing, storage, and/or other compute tasks of deployment system 3606.[0541] In at least one embodiment, software 3618 may include any number of different containers, where each container may execute an instantiation of an application. In at least one embodiment, each application may perform one or more processing tasks in an advanced processing and inferencing pipeline (e.g., inferencing, object detection, feature detection, segmentation, image enhancement, calibration, etc.). In at least one embodiment, for each type of imaging device (e.g., CT, MRI, X-Ray, ultrasound, sonography, echocardiography, etc.), sequencing device, radiology device, genomics device, etc., there may be any number of containers that may perform a data processing task with respect to imaging data 3608 (or other data types, such as those described herein) generated by a device. In at least one embodiment, an advanced processing and inferencing pipeline may be defined based on selections of different containers that are desired or required for processing imaging data 3608, in addition to containers that receive and configure imaging data for use by each container and/or for use by facility 3602 after processing through a pipeline (e.g., to convert outputs back to a usable data type, such as digital imaging and communications in medicine (DICOM) data, radiology information system (RIS) data, clinical information system (CIS) data, remote procedure call (RPC) data, data substantially compliant with a representational state transfer (REST) interface, data substantially compliant with a file-based interface, and/or raw data, for storage and display at facility 3602). In at least one embodiment, a combination of containers within software 3618 (e.g., that make up a pipeline) may be referred to as a virtual instrument (as described in more detail herein), and a virtual instrument may leverage services 3620 and hardware 3622 to execute some or all processing tasks of applications instantiated in containers.[0542] In at least one embodiment, a data processing pipeline may receive input data (e.g., imaging data 3608) in a DICOM, RIS, CIS, REST compliant, RPC, raw, and/or other format in response to an inference request (e.g., a request from a user of deployment system 3606, such as a clinician, a doctor, a radiologist, etc.). In at least one embodiment, input data may be representative of one or more images, video, and/or other data representations generated by one or more imaging devices, sequencing devices, radiology devices, genomics devices, and/or other device types.
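One possible software shape for such a pipeline of containerized stages is sketched below in host-side C++; the stage interface, type names, and composition mechanism are purely hypothetical assumptions about one way to organize these steps, not the platform's actual API:

    // Purely hypothetical sketch of a data processing pipeline built from
    // containerized stages; interface and names are illustrative only.
    #include <functional>
    #include <string>
    #include <vector>

    struct ImageData {
        std::string format;        // e.g., "DICOM", "RAW" (illustrative)
        std::vector<float> pixels;
    };

    // Each stage stands in for one containerized application
    // (format conversion, pre-processing, inferencing, post-processing).
    using Stage = std::function<ImageData(const ImageData&)>;

    // Run input data through each stage in order; the output of one stage
    // feeds the next, mirroring the pipeline composition described above.
    ImageData runPipeline(const std::vector<Stage>& stages, ImageData data) {
        for (const auto& stage : stages)
            data = stage(data);
        return data;
    }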
In at least one embodiment, data may undergo pre-processing as part of data processing pipeline to prepare data for processing by one or more applications. In at least one embodiment, post-processing may be performed on an output of one or more inferencing tasks or other processing tasks of a pipeline to prepare output data for a next application and/or to prepare output data for transmission and/or use by a user (e.g., as a response to an inference request). In at least one embodiment, inferencing tasks may be performed by one or more machine learning models, such as trained or deployed neural networks, which may include output models 3616 of training system 3604.[0543] In at least one embodiment, tasks of data processing pipeline may be encapsulated in a container(s) that each represent a discrete, fully functional instantiation of an application and a virtualized computing environment that is able to reference machine learning models. In at least one embodiment, containers or applications may be published into a private (e.g., limited access) area of a container registry (described in more detail herein), and trained or deployed models may be stored in model registry 3624 and associated with one or more applications. In at least one embodiment, images of applications (e.g., container images) may be available in a container registry, and once selected by a user from a container registry for deployment in a pipeline, an image may be used to generate a container for an instantiation of an application for use by a user's system.[0544] In at least one embodiment, developers (e.g., software developers, clinicians, doctors, etc.) may develop, publish, and store applications (e.g., as containers) for performing image processing and/or inferencing on supplied data. In at least one embodiment, development, publishing, and/or storing may be performed using a software development kit (SDK) associated with a system (e.g., to ensure that an application and/or container developed is compliant with or compatible with a system). In at least one embodiment, an application that is developed may be tested locally (e.g., at a first facility, on data from a first facility) with an SDK which may support at least some of services 3620 as a system (e.g., system 3700 of FIG. 37). In at least one embodiment, because DICOM objects may contain anywhere from one to hundreds of images or other data types, and due to a variation in data, a developer may be responsible for managing (e.g., setting constructs for, building pre-processing into an application, etc.) extraction and preparation of incoming DICOM data. In at least one embodiment, once validated by system 3700 (e.g., for accuracy, safety, patient privacy, etc.), an application may be available in a container registry for selection and/or implementation by a user (e.g., a hospital, clinic, lab, healthcare provider, etc.) to perform one or more processing tasks with respect to data at a facility (e.g., a second facility) of a user.[0545] In at least one embodiment, developers may then share applications or containers through a network for access and use by users of a system (e.g., system 3700 of FIG. 37). In at least one embodiment, completed and validated applications or containers may be stored in a container registry and associated machine learning models may be stored in model registry 3624.
In at least one embodiment, a requesting entity (e.g., a user at a medical facility), who provides an inference or image processing request, may browse a container registry and/or model registry 3624 for an application, container, dataset, machine learning model, etc., select a desired combination of elements for inclusion in data processing pipeline, and submit an imaging processing request. In at least one embodiment, a request may include input data (and associated patient data, in some examples) that is necessary to perform a request, and/or may include a selection of application(s) and/or machine learning models to be executed in processing a request. In at least one embodiment, a request may then be passed to one or more components of deployment system 3606 (e.g., a cloud) to perform processing of data processing pipeline. In at least one embodiment, processing by deployment system 3606 may include referencing selected elements (e.g., applications, containers, models, etc.) from a container registry and/or model registry 3624. In at least one embodiment, once results are generated by a pipeline, results may be returned to a user for reference (e.g., for viewing in a viewing application suite executing on a local, on-premises workstation or terminal). In at least one embodiment, a radiologist may receive results from a data processing pipeline including any number of applications and/or containers, where results may include anomaly detection in X-rays, CT scans, MRIs, etc.[0546] In at least one embodiment, to aid in processing or execution of applications or containers in pipelines, services 3620 may be leveraged. In at least one embodiment, services 3620 may include compute services, artificial intelligence (AI) services, visualization services, and/or other service types. In at least one embodiment, services 3620 may provide functionality that is common to one or more applications in software 3618, so functionality may be abstracted to a service that may be called upon or leveraged by applications. In at least one embodiment, functionality provided by services 3620 may run dynamically and more efficiently, while also scaling well by allowing applications to process data in parallel (e.g., using a parallel computing platform 3730 (FIG. 37)). In at least one embodiment, rather than each application that shares a same functionality offered by a service 3620 being required to have a respective instance of service 3620, service 3620 may be shared between and among various applications. In at least one embodiment, services may include an inference server or engine that may be used for executing detection or segmentation tasks, as non-limiting examples. In at least one embodiment, a model training service may be included that may provide machine learning model training and/or retraining capabilities. In at least one embodiment, a data augmentation service may further be included that may provide GPU accelerated data (e.g., DICOM, RIS, CIS, REST compliant, RPC, raw, etc.) extraction, resizing, scaling, and/or other augmentation. In at least one embodiment, a visualization service may be used that may add image rendering effects, such as ray-tracing, rasterization, denoising, sharpening, etc., to add realism to two-dimensional (2D) and/or three-dimensional (3D) models.
In at least one embodiment, virtual instrument services may be included that provide for beam-forming, segmentation, inferencing, imaging, and/or support for other applications within pipelines of virtual instruments.[0547] In at least one embodiment, where a service 3620 includes an AI service (e.g., an inference service), one or more machine learning models associated with an application for anomaly detection (e.g., tumors, growth abnormalities, scarring, etc.) may be executed by calling upon (e.g., as an API call) an inference service (e.g., an inference server) to execute machine learning model(s), or processing thereof, as part of application execution. In at least one embodiment, where another application includes one or more machine learning models for segmentation tasks, an application may call upon an inference service to execute machine learning models for performing one or more of processing operations associated with segmentation tasks. In at least one embodiment, software 3618 implementing advanced processing and inferencing pipeline that includes segmentation application and anomaly detection application may be streamlined because each application may call upon a same inference service to perform one or more inferencing tasks.[0548] In at least one embodiment, hardware 3622 may include GPUs, CPUs, graphics cards, an AI/deep learning system (e.g., an AI supercomputer, such as NVIDIA's DGX supercomputer system), a cloud platform, or a combination thereof. In at least one embodiment, different types of hardware 3622 may be used to provide efficient, purpose-built support for software 3618 and services 3620 in deployment system 3606. In at least one embodiment, use of GPU processing may be implemented for processing locally (e.g., at facility 3602), within an AI/deep learning system, in a cloud system, and/or in other processing components of deployment system 3606 to improve efficiency, accuracy, and efficacy of image processing, image reconstruction, segmentation, MRI exams, stroke or heart attack detection (e.g., in real-time), image quality in rendering, etc. In at least one embodiment, a facility may include imaging devices, genomics devices, sequencing devices, and/or other device types on-premises that may leverage GPUs to generate imaging data representative of a subject's anatomy.[0549] In at least one embodiment, software 3618 and/or services 3620 may be optimized for GPU processing with respect to deep learning, machine learning, and/or high-performance computing, as non-limiting examples. In at least one embodiment, at least some of computing environment of deployment system 3606 and/or training system 3604 may be executed in a datacenter, on one or more supercomputers or high performance computing systems, with GPU optimized software (e.g., hardware and software combination of NVIDIA's DGX system). In at least one embodiment, datacenters may be compliant with provisions of HIPAA, such that receipt, processing, and transmission of imaging data and/or other patient data is securely handled with respect to privacy of patient data. In at least one embodiment, hardware 3622 may include any number of GPUs that may be called upon to perform processing of data in parallel, as described herein. In at least one embodiment, cloud platform may further include GPU processing for GPU-optimized execution of deep learning tasks, machine learning tasks, or other computing tasks.
In at least one embodiment, cloud platform (e.g., NVIDIA's NGC) may be executed using an AI/deep learning supercomputer(s) and/or GPU-optimized software (e.g., as provided on NVIDIA's DGX systems) as a hardware abstraction and scaling platform. In at least one embodiment, cloud platform may integrate an application container clustering system or orchestration system (e.g., KUBERNETES) on multiple GPUs to enable seamless scaling and load balancing.[0550] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0551] FIG. 37 is a system diagram for an example system 3700 for generating and deploying an imaging deployment pipeline, in accordance with at least one embodiment. In at least one embodiment, system 3700 may be used to implement process 3600 of FIG. 36 and/or other processes including advanced processing and inferencing pipelines. In at least one embodiment, system 3700 may include training system 3604 and deployment system 3606. In at least one embodiment, training system 3604 and deployment system 3606 may be implemented using software 3618, services 3620, and/or hardware 3622, as described herein. [0552] In at least one embodiment, system 3700 (e.g., training system 3604 and/or deployment system 3606) may be implemented in a cloud computing environment (e.g., using cloud 3726). In at least one embodiment, system 3700 may be implemented locally with respect to a healthcare services facility, or as a combination of both cloud and local computing resources. In at least one embodiment, in embodiments where cloud computing is implemented, patient data may be separated from, or unprocessed by, one or more components of system 3700 that would render processing non-compliant with HIPAA and/or other data handling and privacy regulations or laws. In at least one embodiment, access to APIs in cloud 3726 may be restricted to authorized users through enacted security measures or protocols. In at least one embodiment, a security protocol may include web tokens that may be signed by an authentication (e.g., AuthN, AuthZ, Gluecon, etc.) service and may carry appropriate authorization. In at least one embodiment, APIs of virtual instruments (described herein), or other instantiations of system 3700, may be restricted to a set of public IPs that have been vetted or authorized for interaction.[0553] In at least one embodiment, various components of system 3700 may communicate between and among one another using any of a variety of different network types, including but not limited to local area networks (LANs) and/or wide area networks (WANs) via wired and/or wireless communication protocols. In at least one embodiment, communication between facilities and components of system 3700 (e.g., for transmitting inference requests, for receiving results of inference requests, etc.) may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc.
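By way of a non-limiting, hedged illustration only (not a description of the disclosed system), submitting an inference request to such a system over a network API and retrieving its results might be sketched as follows in Python; the endpoint, token handling, and payload fields are hypothetical, and the third-party requests package is assumed:

```python
import requests  # third-party HTTP client (assumed available)

# Hypothetical API gateway for a deployment system (e.g., running in a cloud).
API_BASE = "https://example-deployment-system/api/v1"

def submit_inference_request(study_path: str, pipeline_id: str, token: str) -> dict:
    """Send imaging data to a deployment pipeline and return its results.

    A signed web token (cf. the security protocols described above) is carried
    in the Authorization header; the body names the pipeline and attaches input.
    """
    with open(study_path, "rb") as f:
        response = requests.post(
            f"{API_BASE}/pipelines/{pipeline_id}/infer",
            headers={"Authorization": f"Bearer {token}"},
            files={"input": f},
            timeout=300,
        )
    response.raise_for_status()
    return response.json()  # e.g., detected anomalies, segmentation masks, etc.
```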
[0554] In at least one embodiment, training system 3604 may execute training pipelines 3704, similar to those described herein with respect to FIG. 36. In at least one embodiment, where one or more machine learning models are to be used in deployment pipelines 3710 by deployment system 3606, training pipelines 3704 may be used to train or retrain one or more (e.g., pre-trained) models, and/or implement one or more of pre-trained models 3706 (e.g., without a need for retraining or updating). In at least one embodiment, as a result of training pipelines 3704, output model(s) 3616 may be generated. In at least one embodiment, training pipelines 3704 may include any number of processing steps, such as but not limited to imaging data (or other input data) conversion or adaptation (e.g., using DICOM adapter 3702A to convert DICOM images to another format suitable for processing by respective machine learning models, such as Neuroimaging Informatics Technology Initiative (NIfTI) format), AI-assisted annotation 3610, labeling or annotating of imaging data 3608 to generate labeled clinic data 3612, model selection from a model registry, model training 3614, training, retraining, or updating models, and/or other processing steps. In at least one embodiment, for different machine learning models used by deployment system 3606, different training pipelines 3704 may be used. In at least one embodiment, training pipeline 3704 similar to a first example described with respect to FIG. 36 may be used for a first machine learning model, training pipeline 3704 similar to a second example described with respect to FIG. 36 may be used for a second machine learning model, and training pipeline 3704 similar to a third example described with respect to FIG. 36 may be used for a third machine learning model. In at least one embodiment, any combination of tasks within training system 3604 may be used depending on what is required for each respective machine learning model. In at least one embodiment, one or more of machine learning models may already be trained and ready for deployment so machine learning models may not undergo any processing by training system 3604, and may be implemented by deployment system 3606.[0555] In at least one embodiment, output model(s) 3616 and/or pre-trained model(s) 3706 may include any types of machine learning models depending on implementation or embodiment. In at least one embodiment, and without limitation, machine learning models used by system 3700 may include machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naive Bayes, k-nearest neighbor (KNN), K-means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, Long/Short Term Memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models.[0556] In at least one embodiment, training pipelines 3704 may include AI-assisted annotation, as described in more detail herein with respect to at least FIG. 40B. In at least one embodiment, labeled clinic data 3612 (e.g., traditional annotation) may be generated by any number of techniques.
In at least one embodiment, labels or other annotations may be generated within a drawing program (e.g., an annotation program), a computer aided design (CAD) program, a labeling program, another type of program suitable for generating annotations or labels for ground truth, and/or may be hand drawn, in some examples. In at least one embodiment, ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., labeler, or annotation expert, defines location of labels), and/or a combination thereof. In at least one embodiment, for each instance of imaging data 3608 (or other data type used by machine learning models), there may be corresponding ground truth data generated by training system 3604. In at least one embodiment, AI-assisted annotation may be performed as part of deployment pipelines 3710; either in addition to, or in lieu of AI-assisted annotation included in training pipelines 3704. In at least one embodiment, system 3700 may include a multi-layer platform that may include a software layer (e.g., software 3618) of diagnostic applications (or other application types) that may perform one or more medical imaging and diagnostic functions. In at least one embodiment, system 3700 may be communicatively coupled to (e.g., via encrypted links) PACS server networks of one or more facilities. In at least one embodiment, system 3700 may be configured to access and reference data (e.g., DICOM data, RIS data, CIS data, REST compliant data, RPC data, raw data, etc.) from PACS servers (e.g., via a DICOM adapter 3702, or another data type adapter such as RIS, CIS, REST compliant, RPC, raw, etc.) to perform operations, such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other operations.[0557] In at least one embodiment, a software layer may be implemented as a secure, encrypted, and/or authenticated API through which applications or containers may be invoked (e.g., called) from an external environment(s) (e.g., facility 3602). In at least one embodiment, applications may then call or execute one or more services 3620 for performing compute, AI, or visualization tasks associated with respective applications, and software 3618 and/or services 3620 may leverage hardware 3622 to perform processing tasks in an effective and efficient manner.[0558] In at least one embodiment, deployment system 3606 may execute deployment pipelines 3710. In at least one embodiment, deployment pipelines 3710 may include any number of applications that may be sequentially, non-sequentially, or otherwise applied to imaging data (and/or other data types) generated by imaging devices, sequencing devices, genomics devices, etc., including AI-assisted annotation, as described above. In at least one embodiment, as described herein, a deployment pipeline 3710 for an individual device may be referred to as a virtual instrument for a device (e.g., a virtual ultrasound instrument, a virtual CT scan instrument, a virtual sequencing instrument, etc.). In at least one embodiment, for a single device, there may be more than one deployment pipeline 3710 depending on information desired from data generated by a device.
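As a hedged, non-limiting sketch of this idea (all names hypothetical), more than one pipeline might be represented for a single device as an ordered list of containerized applications; the MRI example in the following paragraph follows the same pattern:

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentPipeline:
    """A virtual instrument: an ordered set of containerized applications."""
    device: str
    applications: list[str] = field(default_factory=list)

# Two hypothetical pipelines for the same MRI machine, depending on the
# information desired from its data (anomaly detection vs. image enhancement).
anomaly_pipeline = DeploymentPipeline(
    device="mri-scanner-01",
    applications=["dicom-reader", "reconstruction", "anomaly-detection", "dicom-writer"],
)
enhancement_pipeline = DeploymentPipeline(
    device="mri-scanner-01",
    applications=["dicom-reader", "reconstruction", "image-enhancement", "dicom-writer"],
)
```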
In at least one embodiment, where detections of anomalies are desired from an MRI machine, there may be a first deployment pipeline 3710, and where image enhancement is desired from output of an MRI machine, there may be a second deployment pipeline 3710.[0559] In at least one embodiment, applications available for deployment pipelines 3710 may include any application that may be used for performing processing tasks on imaging data or other data from devices. In at least one embodiment, different applications may be responsible for image enhancement, segmentation, reconstruction, anomaly detection, object detection, feature detection, treatment planning, dosimetry, beam planning (or other radiation treatment procedures), and/or other analysis, image processing, or inferencing tasks. In at least one embodiment, deployment system 3606 may define constructs for each of applications, such that users of deployment system 3606 (e.g., medical facilities, labs, clinics, etc.) may understand constructs and adapt applications for implementation within their respective facility. In at least one embodiment, an application for image reconstruction may be selected for inclusion in deployment pipeline 3710, but data type generated by an imaging device may be different from a data type used within an application. In at least one embodiment, DICOM adapter 3702B (and/or a DICOM reader) or another data type adapter or reader (e.g., RIS, CIS, REST compliant, RPC, raw, etc.) may be used within deployment pipeline 3710 to convert data to a form useable by an application within deployment system 3606. In at least one embodiment, data in DICOM, RIS, CIS, REST compliant, RPC, raw, and/or other data type libraries may be accumulated and pre-processed, including decoding, extracting, and/or performing any convolutions, color corrections, sharpness, gamma, and/or other augmentations to data. In at least one embodiment, DICOM, RIS, CIS, REST compliant, RPC, and/or raw data may be unordered and a pre-pass may be executed to organize or sort collected data. In at least one embodiment, because various applications may share common image operations, in some embodiments, a data augmentation library (e.g., as one of services 3620) may be used to accelerate these operations. In at least one embodiment, to avoid bottlenecks of conventional processing approaches that rely on CPU processing, parallel computing platform 3730 may be used for GPU acceleration of these processing tasks.[0560] In at least one embodiment, an image reconstruction application may include a processing task that includes use of a machine learning model. In at least one embodiment, a user may desire to use their own machine learning model, or to select a machine learning model from model registry 3624. In at least one embodiment, a user may implement their own machine learning model or select a machine learning model for inclusion in an application for performing a processing task. In at least one embodiment, applications may be selectable and customizable, and by defining constructs of applications, deployment and implementation of applications for a particular user are presented as a more seamless user experience.
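As one hedged illustration of the decode-and-normalize step a DICOM reader or adapter performs, assuming the third-party pydicom and numpy packages are available; the rescaling here is illustrative only, not the disclosed pre-processing:

```python
import numpy as np
import pydicom  # third-party DICOM toolkit (assumed available)

def dicom_to_model_input(path: str) -> np.ndarray:
    """Decode a DICOM file and normalize its pixel data for an application.

    Mirrors, in miniature, what a DICOM adapter/reader might do before
    handing data to a reconstruction or segmentation container.
    """
    ds = pydicom.dcmread(path)
    image = ds.pixel_array.astype(np.float32)
    # Illustrative augmentation/pre-processing: rescale intensities to [0, 1].
    image -= image.min()
    if image.max() > 0:
        image /= image.max()
    return image
```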
In at least one embodiment, by leveraging other features of system 3700, such as services 3620 and hardware 3622, deployment pipelines 3710 may be even more user friendly, provide for easier integration, and produce more accurate, efficient, and timely results.[0561] In at least one embodiment, deployment system 3606 may include a user interface 3714 (e.g., a graphical user interface, a web interface, etc.) that may be used to select applications for inclusion in deployment pipeline(s) 3710, arrange applications, modify or change applications or parameters or constructs thereof, use and interact with deployment pipeline(s) 3710 during set-up and/or deployment, and/or to otherwise interact with deployment system 3606. In at least one embodiment, although not illustrated with respect to training system 3604, user interface 3714 (or a different user interface) may be used for selecting models for use in deployment system 3606, for selecting models for training, or retraining, in training system 3604, and/or for otherwise interacting with training system 3604.[0562] In at least one embodiment, pipeline manager 3712 may be used, in addition to an application orchestration system 3728, to manage interaction between applications or containers of deployment pipeline(s) 3710 and services 3620 and/or hardware 3622. In at least one embodiment, pipeline manager 3712 may be configured to facilitate interactions from application to application, from application to service 3620, and/or from application or service to hardware 3622. In at least one embodiment, although illustrated as included in software 3618, this is not intended to be limiting, and in some examples (e.g., as illustrated in FIG. 38) pipeline manager 3712 may be included in services 3620. In at least one embodiment, application orchestration system 3728 (e.g., Kubernetes, DOCKER, etc.) may include a container orchestration system that may group applications into containers as logical units for coordination, management, scaling, and deployment. In at least one embodiment, by associating applications from deployment pipeline(s) 3710 (e.g., a reconstruction application, a segmentation application, etc.) with individual containers, each application may execute in a self-contained environment (e.g., at a kernel level) to increase speed and efficiency.[0563] In at least one embodiment, each application and/or container (or image thereof) may be individually developed, modified, and deployed (e.g., a first user or developer may develop, modify, and deploy a first application and a second user or developer may develop, modify, and deploy a second application separate from a first user or developer), which may allow for focus on, and attention to, a task of a single application and/or container(s) without being hindered by tasks of another application(s) or container(s). In at least one embodiment, communication and cooperation between different containers or applications may be aided by pipeline manager 3712 and application orchestration system 3728. In at least one embodiment, so long as an expected input and/or output of each container or application is known by a system (e.g., based on constructs of applications or containers), application orchestration system 3728 and/or pipeline manager 3712 may facilitate communication among and between, and sharing of resources among and between, each of applications or containers.
In at least one embodiment, because one or more of applications or containers in deployment pipeline(s) 3710 may share same services and resources, application orchestration system 3728 may orchestrate, load balance, and determine sharing of services or resources between and among various applications or containers. In at least one embodiment, a scheduler may be used to track resource requirements of applications or containers, current usage or planned usage of these resources, and resource availability. In at least one embodiment, a scheduler may thus allocate resources to different applications and distribute resources between and among applications in view of requirements and availability of a system. In some examples, a scheduler (and/or other component of application orchestration system 3728) may determine resource availability and distribution based on constraints imposed on a system (e.g., user constraints), such as quality of service (QoS), urgency of need for data outputs (e.g., to determine whether to execute real-time processing or delayed processing), etc. [0564] In at least one embodiment, services 3620 leveraged by and shared by applications or containers in deployment system 3606 may include compute services 3716, AI services 3718, visualization services 3720, and/or other service types. In at least one embodiment, applications may call (e.g., execute) one or more of services 3620 to perform processing operations for an application. In at least one embodiment, compute services 3716 may be leveraged by applications to perform super-computing or other high-performance computing (HPC) tasks. In at least one embodiment, compute service(s) 3716 may be leveraged to perform parallel processing (e.g., using a parallel computing platform 3730) for processing data through one or more of applications and/or one or more tasks of a single application, substantially simultaneously. In at least one embodiment, parallel computing platform 3730 (e.g., NVIDIA's CUDA) may enable general purpose computing on GPUs (GPGPU) (e.g., GPUs 3722). In at least one embodiment, a software layer of parallel computing platform 3730 may provide access to virtual instruction sets and parallel computational elements of GPUs, for execution of compute kernels. In at least one embodiment, parallel computing platform 3730 may include memory and, in some embodiments, a memory may be shared between and among multiple containers, and/or between and among different processing tasks within a single container. In at least one embodiment, inter-process communication (IPC) calls may be generated for multiple containers and/or for multiple processes within a container to use same data from a shared segment of memory of parallel computing platform 3730 (e.g., where multiple different stages of an application or multiple applications are processing same information). In at least one embodiment, rather than making a copy of data and moving data to different locations in memory (e.g., a read/write operation), same data in same location of a memory may be used for any number of processing tasks (e.g., at a same time, at different times, etc.). In at least one embodiment, as data is used to generate new data as a result of processing, this information of a new location of data may be stored and shared between various applications.
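As a hedged illustration of this zero-copy idea, the sketch below uses Python's standard shared memory on the CPU side; a parallel computing platform such as CUDA provides analogous GPU-side mechanisms that are not shown here:

```python
import numpy as np
from multiprocessing import shared_memory

# Producer: place data in a shared segment once, instead of copying it.
data = np.arange(1_000_000, dtype=np.float32)
shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
shared[:] = data  # one write; consumers read in place

# Consumer (e.g., another processing stage): attach by name, no copy made.
# In practice the segment name would travel in the payload between stages.
view = shared_memory.SharedMemory(name=shm.name)
stage_input = np.ndarray(data.shape, dtype=np.float32, buffer=view.buf)
result = stage_input.sum()  # any number of tasks may use the same bytes

view.close()
shm.close()
shm.unlink()  # release the segment when all stages are done
```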
In at least one embodiment, location of data and a location of updated or modified data may be part of a definition of how a payload is understood within containers.[0565] In at least one embodiment, AI services 3718 may be leveraged to perform inferencing services for executing machine learning model(s) associated with applications (e.g., tasked with performing one or more processing tasks of an application). In at least one embodiment, AI services 3718 may leverage AI system 3724 to execute machine learning model(s) (e.g., neural networks, such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inferencing tasks. In at least one embodiment, applications of deployment pipeline(s) 3710 may use one or more of output models 3616 from training system 3604 and/or other models of applications to perform inference on imaging data (e.g., DICOM data, RIS data, CIS data, REST compliant data, RPC data, raw data, etc.). In at least one embodiment, two or more examples of inferencing using application orchestration system 3728 (e.g., a scheduler) may be available. In at least one embodiment, a first category may include a high priority/low latency path that may achieve higher service level agreements, such as for performing inference on urgent requests during an emergency, or for a radiologist during diagnosis. In at least one embodiment, a second category may include a standard priority path that may be used for requests that may be non-urgent or where analysis may be performed at a later time. In at least one embodiment, application orchestration system 3728 may distribute resources (e.g., services 3620 and/or hardware 3622) based on priority paths for different inferencing tasks of AI services 3718.[0566] In at least one embodiment, shared storage may be mounted to AI services 3718 within system 3700. In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, a request may be received by a set of API instances of deployment system 3606, and one or more instances may be selected (e.g., for best fit, for load balancing, etc.) to process a request. In at least one embodiment, to process a request, a request may be entered into a database, a machine learning model may be located from model registry 3624 if not already in a cache, a validation step may ensure appropriate machine learning model is loaded into a cache (e.g., shared storage), and/or a copy of a model may be saved to a cache. In at least one embodiment, a scheduler (e.g., of pipeline manager 3712) may be used to launch an application that is referenced in a request if an application is not already running or if there are not enough instances of an application. In at least one embodiment, if an inference server is not already launched to execute a model, an inference server may be launched. In at least one embodiment, any number of inference servers may be launched per model. In at least one embodiment, in a pull model, in which inference servers are clustered, models may be cached whenever load balancing is advantageous. In at least one embodiment, inference servers may be statically loaded in corresponding, distributed servers.[0567] In at least one embodiment, inferencing may be performed using an inference server that runs in a container.
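The cache-then-launch behavior described above might be sketched as follows; InferenceServer and the registry loader are stand-ins introduced purely for illustration, not the disclosed components:

```python
class InferenceServer:
    """Stand-in for a containerized inference server bound to one model."""
    def __init__(self, model):
        self.model = model
    def infer(self, payload: bytes) -> bytes:
        return self.model(payload)

def load_from_model_registry(model_id: str):
    # Stand-in for fetching a trained model from a model registry.
    return lambda payload: payload[::-1]  # dummy "model" for illustration

_model_cache: dict = {}   # shared storage acting as a cache
_servers: dict = {}       # lazily launched inference servers, one per model

def handle_inference_request(model_id: str, payload: bytes) -> bytes:
    """Locate the model (loading it into the cache if absent), make sure an
    inference server is running for it, then forward the request."""
    if model_id not in _model_cache:
        _model_cache[model_id] = load_from_model_registry(model_id)
    if model_id not in _servers:
        _servers[model_id] = InferenceServer(_model_cache[model_id])
    return _servers[model_id].infer(payload)

print(handle_inference_request("organ-seg-v1", b"input-bytes"))
```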
In at least one embodiment, an instance of an inference server may be associated with a model (and optionally a plurality of versions of a model). In at least one embodiment, if an instance of an inference server does not exist when a request to perform inference on a model is received, a new instance may be loaded. In at least one embodiment, when starting an inference server, a model may be passed to an inference server such that a same container may be used to serve different models so long as inference server is running as a different instance.[0568] In at least one embodiment, during application execution, an inference request for a given application may be received, and a container (e.g., hosting an instance of an inference server) may be loaded (if not already), and a start procedure may be called. In at least one embodiment, pre-processing logic in a container may load, decode, and/or perform any additional pre-processing on incoming data (e.g., using a CPU(s) and/or GPU(s)). In at least one embodiment, once data is prepared for inference, a container may perform inference as necessary on data. In at least one embodiment, this may include a single inference call on one image (e.g., a hand X-ray), or may require inference on hundreds of images (e.g., a chest CT). In at least one embodiment, an application may summarize results before completing, which may include, without limitation, a single confidence score, pixel-level segmentation, voxel-level segmentation, generating a visualization, or generating text to summarize findings. In at least one embodiment, different models or applications may be assigned different priorities. For example, some models may have a real-time (turnaround time (TAT) less than one minute) priority while others may have lower priority (e.g., TAT less than 10 minutes). In at least one embodiment, model execution times may be measured from requesting institution or entity and may include partner network traversal time, as well as execution on an inference service.[0569] In at least one embodiment, transfer of requests between services 3620 and inference applications may be hidden behind a software development kit (SDK), and robust transport may be provided through a queue. In at least one embodiment, a request will be placed in a queue via an API for an individual application/tenant ID combination and an SDK will pull a request from a queue and give a request to an application. In at least one embodiment, a name of a queue may be provided in an environment from where an SDK will pick it up. In at least one embodiment, asynchronous communication through a queue may be useful as it may allow any instance of an application to pick up work as it becomes available. In at least one embodiment, results may be transferred back through a queue, to ensure no data is lost. In at least one embodiment, queues may also provide an ability to segment work, as highest priority work may go to a queue with most instances of an application connected to it, while lowest priority work may go to a queue with a single instance connected to it that processes tasks in an order received. In at least one embodiment, an application may run on a GPU-accelerated instance generated in cloud 3726, and an inference service may perform inferencing on a GPU.[0570] In at least one embodiment, visualization services 3720 may be leveraged to generate visualizations for viewing outputs of applications and/or deployment pipeline(s) 3710.
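The queue-based work segmentation of paragraph [0569] above might be sketched as follows; this is a minimal, hypothetical illustration in which the number of attached worker instances stands in for priority, not a description of the disclosed SDK:

```python
import queue
import threading

# One queue per priority tier; higher-priority work gets more consumers.
queues = {"high": queue.Queue(), "low": queue.Queue()}

def process(request) -> None:
    print("processed", request)  # stand-in for handing work to an application

def worker(q: queue.Queue) -> None:
    while True:
        request = q.get()   # an SDK would pull requests like this
        process(request)
        q.task_done()

# Highest-priority queue gets many instances; lowest-priority gets a single one.
for _ in range(4):
    threading.Thread(target=worker, args=(queues["high"],), daemon=True).start()
threading.Thread(target=worker, args=(queues["low"],), daemon=True).start()

queues["high"].put({"tenant": "a", "task": "urgent-ct"})
queues["low"].put({"tenant": "b", "task": "batch-xray"})
for q in queues.values():
    q.join()  # results would be transferred back through a queue as well
```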
In at least one embodiment, GPUs 3722 may be leveraged by visualization services 3720 to generate visualizations. In at least one embodiment, rendering effects, such as ray-tracing, may be implemented by visualization services 3720 to generate higher quality visualizations. In at least one embodiment, visualizations may include, without limitation, 2D image renderings, 3D volume renderings, 3D volume reconstruction, 2D tomographic slices, virtual reality displays, augmented reality displays, etc. In at least one embodiment, virtualized environments may be used to generate a virtual interactive display or environment (e.g., a virtual environment) for interaction by users of a system (e.g., doctors, nurses, radiologists, etc.). In at least one embodiment, visualization services 3720 may include an internal visualizer, cinematics, and/or other rendering or image processing capabilities or functionality (e.g., ray tracing, rasterization, internal optics, etc.).[0571] In at least one embodiment, hardware 3622 may include GPUs 3722, AI system 3724, cloud 3726, and/or any other hardware used for executing training system 3604 and/or deployment system 3606. In at least one embodiment, GPUs 3722 (e.g., NVIDIA's TESLA and/or QUADRO GPUs) may include any number of GPUs that may be used for executing processing tasks of compute services 3716, AI services 3718, visualization services 3720, other services, and/or any of features or functionality of software 3618. For example, with respect to AI services 3718, GPUs 3722 may be used to perform pre-processing on imaging data (or other data types used by machine learning models), post-processing on outputs of machine learning models, and/or to perform inferencing (e.g., to execute machine learning models). In at least one embodiment, cloud 3726, AI system 3724, and/or other components of system 3700 may use GPUs 3722. In at least one embodiment, cloud 3726 may include a GPU-optimized platform for deep learning tasks. In at least one embodiment, AI system 3724 may use GPUs, and cloud 3726, or at least a portion tasked with deep learning or inferencing, may be executed using one or more AI systems 3724. As such, although hardware 3622 is illustrated as discrete components, this is not intended to be limiting, and any components of hardware 3622 may be combined with, or leveraged by, any other components of hardware 3622.[0572] In at least one embodiment, AI system 3724 may include a purpose-built computing system (e.g., a super-computer or an HPC) configured for inferencing, deep learning, machine learning, and/or other artificial intelligence tasks. In at least one embodiment, AI system 3724 (e.g., NVIDIA's DGX) may include GPU-optimized software (e.g., a software stack) that may be executed using a plurality of GPUs 3722, in addition to CPUs, RAM, storage, and/or other components, features, or functionality. In at least one embodiment, one or more AI systems 3724 may be implemented in cloud 3726 (e.g., in a data center) for performing some or all of AI-based processing tasks of system 3700.[0573] In at least one embodiment, cloud 3726 may include a GPU-accelerated infrastructure (e.g., NVIDIA's NGC) that may provide a GPU-optimized platform for executing processing tasks of system 3700. In at least one embodiment, cloud 3726 may include an AI system(s) 3724 for performing one or more of AI-based tasks of system 3700 (e.g., as a hardware abstraction and scaling platform).
In at least one embodiment, cloud 3726 may integrate with application orchestration system 3728 leveraging multiple GPUs to enable seamless scaling and load balancing between and among applications and services 3620. In at least one embodiment, cloud 3726 may be tasked with executing at least some of services 3620 of system 3700, including compute services 3716, AI services 3718, and/or visualization services 3720, as described herein. In at least one embodiment, cloud 3726 may perform small and large batch inference (e.g., executing NVIDIA's TENSOR RT), provide an accelerated parallel computing API and platform 3730 (e.g., NVIDIA's CUDA), execute application orchestration system 3728 (e.g., KUBERNETES), provide a graphics rendering API and platform (e.g., for ray-tracing, 2D graphics, 3D graphics, and/or other rendering techniques to produce higher quality cinematics), and/or may provide other functionality for system 3700.[0574] In at least one embodiment, in an effort to preserve patient confidentiality (e.g., where patient data or records are to be used off-premises), cloud 3726 may include a registry, such as a deep learning container registry. In at least one embodiment, a registry may store containers for instantiations of applications that may perform pre-processing, post-processing, or other processing tasks on patient data. In at least one embodiment, cloud 3726 may receive data that includes patient data as well as sensor data in containers, perform requested processing for just sensor data in those containers, and then forward a resultant output and/or visualizations to appropriate parties and/or devices (e.g., on-premises medical devices used for visualization or diagnoses), all without having to extract, store, or otherwise access patient data. In at least one embodiment, confidentiality of patient data is preserved in compliance with HIPAA and/or other data regulations.[0575] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0576] FIG. 38 includes an example illustration of a deployment pipeline 3710A for processing imaging data, in accordance with at least one embodiment. In at least one embodiment, system 3700, and specifically deployment system 3606, may be used to customize, update, and/or integrate deployment pipeline(s) 3710A into one or more production environments. In at least one embodiment, deployment pipeline 3710A of FIG. 38 includes a non-limiting example of a deployment pipeline 3710A that may be custom defined by a particular user (or team of users) at a facility (e.g., at a hospital, clinic, lab, research environment, etc.). In at least one embodiment, to define deployment pipelines 3710A for a CT scanner 3802, a user may select, from a container registry, for example, one or more applications that perform specific functions or tasks with respect to imaging data generated by CT scanner 3802.
In at least one embodiment, applications may be applied to deployment pipeline 3710A as containers that may leverage services 3620 and/or hardware 3622 of system 3700. In addition, deployment pipeline 3710A may include additional processing tasks or applications that may be implemented to prepare data for use by applications (e.g., DICOM adapter 3702B and DICOM reader 3806 may be used in deployment pipeline 3710A to prepare data for use by CT reconstruction 3808, organ segmentation 3810, etc.). In at least one embodiment, deployment pipeline 3710A may be customized or selected for consistent deployment, one-time use, or for another frequency or interval. In at least one embodiment, a user may desire to have CT reconstruction 3808 and organ segmentation 3810 for several subjects over a specific interval, and thus may deploy pipeline 3710A for that period of time. In at least one embodiment, a user may select, for each request from system 3700, applications that a user wants to use to perform processing on data for that request. In at least one embodiment, deployment pipeline 3710A may be adjusted at any interval and, because of adaptability and scalability of a container structure within system 3700, this may be a seamless process.[0577] In at least one embodiment, deployment pipeline 3710A of FIG. 38 may include CT scanner 3802 generating imaging data of a patient or subject. In at least one embodiment, imaging data from CT scanner 3802 may be stored on a PACS server(s) 3804 associated with a facility housing CT scanner 3802. In at least one embodiment, PACS server(s) 3804 may include software and/or hardware components that may directly interface with imaging modalities (e.g., CT scanner 3802) at a facility. In at least one embodiment, DICOM adapter 3702B may enable sending and receipt of DICOM objects using DICOM protocols. In at least one embodiment, DICOM adapter 3702B may aid in preparation or configuration of DICOM data from PACS server(s) 3804 for use by deployment pipeline 3710A. In at least one embodiment, once DICOM data is processed through DICOM adapter 3702B, pipeline manager 3712 may route data through to deployment pipeline 3710A. In at least one embodiment, DICOM reader 3806 may extract image files and any associated metadata from DICOM data (e.g., raw sinogram data, as illustrated in visualization 3816A). In at least one embodiment, working files that are extracted may be stored in a cache for faster processing by other applications in deployment pipeline 3710A. In at least one embodiment, once DICOM reader 3806 has finished extracting and/or storing data, a signal of completion may be communicated to pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then initiate or call upon one or more other applications or containers in deployment pipeline 3710A.[0578] In at least one embodiment, CT reconstruction 3808 application and/or container may be executed once data (e.g., raw sinogram data) is available for processing by CT reconstruction 3808 application. In at least one embodiment, CT reconstruction 3808 may read raw sinogram data from a cache, reconstruct an image file out of raw sinogram data (e.g., as illustrated in visualization 3816B), and store resulting image file in a cache. In at least one embodiment, at completion of reconstruction, pipeline manager 3712 may be signaled that reconstruction task is complete.
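As a hedged, non-limiting sketch of the contract just described for a stage such as CT reconstruction 3808 (read an input from a cache, store a result, then signal the pipeline manager), with all helper names hypothetical:

```python
class Manager:
    """Stand-in for a pipeline manager that triggers downstream applications."""
    def signal_complete(self, stage: str) -> None:
        print(f"{stage} complete; triggering next application")

def run_stage(name: str, cache: dict, input_key: str, output_key: str, fn, manager: Manager) -> None:
    """Generic pipeline stage: read input from a cache, process it, store the
    result back in the cache, then signal completion to the manager."""
    data = cache[input_key]          # e.g., raw sinogram data
    cache[output_key] = fn(data)     # e.g., a reconstructed image file
    manager.signal_complete(name)

# Illustrative use: a dummy "reconstruction" over stand-in bytes.
cache = {"sinogram": b"raw-bytes"}
run_stage("ct-reconstruction", cache, "sinogram", "image", lambda d: d[::-1], Manager())
```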
In at least one embodiment, once reconstruction is complete and a reconstructed image file is stored in a cache (or other storage device), organ segmentation 3810 application and/or container may be triggered by pipeline manager 3712. In at least one embodiment, organ segmentation 3810 application and/or container may read an image file from a cache, normalize or convert an image file to a format suitable for inference (e.g., convert an image file to an input resolution of a machine learning model), and run inference against a normalized image. In at least one embodiment, to run inference on a normalized image, organ segmentation 3810 application and/or container may rely on services 3620, and pipeline manager 3712 and/or application orchestration system 3728 may facilitate use of services 3620 by organ segmentation 3810 application and/or container. In at least one embodiment, for example, organ segmentation 3810 application and/or container may leverage AI services 3718 to perform inference on a normalized image, and AI services 3718 may leverage hardware 3622 (e.g., AI system 3724) to execute AI services 3718. In at least one embodiment, a result of an inference may be a mask file (e.g., as illustrated in visualization 3816C) that may be stored in a cache (or other storage device).[0579] In at least one embodiment, once applications that process DICOM data and/or data extracted from DICOM data have completed processing, a signal may be generated for pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then execute DICOM writer 3812 to read results from a cache (or other storage device), package results into a DICOM format (e.g., as DICOM output 3814) for use by users at a facility who generated a request. In at least one embodiment, DICOM output 3814 may then be transmitted to DICOM adapter 3702B to prepare DICOM output 3814 for storage on PACS server(s) 3804 (e.g., for viewing by a DICOM viewer at a facility). In at least one embodiment, in response to a request for reconstruction and segmentation, visualizations 3816B and 3816C may be generated and available to a user for diagnoses, research, and/or for other purposes.[0580] Although illustrated as consecutive applications in deployment pipeline 3710A, CT reconstruction 3808 and organ segmentation 3810 applications may be processed in parallel in at least one embodiment. In at least one embodiment, where applications do not have dependencies on one another, and data is available for each application (e.g., after DICOM reader 3806 extracts data), applications may be executed at a same time, substantially at a same time, or with some overlap. In at least one embodiment, where two or more applications require similar services 3620, a scheduler of system 3700 may be used to load balance and distribute compute or processing resources between and among various applications. In at least one embodiment, parallel computing platform 3730 may be used to perform parallel processing for applications to decrease run-time of deployment pipeline 3710A to provide real-time results.[0581] In at least one embodiment, and with reference to FIGS. 39A-39B, deployment system 3606 may be implemented as one or more virtual instruments to perform different functionalities, such as image processing, segmentation, enhancement, AI, visualization, and inferencing, with imaging devices (e.g., CT scanners, X-ray machines, MRI machines, etc.), sequencing devices, genomics devices, and/or other device types.
In at least one embodiment, system 3700 may allow for creation and provision of virtual instruments that may include a software-defined deployment pipeline 3710 that may receive raw/unprocessed input data generated by a device(s) and output processed/reconstructed data. In at least one embodiment, deployment pipelines 3710 (e.g., 3710A and 3710B) that represent virtual instruments may implement intelligence into a pipeline, such as by leveraging machine learning models, to provide containerized inference support to a system. In at least one embodiment, virtual instruments may execute any number of containers each including instantiations of applications. In at least one embodiment, such as where real-time processing is desired, deployment pipelines 3710 representing virtual instruments may be static (e.g., containers and/or applications may be set), while in other examples, container and/or applications for virtual instruments may be selected (e.g., on a per-request basis) from a pool of applications or resources (e.g., within a container registry). [0582] In at least one embodiment, system 3700 may be instantiated or executed as one or more virtual instruments on-premise at a facility in, for example, a computing system deployed next to or otherwise in communication with a radiology machine, an imaging device, and/or another device type at a facility. In at least one embodiment, however, an on-premise installation may be instantiated or executed within a computing system of a device itself (e.g., a computing system integral to an imaging device), in a local datacenter (e.g., a datacenter on-premise), and/or in a cloud-environment (e.g., in cloud 3726). In at least one embodiment, deployment system 3606, operating as a virtual instrument, may be instantiated by a supercomputer or other HPC system in some examples. In at least one embodiment, on-premise installation may allow for high-bandwidth uses (via, for example, higher throughput local communication interfaces, such as RF over Ethernet) for real-time processing. In at least one embodiment, real-time or near real-time processing may be particularly useful where a virtual instrument supports an ultrasound device or other imaging modality where immediate visualizations are expected or required for accurate diagnoses and analyses. In at least one embodiment, a cloud-computing architecture may be capable of dynamic bursting to a cloud computing service provider, or other compute cluster, when local demand exceeds on-premise capacity or capability. In at least one embodiment, a cloud architecture, when implemented, may be tuned for training neural networks or other machine learning models, as described herein with respect to training system 3604. In at least one embodiment, with training pipelines in place, machine learning models may continuously learn and improve as they process additional data from devices they support. In at least one embodiment, virtual instruments may be continually improved using additional data, new data, existing machine learning models, and/or new or updated machine learning models.[0583] In at least one embodiment, a computing system may include some or all of hardware 3622 described herein, and hardware 3622 may be distributed in any of a number of ways including within a device, as part of a computing device coupled to and located proximate a device, in a local datacenter at a facility, and/or in cloud 3726.
In at least one embodiment, because deployment system 3606 and associated applications or containers are created in software (e.g., as discrete containerized instantiations of applications), behavior, operation, and configuration of virtual instruments, as well as outputs generated by virtual instruments, may be modified or customized as desired, without having to change or alter raw output of a device that a virtual instrument supports.[0584] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0585] FIG. 39A includes an example data flow diagram of a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710B may leverage one or more of services 3620 of system 3700. In at least one embodiment, deployment pipeline 3710B and services 3620 may leverage hardware 3622 of a system either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3900 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730.[0586] In at least one embodiment, process 3900 may include receipt of imaging data from an ultrasound device 3902. In at least one embodiment, imaging data may be stored on PACS server(s) in a DICOM format (or other format, such as RIS, CIS, REST compliant, RPC, raw, etc.), and may be received by system 3700 for processing through deployment pipeline 3710 selected or customized as a virtual instrument (e.g., a virtual ultrasound) for ultrasound device 3902. In at least one embodiment, imaging data may be received directly from an imaging device (e.g., ultrasound device 3902) and processed by a virtual instrument. In at least one embodiment, a transducer or other signal converter communicatively coupled between an imaging device and a virtual instrument may convert signal data generated by an imaging device to image data that may be processed by a virtual instrument. In at least one embodiment, raw data and/or image data may be applied to DICOM reader 3806 to extract data for use by applications or containers of deployment pipeline 3710B. In at least one embodiment, DICOM reader 3806 may leverage data augmentation library 3914 (e.g., NVIDIA's DALI) as a service 3620 (e.g., as one of compute service(s) 3716) for extracting, resizing, rescaling, and/or otherwise preparing data for use by applications or containers.[0587] In at least one embodiment, once data is prepared, a reconstruction 3906 application and/or container may be executed to reconstruct data from ultrasound device 3902 into an image file. In at least one embodiment, after reconstruction 3906, or at a same time as reconstruction 3906, a detection 3908 application and/or container may be executed for anomaly detection, object detection, feature detection, and/or other detection tasks related to data.
In at least one embodiment, an image file generated during reconstruction 3906 may be used during detection 3908 to identify anomalies, objects, features, etc. In at least one embodiment, detection 3908 application may leverage an inference engine 3916 (e.g., as one of AI service(s) 3718) to perform inference on data to generate detections. In at least one embodiment, one or more machine learning models (e.g., from training system 3604) may be executed or called by detection 3908 application.[0588] In at least one embodiment, once reconstruction 3906 and/or detection 3908 is/are complete, data output from these applications and/or containers may be used to generate visualizations 3910, such as visualization 3912 (e.g., a grayscale output) displayed on a workstation or display terminal. In at least one embodiment, visualization may allow a technician or other user to visualize results of deployment pipeline 3710B with respect to ultrasound device 3902. In at least one embodiment, visualization 3910 may be executed by leveraging a render component 3918 of system 3700 (e.g., one of visualization service(s) 3720). In at least one embodiment, render component 3918 may execute a 2D, OpenGL, or ray-tracing service to generate visualization 3912.[0589] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0590] FIG. 39B includes an example data flow diagram of a virtual instrument supporting a CT scanner, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710C may leverage one or more of services 3620 of system 3700. In at least one embodiment, deployment pipeline 3710C and services 3620 may leverage hardware 3622 of a system either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3920 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730.[0591] In at least one embodiment, process 3920 may include CT scanner 3922 generating raw data that may be received by DICOM reader 3806 (e.g., directly, via a PACS server 3804, after processing, etc.). In at least one embodiment, a Virtual CT (instantiated by deployment pipeline 3710C) may include a first, real-time pipeline for monitoring a patient (e.g., patient movement detection AI 3926) and/or for adjusting or optimizing exposure of CT scanner 3922 (e.g., using exposure control AI 3924). In at least one embodiment, one or more of applications (e.g., 3924 and 3926) may leverage a service 3620, such as AI service(s) 3718.
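A minimal, hypothetical sketch of such a real-time monitoring loop follows; the stand-in classes and thresholds are illustrative only and are not the disclosed exposure control or movement detection applications:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    brightness: float
    motion: float

class Scanner:
    """Stand-in for a CT scanner; yields frames and accepts feedback."""
    def frames(self):
        yield Frame(brightness=0.9, motion=0.2)
    def adjust_exposure(self, delta: float) -> None:
        print("adjusting exposure by", delta)
    def notify_technician(self, msg: str) -> None:
        print("technician:", msg)

def monitoring_loop(scanner: Scanner) -> None:
    # Each frame is scored by two AI applications; results feed back in real time.
    for frame in scanner.frames():
        if frame.brightness > 0.8:   # cf. an exposure control application
            scanner.adjust_exposure(-0.1)
        if frame.motion > 0.1:       # cf. a patient movement detection application
            scanner.notify_technician("patient movement detected")

monitoring_loop(Scanner())
```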
In at least one embodiment, outputs of exposure control AI 3924 application (or container) and/or patient movement detection AI 3926 application (or container) may be used as feedback to CT scanner 3922 and/or a technician for adjusting exposure (or other settings of CT scanner 3922) and/or informing a patient to move less.[0592] In at least one embodiment, deployment pipeline 3710C may include a non-real-time pipeline for analyzing data generated by CT scanner 3922. In at least one embodiment, a second pipeline may include CT reconstruction 3808 application and/or container, a coarse detection AI 3928 application and/or container, a fine detection AI 3932 application and/or container (e.g., where certain results are detected by coarse detection AI 3928), a visualization 3930 application and/or container, and a DICOM writer 3812 (and/or other data type writer, such as RIS, CIS, REST compliant, RPC, raw, etc.) application and/or container. In at least one embodiment, raw data generated by CT scanner 3922 may be passed through pipelines of deployment pipeline 3710C (instantiated as a virtual CT instrument) to generate results. In at least one embodiment, results from DICOM writer 3812 may be transmitted for display and/or may be stored on PACS server(s) 3804 for later retrieval, analysis, or display by a technician, practitioner, or other user.[0593] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.[0594] FIG. 40A illustrates a data flow diagram for a process 4000 to train, retrain, or update a machine learning model, in accordance with at least one embodiment. In at least one embodiment, process 4000 may be executed using, as a non-limiting example, system 3700 of FIG. 37. In at least one embodiment, process 4000 may leverage services 3620 and/or hardware 3622 of system 3700, as described herein. In at least one embodiment, refined models 4012 generated by process 4000 may be executed by deployment system 3606 for one or more containerized applications in deployment pipelines 3710.[0595] In at least one embodiment, model training 3614 may include retraining or updating an initial model 4004 (e.g., a pre-trained model) using new training data (e.g., new input data, such as customer dataset 4006, and/or new ground truth data associated with input data). In at least one embodiment, to retrain, or update, initial model 4004, output or loss layer(s) of initial model 4004 may be reset, or deleted, and/or replaced with an updated or new output or loss layer(s). In at least one embodiment, initial model 4004 may have previously fine-tuned parameters (e.g., weights and/or biases) that remain from prior training, so training or retraining 3614 may not take as long or require as much processing as training a model from scratch.
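As a hedged PyTorch sketch of this retraining step, using a stand-in network in place of the pre-trained initial model; the layer sizes, class count, and learning rate are illustrative only:

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained initial model: a backbone plus an output layer.
initial_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(),  # previously fine-tuned parameters
    nn.Linear(128, 10),                  # old output layer to be replaced
)

# Reset/replace the output layer for the new task (e.g., 3 classes).
initial_model[-1] = nn.Linear(128, 3)

# Optionally freeze retained layers so only the new head is re-tuned.
for name, param in initial_model.named_parameters():
    param.requires_grad = name.startswith("3")  # "3" = index of the new head

optimizer = torch.optim.Adam(
    (p for p in initial_model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative step on stand-in "customer dataset" tensors.
x, y = torch.randn(8, 1, 64, 64), torch.randint(0, 3, (8,))
loss = loss_fn(initial_model(x), y)
loss.backward()
optimizer.step()
```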
In at least one embodiment, during model training 3614, by having reset or replaced output or loss layer(s) of initial model 4004, parameters may be updated and re-tuned for a new data set based on loss calculations associated with accuracy of output or loss layer(s) at generating predictions on new, customer dataset 4006 (e.g., imaging data 3608 of FIG. 36).[0596] In at least one embodiment, pre-trained models 3706 may be stored in a data store, or registry (e.g., model registry 3624 of FIG. 36). In at least one embodiment, pre-trained models 3706 may have been trained, at least in part, at one or more facilities other than a facility executing process 4000. In at least one embodiment, to protect privacy and rights of patients, subjects, or clients of different facilities, pre-trained models 3706 may have been trained, on-premise, using customer or patient data generated on-premise. In at least one embodiment, pre-trained models 3706 may be trained using cloud 3726 and/or other hardware 3622, but confidential, privacy protected patient data may not be transferred to, used by, or accessible to any components of cloud 3726 (or other off premise hardware). In at least one embodiment, where a pre-trained model 3706 is trained using patient data from more than one facility, pre-trained model 3706 may have been individually trained for each facility prior to being trained on patient or customer data from another facility. In at least one embodiment, such as where a customer or patient data has been released from privacy concerns (e.g., by waiver, for experimental use, etc.), or where a customer or patient data is included in a public data set, a customer or patient data from any number of facilities may be used to train pre-trained model 3706 on-premise and/or off premise, such as in a datacenter or other cloud computing infrastructure.[0597] In at least one embodiment, when selecting applications for use in deployment pipelines 3710, a user may also select machine learning models to be used for specific applications. In at least one embodiment, a user may not have a model for use, so a user may select a pre-trained model 3706 to use with an application. In at least one embodiment, pre-trained model 3706 may not be optimized for generating accurate results on customer dataset 4006 of a facility of a user (e.g., based on patient diversity, demographics, types of medical imaging devices used, etc.). In at least one embodiment, prior to deploying pre-trained model 3706 into deployment pipeline 3710 for use with an application(s), pre-trained model 3706 may be updated, retrained, and/or fine-tuned for use at a respective facility.[0598] In at least one embodiment, a user may select pre-trained model 3706 that is to be updated, retrained, and/or fine-tuned, and pre-trained model 3706 may be referred to as initial model 4004 for training system 3604 within process 4000. In at least one embodiment, customer dataset 4006 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3614 (which may include, without limitation, transfer learning) on initial model 4004 to generate refined model 4012. In at least one embodiment, ground truth data corresponding to customer dataset 4006 may be generated by training system 3604. In at least one embodiment, ground truth data may be generated, at least in part, by clinicians, scientists, doctors, practitioners, at a facility (e.g., as labeled clinic data 3612 of FIG.
[0599] In at least one embodiment, AI-assisted annotation 3610 may be used in some examples to generate ground truth data. In at least one embodiment, AI-assisted annotation 3610 (e.g., implemented using an AI-assisted annotation SDK) may leverage machine learning models (e.g., neural networks) to generate suggested or predicted ground truth data for a customer dataset. In at least one embodiment, user 4010 may use annotation tools within a user interface (a graphical user interface (GUI)) on computing device 4008.
[0600] In at least one embodiment, user 4010 may interact with a GUI via computing device 4008 to edit or fine-tune annotations or auto-annotations. In at least one embodiment, a polygon editing feature may be used to move vertices of a polygon to more accurate or fine-tuned locations.
[0601] In at least one embodiment, once customer dataset 4006 has associated ground truth data, ground truth data (e.g., from AI-assisted annotation, manual labeling, etc.) may be used during model training 3614 to generate refined model 4012. In at least one embodiment, customer dataset 4006 may be applied to initial model 4004 any number of times, and ground truth data may be used to update parameters of initial model 4004 until an acceptable level of accuracy is attained for refined model 4012. In at least one embodiment, once refined model 4012 is generated, refined model 4012 may be deployed within one or more deployment pipelines 3710 at a facility for performing one or more processing tasks with respect to medical imaging data.
[0602] In at least one embodiment, refined model 4012 may be uploaded to pre-trained models 3706 in model registry 3624 to be selected by another facility. In at least one embodiment, this process may be completed at any number of facilities such that refined model 4012 may be further refined on new datasets any number of times to generate a more universal model.
[0603] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0604] FIG. 40B is an example illustration of a client-server architecture 4032 to enhance annotation tools with pre-trained annotation models, in accordance with at least one embodiment. In at least one embodiment, AI-assisted annotation tools 4036 may be instantiated based on a client-server architecture 4032. In at least one embodiment, annotation tools 4036 in imaging applications may aid radiologists, for example, in identifying organs and abnormalities. In at least one embodiment, imaging applications may include software tools that help user 4010 to identify, as a non-limiting example, a few extreme points on a particular organ of interest in raw images 4034 (e.g., in a 3D MRI or CT scan) and receive auto-annotated results for all 2D slices of a particular organ. In at least one embodiment, results may be stored in a data store as training data 4038 and used as (for example and without limitation) ground truth data for training. In at least one embodiment, when computing device 4008 sends extreme points for AI-assisted annotation 3610, a deep learning model, for example, may receive this data as input and return inference results of a segmented organ or abnormality. In at least one embodiment, pre-instantiated annotation tools, such as AI-Assisted Annotation Tool 4036B in FIG. 40B, may be enhanced by making API calls (e.g., API Call 4044) to a server, such as an Annotation Assistant Server 4040 that may include a set of pre-trained models 4042 stored in an annotation model registry, for example. In at least one embodiment, an annotation model registry may store pre-trained models 4042 (e.g., machine learning models, such as deep learning models) that are pre-trained to perform AI-assisted annotation on a particular organ or abnormality. In at least one embodiment, these models may be further updated by using training pipelines 3704. In at least one embodiment, pre-installed annotation tools may be improved over time as new labeled clinic data 3612 is added.
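The client-server pattern of [0604], in which extreme points are sent to an annotation server and a suggested segmentation is returned, can be sketched as follows. The endpoint URL, JSON fields, and model name are invented for illustration and are not the API of the disclosure.

```python
# Hypothetical client-side call in the spirit of API Call 4044: send user-clicked
# extreme points to an annotation server and receive a suggested segmentation.
# Endpoint, payload fields, and model name are illustrative only.
import json
import urllib.request

def request_auto_annotation(server_url, volume_id, extreme_points):
    payload = json.dumps({
        "volume": volume_id,              # identifies the raw image/scan
        "points": extreme_points,         # e.g., extreme points on an organ
        "model": "pretrained_organ_seg",  # stand-in for a pre-trained model 4042
    }).encode("utf-8")
    req = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)            # e.g., per-slice segmentation masks

# masks = request_auto_annotation("https://annotation.example/api/v1/segment",
#                                 "ct_scan_001", [[10, 42, 7], [88, 42, 7]])
```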
[0605] Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
[0606] In at least one embodiment, one or more circuits, processors, computing systems, or other devices or techniques are adapted, with reference to said figure, to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks. In at least one embodiment, this is performed by embodiments of said figure, according to embodiments described herein in relation to preceding FIGS. 1-6.
[0607] At least one embodiment of the disclosure can be described in view of the following clauses:
1. A processor comprising: one or more circuits to use one or more neural networks to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.
2. The processor of clause 1, wherein the one or more neural networks comprises a generative model framework to generate a plurality of complete images based on an input image.
3. The processor of clause 1 or clause 2, wherein the one or more neural networks comprises a variational autoencoder, and the variational autoencoder comprises the encoder and decoder.
4. The processor of any of clauses 1-3, wherein the decoder is trained based, at least in part, on a dataset comprising images of complete objects and excluding images of portions of objects.
5. The processor of any of clauses 1-4, wherein parameters of the decoder are not adjusted while training the encoder using the training data.
6. The processor of clause 5, wherein output of the trained encoder causes the decoder to generate an image of a complete object based on input, to the one or more neural networks, of an image of a portion of an object.
7. The processor of any of clauses 1-6, the one or more circuits to generate the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.
8. The processor of any of clauses 1-7, the one or more circuits to refine training of the encoder based, at least in part, on a plurality of real images of portions of objects after training the encoder with the generated training data.
9. The processor of any of clauses 1-8, wherein one or more of the one or more neural networks is trained to spatially transform output of the decoder.
10. A system, comprising: one or more processors to train one or more neural networks to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.
11. The system of clause 10, wherein the one or more neural networks comprise a generative model framework to generate a plurality of complete images based on an input image, wherein the generative model framework comprises at least one of a variational autoencoder, a generative adversarial network, or a normalizing flow.
12. The system of clause 10 or clause 11, wherein the encoder and decoder are components of a variational autoencoder.
13. The system of any of clauses 10-12, the one or more processors to train the decoder, using a dataset comprising images of complete objects, to generate variations of images of complete objects.
14. The system of any of clauses 10-13, wherein parameters of the decoder are frozen while training the encoder using the training data generated based, at least in part, on output of the decoder.
15. The system of clause 14, wherein output of the trained encoder causes the decoder to generate an image of a complete object and, optionally, a plurality of probabilities corresponding to the plurality of images, based on input, to the one or more neural networks, of an image of a portion of an object.
16. The system of any of clauses 10-15, the one or more processors to generate the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.
17. The system of any of clauses 10-16, the one or more processors to train one or more of the one or more neural networks to spatially transform output of the decoder.
18. A method, comprising: training a neural network to generate an image of a complete object based, at least in part, on an image of a portion of the object, wherein an encoder of the one or more neural networks is trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.
19. The method of clause 18, wherein the one or more neural networks comprise at least one of a variational autoencoder, generative adversarial network, or normalizing flow.
20. The method of clause 18 or clause 19, wherein the encoder and decoder are components of a variational autoencoder.
21. The method of any of clauses 18-20, further comprising: training the decoder, using a dataset comprising images of complete objects, to generate variations of images of complete objects.
22. The method of any of clauses 18-21, further comprising: freezing the parameters of the decoder while training the encoder using the training data generated based, at least in part, on output of the decoder.
23. The method of clause 22, wherein output of the trained encoder causes the decoder to generate an image of a complete object and, optionally, a plurality of probabilities corresponding to the plurality of images, based on input, to the one or more neural networks, of an image of a portion of an object.
24. The method of any of clauses 18-23, further comprising: generating the training data by generating a first image of a complete first object and a second image of a complete second object, and combining the first and second images so that the first object is at least partially occluded by the second object.
25. The method of any of clauses 18-24, further comprising: training one or more of the one or more neural networks to spatially transform output of the decoder.
26. A machine-readable medium having stored thereon instructions, which if performed by one or more processors, cause the one or more processors to at least: alter an image to incorporate a depiction of a complete object that is generated, by one or more neural networks, from a depiction of a portion of the object in the image, the one or more neural networks trained using training data generated based, at least in part, on output of a decoder of the one or more neural networks.
27. The machine-readable medium of clause 26, having stored thereon further instructions, which if performed by one or more processors, cause the one or more processors to at least: generate a plurality of variations of a complete object, based at least in part on the image of the portion of the object.
28. The machine-readable medium of any of clauses 26-27, wherein the parameters of the decoder are frozen after training the decoder using images of complete objects.
29. The machine-readable medium of any of clauses 26-28, wherein the training data comprises an image depicting a plurality of objects generated based on the output of the decoder.
30. The machine-readable medium of clause 29, wherein a first object of the plurality of objects overlaps a second object of the plurality of objects.
31. The machine-readable medium of any of clauses 26-30, wherein the alteration of the image comprises replacing a depiction of a partial object in the image with a depiction of the complete object.
32. The machine-readable medium of any of clauses 26-31, wherein the alteration of the image comprises removing an object in the image that occludes the portion of the object.
[0608] In at least one embodiment, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip. In at least one embodiment, multi-chip modules may be used with increased connectivity which simulate on-chip operation, and make substantial improvements over utilizing a conventional central processing unit ("CPU") and bus implementation. In at least one embodiment, various modules may also be situated separately or in various combinations of semiconductor platforms per desires of user.
[0609] In at least one embodiment, referring back to FIG. 13, computer programs in form of machine-readable executable code or computer control logic algorithms are stored in main memory 1304 and/or secondary storage. Computer programs, if executed by one or more processors, enable system 1300 to perform various functions in accordance with at least one embodiment.
In at least one embodiment, memory 1304, storage, and/or any other storage are possible examples of computer-readable media. In at least one embodiment, secondary storage may refer to any suitable storage device or system such as a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk ("DVD") drive, recording device, universal serial bus ("USB") flash memory, etc. In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of CPU 1302, parallel processing system 1312, an integrated circuit capable of at least a portion of capabilities of both CPU 1302 and parallel processing system 1312, a chipset (e.g., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.), and/or any suitable combination of integrated circuit(s).
[0610] In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and more. In at least one embodiment, computer system 1300 may take form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant ("PDA"), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, workstation, game consoles, embedded system, and/or any other type of logic.
[0611] In at least one embodiment, parallel processing system 1312 includes, without limitation, a plurality of parallel processing units ("PPUs") 1314 and associated memories 1316. In at least one embodiment, PPUs 1314 are connected to a host processor or other peripheral devices via an interconnect 1318 and a switch 1320 or multiplexer. In at least one embodiment, parallel processing system 1312 distributes computational tasks across PPUs 1314 which can be parallelizable, for example, as part of distribution of computational tasks across multiple graphics processing unit ("GPU") thread blocks. In at least one embodiment, memory is shared and accessible (e.g., for read and/or write access) across some or all of PPUs 1314, although such shared memory may incur performance penalties relative to use of local memory and registers resident to a PPU 1314. In at least one embodiment, operation of PPUs 1314 is synchronized through use of a command such as syncthreads(), wherein all threads in a block (e.g., executed across multiple PPUs 1314) are to reach a certain point of execution of code before proceeding.
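The barrier semantics of syncthreads() can be sketched briefly. The example below uses Python with Numba's CUDA bindings (rather than native CUDA C), requires a CUDA-capable GPU to run, and shows a barrier within a single thread block on one device, a simplification of the cross-PPU synchronization described above; the kernel and array size are illustrative only.

```python
# Minimal sketch of a syncthreads()-style barrier: every thread in the block
# must reach the barrier before any proceeds, separating a shared-memory write
# phase from a read phase. Illustrative only; requires a CUDA-capable GPU.
from numba import cuda, float32
import numpy as np

@cuda.jit
def reverse_in_block(x, out):
    shared = cuda.shared.array(128, dtype=float32)
    i = cuda.threadIdx.x
    shared[i] = x[i]          # phase 1: each thread writes one element
    cuda.syncthreads()        # barrier: all writes complete before any read
    out[i] = shared[127 - i]  # phase 2: safe to read another thread's element

x = np.arange(128, dtype=np.float32)
out = np.zeros_like(x)
reverse_in_block[1, 128](x, out)  # one block of 128 threads
```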
[0612] Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims.
[0613] Use of terms "a" and "an" and "the" and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (meaning "including, but not limited to,") unless otherwise noted. "Connected," when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein, and each separate value is incorporated into specification as if it were individually recited herein. In at least one embodiment, use of term "set" (e.g., "a set of items") or "subset," unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term "subset" of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal.
[0614] Conjunctive language, such as phrases of form "at least one of A, B, and C," or "at least one of A, B and C," unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. For instance, in illustrative example of a set having three members, conjunctive phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term "plurality" indicates a state of being plural (e.g., "a plurality of items" indicates multiple items). In at least one embodiment, number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase "based on" means "based at least in part on" and not "based solely on."
[0615] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof.
In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein. In at least one embodiment, set of non-transitory computer-readable storage media comprises multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors, for example, a non-transitory computer-readable storage medium stores instructions and a main central processing unit ("CPU") executes some of instructions while a graphics processing unit ("GPU") executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions.
[0616] Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations.
[0617] Use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure.
[0618] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
[0619] In description and claims, terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other.
Rather, in particular examples, "connected" or "coupled" may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. "Coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
[0620] Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as "processing," "computing," "calculating," "determining," or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system's registers and/or memories into other data similarly represented as physical quantities within computing system's memories, registers or other such information storage, transmission or display devices.
[0621] In a similar manner, term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory and transforms that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, "processor" may be a CPU or a GPU. A "computing platform" may comprise one or more processors. As used herein, "software" processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. In at least one embodiment, terms "system" and "method" are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system.
[0622] In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. In at least one embodiment, process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In at least one embodiment, processes of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In at least one embodiment, processes of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. In at least one embodiment, references may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. In various examples, processes of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism.
[0623] Although descriptions herein set forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure.
Furthermore, although specific distributions of responsibilities may be defined above for purposes of description, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances.
[0624] Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims.
[0625] It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
[0626] Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
PROBLEM TO BE SOLVED: To enable efficient concurrent, or timely-distributed, delivery of streamed media data.
SOLUTION: A system 100 comprises block serving infrastructure 101, and comprises an ingestion system 103 for packaging content 102 by storing it into a content store 110 that is accessible to both the ingestion system 103 and an HTTP streaming server 104. A client 108 sends requests 112 to the HTTP streaming server 104 and receives responses 114 from the HTTP streaming server 104 or an HTTP cache 106.
In a communication system in which a client device requests media files from a media capture system, a method includes constructing, at the media capture system, forward error correction (FEC) blocks corresponding to data in media blocks, and naming, using the media capture system, media files including the media blocks and FEC files including the FEC blocks according to derivable patterns that can be derived at the client device, thereby naming the plurality of files in such a manner as to enable the client device to derive the names of the FEC files, based on the media blocks needed by the client device, in order to make requests for those FEC files.
A client device adapted to receive media blocks from a media capture system via a file server in response to file requests, the client device comprising a processor and program code in non-transitory computer-readable storage, including program code for determining which media blocks are needed for a desired presentation, program code for requesting the media blocks needed for the desired presentation, program code for determining which forward error correction (FEC) blocks can be used to fill in for lost data in the needed media blocks, program code for determining a file name for an FEC file containing said FEC blocks that can be used, based on the lost data or the needed media blocks, and program code for requesting said determined FEC file.
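The claims turn on the client being able to derive FEC file names locally from the media blocks it needs. A minimal sketch of one such derivable pattern follows; the file-name templates and extensions are invented for illustration, since the disclosure only requires that some derivable rule exist.

```python
# Minimal sketch of a derivable naming pattern: given the index of the media
# block a client needs, derive the name of the FEC file that repairs it.
# Templates and extensions are hypothetical, not the rule of the disclosure.
MEDIA_TEMPLATE = "stream_{block:05d}.3gs"
FEC_TEMPLATE = "stream_{block:05d}.fec"

def media_file_name(block_index: int) -> str:
    return MEDIA_TEMPLATE.format(block=block_index)

def fec_file_name(block_index: int) -> str:
    # Same pattern as the media file, so the client can derive the FEC file
    # name without extra signaling and request it only when data is lost.
    return FEC_TEMPLATE.format(block=block_index)

assert media_file_name(42) == "stream_00042.3gs"
assert fec_file_name(42) == "stream_00042.fec"
```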
Extended block-request streaming with cooperative concurrent HTTP and forward error correction
CROSS-REFERENCES TO RELATED APPLICATIONS This application is a non-provisional patent application claiming the benefit of the following provisional applications, each naming Michael G. Luby, et al. and each entitled "Enhanced Block-Request Streaming System": U.S. Provisional Patent Application No. 61/244,767 (filing date: September 22, 2009), U.S. Provisional Patent Application No. 61/257,719 (filing date: November 3, 2009), U.S. Provisional Patent Application No. 61/258,088 (filing date: November 4, 2009), U.S. Provisional Patent Application No. 61/285,779 (filing date: December 11, 2009), and U.S. Provisional Patent Application No. 61/296,725 (filing date: January 20, 2010). This application also claims the benefit of U.S. Provisional Patent Application No. 61/372,399 (filing date: August 10, 2010), naming Ying Chen, et al. and entitled "HTTP Streaming Extensions." Each provisional application cited above is hereby incorporated by reference for all purposes.
The present disclosure incorporates by reference, as if fully set forth in this document, the following commonly assigned applications/patents for all purposes: U.S. Patent No. 6,307,487 to Luby (hereinafter "Luby I"); U.S. Patent No. 7,068,729 to Shokrollahi, et al. (hereinafter "Shokrollahi I"); U.S. Patent Application No. 11/423,391 by Luby, et al. (hereinafter "Luby II"), entitled "Forward Error-Correcting (FEC) Coding and Streaming" (filing date: June 9, 2006); U.S. Patent Application No. 12/103,605 by Luby, et al. (hereinafter "Luby III"), entitled "Dynamic Stream Interleaving and Sub-Stream Based Delivery" (filing date: April 15, 2008); U.S. Patent Application No. 12/705,202 by Pakzad, et al. (hereinafter "Pakzad"), entitled "Block Partitioning for a Data Stream" (filing date: February 12, 2010); and U.S. Patent Application No. 12/859,161 by Luby, et al. (hereinafter "Luby IV"), entitled "Methods and Apparatus Employing FEC Codes with Permanent Inactivation of Symbols for Encoding and Decoding Processes" (filing date: August 18, 2010).
The present invention relates to improved media streaming systems and methods. The present invention relates more specifically to systems and methods that adapt to network and buffer conditions in order to optimize the presentation of streamed media, and that enable efficient concurrent, or timely-distributed, delivery of streamed media data.
Streaming media delivery is becoming increasingly important as high quality audio and video is delivered over packet-based networks, such as the Internet, cellular and wireless networks, powerline networks, and other types of networks.
The quality at which delivered streaming media can be presented can depend on several factors, including the resolution (or other attributes) of the original content, the encoding quality of the original content, the capabilities of the receiving devices to decode and present the media, and the timeliness and quality of the signal received at the receivers. To create a perceived good streaming media experience, the transfer and the timeliness of the signal received at the receivers may be especially important. A good transfer can provide fidelity of the stream received at the receiver relative to what the sender sends, while timeliness can represent how quickly the receiver can start playing out the content after an initial request for that content.
A media delivery system can be characterized as a system having media sources, media destinations, and channels (in time and/or space) separating sources and destinations. Typically, a source includes a transmitter with access to media in an electronically manageable form, and a receiver with the ability to electronically control receipt of the media (or an approximation thereof) and provide it to the media consumer (e.g., a user having a display device coupled in some way to the receiver, a storage device or element, another channel, etc.).
While many variations are possible, in one common example, a media delivery system has one or more servers with access to media content in electronic form, one or more client systems or devices make requests to the servers for the media, and the servers convey the media using transmitters as part of the servers, transmitting to receivers at the clients so that the received media can be consumed by the clients in some way. In a simple example, there is one server and one client for a given request and response, but this need not be the case.
Traditionally, media delivery systems may be characterized as either a "download" model or a "streaming" model. The "download" model may be characterized by timing independence between the delivery of the media data and the playout of the media to the user or receiving device.
As an example, media is downloaded far enough in advance of when it is needed or will be used, and when it is used, as much as is needed is already available at the recipient. Delivery in the download context is often performed using a file transfer protocol, such as HTTP, FTP or File Delivery over Unidirectional Transport (FLUTE), and the delivery rate is determined by an underlying flow and/or congestion control protocol, such as TCP/IP. The operation of the flow or congestion control protocol can be independent of the playout of the media to the user or destination device, which may take place concurrently with the download or at some other time.
The "streaming" model may be characterized by a tight coupling between the timing of the delivery of the media data and the playout of the media to the user or receiving device. Delivery in this context is often performed using a streaming protocol, such as the Real-Time Streaming Protocol (RTSP) for control and the Real-time Transport Protocol (RTP) for the media data. The delivery rate can be determined by a streaming server, often matching the playout rate of the data.
Some disadvantages of the "download" model are that, due to the timing independence of delivery and playout, either media data is not available when it is needed for playout (e.g., due to the available bandwidth being smaller than the media data rate), causing playout to stop temporarily ("stalling"), which results in a poor user experience, or media data is required to be downloaded very far in advance of playout (e.g., due to the available bandwidth being larger than the media data rate), consuming storage resources on the receiving device that may be scarce, and consuming valuable delivery resources that are wasted if the content is not, in the end, played out or otherwise used.
An advantage of the "download" model is that the technology required to perform such downloads, for example HTTP, is very mature, widely deployed, and applicable across a wide range of applications. Download servers and solutions for massive scalability of such file downloads (e.g., HTTP web servers and content delivery networks) are readily available, making deployment of services based on this technology simple and low in cost.
Some disadvantages of the "streaming" model are that, generally, the delivery rate of the media data is not adapted to the available bandwidth on the connection from server to client, and that specialized streaming servers or more complex network architectures providing bandwidth and delay guarantees are required. Although streaming systems exist that support variation of the delivery data rate according to the available bandwidth (e.g., Adobe Flash Adaptive Streaming), they are generally not as efficient as download transfer flow control protocols, such as TCP, at utilizing all of the available bandwidth.
Recently, new media delivery systems based on a combination of the "streaming" and "download" models have been developed and deployed. An example of such a model is referred to herein as a "block-request streaming" model, in which a media client requests blocks of media data from the serving infrastructure using a download protocol, such as HTTP. One concern in such systems is the ability to start playing out a stream, for example, decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen and playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set top box and displaying the video on a television display and playing the audio through a stereo system.
Other concerns include being able to decode source blocks at a rate that keeps up with the source streaming rate, minimizing decoding latency, and reducing the use of available CPU resources. Another concern is to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers. Other problems may arise based on rapidly changing information about a presentation as it is being delivered. Thus, it is desirable to have improved processes and apparatus.
FIG. 1 depicts elements of a block-request streaming system according to an embodiment of the present invention.
FIG. 2 is an illustration of the block-request streaming system of FIG. 1, showing more detail in the elements of the client system coupled to the block serving infrastructure to receive data processed by the content capture system.
FIG. 3 illustrates a hardware/software implementation of a capture system.
FIG. 4 illustrates a hardware/software implementation of a client system.
FIG. 5 illustrates a possible structure of the content store shown in FIG. 1, and includes segment and media presentation descriptor ("MPD") files, and details of segments, timings, and other structures within an MPD file.
FIG. 6 illustrates details of an exemplary source segment that may be stored in the content store illustrated in FIGS. 1 and 5.
FIGS. 7a and 7b illustrate simple and hierarchical indexing within files.
FIG. 8(a) illustrates variable block sizing with aligned seek points across multiple versions of a media stream.
FIG. 8(b) illustrates variable block sizing with non-aligned seek points across multiple versions of a media stream.
FIG. 9(a) illustrates a metadata table.
FIG. 9(b) illustrates the transmission of blocks and the metadata table from the server to the client.
FIG. 10 illustrates blocks that are independent of RAP boundaries.
FIG. 11 illustrates continuous and non-continuous timing between segments.
FIG. 12 illustrates an aspect of scalable blocks.
FIG. 13 depicts a graphical representation of the evolution of some variables within a block-request streaming system over time.
FIG. 14 depicts another graphical representation of the evolution of some variables within a block-request streaming system over time.
FIG. 15 depicts a cell grid of states as a function of threshold values.
FIG. 16 is a flowchart of a process that can be performed at a receiver that can request a single block and multiple blocks per request.
FIG. 17 is a flowchart of a flexible pipeline process.
FIG. 18 illustrates an example of a set of candidate requests, their priorities, and the connections on which they can be issued, at a certain point in time.
FIG. 19 illustrates an example of a set of candidate requests, their priorities, and the connections on which they can be issued, evolving from one time to another.
FIG. 20 is a flowchart of consistent caching server proxy selection based on file identifiers.
FIG. 21 illustrates syntax definitions for a suitable expression language.
FIG. 22 shows an example of a suitable hash function.
FIG. 23 shows examples of file identifier construction rules.
FIG. 24 illustrates bandwidth fluctuations of TCP connections.
FIG. 25 illustrates multiple HTTP requests for source and repair data.
FIG. 26 shows an example of channel zapping time with and without FEC.
FIG. 27 illustrates the details of a repair segment generator that generates repair segments from source segments and control parameters, as part of the capture system shown in FIG. 1.
FIG. 28 illustrates the relationship between source blocks and repair blocks.
FIG. 29 illustrates a procedure for live service at different times at the client.
In the figures, similar items are referenced using similar numerals, and sub-indices are provided in parentheses to indicate multiple instances of similar or identical items. Unless otherwise indicated, the last sub-index (e.g., "N" or "M") is not intended to be limited to any particular value, and the number of instances of one item can differ from the number of instances of other items, even when the same numeral is shown and sub-indices are reused.
As described herein, one goal of a streaming system is to move media from its storage location (or the location where it is being generated) to a location where it is being consumed, i.e., presented to a user or otherwise "played out" by a human or electronic consumer.
Ideally, the streaming system can provide uninterrupted playback (or, more generally, uninterrupted "consumption") at the receiving end, and can start playing a stream or a set of streams shortly after the user requests the stream or streams. For efficiency reasons, it is also desirable that each stream be stopped once the user indicates that the stream is no longer needed, for example when the user is switching from one stream to another, or when the presentation of a stream that accompanies another stream, e.g., a "subtitle" stream, is no longer needed. If a media component, for example video, continues to be presented, but a different stream is selected to present this media component, it is often preferable to stop the old stream, since the new stream takes up limited bandwidth.
Block-request streaming systems according to the embodiments described herein provide many benefits. It should be understood that a viable system need not include all of the features described herein, since some applications can provide a suitably satisfying experience with less than all of those features.
HTTP Streaming
HTTP streaming is a particular type of streaming. If HTTP streaming is used, the sources can be standard web servers and content delivery networks (CDNs), and standard HTTP can be used. This technique can include stream segmentation and the use of multiple streams, all within the context of standardized HTTP requests. Media, e.g., video, may be encoded at multiple bit rates to form different versions, or representations. The terms "version" and "representation" are used interchangeably in this document. Each version or representation can be subdivided into smaller parts, perhaps of the order of a few seconds each, to form segments. Each segment can then be stored as a separate file on a web server or CDN.
On the client side, requests can be made, using HTTP, for individual segments that are spliced together seamlessly by the client. The client can switch to different data rates based on the available bandwidth. The client can also request multiple representations, each presenting a different media component, and can present the media in these representations together and synchronously. Triggers for switching can include, for example, buffer occupancy and network measurements. In steady state operation, the client can pace its requests to the server to maintain a target buffer occupancy.
The benefits of HTTP streaming can include bit rate optimization, fast start and seek, and minimal unnecessary delivery. These advantages come from controlling the delivery to be only a short time ahead of the playout, from maximizing the use of the available bandwidth (through variable bit rate media), and from optimizing stream segmentation and intelligent client procedures.
A media presentation description can be provided to an HTTP streaming client so that the client can use a collection of files (e.g., in the format specified by 3GPP, here called 3gp segments) to provide a streaming service to the user. A media presentation description, and possibly updates of this media presentation description, describes a media presentation, which is a structured collection of segments, each containing media components, such that the client can present the included media in a synchronized manner and can provide advanced features, such as seeking, switching bit rates, and joint presentation of media components in different representations. Clients can use the media presentation description information in different ways for service provisioning. In particular, from the media presentation description, the HTTP streaming client can determine the client's capabilities within the streaming service and which segments in the collection are accessible, to make the data useful to the user.
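To make the buffer-occupancy-driven rate switching described above concrete, here is a hedged sketch of one possible selection rule. The bit rates, thresholds, and headroom factors are invented for illustration; the disclosure does not prescribe a specific policy.

```python
# Hedged sketch of rate switching: pick the highest representation the
# measured bandwidth supports, stepping down when the buffer runs low.
# Bit rates and thresholds are invented for illustration.
REPRESENTATION_KBPS = [300, 700, 1500, 3000]  # available encodings of one content

def choose_representation(measured_kbps, buffer_seconds, target_seconds=20.0):
    # Leave headroom so the buffer can refill toward the target occupancy;
    # be more conservative when the buffer is below target.
    headroom = 0.8 if buffer_seconds >= target_seconds else 0.5
    usable = measured_kbps * headroom
    candidates = [r for r in REPRESENTATION_KBPS if r <= usable]
    return candidates[-1] if candidates else REPRESENTATION_KBPS[0]

print(choose_representation(measured_kbps=2000, buffer_seconds=25))  # 1500
print(choose_representation(measured_kbps=2000, buffer_seconds=5))   # 700
```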
In some implementations, media presentation descriptions can be static, while segments can be generated dynamically. Media presentation descriptions can be as compact as possible to minimize access and download times for the service. Other dedicated server connectivity, such as periodic or frequent timing synchronization between client and server, can be minimized.
Media presentations may be constructed to allow access by terminals with different capabilities, e.g., different access network types, different current network conditions, display sizes, access bit rates, and codec support. The client can then extract the relevant information to provide the streaming service to the user.
Media presentation descriptions may also allow for deployment flexibility and compactness according to the requirements.
In the simplest case, each alternative representation may be stored in a single 3GP file, i.e., a file according to the definition in 3GPP TS 26.244, or in any other file that conforms to the ISO base media file format defined in ISO/IEC 14496-12 or a derived specification (e.g., the 3GP file format described in 3GPP Technical Specification 26.244). In the remainder of this document, when referring to 3GP files, it should be understood that all the described features can be mapped to the more general ISO base media file format defined in ISO/IEC 14496-12, or to any derived specifications. The client can then request an initial portion of the file to learn the media metadata (typically stored in the movie header box, also called the "moov" box) along with movie fragment times and byte offsets. The client can then issue HTTP partial get requests to obtain movie fragments on demand.
In some implementations, it may be desirable to divide each representation into several segments. Where the segment format is based on the 3GP file format, segments contain non-overlapping time slices of the movie fragments, a division called "time-wise splitting". Each of these segments can contain multiple movie fragments, and each can be a valid 3GP file in its own right. In another embodiment, the representation is divided into an initial segment containing the metadata (typically the movie header "moov" box) and a set of media segments, each containing media data, where the concatenation of the initial segment and any media segment forms a valid 3GP file, and the concatenation of the initial segment and all media segments of one representation forms a valid 3GP file. The entire presentation can be formed by playing out each segment in turn and mapping the local timestamps within the files to the global presentation time according to the start time of each representation.
Throughout this description, references to "segments" should be understood to include any data object that is fully or partially constructed or read from a storage medium, or otherwise obtained as a result of a file download protocol request, including, for example, an HTTP request. For example, in the case of HTTP, the data objects can be stored in actual files residing on a disk or other storage medium connected to, or forming part of, an HTTP server, or the data objects can be constructed by CGI scripts, or other dynamically executed programs, that are executed in response to HTTP requests.
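The HTTP partial get pattern described above can be sketched directly with byte-range requests. The URL and byte offsets below are invented for illustration; in practice the fragment offsets come from parsing the 'moov' metadata.

```python
# Sketch of the HTTP partial get pattern: first fetch the beginning of the
# file containing the movie header ('moov') metadata, then fetch a movie
# fragment by byte range. URL and offsets are invented for illustration.
import urllib.request

def http_range_get(url, first_byte, last_byte):
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={first_byte}-{last_byte}")
    with urllib.request.urlopen(req) as resp:  # expect 206 Partial Content
        return resp.read()

# header = http_range_get("http://example.com/rep_hi.3gp", 0, 4095)
# ...parse 'moov' for fragment times/byte offsets, then fetch one fragment:
# fragment = http_range_get("http://example.com/rep_hi.3gp", 181_000, 542_999)
```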
The terms "file" and "segment" are used interchangeably in this document, unless otherwise specified. In the case of HTTP, a segment can be considered to be the entire body of an HTTP request response.The terms "presentation" and "content item" are used interchangeably in this document. In many instances, the presentation is an audio, video or other media presentation having a defined "playout" time, although other variations are possible.The terms "block" and "fragment" are used interchangeably in this document, unless otherwise specified, and generally refer to the smallest collection of indexed data. Based on the available indexing, the client can request different parts of the fragment in different HTTP requests, or can request parts of one or more consecutive fragments or fragments in one HTTP request. When segments based on the ISO base media file format or segments based on the 3GP file format are used, the fragments are typically a combination of movie fragment header ('moof') box and media data ('mdat') box Means a movie fragment that is defined to beHere, it is assumed that the network carrying the data is packet based to simplify the description herein, and one skilled in the art will read the disclosure and practice the invention described herein. It is recognized that the form is applicable to other types of transmission networks, such as continuous bit stream networks.Here, it is hypothesized that the FEC code provides protection against long variable data delivery times to simplify the description herein, and one skilled in the art, after reading this disclosure, will implement embodiments of the present invention. It has been recognized that it is applicable to other types of data transmission challenges, eg, bit-flip corruption of data. For example, if there is no FEC, then the content zapping time is large and variable if the last part of the requested fragment arrives much later than the previous part of the fragment or has a high deviation in arrival time Possibly, when using FEC and concurrent requests, only the majority of the requested data for the fragments need to arrive before the fragments can be recovered, so that the content zapping time And reduce the variability of the content zapping time. In this description, it can be assumed that the data to be encoded (ie the source data) is divided into "symbols" of equal length which can be of any length (up to a single bit) The symbols may be of different lengths for different parts of data, for example, different symbol sizes may be used for different blocks of data.In this description, to simplify the description herein, FEC is applied to one data "block" or "fragment" at a time, ie, "block" is for FEC coding and decoding purposes. It is assumed to be the "source block" for The client device can use segment indexing as described herein to help determine the source block structure of the segment. One skilled in the art can apply embodiments of the present invention to other types of source block structures, for example, the source block may be part of a fragment, or multiple parts of one or more fragments or fragments. Can be included.The FEC code that is considered for use with block-request streaming is typically a systematic FEC code, ie the source symbols of the source block can be included as part of the coding of the source block, Thus, source symbols are transmitted. As those skilled in the art will appreciate, the embodiments described herein apply to non-systematic FEC codes as well. 
The FEC codes considered for use with block-request streaming are typically systematic FEC codes, i.e., the source symbols of the source block may be included as part of the encoding of the source block, and thus the source symbols are transmitted. As one skilled in the art will recognize, the embodiments described herein apply to non-systematic FEC codes as well. A systematic FEC encoder generates, from a source block of source symbols, some number of repair symbols, and the symbols transmitted over the channel representing the source block are a combination of at least some of the source symbols and the repair symbols. Some FEC codes can efficiently generate as many repair symbols as needed, for example "information additive codes" or "fountain codes"; examples of these codes include "chain reaction codes" and "multi-stage chain reaction codes". Other FEC codes, such as Reed-Solomon codes, can practically generate only a limited number of repair symbols for each source block.
In many of these examples, a client is coupled to a media server or a plurality of media servers, and the client requests streaming media from the media server or plurality of media servers over a channel or a plurality of channels. However, more involved arrangements are also possible.
Examples of Benefits
In block-request streaming, the media client maintains a coupling between the timing of these block requests and the timing of the media playout to the user. This model can retain the advantages of the "download" model described above, while avoiding some of the disadvantages that stem from the usual decoupling of media playout from data delivery. The block-request streaming model makes use of the rate and congestion control mechanisms available in transport protocols, such as TCP, to ensure that the maximum available bandwidth is used for media data. Furthermore, the division of the media presentation into blocks allows each block of encoded media data to be selected from a set of multiple available encodings.
This selection can be based on any number of criteria, including matching the media data rate to the available bandwidth, even when the available bandwidth is changing over time, matching the media resolution or decoding complexity to the client's capabilities or configuration, or matching the user's preferences, e.g., language. The selection may also include the download and presentation of auxiliary components, such as accessibility components, closed captioning, subtitles, sign language video, etc. Examples of existing systems that use the block-request streaming model include Move Networks (R), Microsoft Smooth Streaming, and the Apple iPhone (R) Streaming Protocol.
Commonly, each block of media data can be stored as an individual file on the server, and a protocol, e.g., HTTP, is used, in conjunction with HTTP server software executed on the server, to request the file as one unit. Typically, the client is provided with metadata files, which may be, for example, in an Extensible Markup Language (XML) format or a playlist text format or a binary format, that describe features of the media presentation, such as the available encodings (e.g., required bandwidth, resolution, encoding parameters, media type, language), typically referred to as "representations" in this document, and how those encodings are divided into blocks. For example, the metadata can include a Uniform Resource Locator (URL) for each block. The URLs themselves can provide a scheme, e.g., the string "http://" is prepended to indicate that the protocol used to access the documented resource is HTTP. Another example is "ftp://", which indicates that the protocol to be used is FTP. In other systems, for example, media blocks can be constructed "on the fly" by the server, in response to requests from clients indicating the required portions, in time, of the media presentation.
In many of the examples herein, a client is coupled to a media server or to a plurality of media servers, and the client requests streaming media from the media server or the plurality of media servers over a channel or a plurality of channels; more involved arrangements are, however, also possible. Examples of Benefits In block-request streaming, the media client maintains a coupling between the timing of these block requests and the timing of the media playout to the user. This model can retain the advantages of the "download" model described above, while avoiding some of the disadvantages that stem from the usual decoupling of media playout from data delivery. The block-request streaming model makes use of the rate and congestion control mechanisms available in transport protocols, such as TCP, to ensure that the maximum available bandwidth is used for the media data. In addition, the division of the media presentation into blocks allows each block of encoded media data to be selected from among a plurality of available encodings. This selection can be based on any number of criteria, including matching the media data rate to the available bandwidth, even as the available bandwidth changes over time, matching the media resolution or decoding complexity to the client's capabilities or configuration, and matching the user's preferences, e.g., for language. The selection can also include the downloading and presentation of auxiliary components, such as accessibility components, closed captioning, subtitles, sign language video, and the like. Examples of existing systems using the block-request streaming model include Move Networks(R), Microsoft Smooth Streaming, and the Apple iPhone(R) Streaming Protocol. Commonly, each block of media data is stored on a server as an individual file, and a protocol, e.g., HTTP, is used in conjunction with HTTP server software executing on the server to request the file as a unit. Typically, the client is provided with metadata files, which can be, for example, in Extensible Markup Language (XML) format, in a playlist text format, or in a binary format, describing features of the media presentation, such as the available encodings (e.g., required bandwidth, resolution, encoding parameters, media type, language), each typically referred to in this document as a "representation", and the manner in which the encodings are divided into blocks. For example, the metadata can include a Uniform Resource Locator (URL) for each block. The URL itself can carry a scheme, e.g., the string "http://" prepended to indicate that the protocol to be used for accessing the documented resource is HTTP; another example is "ftp://", indicating that the protocol to be used is FTP. In other systems, for example, the media blocks can be constructed "on the fly" by the server, in response to requests from clients that indicate the required portion of the media presentation. For example, in the case of HTTP with the scheme "http://", the execution of a request for such a URL yields a request response that contains specific data in the entire body of the request response; the implementation within the network of how this request response is generated can vary widely, depending on the implementation of the server handling the request. Typically, each block can be independently decodable. For example, in the case of video media, each block can begin with a "seek point". In some coding schemes, seek points are referred to as "random access points" or "RAPs", although not every RAP need be designated as a seek point. Similarly, in other coding schemes, a seek point begins at an "instantaneous decoding refresh" frame, or "IDR", in the case of H.264 video coding, although not every IDR need be designated as a seek point. A seek point is a position in the video (or other) media from which a decoder can begin decoding without requiring any data about earlier frames or samples; this is unlike a frame or sample that is encoded non-independently, e.g., as the difference between the current frame and the previous frame. A concern in such a system is the ability to begin playing out a stream, e.g., decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen while playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television receiver while playing the audio through a stereo system. A primary concern is minimizing the delay between when the user decides to view new content delivered as a stream and takes the action representing that decision, e.g., clicks on a link within a browser window or on the play button of a remote control device, and when the content starts being displayed on the user's screen, hereinafter called the "content zapping time". Each of these concerns can be addressed by elements of the enhanced system described herein. One example of content zapping is when a user is watching first content delivered via a first stream and decides to watch second content delivered via a second stream, taking action to begin watching the second content; the second stream can be sent from the same set of servers as the first stream or from a different set. Another example of content zapping is when a user is visiting a website and decides to start watching first content delivered via a first stream by clicking on a link within the browser window. Similarly, a user might decide to start playing the content not from the beginning but from some point within the stream; the user indicates to their client device to seek to that temporal position, and the user can expect the selected time to be rendered instantaneously. Minimizing the content zapping time is important for video viewing, in order to allow a high-quality, fast content-surfing experience as users search and sample a wide range of available content. Recently, it has become common practice to consider using forward error correction (FEC) codes to protect streaming media during transmission. When transmitting over a packet network, examples of which include the Internet and wireless networks such as those standardized by organizations such as 3GPP, 3GPP2 and DVB, the source stream is placed into packets as it is generated or becomes available, and the packets can be used to carry the source or content stream to receivers in the order in which it was generated or became available.
In a typical application of FEC codes to these kinds of scenarios, the encoder can use an FEC code to generate repair packets, which are then transmitted in addition to the original source packets containing the source stream. The repair packets have the property that, when source packet loss occurs, received repair packets can be used to recover the data contained in the lost source packets. Repair packets can be used to recover the content of source packets that were lost entirely, and, whether the repair packets themselves are received completely or only partially, they can also be used to recover from partial packet loss; in this way, completely or partially received repair packets can be used to recover completely or partially lost source packets. In still other instances, other types of corruption of the transmitted data can occur, e.g., the values of bits can be flipped, and repair packets can then be used to correct such corruption and provide as accurate a recovery of the source packets as possible. In other examples, the source stream is not necessarily sent in discrete packets but can instead be sent, for example, as a continuous bit-stream. There are many examples of FEC codes that can be used to protect a source stream. Reed-Solomon codes are well-known codes for error and erasure correction in communication systems. For erasure correction over, e.g., packet data networks, well-known efficient implementations of Reed-Solomon codes use Cauchy or Vandermonde matrices, as described in L. Rizzo, "Effective Erasure Codes for Reliable Computer Communication Protocols", Computer Communication Review 27(2):24-36 (April 1997) (hereinafter "Rizzo"), and in Bloemer, et al., "An XOR-Based Erasure-Resilient Coding Scheme", Technical Report TR-95-48, International Computer Science Institute, Berkeley, California (1995) (hereinafter "XOR-Reed-Solomon"), and elsewhere. Other examples of FEC codes include LDPC codes, chain reaction codes such as those described in Luby I, and multi-stage chain reaction codes such as those in Shokrollahi I. Examples of the FEC decoding process for variants of Reed-Solomon codes are described in Rizzo and in XOR-Reed-Solomon. In those examples, decoding can be applied once sufficient source and repair data packets have been received. The decoding process can be computationally intensive and, depending on the CPU resources available, can take a considerable amount of time to complete relative to the length of time spanned by the media in the block. The receiver can take this decoding time into account when calculating the delay required between the start of reception of the media stream and the playout of the media. This delay due to decoding is perceived by the user as a lag between the user's request for a particular media stream and the start of the playback; it is thus desirable to minimize this delay. In many applications, packets can be further subdivided into symbols on which the FEC process is applied.
A packet can contain one or more symbols (or less than one symbol, although symbols are usually not subdivided across groups of packets unless the error conditions among those groups of packets are known to be highly correlated). A symbol can have any size, but often the symbol size is at most equal to the packet size. Source symbols are the symbols that encode the data to be transmitted. Repair symbols are symbols generated, directly or indirectly, from the source symbols and transmitted in addition to them (i.e., the data to be transmitted is completely recoverable when all of the source symbols are available, even if no repair symbols are available). Some FEC codes are block based, in that the encoding operation depends on the symbols present in a block and can be independent of the symbols not present in that block. With block-based encoding, an FEC encoder can generate the repair symbols for a block from the source symbols in that block and then move on to the next block, without needing to refer to anything other than the source symbols of the block currently being encoded. In transmission, a source block comprising source symbols can be represented by an encoded block comprising encoded symbols (which can be some source symbols, some repair symbols, or both); owing to the presence of repair symbols, not every source symbol is required in every encoded block. For some FEC codes, notably Reed-Solomon codes, the encoding and decoding times can grow impractically as the number of encoded symbols per source block grows. Thus, in practice, and especially in the typical case where the Reed-Solomon encoding or decoding process is performed by custom hardware, there is often a practical upper bound on the total number of encoded symbols that can be generated per source block (255 is an approximate practical bound for some applications); e.g., the MPE-FEC process using a Reed-Solomon code, included as part of the DVB-H standard to protect streams against packet loss, is implemented in dedicated hardware within mobile phones and is limited to 255 total Reed-Solomon encoded symbols per source block. Since symbols are often required to be placed into separate packet payloads, this puts a practical upper bound on the maximum length of a source block to be encoded. For example, if a packet payload is limited to 1024 bytes or less and each packet carries one encoded symbol, then an encoded source block can be at most 255 kilobytes, which is, of course, an upper bound on the size of the source block itself as well.
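The bound just quoted follows from simple arithmetic, made explicit below with the example numbers from the text.

    # Arithmetic behind the source block size bound quoted above.
    MAX_ENCODED_SYMBOLS = 255   # practical per-block Reed-Solomon limit
    PACKET_PAYLOAD = 1024       # bytes; one encoded symbol per packet

    max_source_block = MAX_ENCODED_SYMBOLS * PACKET_PAYLOAD
    print(max_source_block)     # 261120 bytes, i.e. 255 kilobytes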
Other concerns, such as being able to decode source blocks quickly enough to keep up with the source streaming rate, minimizing the decoding latency introduced by FEC decoding, and using only a small fraction of the available CPU of the receiving device at any point during FEC decoding, are likewise addressed by the elements described herein. There is also a need to provide a robust and scalable streaming delivery solution that allows the components of the system to fail without adversely affecting the quality of the streams delivered to the receivers. A block-request streaming system needs to support changes to the structure or metadata of the presentation, e.g., changes in the number of available media encodings or in the parameters of the media encodings, such as bit rate, resolution, aspect ratio, audio or video codec, or codec parameters, as well as changes in the URLs or other metadata associated with the content files. Such changes can be required for a number of reasons, including the splicing together of content from different sources, such as advertising or different segments of a larger presentation, and modifications of URLs or other parameters that become necessary as a result of changes in the serving infrastructure, e.g., due to configuration changes, device failures, recovery from device failures, or other reasons. There exist methods in which a presentation is controlled by a continuously updated playlist file; since this file is continuously updated, at least some of the changes described above can be made within these updates. A disadvantage of such conventional methods is that the client devices must continually retrieve, also called "polling", the playlist file, placing load on the serving infrastructure, and that this file cannot be cached for longer than the update interval, making the task of the serving infrastructure much more difficult. This is addressed by elements described herein, such that updates of the kind described above can be provided without the need for continuous polling of the metadata file by clients. Another problem, especially in live services and typically known from broadcast distribution, is the lack of ability for the user to view content broadcast earlier than the time at which the user joined the program. Typically, local personal recording is not possible, because it would consume unnecessary local storage, because the client was not tuned to the program, or because it is prohibited by content protection rules. Network recording and time-shift viewing are preferable, but require an individual connection of the user to the server and a delivery protocol and infrastructure separate from the live service, resulting in duplicated infrastructure and significant server cost. This as well is addressed by elements described herein. System Overview One embodiment of the invention is described with reference to FIG. 1, which shows a simplified diagram of a block-request streaming system embodying the invention. In FIG. 1, a block-request streaming system 100 is illustrated, comprising block serving infrastructure ("BSI") 101, which in turn comprises a capture system 103 for capturing content 102, preparing that content and packaging it for service by an HTTP streaming server 104 by storing it into a content store 110 that is accessible to both the capture system 103 and the HTTP streaming server 104. As shown, the system 100 can also include an HTTP cache 106. In operation, a client 108, e.g., an HTTP streaming client, sends requests 112 to the HTTP streaming server 104 and receives responses 114 from the HTTP streaming server 104 or the HTTP cache 106. In each case, the elements shown in FIG. 1 can be implemented, at least in part, in software, therein comprising program code that is executed on a processor or other electronic device. The content might comprise movies, audio, 2D planar video, 3D video, other types of video, images, timed text, timed metadata, and the like. Some content might involve data that is to be presented or consumed in a timed manner, such as data for presenting auxiliary information (e.g., broadcaster identification, advertising, stock quotes, Flash sequences, etc.) together with other media being played out. Hybrid presentations that combine other media and/or go beyond merely audio and video might also be used.
As illustrated in FIG. 2, media blocks can be stored within a block serving infrastructure 101(1), which could be, for example, an HTTP server, a content delivery network device, an HTTP proxy, an FTP proxy or server, or some other media server or system. The block serving infrastructure 101(1) is connected to a network 122, which could be, for example, an Internet Protocol ("IP") network such as the Internet. A block-request streaming system client is shown having six functional components, namely a block selector 123, provided with the metadata described above and performing the function of selecting blocks, or partial blocks, to be requested from among the plurality of available blocks indicated by the metadata; a block requestor 124, which receives request instructions from the block selector 123 and performs the operations needed to send a request for a block, a portion of a block, or multiple blocks to the block serving infrastructure 101(1) over the network 122 and to receive the data comprising the block in return; as well as a block buffer 125, a buffer monitor 126, a media decoder 127, and one or more media converters 128 that facilitate media consumption. The block data received by the block requestor 124 is passed, for temporary storage, to the block buffer 125, which stores the media data. Alternatively, the received block data can be stored directly into the block buffer 125, as illustrated. The media decoder 127 transforms media data provided by the block buffer 125 as needed in order to provide suitable input to the media converters 128, which render the media in a form suitable for the user or for other consumption. Examples of media converters include visual display devices, such as those found in mobile phones, computer systems or televisions, and can also include audio rendering devices such as speakers or headphones. An example of a media decoder would be a function that transforms data of the type described in the H.264 video coding standard into an analog or digital representation of video frames, e.g., a YUV-format pixel map with associated presentation timestamps for each frame or sample. The buffer monitor 126 receives information concerning the contents of the block buffer 125 and, based on this and possibly other information, provides input to the block selector 123 that is used for determining the selection of blocks to request, as described herein. In the terminology used herein, each block has a "playout time" or "duration" that represents the amount of time it would take for the receiver to play the media included in that block at normal speed. In some cases, the playout of the media within a block can depend on having already received data from one or more previous blocks. In rare cases, the playout of some of the media in a block can depend on a subsequent block, in which case the playout time for the block is defined with respect to the media that can be played out within the block without reference to the subsequent block, and the playout time for the subsequent block is increased by the playout time of the media within this block that can be played out only after the subsequent block has been received.
Since the inclusion in a block of media that depends on subsequent blocks is a rare case, the remainder of this disclosure assumes that the media in one block does not depend on subsequent blocks; note, however, that one skilled in the art will recognize that this variant can easily be added to the embodiments described below. The receiver can have controls such as "pause", "fast forward", "rewind", etc., that can result in the blocks being consumed by the playout at a different rate; however, whenever the receiver can obtain and decode each consecutive sequence of blocks in an aggregate time equal to or less than their aggregate playout time, excluding the last block of the sequence, the receiver is able to present the media to the user without stalling. In some descriptions herein, a particular position in the media stream is referred to as a particular "time" in the media, corresponding to the time that would have elapsed between the beginning of the media playout and the moment at which the particular position in the video stream is reached. Time or position within a media stream is a conventional concept. For example, where a video stream comprises 24 frames per second, the first frame could be said to have a position or time of t = 0.0 seconds and the 241st frame could be said to have a position or time of t = 10.0 seconds. Naturally, in a frame-based video stream, position or time need not be continuous, since each of the bits in the stream from the first bit of the 241st frame up to just before the first bit of the 242nd frame could all have the same time value. Adopting the above terminology, a block-request streaming system (BRSS) comprises one or more clients making requests to one or more content servers (e.g., HTTP servers, FTP servers, etc.). A capture system comprises one or more capture processors, wherein a capture processor receives content (in real time or otherwise), processes the content for use by the BRSS, and stores it, possibly together with metadata generated by the capture processor, into storage accessible to the content servers. The BRSS can also contain content caches that coordinate with the content servers. The content servers and content caches can be conventional HTTP servers and HTTP caches that receive requests for files or segments in the form of HTTP requests including a URL, and that can also include a byte range in order to request less than the entirety of the file or segment indicated by the URL. The clients can include a conventional HTTP client that makes requests of HTTP servers and handles the responses to those requests, with the HTTP client being driven by a novel client system that formulates the requests, passes them to the HTTP client, obtains the responses from the HTTP client, and processes them (or stores, transforms them, etc.) in order to provide them to a presentation player for playout by the client device. Typically, the client system does not know in advance what media is going to be needed (since the needs depend on user input, changes in user input, etc.), and so the system is said to be a "streaming" system in that the media is "consumed" as soon as it is received, or shortly thereafter. As a result, response delays and bandwidth constraints can cause delays in a presentation, e.g., causing a pause in a presentation when the stream catches up to the point the user has reached in consuming the presentation. In order to provide a presentation that is perceived to be of good quality, a number of details can be implemented in the BRSS, either at the client end, at the capture end, or both.
In some cases, the details that are implemented are done in consideration of, and to deal with, the client-server interface at the network. In some embodiments, both the client system and the capture system are aware of the enhancement, whereas in other embodiments, only one side is aware of the enhancement. In some such cases, the entire system benefits from the enhancement even though one side is not aware of it, while in others, the benefit only accrues if both sides are aware of it, but even when one side is not aware, the system still operates without failing. As illustrated in FIG. 3, the capture system can be implemented as a combination of hardware and software components, according to various embodiments. The capture system can comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system can be realized as a specific machine in the form of a computer. The system can be a server computer, a personal computer (PC), or any system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The capture system can include a capture processor 302 (e.g., a central processing unit (CPU)), a memory 304 that can store program code during execution, and disk storage 306, all of which communicate with each other via a bus 300. The system can further include an image display 308 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The system can also include an alphanumeric input device 310 (e.g., a keyboard), and a network interface device 312 for receiving content from a content source and delivering content to a content store. The disk storage unit 306 can include a machine-readable medium on which can be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions can also reside, completely or at least partially, within the memory 304 and/or within the capture processor 302 during execution thereof by the system, with the memory 304 and the capture processor 302 then also constituting machine-readable media. As illustrated in FIG. 4, the client system can be implemented as a combination of hardware and software components, according to various embodiments. The client system can comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system can be realized as a specific machine in the form of a computer. The system can be a server computer, a personal computer (PC), or any system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The client system can include a client processor 402 (e.g., a central processing unit (CPU)), a memory 404 that can store program code during execution, and disk storage 406, all of which communicate with each other via a bus 400.
The system can further include an image display 408 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The system can also include an alphanumeric input device 410 (e.g., a keyboard), and a network interface device 412 for sending requests and receiving responses. The disk storage unit 406 can include a machine-readable medium on which can be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions can also reside, completely or at least partially, within the memory 404 and/or within the client processor 402 during execution thereof by the system, with the memory 404 and the client processor 402 then also constituting machine-readable media. Use of the 3GPP File Format The 3GPP file format, or any other file format based on the ISO base media file format, e.g., the MP4 file format or the 3GPP2 file format, can be used as the container format for HTTP streaming, with the following features. A segment index can be included in each segment to signal time offsets and byte ranges, so that the client can download the appropriate pieces of the files or the media segments as needed. The global presentation timing of the entire media presentation and the local timing within each 3GP file or media segment can be accurately aligned. The tracks within one 3GP file or media segment can be accurately aligned. Tracks across multiple representations can also be aligned by assigning each of them to the global timeline, such that switching between representations can be seamless and the joint presentation of media components in different representations can be synchronized. The file format can contain a profile for adaptive streaming, with the following properties. All movie data can be contained in movie fragments, with the "moov" box containing no sample information. The audio and video sample data can be interleaved, with requirements similar to those of the progressive download profile specified in TS 26.244. The "moov" box can be placed at the beginning of the file, followed by fragment offset data, also called a segment index, containing time and byte-offset information for each fragment, or at least a subset of the fragments, of the containing segment. It can also be possible for the Media Presentation Description to refer to files that follow the existing progressive download profile. In this case, the client can use the Media Presentation Description simply to select the appropriate alternative version from among multiple available versions. Clients can also use HTTP partial GET requests against files compliant with the progressive download profile to request subsets of each alternative version and in that way implement a less efficient form of adaptive streaming. In this case, the different representations containing the media in the progressive download profile can still adhere to a common global timeline, to enable seamless switching across representations.
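To make the container structure concrete, the following sketch walks the top-level boxes of a file or segment in the ISO base media file format family; a client could use such a scan to locate the "moov" box and the segment index that follows it. The box header layout used here (32-bit big-endian size, then a 4-byte type, with the special size values 1 and 0) is the standard box header of the ISO base media file format; the variable segment_bytes stands for an assumed, already-downloaded buffer.

    # Sketch (Python): iterate the top-level boxes ('moov', 'moof',
    # 'mdat', a segment index box, ...) of a segment held in memory.
    import struct

    def walk_boxes(data: bytes):
        pos = 0
        while pos + 8 <= len(data):
            size, = struct.unpack_from(">I", data, pos)   # 32-bit big-endian size
            box_type = data[pos + 4:pos + 8].decode("ascii", "replace")
            if size == 1:                                 # 64-bit size follows the type
                size, = struct.unpack_from(">Q", data, pos + 8)
            elif size == 0:                               # box extends to end of file
                size = len(data) - pos
            if size < 8:                                  # malformed box; stop
                break
            yield box_type, pos, size
            pos += size

    # for box_type, offset, size in walk_boxes(segment_bytes):
    #     print(box_type, offset, size)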
SUMMARY OF EXTENDED METHODS The following sections describe methods for an improved block-request streaming system. It should be understood that some of these improvements can be used with or without others of the improvements, depending on the needs of the application. In the general operation, a receiver makes requests to a server or other transmitter for specific blocks or portions of blocks of data. Files, also called segments, can contain multiple blocks and are associated with one representation of a media presentation. Preferably, indexing information, also called "segment indexing" or a "segment map", is generated that provides a mapping from playout or decoding times to byte offsets of the corresponding blocks or fragments within a segment. This segment indexing can typically be included within the segment, at the beginning of the segment (at least some of the segment map comes first), and is often small. The segment index can also be provided in a separate index segment or file. Especially in the cases where the segment index is contained within the segment, the receiver can quickly download some or all of this segment map and subsequently use it to determine the mapping between time offsets and the corresponding byte positions within the file of the fragments associated with those time offsets. The receiver can use a byte offset to request data from the fragments associated with particular time offsets, without having to download all of the data associated with other fragments not associated with the time offsets of interest. In this way, the segment map or segment indexing can greatly improve the capability of the receiver to directly access the portions of the segment that are relevant to the current time offsets of interest, with benefits including improved content zapping times, the ability to quickly change from one representation to another as network conditions vary, and reduced wasting of network resources on downloading media that is never played out at the receiver. In case switching from one representation (referred to herein as the "switch-from" representation) to another representation (referred to herein as the "switch-to" representation) is considered, the segment index can also be used to identify the start time of a random access point in the switch-to representation, so as to identify the amount of data to be requested in the switch-from representation, ensuring that the media in the switch-from representation is downloaded up to a presentation time such that the playout of the switch-to representation can start seamlessly from the random access point. These blocks represent segments of the video media or other media that the requesting receiver needs in order to generate the output for the user of the receiver. The receiver of the media can be a client device, such as where the receiver receives content from a server that transmits the content; examples include set-top boxes, computers, game consoles, specially equipped televisions, handheld devices, specially equipped mobile phones, and other client receivers. Many advanced buffer management methods are described herein. For example, a buffer management method enables clients to request the blocks of the highest media quality that can be received in time to be played out continuously. A variable block size feature improves compression efficiency. The capability of having multiple connections for transmitting blocks to the requesting device, while limiting the frequency of the requests, provides improved transmission performance. Partially received blocks of data can be used to continue the media presentation. A connection can be reused for multiple blocks without having to commit the connection at the outset to a particular set of blocks.
The consistency with which servers are selected from among multiple possible servers by multiple clients is improved, which reduces the frequency of duplicated content within nearby servers and improves the probability that a server contains an entire file. Clients can request media blocks on the basis of metadata (e.g., the available media encodings) that is embedded in the URLs of the files containing the media blocks. A system can provide for the calculation and minimization of the amount of buffering time required before a playout of the content can begin without incurring subsequent pauses in the media playout. The available bandwidth can be shared among multiple media blocks and adjusted as the playout time of each block approaches, so that, if needed, a greater proportion of the available bandwidth can be allocated towards the blocks with the nearest playout times. HTTP streaming can employ metadata. Presentation-level metadata includes, for example, the stream duration, the available encodings (bit rates, codecs, spatial resolutions, frame rates, languages, media types), pointers to stream metadata for each encoding, and content protection (digital rights management (DRM)) information. The stream metadata can be the URLs for the segment files. Segment metadata can include byte-range-versus-time information for requests within a segment and identification of random access points (RAPs) or other seek points, where some or all of this information can be part of the segment indexing or segment map. The streams can comprise multiple encodings of the same content. Each encoding can then be broken into segments, where each segment corresponds to a storage unit or file. In the case of HTTP, a segment is typically a resource that can be referenced by a URL, and the request of such a URL results in the return of the segment as the entire body of the request response message. Segments can comprise multiple groups of pictures (GoPs). Each GoP can in turn comprise multiple fragments, where the segment indexing provides time/byte-offset information for each fragment, i.e., the unit of indexing is a fragment. Fragments, or portions of fragments, can be requested through parallel TCP connections to increase throughput. This can mitigate the problems that arise when connections are shared, or are lost, because of congestion on links of the downlink, thereby increasing the overall speed and reliability of the delivery, which can substantially improve the speed and reliability of the content zapping time. Bandwidth can be traded against latency by over-requesting, although care should be taken to avoid making requests too far into the future, which can increase the risk of starvation. Multiple requests for segments on the same server can be pipelined (making the next request before the current request completes), to avoid repetitious TCP startup delays. Requests for consecutive segments can be aggregated into one request. Some CDNs prefer large files and can trigger a background fetch of an entire file from an origin server when first seeing a range request. Most CDNs will, however, serve range requests from the cache when the data is available. It can therefore be advantageous for some portion of the client requests to be for a whole segment file. These requests can later be cancelled if necessary. Valid switch points can be seek points in the target stream, in particular, for example, RAPs.
Different options (based on the beginning of the media or based on GoPs) are possible, e.g., a fixed GoP structure or the alignment of RAPs across the multiple streams. In one embodiment, segments and GoPs can be aligned across the streams of different rates. In this embodiment, the GoPs can be of variable size and can contain multiple fragments, but the fragments are not aligned between the streams of different rates. In some embodiments, file redundancy can be used to advantage. In these embodiments, an erasure code is applied to each segment to generate redundant versions of the data. Preferably, the source formatting is not changed as a consequence of the usage of FEC; rather, additional repair segments containing the FEC repair data are generated, for example as a dependent representation of the original representation, and are made available in an additional step in the capture system. The client, which is able to reconstruct a fragment using only the source data for that fragment, can request from the servers only the source data for the fragment within a segment. If the servers are unavailable or if the connection to the servers is slow, which can be determined either before or after the request for the source data, additional repair data can be requested for the fragment from the repair segment; this decreases the time needed to reliably deliver enough data to recover the fragment, possibly using FEC decoding on the combination of the received source data and repair data in order to recover the source data of the fragment. Furthermore, additional repair data can be requested to allow the recovery of a fragment if the fragment becomes urgent, i.e., as its playout time draws near, which increases the share of the data on the link devoted to that fragment but is more efficient than closing other connections on the link in order to free up bandwidth. This can also mitigate the risk of starvation from the usage of parallel connections. The fragment format could be a stored stream of Real-time Transport Protocol (RTP) packets, with the audio/video synchronization achieved through the Real-time Transport Control Protocol (RTCP). The segment format could also be a stored stream of MPEG-2 TS packets, with the audio/video synchronization achieved through the MPEG-2 TS internal timing. Use of Signaling and/or Block Creation to Make Streaming More Efficient A number of features can be used, or not, in a block-request streaming system to provide improved performance. Performance can pertain to the ability to play out a presentation without stalling, to obtain the media data within the bandwidth constraints, and/or to do so within limited processor resources at the client, the server and/or the capture system. Some of these features will now be described. Indexing of movie fragments within a segment: In order to formulate partial GET requests, the client can be informed of the byte offset and the start time, in decoding or presentation time, of all media components contained in the fragments within the file or segment, and also of which fragments begin with, or contain, a random access point (and are thus suitable for use as switch points between alternative representations); this information is often referred to as the segment indexing or segment map. The start time, in decoding or presentation time, can be expressed directly or can be expressed as a delta relative to a reference time. This time and byte-offset indexing information can require at least 8 bytes of data per movie fragment. As an example, for a two-hour movie, contained within a single file, with 500 ms movie fragments, this would amount in total to about 112 kilobytes of data.
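The figure of about 112 kilobytes, and the per-segment figure used shortly below, follow directly from the stated assumptions:

    # Index size arithmetic for the examples in the text: 8 bytes of
    # time/byte-offset data per 500 ms movie fragment.
    fragments_2h = (2 * 60 * 60) / 0.5    # 14,400 fragments in two hours
    index_2h = fragments_2h * 8           # 115,200 bytes, about 112 kilobytes
    index_1min = (60 / 0.5) * 8           # 960 bytes for a one-minute segment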
Downloading all of this data when starting a presentation could result in a significant additional startup delay. However, the time and byte-offset data can be encoded hierarchically, so that the client can quickly find a small chunk of time and offset data that is relevant to the point in the presentation at which it wishes to start. The information can also be distributed within a segment, such that each refinement of the segment index is located interleaved with the media data. Note that if the representation is segmented in time into multiple segments, the use of this hierarchical encoding might not be necessary, since the complete time and offset data for each segment might already be quite small. For example, if the segments in the example above are one minute long instead of two hours, the time/byte-offset indexing information amounts to around one kilobyte of data, which can typically fit within a single TCP/IP packet. Different options are possible for adding the fragment time and byte-offset data to a 3GPP file. First, the Movie Fragment Random Access Box ("MFRA") can be used for this purpose. The MFRA provides a table that can assist readers in finding random access points in a file using movie fragments. In support of this function, the MFRA incidentally contains the byte offsets of the boxes containing the random access points. The MFRA can be placed at or near the end of the file, but this is not necessarily the case. By scanning from the end of the file for the Movie Fragment Random Access Offset Box and using the size information within it, the beginning of the Movie Fragment Random Access Box can be located. However, placing the MFRA at the end for HTTP streaming typically requires at least three or four HTTP requests to access the desired data: at least one to request the MFRA from the end of the file, one to obtain the MFRA, and finally one to obtain the desired fragment in the file. Therefore, placing it at the beginning can be desirable, since then the MFRA can be downloaded together with the first media data in a single request. Also, using the MFRA for HTTP streaming can be inefficient, since none of the information in the "MFRA" is needed apart from the times and the moof_offsets, and specifying offsets instead of lengths can require more bits. Second, the Item Location Box ("ILOC") could be used. The "ILOC" provides a directory of metadata resources in this or other files, locating the file containing a given metadata resource, its offset within that file, and its length. For example, a system might integrate all externally referenced metadata resources into one file, readjusting the file offsets and the file references accordingly. However, the "ILOC" is intended for giving the location of metadata, so it could be difficult for this to coexist with real metadata. Last, and perhaps most suitable, is the specification of a new box, referred to as the Time Index Box ("TIDX"), dedicated to the purpose of providing the exact fragment times or durations and the byte offsets in an efficient manner. This is described in more detail in the next section. An alternative box with the same functionalities can be the Segment Index Box ("SIDX"). Herein, unless otherwise indicated, these two are interchangeable, since both boxes provide the capability of providing the exact fragment times or durations and the byte offsets in an efficient manner. The differences between the TIDX and the SIDX are provided below.
It should be apparent how to interchange the TIDX and SIDX boxes, since both boxes implement a segment index. Segment Indexing A segment has an identified start time and an identified number of bytes. Multiple fragments can be concatenated into a single segment, and clients can issue requests that identify the specific byte range within the segment corresponding to the required fragment or subset of a fragment. For example, when HTTP is used as the request protocol, the HTTP Range header can be used for this purpose. This approach requires that the client have access to a "segment index" of the segment that specifies the position within the segment of the different fragments. This "segment index" can be provided as part of the metadata. This approach has the result that far fewer files need to be created and managed compared to an approach in which every block is kept in a separate file. The management of the creation, transfer and storage of very many files (which could extend to thousands for, say, a presentation of one hour) can be complex and error prone, and so a reduction in the number of files represents an advantage. If a client only knows the desired start time of a smaller portion of a segment, it might request the entire file and then read the file through to determine the appropriate playout starting location. To improve bandwidth utilization, segments can include an index file as metadata, where the index file maps the time ranges to which the blocks correspond onto the byte ranges of the individual blocks; this is referred to as segment indexing or a segment map. This metadata can be formatted as XML data, or it can be binary, for example following the atom and box structure of the 3GPP file format. The indexing can be simple, wherein the time and byte ranges of each block are absolute relative to the beginning of the file, or it can be hierarchical, wherein some blocks are grouped into parent blocks (and these into grandparent blocks, and so on) and the time and byte ranges for a given block are expressed relative to the time and/or byte ranges of the parent block of that block. Example Indexing Map Structure In one embodiment, the original source data for one representation of a media stream is contained in one or more media files referred to herein as "media segments", wherein each media segment contains the media data used for playing back a continuous time span of the media, e.g., 5 minutes of the media playback. FIG. 6 shows an example of the overall structure of a media segment. Within each segment, either at the beginning or distributed throughout the source segment, there can also be indexing information comprising a time/byte-offset segment map. The time/byte-offset segment map in one embodiment can be a list of time/byte-offset pairs (T(0), B(0)), (T(1), B(1)), ..., (T(i), B(i)), ..., (T(n), B(n)), wherein T(i-1) represents the start time within the segment for the playback of the i-th fragment of the media, relative to the initial start time of the media among all of the media segments, T(i) represents the end time of the i-th fragment (and thus the start time of the next fragment), the byte offset B(i-1) is the corresponding byte index, relative to the beginning of the source segment, of the beginning of the data within this source segment at which the i-th fragment of the media begins, and B(i) is the corresponding byte index of the end of the i-th fragment (and thus the index of the first byte of the next fragment).
If the segment contains multiple media components, then T(i) and B(i) can be provided for each component of the segment in an absolute manner, or they can be expressed relative to another media component that serves as the reference media component. In this embodiment, the number of fragments in the source segment is n, where n can vary from segment to segment. In another embodiment, the time offset in the segment index for each fragment can be determined from the absolute start time of the first fragment together with the duration of each fragment. In this case, the segment index can document the start time of the first fragment and the durations of all of the fragments included in the segment. The segment index can also document only a subset of the fragments. In that case, the segment index documents the duration of subsegments, where a subsegment is defined to be one or more consecutive fragments, ending at the end of the containing segment or at the beginning of the next subsegment. For each fragment, there can also be a value indicating whether the fragment starts at, or contains, a seek point, i.e., a point at which no media after that point depends on any media before that point, such that the media from that fragment onward can be played out independently of the preceding fragments. Seek points are, in general, points in the media at which playout can begin independently of all previous media. FIG. 6 also shows a simple example of a possible segment indexing for a source segment. In that example, the time offset values are in units of milliseconds, so the first fragment of this source segment begins 20 seconds from the beginning of the media, and the first fragment has a playout time of 485 milliseconds. The byte offset of the beginning of the first fragment is 0, and the byte offset of the end of the first fragment/beginning of the second fragment is 50,245, so the first fragment is 50,245 bytes in size. If the fragment or subsegment does not begin with a random access point, but a random access point is contained within the fragment or subsegment, then the decoding-time or presentation-time difference between the start time and the actual RAP time can be given. This enables the client, in case of switching to this media segment, to know precisely the time up to which the switch-from representation needs to be presented. In addition to, or instead of, simple or hierarchical indexing, daisy-chained indexing and/or hybrid indexing could be used. Since the sample durations of the different tracks might not be the same (e.g., video samples might be displayed for 33 ms while audio samples might last 80 ms), the different tracks of a movie fragment might not begin and end at exactly the same time, i.e., the audio might begin slightly before or slightly after the video, with the opposite holding for the preceding fragment, in compensation. To avoid ambiguity, the timestamps specified in the time and byte-offset data can be specified relative to one particular track, and this can be the same track for each representation. Usually this would be the video track. This allows the client to identify exactly the next video frame when switching representations. Care can be taken during the presentation to maintain a strict relationship between the track timescales and the presentation time, to ensure a smooth playout and the maintenance of audio/video synchronization despite the issue above. FIG. 7 illustrates some examples, such as a simple index 700 and a hierarchical index 702.
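As an illustration of how a client might hold and query such a time/byte-offset segment map, consider the following sketch; the class and method names are inventions for this example, and the numeric values are those of the first fragment of FIG. 6.

    # Sketch (Python) of a time/byte-offset segment map
    # (T(0), B(0)), ..., (T(n), B(n)), with a lookup from a desired
    # presentation time to the byte range of the containing fragment.
    from bisect import bisect_right
    from dataclasses import dataclass

    @dataclass
    class SegmentMap:
        times: list[float]    # T(0)..T(n), in seconds
        offsets: list[int]    # B(0)..B(n), in bytes

        def byte_range_for(self, t: float) -> tuple[int, int]:
            i = bisect_right(self.times, t)   # fragment i spans [T(i-1), T(i))
            if i == 0 or i >= len(self.times):
                raise ValueError("time not within this segment")
            return self.offsets[i - 1], self.offsets[i] - 1

    # First fragment of FIG. 6: begins at t = 20 s, plays for 485 ms,
    # and occupies bytes 0 through 50,244 (50,245 bytes).
    smap = SegmentMap(times=[20.0, 20.485], offsets=[0, 50245])
    # smap.byte_range_for(20.1) -> (0, 50244)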
Two specific examples of boxes that contain a segment map are provided below, one referred to as the Time Index Box ('TIDX') and one referred to as the Segment Index Box ('SIDX'). The definitions follow the box structure according to the ISO base media file format. Other designs for such boxes, defining similar syntax and with the same semantics and functionality, should be apparent to the reader. Time Index Box Definition Box Type: 'tidx' Container: File Mandatory: No Quantity: Zero or more. The Time Index Box can provide a set of time and byte-offset indices that associate certain regions of the file with certain time intervals of the presentation. The Time Index Box can include a targettype field, which indicates the type of the referenced data. For example, a Time Index Box with the target type "moof" provides an index of the media fragments contained in the file, in terms of both time and byte offsets. A Time Index Box whose target type is a Time Index Box can be used to construct a hierarchical time index, allowing users of the file to quickly navigate to the required portion of the index. The segment index can, for example, contain the following syntax:

    aligned(8) class TimeIndexBox extends FullBox('frai') {
        unsigned int(32) targettype;
        unsigned int(32) time_reference_track_ID;
        unsigned int(32) number_of_elements;
        unsigned int(64) first_element_offset;
        unsigned int(64) first_element_time;
        for (i = 1; i <= number_of_elements; i++) {
            bit(1)           random_access_flag;
            unsigned int(31) length;
            unsigned int(32) deltaT;
        }
    }

Semantics targettype: is the type of the box data referenced by this Time Index Box. This can be either a Movie Fragment Header ("moof") or a Time Index Box ("tidx"). time_reference_track_id: indicates the track with respect to which the time offsets in this index are specified. number_of_elements: the number of elements indexed by this Time Index Box. first_element_offset: the byte offset, from the beginning of the file, of the first indexed element. first_element_time: the start time of the first indexed element, using the timescale specified in the Media Header box of the track identified by time_reference_track_id. random_access_flag: one if the start time of the element is a random access point; zero otherwise. length: the length, in bytes, of the indexed element. deltaT: the difference, in the timescale specified in the Media Header box of the track identified by time_reference_track_id, between the start time of this element and the start time of the next element.
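A client-side reading of the TIDX payload defined above could look like the following sketch (big-endian fields, per the syntax; the box size/type and the FullBox version/flags are assumed to have been consumed already, and the function name is an invention for this example):

    # Sketch (Python): parse the fields of a Time Index Box payload.
    import struct

    def parse_tidx_payload(buf: bytes):
        targettype, time_reference_track_id, number_of_elements = \
            struct.unpack_from(">III", buf, 0)
        first_element_offset, first_element_time = struct.unpack_from(">QQ", buf, 12)
        pos, elements = 28, []
        for _ in range(number_of_elements):
            word, delta_t = struct.unpack_from(">II", buf, pos)
            random_access_flag = word >> 31       # top bit of the 32-bit word
            length = word & 0x7FFFFFFF            # remaining 31 bits
            elements.append((random_access_flag, length, delta_t))
            pos += 8
        return (targettype, time_reference_track_id,
                first_element_offset, first_element_time, elements)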
Segment Index Box The Segment Index Box ('sidx') provides a compact index of the movie fragments and other Segment Index Boxes within a segment. There are two loop structures in the Segment Index Box. The first loop documents the first sample of the subsegment, that is, the sample in the first movie fragment referenced by the second loop. The second loop provides the index of the subsegment. The container for the 'sidx' box is the file or the segment directly. Syntax

    aligned(8) class SegmentIndexBox extends FullBox('sidx', version, 0) {
        unsigned int(32) reference_track_ID;
        unsigned int(16) track_count;
        unsigned int(16) reference_count;
        for (i = 1; i <= track_count; i++) {
            unsigned int(32) track_ID;
            if (version == 0) {
                unsigned int(32) decoding_time;
            } else {
                unsigned int(64) decoding_time;
            }
        }
        for (i = 1; i <= reference_count; i++) {
            bit(1)           reference_type;
            unsigned int(31) reference_offset;
            unsigned int(32) subsegment_duration;
            bit(1)           contains_RAP;
            unsigned int(31) RAP_delta_time;
        }
    }

Semantics reference_track_ID: provides the track_ID of the reference track. track_count: the number of tracks indexed in the following loop (one or more). reference_count: the number of elements indexed by the second loop (one or more). track_ID: the ID of a track for which a track fragment is included in the first movie fragment identified by this index; exactly one track_ID in this loop is equal to the reference_track_ID. decoding_time: the decoding time for the first sample of the track identified by track_ID in the movie fragment referenced by the first item in the second loop, expressed in the timescale of the track (as documented in the timescale field of the Media Header box of the track). reference_type: when set to 0, indicates that the reference is to a movie fragment ('moof') box; when set to 1, indicates that the reference is to a Segment Index ('sidx') box. reference_offset: the distance, in bytes, from the first byte following the containing Segment Index Box to the first byte of the referenced box. subsegment_duration: when the reference is to a Segment Index Box, this field carries the sum of the subsegment_duration fields of the second loop of that box; when the reference is to a movie fragment, this field carries the sum of the sample durations of the samples of the reference track within the indicated movie fragment and the subsequent movie fragments, up to either the first movie fragment documented by the next entry of the loop or the end of the subsegment, whichever is earlier; the duration is expressed in the timescale of the track (as documented in the timescale field of the Media Header box of the track). contains_RAP: when a movie fragment is referenced, this bit is 1 if the track fragment within that movie fragment for the track whose track_ID is equal to the reference_track_ID contains at least one random access point, and otherwise this bit is set to 0; when a segment index is referenced, this bit is set to 1 only if any of the references in that segment index have this bit set to 1, and 0 otherwise. RAP_delta_time: if contains_RAP is 1, provides the presentation (composition) time of a random access point (RAP); reserved with the value 0 if contains_RAP is 0. The time is expressed as the difference between the decoding time of the first sample of the subsegment documented by this entry and the presentation (composition) time of the random access point in the track whose track_ID is equal to the reference_track_ID.
Differences Between TIDX and SIDX The SIDX and the TIDX provide the same functionality with respect to indexing. The first loop of the SIDX provides, in addition, the global timing for the first movie fragment, although the global timing could equally be contained within the movie fragment itself, either in absolute terms or relative to the reference track. The second loop of the SIDX implements the functionality of the TIDX. Specifically, the SIDX permits, by means of the reference_type, a mixture of targets for the reference of each index, whereas the TIDX references either only TIDX or only MOOF. The number_of_elements in the TIDX corresponds to the reference_count in the SIDX, the time_reference_track_id in the TIDX corresponds to the reference_track_ID in the SIDX, the first_element_offset in the TIDX corresponds to the reference_offset in the first entry of the second loop, the first_element_time in the TIDX corresponds to the decoding_time of the reference track in the first loop, and the random_access_flag in the TIDX corresponds to the contains_RAP in the SIDX, with the additional freedom in the SIDX that the RAP need not necessarily be placed at the beginning of the fragment, therefore requiring the RAP_delta_time; the length in the TIDX corresponds to the reference_offset in the SIDX, and, finally, the deltaT in the TIDX corresponds to the subsegment_duration in the SIDX. Therefore, the functionalities of the two boxes are equivalent. Variable Block Sizing and Sub-GoP Blocks For video media, the relationship between the video encoding structure and the block structure used for the requests can be important. For example, if each block begins with a seek point, such as a random access point ("RAP"), and each block represents an equal period of video time, then the positioning of at least some of the seek points in the video media is fixed, and seek points will occur at regular intervals within the video encoding. As is well known to those skilled in the art of video encoding, compression efficiency can be improved if the seek points are placed according to the relationships between video frames, and in particular if they are placed at frames that have little in common with the previous frames. The requirement that blocks represent equal amounts of time thus places a restriction on the video encoding, such that the compression can be sub-optimal. It is desirable to allow the positions of the seek points within a video presentation to be chosen by the video encoding system, rather than to require seek points at fixed positions. Allowing the video encoding system to choose the seek points results in improved video compression, and hence a higher quality of video media can be provided using a given available bandwidth, resulting in an improved user experience. Current block-request streaming systems can require that all blocks be of the same duration (in video time) and that each block begin with a seek point, and this is a disadvantage of existing systems. A novel block-request streaming system providing advantages over the above is now described. In one embodiment, the video encoding process of a first version of the video component can be configured to choose the positions of the seek points so as to optimize the compression efficiency, subject to the requirement that there be a maximum on the duration between seek points. This latter requirement does restrict the choice of the seek points by the encoding process and thus reduces the compression efficiency. However, the reduction in compression efficiency is small compared with that suffered if regular fixed positions were required for the seek points, provided the maximum on the duration between seek points is not too small (e.g., greater than about one second).
Furthermore, if the maximum on the duration between seek points is a few seconds, then the reduction in compression efficiency compared with completely free positioning of the seek points is generally very small. In many embodiments, including this embodiment, some RAPs might not be seek points; e.g., there can be a frame that is a RAP lying between two consecutive seek points and that is not chosen as a seek point, because the RAP is too close in time to the surrounding seek points, or because the amount of media data between the seek point preceding or following the RAP and the RAP itself is too small. The positions of the seek points within all of the other versions of the media presentation can be constrained to be the same as those in the first (e.g., the highest media data rate) version. This does reduce the compression efficiency of these other versions, compared with allowing the encoder free choice of the seek points. The use of seek points typically requires that a frame be independently decodable, which generally results in a low compression efficiency for that frame. Frames that are not required to be independently decodable can be encoded with reference to data in other frames, which generally improves the compression efficiency for such a frame by an amount that depends on the amount of commonality between the frame to be encoded and the reference frames. An efficient choice of seek point positioning preferentially chooses as seek point frames those frames with little commonality with the previous frames, and thereby minimizes the compression efficiency penalty incurred by encoding these frames in an independently decodable manner. However, since the original content is the same, the level of commonality between a frame and the potential reference frames is highly correlated across the different representations of the content. As a result, constraining the seek points in the other variants to be at the same positions as the seek points in the first variant does not make a large difference to the compression efficiency. The seek point structure is preferably used to determine the block structure. Preferably, each seek point determines the beginning of a block, and there can be one or more blocks encompassing the data between two consecutive seek points. Since the durations between seek points are not fixed, for the purpose of encoding with good compression, not all blocks are required to have the same playout duration. In some implementations, the blocks are aligned between the versions of the content, i.e., if there is a block spanning a specific group of frames in one version of the content, then there is a block spanning the same group of frames in another version of the content. The blocks of a given version of the content do not overlap, and every frame of the content is contained within exactly one block of each version. An enabling feature that permits the efficient use of variable durations between the seek points, and thereby of variable-duration GoPs, is the segment indexing or segment map that can be included within a segment or otherwise provided to the client, i.e., the metadata associated with this segment in this representation that can be provided, comprising the start time and the duration of each block of the presentation. The client can use this segment indexing data when determining the block at which to start the presentation, in the case where the user has requested that the presentation start at a particular point within a segment.
If this metadata is not provided, the presentation can start only at the beginning of the content, or at a random or approximate point close to the desired point (for example, by dividing the requested start point (in time) by the average block duration to obtain the index of the starting block).

In one embodiment, each block can be provided as a separate file. In other embodiments, multiple consecutive blocks can be combined into a single file to form a segment. In this second embodiment, metadata is provided for each version comprising the start time and duration of each block and the byte offset within the file at which the block begins. This metadata can be provided in response to an initial protocol request, i.e., available separately from the segment or file, or it can be contained within the same file or segment as the blocks themselves, for example at the beginning of the file. As will be clear to those skilled in the art, this metadata can be encoded in a compressed form, for example gzip or delta encoding, or in a binary form, to reduce the network resources needed to transport the metadata to the client.

FIG. 6 shows an example of segment indexing in which the blocks are of variable size and the scope of a block is a partial GoP, i.e., a partial amount of the media data between one RAP and the next. In this example, a seek point is indicated by the RAP indicator: a RAP indicator value of 1 indicates that the block starts with, or contains, a RAP or seek point, and a RAP indicator of 0 indicates that the block contains no RAP or seek point. In this example, the first three blocks, i.e., bytes 0 through 157,033, comprise the first GoP, which has a presentation duration of 1.623 seconds, running from 20 seconds into the content up to 21.623 seconds. In this example, the first of the three blocks comprises 0.485 seconds of presentation time and the first 50,245 bytes of media data in the segment. In this example, blocks 4, 5 and 6 comprise the second GoP, blocks 7 and 8 comprise the third GoP, and blocks 9, 10 and 11 comprise the fourth GoP. Note that the media data may contain other RAPs that are not designated as seek points and are therefore not signaled as RAPs in the segment map.

Referring again to FIG. 6, if the client or receiver wants to access the content starting at a time offset of approximately 22 seconds into the media presentation, the client can first use other information, such as the MPD described in more detail later, to determine that the relevant media data is within this segment. The client can download the first portion of the segment to obtain the segment indexing, which in this case is only a few bytes, for example using an HTTP byte-range request. Using the segment indexing, the client can determine that the first block it should download is the latest block whose time offset is at most 22 seconds and that starts with a RAP, i.e., a seek point. In this example, although block 5 has a time offset of less than 22 seconds, i.e., a time offset of 21.965 seconds, the segment indexing indicates that block 5 does not start with a RAP; so instead, based on the segment indexing, the client selects block 4 for download, since its time offset is at most 22 seconds, i.e., 21.623 seconds, and it starts with a RAP.
Thus, based on the segment indexing, the client issues an HTTP range request starting at byte offset 157,034.

If segment indexing were not available, the client might have to download all of the preceding 157,034 bytes of data before downloading this data, leading to a much longer startup time, or channel zapping time, and to wasted downloading of data that is not useful. Alternatively, if segment indexing were not available, the client could approximate where the desired data begins within the segment, but the approximation might be poor, causing it to miss the appropriate time and requiring it to go back, which again increases the startup delay.

Generally, each block encloses a portion of the media data that can be played out by the media player together with the preceding blocks. Therefore, signaling of the blocking structure and of the segment indexing to the client, whether contained within the segment or provided to the client through other means, can significantly improve the client's ability to provide fast channel zapping and seamless playout in the face of network variation and disruption. The support of variable-duration blocks, and of blocks that contain only a portion of a GoP, enabled by segment indexing, can significantly improve the streaming experience. For example, referring again to FIG. 6 and the example above in which the client wants to start playout at approximately 22 seconds into the presentation, the client can request the data of block 4 through one or more requests and then feed it into the media player as soon as it is available, to begin playback. Thus, in this example, playout begins as soon as the 42,011 bytes of block 4 are received at the client, enabling a fast channel zapping time. If instead the client had to request the entire GoP before playout could begin, the channel zapping time would be longer, since this amounts to 144,211 bytes of data.

In other embodiments, RAPs or seek points can also occur in the middle of a block, and the segment indexing can include data indicating where within the block or fragment that RAP or seek point is located. In other embodiments, the time offset can be the decoding time of the first frame in the block instead of the presentation time of the first frame in the block.
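The corresponding request can be a standard HTTP byte-range request. A minimal sketch using Python's standard library, under the assumption of a hypothetical segment URL and the BlockEntry structure sketched earlier:

    import urllib.request

    def open_from_block(segment_url, entry):
        # Open-ended range request starting at the chosen block, so that
        # playout can begin as soon as the first block arrives while the
        # remainder of the segment continues to stream in.  The server
        # is expected to answer with 206 Partial Content.
        request = urllib.request.Request(
            segment_url,
            headers={"Range": "bytes=%d-" % entry.byte_offset})
        return urllib.request.urlopen(request)

For the FIG. 6 example, the header sent would be "Range: bytes=157034-", so the response begins with block 4.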
FIGS. 8(a) and 8(b) illustrate examples of variable block sizing with and without an aligned seek point structure across versions or representations: FIG. 8(a) illustrates variable block sizing with aligned seek points across the versions of a media stream, and FIG. 8(b) illustrates variable block sizing with unaligned seek points across the versions of a media stream.

Time is shown at the top, in seconds, and the blocks and seek points of two segments for the two representations are shown from left to right in terms of their timing on this timeline; the length of each block shown is thus proportional to its playout time, not to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have the same time offsets for the seek points, but potentially different numbers of blocks or fragments between seek points, and different byte offsets to the blocks owing to the different amounts of media data in each block. In this example, if the client wants to switch from representation 1 to representation 2 at a presentation time of approximately 23 seconds, the client can request up through block 1.2 in the segment for representation 1 and start requesting the segment for representation 2 from block 2.2, so that the switch occurs at the presentation time coinciding with seek point 2.2 in representation 2, which is aligned with seek point 1.2 in representation 1.

As should be clear from the above, the described block-request streaming system does not constrain the video encoding to place seek points at specific positions within the content, which mitigates one of the problems of existing systems.

In the embodiments described above, the seek points for the various representations of the same content presentation are aligned. However, in many cases it is preferable to relax this alignment requirement. For example, encoding tools that do not have the ability to generate seek-point-aligned representations are sometimes used to generate the representations. As another example, the content presentation can be encoded into the different representations independently, without seek point alignment between the different representations. As another example, a representation can contain more seek points because it has a lower rate and will more commonly be switched to, or because it supports trick modes such as fast forward, rewind, or fast seeking. It is therefore desirable to provide methods that enable a block-request streaming system to handle unaligned seek points across the representations of a content presentation efficiently and seamlessly.

In this embodiment, the positions of seek points across representations may not be aligned. Blocks are constructed such that a new block starts at each seek point, and therefore there may be no alignment between blocks of the different versions of the presentation. An example of such an unaligned seek point structure between different representations is shown in FIG. 8(b). Time is shown at the top, in seconds, and the blocks and seek points of two segments for the two representations are shown from left to right in terms of their timing on this timeline; the length of each block shown is thus proportional to its playout time, not to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have potentially different time offsets for the seek points, potentially different numbers of blocks or fragments between seek points, and different byte offsets to the blocks owing to the different amounts of media data in each block. In this example, if the client wants to switch from representation 1 to representation 2 at a presentation time of approximately 25 seconds, the client can request up through block 1.3 within the segment for representation 1 and start requesting the segment for representation 2 from block 2.3, so that the switch occurs at the presentation time coinciding with seek point 2.3 in representation 2, which falls within block 1.3 of representation 1; as a result, some of the media of block 1.3 is not played out (the media data for the frames of block 1.3 that are not played out may still have to be loaded into the receiver buffer in order to decode the other frames of block 1.3).

In this embodiment, the operation of block selector 123 is modified such that, whenever a block must be selected from a representation different from the previously selected version, the latest block whose first frame is not later than the frame following the last frame of the last selected block is selected.
This last-described embodiment can eliminate the requirement of constraining the positions of seek points in the versions other than the first, thereby improving the compression efficiency of those versions and resulting in a higher quality presentation for a given available bandwidth and an improved user experience. A further consideration is that video encoding tools that perform the seek point alignment function across multiple encodings (versions) of the content may not be widely available; an advantage of this last-described embodiment is therefore the ability to use currently available video encoding tools. Another advantage is that the encoding of the different versions of the content can proceed in parallel, without any need for coordination between the encoding processes of those versions. Another advantage is that additional versions of the content can be encoded at a later time and added to the presentation, without having to supply the encoding tools with a list of specific seek point positions.

In general, when pictures are encoded as groups of pictures (GoPs), the first picture in the sequence can be a seek point, but this need not always be the case.
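A minimal sketch of the modified selection rule described above, reusing the hypothetical BlockEntry structure from earlier; the function name and arguments are illustrative assumptions.

    def block_after_switch(new_rep_index, last_end_time):
        # When switching to a representation with unaligned seek points,
        # pick the latest block of the new representation whose start
        # time is not later than the end time of the last block selected
        # from the old representation; any overlapping frames are
        # downloaded and decoded but not played out twice.
        chosen = new_rep_index[0]
        for entry in new_rep_index:
            if entry.start_time <= last_end_time:
                chosen = entry
            else:
                break
        return chosen

This assumes new_rep_index is ordered by start time; in the FIG. 8(b) example this rule would select block 2.3 when the last block requested from representation 1 is block 1.3.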
One issue of concern in a block-request streaming system with optimal block partitioning is the interaction between the structure of the encoded media, for example video media, and the block structure used for the block requests. As is known to those skilled in the art of video coding, the number of bits required for the encoded representation of each video frame often varies substantially from frame to frame. As a result, the relationship between the amount of data received and the duration of the media encoded by that data may not be linear. Furthermore, dividing the media data into blocks within a block-request streaming system adds a further dimension of complexity. In particular, in some systems the media data of a block cannot be played out until the whole block has been received; for example, this property can arise from the arrangement of the media data within the block, from the use of erasure codes within the block, or from dependencies between media samples within the block. Because of these complex interactions between block size and block duration, and because of the possible need to receive a whole block before starting playout, it is common for client systems to adopt a conservative approach in which media data is buffered before playout begins. Such buffering results in a long channel zapping time and hence a poor user experience.

Pakzad describes "block partitioning methods", which are novel and efficient ways to determine how to partition a data stream into contiguous blocks based on the underlying structure of the data stream, and further describes several advantages of these methods in the context of a streaming system. A further embodiment of the invention that applies Pakzad's block partitioning methods to a block-request streaming system is now described. The method may comprise arranging the media data of the presentation in approximate presentation time order, so that the playout time of any given element of the media data (for example a video frame or an audio sample) differs from that of any adjacent media data element by less than a given threshold. The media data so ordered can be regarded as a data stream in Pakzad's terminology, and any of Pakzad's methods applied to this data stream identifies block boundaries within the data stream. The data between any pair of adjacent block boundaries is regarded as a "block" in the terminology of this disclosure, and the method is applied to provide a presentation of the media data within a block-request streaming system. As will be apparent to those skilled in the art on reading this disclosure, several advantages of the methods disclosed in Pakzad can thereby be realized in a block-request streaming system.

As explained in Pakzad, the determination of the block structure of a segment, including blocks covering a partial GoP or portions of more than one GoP, can affect the client's ability to achieve fast channel zapping times. In Pakzad, given a target startup time, methods are provided that supply a block structure and a target download rate which ensure that, if the client begins downloading the representation at any seek point and begins playout after the target startup time, playout continues seamlessly without interruption as long as, at each point in time, the amount of data downloaded by the client is at least the target download rate multiplied by the time elapsed since the start of the download. It is advantageous for the client to have access to the target startup time and the target download rate, since this gives the client a means of deciding the earliest time at which it can begin playing out the representation, and of continuing the playout as long as the download satisfies the condition stated above. Thus, the method described below provides a means for including the target startup time and the target download rate in the media presentation description, so that they can be used for the purposes described above.
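As a concrete illustration of the startup condition just described, the smallest startup delay satisfying it can be computed from the block sizes and durations. This is a sketch under the simplifying assumptions that each block must be fully received before its playout begins and that the download proceeds at exactly the target rate; none of the names below come from Pakzad.

    def min_startup_delay(blocks, rate):
        # blocks: list of (byte_length, playout_duration) in playout
        # order; rate: target download rate in bytes per second.
        # Returns the smallest startup delay s such that, when download
        # begins at time 0 and playout at time s, every block is fully
        # received by the time its playout is due to begin.
        delay = 0.0
        received = 0      # cumulative bytes downloaded so far
        deadline = 0.0    # playout start of the current block,
                          # measured from the start of playout
        for size, duration in blocks:
            received += size
            delay = max(delay, received / rate - deadline)
            deadline += duration
        return delay

With the target startup time and the target download rate carried in the media presentation description, a client can compare them directly against a computation of this kind for the block structure it observes.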
Media Presentation Data Model

FIG. 5 illustrates a possible structure for the content store shown in FIG. 1, including segments and media presentation description ("MPD") files, and the breakdown of segments, timing, and other structure within an MPD file. Details of possible implementations of the MPD structure or file will now be described. In many examples the MPD is described as a file, but non-file structures can be used as well.

As illustrated there, content store 110 holds a plurality of source segments 510, MPDs 500 and repair segments 512. An MPD can comprise period records 501, which in turn can comprise representation records 502 containing segment information 503, such as references to initialization segments 504 and media segments 505.

FIG. 9(a) shows an example metadata table 900, and FIG. 9(b) shows an example of how an HTTP streaming client 902 obtains the metadata table 900 and media blocks 904 over a connection to an HTTP streaming server 906.

In the methods described herein, a "media presentation description" is provided that comprises information regarding the representations of the media presentation that are available to the client. Representations may be alternatives, in the sense that the client selects one among the different alternatives, or they may be complementary, in the sense that the client selects several of the representations, possibly also each from a set of alternatives, and presents them together.

Representations can advantageously be assigned to groups, with the client programmed or configured to understand that representations within one group are alternatives to each other, whereas representations from different groups can be presented together. In other words, if there is more than one representation in a group, the client selects one representation from that group, one representation from the next group, and so on, to form the presentation.

The information describing a representation can advantageously include details of the applied media codec, including the codec profile and level required to decode the representation, the video frame rate, the video resolution, and the data rate. The client receiving the media presentation description can use this information to determine in advance whether a representation is suitable for decoding or presentation. This is an advantage because, if this distinguishing information were contained only in the binary data of the representation, the client would have to request the binary data of all representations and parse and extract the relevant information in order to discover its suitability. These multiple requests and the parsing and extraction of the data would take some time, resulting in a long startup time and hence a poor user experience.

Furthermore, the media presentation description can comprise information restricting the client's requests based on the time of day. For example, for a live service, the client can be restricted to requesting parts of the presentation that are close to the "current broadcast time". This is an advantage since, for live broadcasts, it may be desirable to purge data from the serving infrastructure for content that was broadcast more than a provided threshold before the current broadcast time. This may be desirable for the reuse of storage resources within the serving infrastructure. It may also be desirable depending on the type of service offered: for example, in some cases a presentation can be made available live only, owing to a certain subscription model of the receiving client devices, whereas other media presentations can be made available both live and on demand, and still other presentations can be made available live only to client devices of a first class, on demand only to client devices of a second class, and either live or on demand to client devices of a third class. The methods described in the media presentation data model (below) allow the client to be informed of such policies, so that the client can avoid making requests for data that may not be available in the serving infrastructure and can adjust the offering to the user accordingly. As an alternative, the client can, for example, present a notification to the user that this data is not available.

In a further embodiment of the invention, the media segments can conform to the ISO base media file format described in ISO/IEC 14496-12 or to derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). The section on the usage of the 3GPP file format (above) describes novel extensions of the ISO base media file format that enable efficient use of the data structures of this file format within a block-request streaming system. As described in that reference, information can be provided within the file that enables a fast and efficient mapping between time segments of the media presentation and byte ranges within the file. The media data itself can be structured according to the movie fragment construction defined in ISO/IEC 14496-12.
This information providing time and byte offsets can be structured hierarchically or as a single block of information. The information can be provided at the beginning of the file. Providing this information using an efficient encoding, as described in the section on the usage of the 3GPP file format, means that, for example where the file download protocol used by the block-request streaming system is HTTP, the client can retrieve this information quickly using HTTP partial GET requests, resulting in a short startup, seek, or stream switch time and hence an improved user experience.

The representations in a media presentation are synchronized on a global timeline, to ensure seamless switching across representations, which are typically alternatives, and to ensure the synchronous presentation of two or more representations. Therefore, the media sample timing contained within the representations of an adaptive HTTP streaming media presentation can be related to a continuous global timeline spanning multiple segments.

A block of encoded media containing media of multiple types, for example audio and video, can have different presentation end times for the different types of media. In a block-request streaming system, such media blocks can be played out consecutively in such a way that each media type is played continuously, so that media samples of one type from one block may be played out before media samples of another type from the preceding block; this is referred to herein as "continuous block splicing". Alternatively, such media blocks can be played out in such a way that the earliest sample of any type of one block is played after the latest sample of any type of the preceding block; this is referred to herein as "discontinuous block splicing". Continuous block splicing may be appropriate when both blocks contain media from the same content item and the same representation. Typically, continuous block splicing can be applied when splicing two blocks within one representation. This is advantageous because the existing encoding can be applied and segmentation can be performed without needing to align the media tracks at block boundaries. This is illustrated in FIG. 10, where video stream 1000 comprises block 1202 and other blocks, and comprises RAPs such as RAP 1204.

Media Presentation Description

A media presentation can be viewed as a structured collection of files on an HTTP streaming server. An HTTP streaming client can download sufficient information to present the streaming service to the user. Alternative representations can consist of one or more 3GP files, or parts of 3GP files, conforming to the 3GPP file format, or at least to a well-defined set of data structures that can easily be converted from or to a 3GP file.

A media presentation can be described by a media presentation description. The media presentation description (MPD) can contain metadata that the client can use to construct appropriate file requests, for example HTTP GET requests, to access the data at the appropriate times and to provide the streaming service to the user. The media presentation description can provide sufficient information for the HTTP streaming client to select the corresponding conforming 3GPP files and parts of files.
The units that are signaled to the client as accessible are called segments.

The media presentation description can include, among others, the following elements and attributes.

MediaPresentationDescription element: an element encapsulating the metadata used by the HTTP streaming client to provide the streaming service to the end user. The MediaPresentationDescription element can contain one or more of the following attributes and elements.

Version: a version number of the protocol, to allow extensibility.

PresentationIdentifier: information by which the presentation can be uniquely identified among other presentations. It can also contain private fields or names.

UpdateFrequency: the update frequency of the media presentation description, i.e., how often the client may reload the actual media presentation description. If not present, the media presentation may be static. Updating the media presentation implies that the media presentation description cannot be cached.

MediaPresentationDescriptionURI: a URI for the up-to-date media presentation description.

Stream: describes the type of the stream or media presentation: video, audio, or text. A video stream type can contain audio and can contain text.

Service: describes the service type with additional attributes. Service types can be live or on demand. This can be used to inform the client that seeking and accessing beyond some current time is not permitted.

MaximumClientPreBufferTime: the maximum amount of time the client may pre-buffer the media stream. This timing can distinguish streaming from progressive download if the client is restricted from downloading beyond this maximum pre-buffer time. The value may be absent, indicating that no restriction on pre-buffering applies.

SafetyGuardIntervalLiveService: information on the maximum turnaround time of a live service at the server. It provides the client with an indication of what information is already accessible at the current time. This information may be needed if the client and server are expected to operate on UTC time and no tight time synchronization is provided.

TimeShiftBufferDepth: information on how far back, relative to the current time, the client may go in a live service. By extension of this depth, time-shifted viewing and catch-up services can be enabled without specific changes in the service provisioning.

LocalCachingPermitted: this flag indicates whether the HTTP client may cache the downloaded data locally after it has been played out.

LivePresentationInterval: contains the time interval during which the presentation may be available, by specifying StartTime and EndTime. StartTime indicates the start time of the service and EndTime indicates the end time of the service. If EndTime is not specified, the end time is unknown at the current time, and UpdateFrequency can be used to ensure that the client obtains the end time before the actual end time of the service.

OnDemandAvailabilityInterval: a presentation interval indicating the availability of the service on the network. Multiple presentation intervals can be provided. The HTTP client cannot access the service outside the specified time windows. By provisioning OnDemand intervals, additional time-shifted viewing can be specified. This attribute can also be present for a live service. Where it is present for a live service, the server can ensure that the client can access the service as an on-demand service during all of the provided availability intervals.
The LivePresentationInterval therefore must not overlap with any OnDemandAvailabilityInterval.

MPDFileInfoDynamic: describes the default dynamic construction of files in the media presentation. Further details are provided below. A default specification at the MPD level can avoid unnecessary repetition when the same rules are used for several or all of the alternative representations.

MPDCodecDescription: describes the main default codecs in the media presentation. Further details are provided below. A default specification at the MPD level can avoid unnecessary repetition when the same codecs are used for several or all of the alternative representations.

MPDMoveBoxHeaderSizeDoesNotChange: a flag indicating whether the size of the MoveBox header changes between the individual files within the entire media presentation. This flag can be used to optimize downloading, and may only be present for certain segment formats, especially those for which the segments contain the moov header.

FileURIPattern: a pattern used by the client to generate request messages for files within the media presentation. The different attributes allow the generation of unique URIs for each of the files within the media presentation. The base URI can be an HTTP URI.

AlternativeRepresentation: describes a list of representations.

AlternativeRepresentation element: an XML element encapsulating all the metadata of one representation. The AlternativeRepresentation element can contain the following attributes and elements.

RepresentationID: a unique ID for this specific alternative representation within the media presentation.

FilesInfoStatic: provides an explicit list of the start times and URIs of all files of one alternative representation. The static provisioning of the list of files provides the advantage of an exact timing description of the media presentation, but it may be less compact, especially if the alternative representation contains many files. In addition, the file names can be arbitrary.

FileInfoDynamic: provides an implicit way to construct the list of start times and URIs of one alternative representation. The dynamic provisioning of the list of files can provide the advantage of a more compact representation. If only the sequence of start times is provided, the timing advantages also hold here, with the file names constructed dynamically based on the FileURIPattern. If only the duration of each segment is provided, the representation is compact and may be suitable for use within a live service, but the generation of the files may need to be governed by global timing.

APMoveBoxHeaderSizeDoesNotChange: a flag indicating whether the size of the MoveBox header changes between the individual files within the alternative representation. This flag can be used to optimize downloading, and may only be present for certain segment formats, especially those for which the segments contain the moov header.

APCodecDescription: describes the main codecs of the files of the alternative representation.

MediaDescription element

MediaDescription: an element that can encapsulate all the metadata of the media contained in this representation. Specifically, it can contain information about the tracks in this alternative representation, including, where applicable, the recommended grouping of tracks.
The MediaDescription attribute contains the following attributes.

TrackDescription: an XML attribute encapsulating all the metadata of the media contained in this representation. The TrackDescription attribute contains the following attributes.

TrackID: a unique ID for the track within the alternative representation. This can be used in case the track is part of a grouping description.

Bitrate: the bitrate of the track.

TrackCodecDescription: an XML attribute containing all the attributes of the codec used in this track. The TrackCodecDescription attribute contains the following attributes.

MediaName: an attribute defining the media type. The media types include "audio", "video", "text", "application" and "message".

Codec: the CodecType, including profile and level.

LanguageTag: the language tag, if applicable.

MaxWidth, MaxHeight: for video, the width and height of the contained video, in pixels.

SamplingRate: for audio, the sampling rate.

GroupDescription: an attribute providing the client with recommendations for appropriate grouping based on different parameters.

GroupType: a type on whose basis the client can decide how to group tracks.

The information in the media presentation description is advantageously used by the HTTP streaming client to make requests for files/segments or parts thereof at appropriate times, and to select the segments from the representations that match its capabilities, for example with regard to access bandwidth, display capabilities, and codec capabilities, as well as the preferences of the user, such as language. Furthermore, since the media presentation description describes representations that are time-aligned and mapped onto a global timeline, the client can also use the information in the MPD during an ongoing media presentation to initiate appropriate actions for switching between representations, for presenting representations together, or for seeking within the media presentation.
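A hypothetical Python mirror of the element hierarchy described above may make the client's use of it concrete. The field names follow the MPD elements named above, but the types and the selection function are illustrative assumptions, not a normative schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TrackDescription:
        track_id: str
        bitrate: int                       # bits per second
        codec: str                         # including profile and level
        media_name: str                    # "audio", "video", "text", ...
        max_width: Optional[int] = None    # pixels, video only
        max_height: Optional[int] = None

    @dataclass
    class AlternativeRepresentation:
        representation_id: str
        tracks: List[TrackDescription] = field(default_factory=list)

    @dataclass
    class MediaPresentationDescription:
        version: str
        presentation_identifier: str
        stream: str                        # "video", "audio" or "text"
        service: str                       # "live" or "on demand"
        representations: List[AlternativeRepresentation] = field(default_factory=list)

    def pick_representation(mpd, max_bandwidth, supported_codecs):
        # Choose the highest-bitrate representation whose codecs the
        # client supports and whose aggregate bitrate fits the access
        # bandwidth -- the kind of selection the MPD enables without
        # downloading or parsing any media data.
        usable = [r for r in mpd.representations
                  if all(t.codec in supported_codecs for t in r.tracks)
                  and sum(t.bitrate for t in r.tracks) <= max_bandwidth]
        return max(usable,
                   key=lambda r: sum(t.bitrate for t in r.tracks),
                   default=None)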
Signaling of segment start times: a representation can be split in time into multiple segments. An inter-track timing issue exists between the last fragment of one segment and the next fragment of the following segment. In addition, another timing issue exists if segments of constant duration are used.

Using the same duration for every segment can have the advantage that the MPD is compact and static. However, every segment would then still have to start at a random access point. Thus, either the video encoding can be constrained to provide random access points at these specific positions, or the actual segment durations may not be exactly as specified in the MPD. It is desirable that the streaming system not impose unnecessary constraints on the video encoding process, so the second option may be preferred.

Specifically, if the file duration is specified in the MPD as d seconds, the nth file can begin with the random access point at or immediately following time (n-1)d.

In this approach, each file can contain information about the exact start time of the segment in terms of global presentation time. Three possible ways to signal this include:

(1) First, restricting the start time of each segment to the exact timing specified in the MPD. But then the media encoder would have no flexibility in the placement of IDR frames and might require special encoding for file streaming.

(2) Second, adding the exact start time to the MPD for each segment. For the on-demand case, the compactness of the MPD may be reduced; for the live case, this may require a regular update of the MPD, which may reduce scalability.

(3) Third, adding to the segment the global time, or the exact start time relative to the announced start time of the representation or of the segment in the MPD, in the sense that the segment contains this information. This can be added to a new box dedicated to adaptive streaming. This box can also include the information provided by the "TIDX" or "SIDX" box. A consequence of this third approach is that, when seeking to a specific position near the beginning of one of the segments, the client may, based on the MPD, choose the segment following the one that contains the required seek point. A simple response in this case can be to move the seek point forward to the beginning of the retrieved segment (i.e., to the next random access point after the seek point). Usually random access points are provided at least every few seconds (and there is often little coding gain in making them less frequent), so in the worst case the seek point may be moved a few seconds later than specified. Alternatively, the client could determine, on retrieving the header information for the segment, that the requested seek point is actually in the previous segment, and request that segment instead. This may occasionally increase the time needed to execute the seek operation.

Accessible segment list: a media presentation comprises a set of representations, each providing some different version of an encoding of the original media content. Each representation advantageously contains information on its differentiating parameters compared to the other representations. It also contains, either explicitly or implicitly, a list of accessible segments.

Segments can be distinguished into time-less segments containing only metadata and media segments containing mainly media data. The media presentation description ("MPD") advantageously identifies each of the segments and assigns them different attributes, either implicitly or explicitly. Attributes advantageously assigned to every segment comprise the period during which the segment is accessible and the resources and protocols through which the segment is accessible. In addition, media segments are advantageously assigned attributes such as the start time of the segment within the media presentation and the duration of the segment within the media presentation.

Where the media presentation is of type "on demand", as advantageously indicated by an attribute in the media presentation description such as OnDemandAvailabilityInterval, the media presentation description typically describes the segments in their entirety, and also provides an indication of when the segments are accessible and when they are not. The start times of the segments are advantageously expressed relative to the start of the media presentation, so that two clients starting playback of the same media presentation at different times can use the same media presentation description and the same media segments. This advantageously improves the cacheability of the segments.

Where the media presentation is of type "live", as advantageously indicated by an attribute in the media presentation description such as the attribute Service, the segments comprising the media presentation beyond the actual wall-clock time are generally not generated, or at least are not accessible, even though they may be fully described in the MPD.
However, given the indication that the media presentation service is of type "live", the client can derive a list of accessible segments, together with their timing attributes, for a client-internal time NOW in wall-clock time, based on the information contained in the MPD and on the download time of the MPD. The server advantageously operates in the sense that it makes resources accessible such that a reference client operating with the instance of the MPD at each wall-clock time NOW can access those resources.

Specifically, the reference client derives a list of accessible segments, together with their timing attributes, for a client-internal wall-clock time NOW, from the information contained in the MPD and from the download time of the MPD. As time advances, the client uses the same MPD to derive a new list of accessible segments that can be used to play out the media presentation continuously. The server can therefore announce segments in the MPD before those segments are actually accessible. This is advantageous because it reduces frequent updating and downloading of the MPD.

Assume that a list of segments, each with start time tS, is described either explicitly by a playlist in elements such as FilesInfoStatic, or implicitly using an element such as FileInfoDynamic. An advantageous method for generating a segment list using FileInfoDynamic is described below. Based on this construction rule, the client obtains access, for each representation r, to a list of URIs, here referred to as FileURI(r, i), and to a start time tS(r, i) for each segment with index i.

The use of the information in the MPD to derive the accessible time window of segments can be performed using the following rules.

For a service of type "on demand", as advantageously indicated by an attribute such as Service, if the current wall-clock time at the client, NOW, falls within any availability range, advantageously expressed by an MPD element such as OnDemandAvailabilityInterval, then all the described segments of this on-demand presentation are accessible. If the current wall-clock time at the client, NOW, is outside every availability range, then none of the described segments of this on-demand presentation is accessible.

For a service of type "live", as advantageously indicated by an attribute such as Service, the start time tS(r, i) advantageously expresses the availability time in wall-clock time. The availability start time can be derived as a combination of the live event time and some turnaround time at the server for capturing, encoding, and publishing. The time for this process can, for example, be specified in the MPD, for instance using a safety guard interval tG, specified as SafetyGuardIntervalLiveService in the MPD, providing the minimum difference between UTC time and the availability of the data at the HTTP streaming server. In another embodiment, the MPD explicitly specifies the availability times of the segments in the MPD, without providing the turnaround time as the difference between the event live time and the turnaround time. In the following description it is assumed that any global times are specified as availability times. Those skilled in the art of live media broadcasting can derive this information from the appropriate information in the media presentation description after reading this description.
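A minimal sketch of such a FileInfoDynamic-style construction, under the assumption of a simple URI pattern with placeholders; the placeholder syntax and names are illustrative, not the FileURIPattern syntax itself.

    def build_segment_list(uri_pattern, durations, rep_id, first_start=0.0):
        # Derive FileURI(r, i) and the start time tS(r, i) of every
        # segment of representation r from a URI pattern and the list
        # of per-segment durations.
        segments, t = [], first_start
        for i, d in enumerate(durations, start=1):
            uri = uri_pattern.format(rep=rep_id, index=i)
            segments.append((uri, t, d))   # (FileURI, tS, duration)
            t += d
        return segments

    # e.g. build_segment_list("http://example.com/{rep}/seg{index}.3gp",
    #                         [10.0, 10.0, 9.2], "rep1")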
If the current wall-clock time NOW at the client is outside every live presentation interval, advantageously expressed by an MPD element such as LivePresentationInterval, then none of the described segments of this live presentation is accessible. If the current wall-clock time NOW at the client is within a live presentation interval, then at least certain of the described segments of this live presentation may be accessible.

The restriction of the accessible segments is governed by the following values:

the wall-clock time NOW (as available to the client); and

the permitted time-shift buffer depth tTSB, specified in the media presentation description, for example as TimeShiftBufferDepth.

A client at relative event time t1 is then permitted to request only segments whose start times tS(r, i) lie in the interval from (NOW - tTSB) to NOW, or, when the end time of a segment of duration d is also included, in the resulting interval from (NOW - tTSB - d) to NOW.
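Expressed as code, this accessibility test reduces to a single window check. A minimal sketch, assuming times are given in seconds on a common wall-clock timeline:

    def segment_accessible(tS, d, now, tTSB):
        # A segment of duration d starting at tS is requestable while
        # its start time lies within [now - tTSB - d, now]: either the
        # start time itself is within the time-shift buffer depth, or
        # the segment's end time still is.
        return (now - tTSB - d) <= tS <= now

As the wall-clock time advances, re-evaluating this predicate over the described segments yields the sliding list of accessible segments.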
Updating MPD

In some embodiments, the server does not know in advance the file or segment locators and the start times of the segments, for example because the server location will change, because the media presentation includes advertisements from a different server, because the duration of the media presentation is unknown, or because the server wishes to obscure the locators of subsequent segments.

In such embodiments, the server can describe only segments that are already accessible, or that become accessible shortly after this instance of the MPD has been published. Furthermore, in some embodiments the client advantageously consumes media close to the media described in the MPD, so that the user experiences the contained media program as closely as possible to the generation of the media content. As soon as the client anticipates that it will reach the end of the media segments described in the MPD, it advantageously requests a new instance of the MPD, in the expectation that the server has published a new MPD describing new media segments, so as to continue playout without interruption. The server advantageously generates new instances of the MPD and updates the MPD such that clients can rely on this procedure of continuous updates. The server can align its MPD update procedure, together with the generation and publishing of the segments, with the procedure of a reference client that acts as a common client would act.

If a new instance of the MPD describes only a short time into the future, the clients need to request new instances of the MPD frequently. This can result in scalability problems and in unnecessary uplink and downlink traffic caused by unnecessarily frequent requests.

It is therefore relevant, on the one hand, to describe segments as far as possible into the future without necessarily making them accessible yet, and, on the other hand, to enable unanticipated updates of the MPD in order to express new server locations, to allow the insertion of new content such as advertisements, or to provide changes of codec parameters.

Furthermore, in some embodiments the duration of the media segments can be small, for example in the range of several seconds. The duration of media segments is advantageously flexible, so that it can be adjusted to suitable segment sizes that can be optimized for delivery or caching properties, to compensate for end-to-end delay in live services or for other aspects of the storage or delivery of segments, or for other reasons. Especially in cases where the segments are small compared to the media presentation duration, a significant amount of media segment resources and start times then needs to be described in the media presentation description. As a result, the size of the media presentation description can become large, which can adversely affect the download time of the media presentation description and therefore the startup delay of the media presentation, as well as the bandwidth usage on the access link. It is therefore advantageous to permit not only the description of the list of media segments by means of playlists, but also descriptions by means of templates, or URL construction rules. Templates and URL construction rules are used synonymously in this description.

In addition, templates can advantageously be used to describe segment locators in live cases beyond the current time. In such cases, updates of the MPD are per se unnecessary, since the locators and the segment list are described by the template. However, unforeseen events can still happen that require changes to the description of the representations or of the segments. A change in an adaptive HTTP streaming media presentation description may be needed when content from several different sources is spliced together, for example when advertising is inserted; the content from the different sources may differ in a variety of ways. Another reason is that, during live presentations, it may be necessary to change the URLs used for the content files, to provide failover from one live origin server to another.

In some embodiments, when the MPD is updated, it is advantageous that the updates of the MPD are carried out such that the updated MPD is compatible with the previous MPD, in the sense that the reference client, and therefore any implemented client, generates from the updated MPD a functionally identical list of accessible segments to the list it would have generated from the previous instance of the MPD, for any time up to the validity time of the previous MPD. This requirement ensures (a) that clients can immediately begin using the new MPD without synchronization with the old MPD, since it is compatible with the old MPD before the update time, and (b) that the update time need not be synchronized with the time at which the actual change of the MPD takes place. In other words, updates of the MPD can be advertised in advance, and the server can replace old instances of the MPD as soon as new information becomes available, without having to maintain different versions of the MPD.

Two possibilities can exist for the media timing across an MPD update for a set of representations or for all representations: (a) the existing global timeline continues across the MPD update (referred to herein as a "continuous MPD update"), or (b) the current timeline ends and a new timeline begins with the segment following the change (referred to herein as a "discontinuous MPD update").

The difference between these options may become evident when considering that the tracks of a media fragment, and therefore of a segment, generally do not start and end at the same time, owing to the differing sample granularities across the tracks. During normal presentation, samples of one track of a fragment may be rendered before some samples of another track of the preceding fragment.
That is, there is no overlap within a single track, but there can be some overlap between fragments across tracks.

The difference between (a) and (b) is whether such overlap can be permitted across an MPD update. When the MPD update is due to the splicing of completely separate content, such overlap is generally difficult to achieve, since the new content needs new encoding to be spliced with the previous content. It is therefore advantageous to provide the capability of discontinuously updating the media presentation by restarting the timeline for certain segments, and possibly also defining a new set of representations after the update. Also, if the content has been encoded and segmented independently, adjusting the timestamps to fit within the global timeline of the previous piece of content is avoided.

For updates of lesser significance, for example where only new media segments are added to the list of described media segments, or where URL locations are changed, overlap and continuous updating can be permitted.

In the case of a discontinuous MPD update, the timeline of the last segment of the previous representations ends at the latest presentation end time of any sample in the segment. The timeline of the next representations (or, more precisely, the first presentation time of the first media segment of the new part of the media presentation, also referred to as the new period) typically and advantageously starts at this very same instant at which the presentation of the last period ends, so that seamless and continuous playout is ensured.

The two cases are illustrated in FIG.

It is preferable and advantageous to restrict MPD updates to segment boundaries. The rationale for restricting such changes or updates to segment boundaries is as follows. First, changes to the binary metadata of each representation, typically the movie header, can occur at least at segment boundaries. Second, the media presentation description can contain pointers (URLs) to the segments. In a sense, the MPD is an "umbrella" data structure grouping together all the segment files associated with the media presentation. To maintain this containment relationship, each segment can be referenced by a single MPD, and when the MPD is updated, it is advantageously updated only at segment boundaries.

Segment boundaries are generally not required to be aligned; however, for the case of content spliced from different sources, and for discontinuous MPD updates in general, it is reasonable to align the segment boundaries (specifically, such that the last segment of each representation ends at the same video frame and contains no audio samples with a presentation start time later than the presentation time of that frame). A discontinuous update can then start a new set of representations at a common instant, referred to as a period. The start time of the validity of this new set of representations is provided, for example, by a period start time. The relative start time of each representation is reset to zero, and the start time of the period places the set of representations within this new period on the global media presentation timeline.

For continuous MPD updates, segment boundaries are not required to be aligned. Each segment of each alternative representation can be governed by a single media presentation description, and thus the requests for a new instance of the media presentation description, generally triggered by the anticipation that no additional media segments are described in the operating MPD, can take place at different times depending on the set of representations being consumed, including the set of representations expected to be consumed.
To support the updating of MPD elements and attributes in the more general case, any elements, not only representations or sets of representations, can be associated with a validity time. So, if certain elements of the MPD need to be updated, for example when the number of representations changes or the URL construction rules change, these elements can each be updated individually at specified times, by providing multiple copies of the element with disjoint validity times.

Validity is advantageously associated with global media time, so that the described element associated with a validity time is valid during a period of the global timeline of the media presentation.

As discussed above, in one embodiment the validity times are added only to complete sets of representations. Each complete set then forms a period. The validity time then forms the start time of the period. In other words, in the specific case of using validity elements, a complete set of representations can be valid for a period of time indicated by a global validity time for the set of representations. The validity time of a set of representations is referred to as a period. At the start of a new period, the validity of the previous set of representations expires and the new set of representations is valid. Note again that the validity times of periods are preferably disjoint.

As noted above, changes to the media presentation description take place at segment boundaries, so that for each representation the change of an element actually takes place at the next segment boundary. The client can then form a valid MPD, including a list of segments, for each instant within the presentation time of the media.

Discontinuous block splicing may be appropriate in cases where the blocks contain media data from different representations, or from different content, for example from a content segment and an advertisement, or in other cases. In a block-request streaming system, it can be required that changes of presentation metadata take place only at block boundaries. This can be advantageous for implementation reasons, because updating media decoder parameters within a block may be more complex than updating them only between blocks. In this case, it can advantageously be specified that validity intervals may be interpreted as approximate, such that an element is considered valid from the first block boundary not earlier than the start of the specified validity interval until the first block boundary not earlier than the end of the specified validity interval.

An example embodiment of these novel enhancements of a block-request streaming system is described in the section presented below under the heading of media presentation changes.

Segment Duration Signaling

Discontinuous updates effectively divide the presentation into a series of disjoint intervals, referred to as periods. Each period has its own timeline for the media sample timing. The media timing of the representations within a period can advantageously be indicated by specifying a separate compact list of segment durations for each period, or for each representation within a period.

An attribute associated with elements in the MPD, for example referred to as a period start time, can specify the validity time of certain elements within the media presentation time.
This attribute can be added to any elements of the MPD (attributes that can be assigned a validity may be changed to elements).

For discontinuous MPD updates, the segments of all representations can end at the discontinuity. This generally implies that at least the last segment before the discontinuity has a different duration from the preceding ones. Signaling the segment durations can involve either indicating that all segments have the same duration or indicating a separate duration for every segment. It can be desirable to have a compact representation of the list of segment durations that is efficient when many of the segments have the same duration.

The durations of each segment in one representation or set of representations can advantageously be carried by a single string specifying all the segment durations over a single interval from the start of the discontinuous update, i.e., the start of the period, to the last media segment described in the MPD. In one embodiment, the format of this element is a text string conforming to a production comprising a list of segment duration entries, where each entry contains a duration attribute dur and an optional multiplier mult of the attribute, indicating that this representation contains <mult> segments of the duration <dur> of the first entry, then <mult> segments of the duration <dur> of the second entry, and so on.

Each duration entry specifies the duration of one or more segments. If the <dur> value, in seconds, is followed by the "*" character and a number, this number specifies the number of consecutive segments with this duration. If the multiplier sign "*" is absent, the number of segments is one. If the "*" is present with no following number, then all subsequent segments have the specified duration and there can be no further entries in the list. For example, the string "30*" means that all segments have a duration of 30 seconds. The string "30*12 10.5" indicates 12 segments of 30 seconds duration followed by one of 10.5 seconds duration.

If segment durations are specified separately for each alternative representation, the sums of the segment durations within each interval may be the same for all representations. In the case of video tracks, the interval may end at the same frame in each alternative representation.

Those skilled in the art, on reading this disclosure, may find similar and equivalent ways of expressing the segment durations in a compact manner.

In another embodiment, the duration of a segment is signaled to be constant for all segments in the representation, except the last one, by a single duration attribute <duration>. The duration of the last segment before a discontinuous update may be shorter, as long as the start point of the next discontinuous update or the start of the new period is provided, which then implies the duration of the last segment extending up to the start of the next period.
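A small parser for the compact duration string described above; this is a sketch, with the tuple return convention being an assumption of this example rather than part of the format.

    def parse_segment_durations(s):
        # "30*12 10.5" -> ([30.0]*12 + [10.5], None)
        # "30*"        -> ([], 30.0): all remaining segments last 30 s.
        durations, open_ended = [], None
        for entry in s.split():
            if entry.endswith("*"):
                # trailing "*" with no count: applies to all further
                # segments, and must be the last entry in the list
                open_ended = float(entry[:-1])
            elif "*" in entry:
                dur, count = entry.split("*")
                durations.extend([float(dur)] * int(count))
            else:
                durations.append(float(entry))
        return durations, open_ended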
Changes to binary encoded representation metadata, for example changes to the movie header 'moov', can be indicated in different ways: (a) there can be one moov box for all representations in a separate file referenced in the MPD; (b) there can be one moov box for each alternative representation, each in a separate file referenced in the MPD; (c) each segment can contain the moov box and hence be self-contained; (d) there can be one moov box for all representations in one 3GP file, together with the MPD.
Note that in cases (a) and (b), the single 'moov' can advantageously be combined with the validity concept above, in the sense that multiple 'moov' boxes can be referenced in the MPD, as long as their validities are disjoint. For example, with the definition of period boundaries, the validity of the 'moov' of the old period can expire when the new period starts.
In case of option (a), the reference to the single moov box can be assigned a validity element. Multiple presentation headers can be permitted, but only one can be valid at a time. In another embodiment, the validity time of the whole set of representations in a period, or of the whole period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header.
In case of option (b), the reference to the moov box of each representation can be assigned a validity element. Multiple representation headers can be permitted, but only one can be valid at a time. In another embodiment, the validity time of the whole representation, or of the whole period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header.
For option (c), no signaling in the MPD need be added, but additional signaling in the media stream can be added to indicate whether the moov box will change for any of the upcoming segments. This is explained further below in the context of signaling updates within segment metadata.
Signaling Updates Within Segment Metadata
To avoid frequent updates of the media presentation description merely to gain knowledge about potential updates, it is advantageous to signal any updates together with the media segments themselves. An element or elements can be provided within the media segments themselves which indicate that updated metadata, such as an updated media presentation description, is available and must be accessed within a certain amount of time in order to continue successfully creating accessible segment lists. In addition, such an element can provide a file identifier, such as a URL, for the updated metadata file, or information that can be used to construct such a file identifier. The updated metadata file can include metadata equal to that provided in the original metadata file for the presentation, modified to indicate validity intervals, along with additional metadata that is likewise accompanied by validity intervals. This indication can be provided in the media segments of all available representations of the media presentation. A client accessing the block-request streaming system, on detecting such an indication within a media block, can use the file download protocol or other means to retrieve the updated metadata file. The client is thereby given information about changes to the media presentation description and the time at which they occur or occurred. Advantageously, each client requests the updated media presentation description only once, when such a change occurs, rather than 'polling' and receiving the file many times for possible updates or changes.
Changes can include the addition or removal of representations, changes to one or more representations such as a change of bit rate, resolution, aspect ratio, included tracks or codec parameters, and changes to URL construction rules, for example a different origin server for advertising. Some changes may affect only the initialization segment, such as the movie header ('moov') atom associated with a representation, whereas other changes may affect the media presentation description (MPD).
For on-demand content, these changes and their timing can be known in advance and can be signaled in the media presentation description.
For live content, the changes cannot be known before the point at which they occur. One solution is to dynamically update the media presentation description available at a specific URL and to require clients to request this MPD regularly in order to detect changes. This solution has drawbacks in terms of scalability (origin server load and cache efficiency). In a scenario with large numbers of viewers, caches may receive many requests for the MPD after the previous version has expired from the cache and before the new version has been received, and all of these may be forwarded to the origin server. The origin server would need to constantly process requests from the caches for each updated version of the MPD. Also, the updates cannot easily be aligned in time with changes in the media presentation.
Since one of the advantages of HTTP streaming is the ability to use the standard web infrastructure and services for scalability, a preferable solution would involve only 'static' (i.e., cacheable) files and would not rely on clients 'polling' files to see whether they have changed.
Solutions are described and proposed to resolve the updating of metadata, including the binary media presentation description and binary representation metadata such as 'moov' atoms, in an adaptive HTTP streaming media presentation.
For live content, the points at which the MPD or 'moov' may change cannot be known when the MPD is constructed. Since frequent 'polling' of the MPD to check for updates should generally be avoided, for bandwidth and scalability reasons, updates to the MPD can be indicated 'in-band' within the segment files themselves; that is, each media segment can have the option of indicating updates. Depending on the segment formats (a) to (c) above, different updates can be signaled.
Generally, the following indication can advantageously be provided within a signal in the segments: an indicator that the MPD may be updated before requesting the next segment within this representation, or any next segment with a start time greater than the start time of the current segment. The update can be announced in advance, indicating that the update only needs to happen at some point later than the next segment. This MPD update can also be used to update binary representation metadata, such as movie headers, in case the locator of the media segments changes. Another signal can indicate that, with the completion of this segment, no further segments ahead in time should be requested.
If segments are formatted according to segment format (c), i.e., each media segment contains self-initializing metadata such as the movie header, then a further signal can be added indicating that the subsequent segment contains an updated movie header (moov).
This advantageously allows the movie header to be included in the segments, while the movie header need only be requested by the client if the previous segment indicated a movie header update, or in the case of seeking or random access when switching representations. In other cases, the client can issue a byte range request for the segment which excludes the movie header from the download, thereby advantageously saving bandwidth.
In still another embodiment, if an MPD update indication is signaled, then the signal can also contain a locator, such as a URL, for the updated media presentation description. The updated MPD can describe the presentation both before and after the update, using validity attributes, such as new and old periods in the case of a discontinuous update. This can advantageously be used to permit time-shifted viewing, as described in more detail below, but it also advantageously permits the MPD update to be signaled at any time before the changes it contains take effect. The client can immediately download the new MPD and apply it to the ongoing presentation.
In a specific realization, the signaling of changes to the media presentation description, the moov headers or the end of the presentation can be contained within a streaming information box, formatted following the rules of the segment format using the box structure of the ISO base media file format. This box can provide the specific signal for each of the different kinds of updates.
Streaming Information Box
Definition
Box Type: 'sinf'
Container: None
Mandatory: No
Quantity: Zero or one
The Streaming Information Box contains information about the streaming presentation of which the file is a part.
Syntax
    aligned(8) class StreamingInformationBox extends FullBox('sinf') {
        unsigned int(32) streaming_information_flags;
        // the following are optional fields
        string mpd_location;
    }
Semantics
streaming_information_flags contains the logical OR of zero or more of the following:
0x00000001 Movie Header Update Follows
0x00000002 Presentation Description Update
0x00000004 Presentation End
mpd_location is present if and only if the Presentation Description Update flag is set, and provides a Uniform Resource Locator for the new media presentation description.
Example Use Case for MPD Updates for Live Services
Suppose a service provider wants to provide a live soccer event using the enhanced block-request streaming described herein, and millions of users may want to access the presentation of the event. The live event may sporadically be interrupted by breaks, when a time-out is called or during other pauses in the action, during which time advertisements may be added. Typically, there is little or no advance notice of the exact timing of these breaks.
The service provider may need to provide redundant infrastructure (e.g., encoders or servers) to enable a seamless switch-over in case any of the components fail during the live event.
Suppose a user, Ann, accesses the service on a bus with her mobile device, and the service is available immediately. Next to her sits another user, Paul, who watches the event on his laptop. A goal is scored, and both celebrate this event at the same time. Paul tells Ann that the first goal of the game was even more exciting, and Ann uses the service so that she can view the event as it was 30 minutes earlier. Having seen the goal, she returns to the live event.
To address these use cases, the service provider should be able to update the MPD, signal to clients that an updated MPD is available, and enable clients to access the streaming service such that they can present the data in near real time.
Updating the MPD can be done asynchronously to the delivery of segments, as described elsewhere herein. The server can provide guarantees to the receiver that an MPD will not be updated for a certain amount of time, so that the receiver can rely on the current MPD during that period; explicit signaling is not required, provided the MPD is not updated before the minimum update period elapses.
Because clients may operate on different MPD update instances and may therefore drift relative to one another, tightly synchronized playout is hardly achieved. Using MPD updates, the server can communicate changes, and clients can be alerted to changes, even during a presentation. In-band signaling on a per-segment basis can be used to indicate an update of the MPD, so updates can be restricted to segment boundaries, which should be acceptable for most applications.
An MPD element can be added which provides the issue time, in actual elapsed (wall-clock) time, of the MPD, together with an optional MPD update box, added at the beginning of segments, to signal that an MPD update is required. The updates can be done hierarchically, as with MPDs.
The MPD 'issue time' provides a unique identifier for the MPD and for when the MPD was issued. It also provides an anchor for the update procedures.
MPD Update Box
Box Type: 'mupe'
Container: None
Mandatory: No
Quantity: Zero or one
The MPD update box can be found after the 'styp' box. The MPD Update Box contains information about the media presentation of which the segment is a part.
An example of the syntax is as follows:
    aligned(8) class MPDUpdateBox extends FullBox('mupe') {
        unsigned int(3) mpd_information_flags;
        unsigned int(1) new_location_flag;
        unsigned int(28) latest_mpd_update_time;
        // the following are optional fields
        string mpd_location;
    }
The semantics of the various fields of the MPDUpdateBox can be as follows:
mpd_information_flags: contains the logical OR of zero or more of the following:
0x00 Current media presentation description update
0x01 Future media presentation description update
0x02 End of presentation
0x03-0x07 Reserved
new_location_flag: if set to 1, a new media presentation description is available at the new location specified in mpd_location.
latest_mpd_update_time: specifies the time (in ms) by which the MPD update is required, relative to the MPD issue time of the latest MPD. The client can thus choose to update the MPD at any time between now and that time.
mpd_location: present if and only if new_location_flag is set, in which case mpd_location provides a Uniform Resource Locator for the new media presentation description.
If the bandwidth used by updates is an issue, the server can offer MPDs for specific device capabilities, so that only these parts are updated.
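To make the bit layout concrete, the following Python sketch unpacks the body of such a 'mupe' box. It assumes, purely for illustration, that the three fixed fields are packed most-significant-bit first into a single 32-bit word following the FullBox version/flags header, and that mpd_location is a null-terminated UTF-8 string.

    import struct

    def parse_mpd_update_box(body: bytes):
        """Parse the payload of a 'mupe' box (FullBox header already consumed)."""
        (word,) = struct.unpack(">I", body[:4])
        mpd_information_flags = (word >> 29) & 0x7     # unsigned int(3)
        new_location_flag = (word >> 28) & 0x1         # unsigned int(1)
        latest_mpd_update_time = word & 0x0FFFFFFF     # unsigned int(28), in ms
        mpd_location = None
        if new_location_flag:
            mpd_location = body[4:].split(b"\x00", 1)[0].decode("utf-8")
        return (mpd_information_flags, new_location_flag,
                latest_mpd_update_time, mpd_location)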
Time-Shifted Viewing and Network PVR
When time-shifted viewing is supported, it can happen that two or more MPDs or movie headers are valid within the lifetime of a session. In this case, by updating the MPD when necessary, but adding a validity mechanism or period concept, a valid MPD can exist across the whole time window. This means that the server can ensure that an MPD and a movie header are published for any time period that falls within the valid time window for time-shifted viewing. It is the client's responsibility to ensure that the MPD and metadata available to it are valid for its current presentation time. Migration of a live session to a network PVR session using only small MPD updates can also be supported.
When using the ISO/IEC 14496-12 file format for special media segments within a block-request streaming system, an issue arises because, as described above, it is advantageous to store the media data of a single version of a presentation in multiple files, arranged in consecutive time segments. Furthermore, it can be advantageous to arrange for each file to begin with a random access point. Additionally, it can be advantageous to choose the positions of the seek points during the video encoding process and to segment the presentation into multiple files, each beginning from a seek point chosen during encoding, where each random access point may or may not be placed at the beginning of a file, but where each file begins with a random access point. In one embodiment with the properties described above, the presentation metadata, or media presentation description, can contain the exact duration of each file, where the duration is taken to mean, for example, the difference between the start time of the video media of a file and the start time of the video media of the next file. Based on this information in the presentation metadata, the client can construct the mapping between the global timeline of the media presentation and the local timeline of the media within each file.
In another embodiment, the size of the presentation metadata can advantageously be reduced by instead specifying that every file or segment has the same duration. However, in this case, and where the media files are constructed according to the method above, the duration of each file may not be exactly the same as the duration specified in the media presentation description, since a random access point may not exist at the point which is exactly the specified duration from the beginning of the file.
A further embodiment of the invention to provide for correct operation of the block-request streaming system despite the above discrepancy is now described. In this method, an element can be provided within each file which specifies the mapping of the local timeline of the media within the file (meaning the timeline starting from timestamp zero against which the decoding and composition timestamps of the media samples within the file are specified, according to ISO/IEC 14496-12) to the global presentation timeline. This mapping information can comprise a single timestamp in global presentation time corresponding to timestamp zero on the local file timeline. Alternatively, the mapping information can comprise an offset value specifying the difference between the global presentation time corresponding to timestamp zero on the local file timeline and the global presentation time corresponding to the start of the file according to the information provided in the presentation metadata. Examples of such boxes include the track fragment decode time ('tfdt') box, or the track fragment adjust ('tfad') box together with the track fragment media adjust ('tfma') box.
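A minimal sketch of the two mapping variants just described, with illustrative names and the assumption that all values are expressed in the same time units:

    def local_to_global(local_timestamp, global_time_of_local_zero):
        # Variant 1: the file carries the single global timestamp that
        # corresponds to local timestamp zero (e.g., via a 'tfdt'-style box).
        return global_time_of_local_zero + local_timestamp

    def local_to_global_via_offset(local_timestamp, file_start_global, offset):
        # Variant 2: the file carries an offset between the global time of
        # local timestamp zero and the global time of the file start, the
        # latter being known from the presentation metadata.
        return file_start_global + offset + local_timestamp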
Example Client Including Segment List Generation
An example client is now described. It can be used as a reference client to ensure that a server generates and updates MPDs properly. The HTTP streaming client is guided by the information provided in the MPD. It is assumed that the client has access to the MPD it received at some time T, that is, the time at which it was able to successfully receive the MPD. Determining that reception was successful can include the client obtaining an updated MPD, or verifying that the MPD has not been updated since the previous successful reception.
An example of client behavior is introduced. To provide a continuous streaming service to the user, the client first parses the MPD and creates a list of accessible segments for each representation for the client-local time at the current system time, taking into account the segment list generation procedure detailed below, possibly using playlists or using URL construction rules. The client then selects one or several representations based on the information in the representation attributes and other information, for example the available bandwidth and client capabilities. Depending on group assignment, representations can be presented standalone or jointly with other representations.
For each representation, the client obtains the binary metadata, such as the 'moov' header for the representation, if present, and the media segments of the selected representations. The client accesses the media content by requesting segments or byte ranges of segments, possibly using the segment list. The client may initially buffer media before starting the presentation; once the presentation has started, the client continues to consume the media content by continuously requesting segments or parts of segments, taking into account the MPD update procedures.
The client may switch representations taking into account updated MPD information and/or updated information from its environment, for example a change in available bandwidth. With any request for a media segment containing a random access point, the client can switch to a different representation. When moving forward, i.e., as the current system time (denoted the 'NOW time', representing the time relative to the presentation) advances, the client consumes the accessible segments. With each advance of the NOW time, the client possibly expands the list of accessible segments for each representation according to the procedures specified herein.
If the end of the media presentation has not yet been reached, and the current playback time comes within a threshold of the time at which the client anticipates running out of media for any representation being consumed, or expected to be consumed, among those described in the MPD, then the client can request an update of the MPD, with a new fetch time and receive time T. Once received, the client takes the possibly updated MPD and the new time T into account in generating the accessible segment lists. FIG. 29 illustrates the procedure for live services at different times at the client.
Create Accessible Segment List
Suppose that the HTTP streaming client has access to the MPD and wants to create an accessible segment list for an actual elapsed (wall-clock) time NOW. The client is synchronized to a global time reference with some degree of accuracy, but, advantageously, no direct synchronization with the HTTP streaming server is required.
The accessible segment list for each representation is preferably defined as a list of pairs of a segment start time and a segment locator, where the segment start time, without loss of generality, is defined to be relative to the start of the representation. The start of the representation can be aligned with the start of a period, where that concept is applied; otherwise, the start of the representation can be the start of the media presentation. The client uses URL construction rules and timing, for example as further defined herein.
Once the list of described segments is obtained, this list is further restricted to the accessible segments, which may be a subset of the segments of the complete media presentation. The construction is governed by the current value of the clock at the client, the NOW time. In general, segments are only available for a set of availability times NOW; for any time NOW outside this window, no segments are available. In addition, for live services, some check time provides information on how far into the future the media is described. The check time is defined on the MPD-documented media timeline; when the client's playback time reaches the check time, it advantageously requests a new MPD.
The segment list is then further restricted by the check time together with the MPD attribute TimeShiftBufferDepth, such that the only available media segments are those for which the sum of the media segment start time and the presentation start time falls within the interval between NOW minus timeShiftBufferDepth minus the duration of the last described media segment, and the smaller of the check time and NOW.
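The windowing rule above can be sketched as follows; the tuple layout and parameter names are illustrative assumptions, with all times in consistent units (e.g., seconds):

    def accessible_segments(segments, now, presentation_start,
                            time_shift_buffer_depth, check_time):
        """segments: list of (start_time, duration, locator) tuples, with
        start times relative to the start of the representation."""
        window_end = min(check_time, now)
        last_duration = segments[-1][1]    # duration of last described segment
        window_start = now - time_shift_buffer_depth - last_duration
        return [(start, dur, loc) for (start, dur, loc) in segments
                if window_start <= presentation_start + start <= window_end]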
Scalable Blocks
From time to time, the available bandwidth can drop so low that the block or blocks currently being received at a receiver are unlikely to be received completely in time to be played out without a pause in the presentation. The receiver may detect such situations in advance. For example, the receiver might determine that it is receiving blocks encoding 5 units of media for every 6 units of time, and has a buffer of 4 units of media, so that it can expect to be forced to stall or pause the presentation roughly 24 units of time later. With sufficient notice, the receiver can react to such a situation by, for example, abandoning the current stream of blocks and starting to request a block or blocks from a different representation of the content, such as one that uses less bandwidth per unit of playout time. For example, if the receiver switched to a representation in which the blocks encoded at least 20% more video time for blocks of the same size, the receiver might be able to eliminate the need to stall until the bandwidth situation improved.
However, it would be wasteful to have the receiver completely discard the data already received from the abandoned representation. In an embodiment of the block-streaming system described herein, the data within each block can be encoded and arranged in such a way that certain prefixes of the data within the block can be used to continue the presentation without the remainder of the block having been received. For example, well-known techniques of scalable video coding can be used. Examples of such video coding methods include H.264 scalable video coding (SVC) and the temporal scalability of H.264 advanced video coding (AVC). Advantageously, this method permits the presentation to continue based on the portion of a block that has already been received, even when reception of a block or blocks is abandoned, for example in reaction to a change in the available bandwidth. Another advantage is that a single data file can be used as the source for multiple different representations of the content. This is possible, for example, by using HTTP partial GET requests that select the subset of a block corresponding to the required representation.
One refinement detailed herein is an enhanced segment map, the scalable segment map. The scalable segment map contains the locations of the different layers within the segment, so that the client can access the parts of the segments accordingly and extract the layers as appropriate. In another embodiment, the media data within a segment is ordered such that the quality of the segment increases progressively while data is downloaded from the beginning of the segment. In another embodiment, the progressive increase in quality is applied for each block or fragment contained within the segment, such that fragment requests can be made to address the scalable approach.
FIG. 12 is a diagram showing an aspect of scalable blocks. In that figure, a transmitter 1200 outputs metadata 1202, scalable layer 1 (1204), scalable layer 2 (1206) and scalable layer 3 (1208), the latter being delayed. A receiver 1210 can then use the metadata 1202, scalable layer 1 (1204) and scalable layer 2 (1206) to present a media presentation 1212.
Independent Scalability Layers
As described above, it is undesirable for a block-request streaming system to have to stall when the receiver cannot receive, in time for its playout, the requested blocks of the specific representation of the media data, since this often creates a poor user experience. Stalls can be avoided, reduced or mitigated by restricting the data rate of the selected representations to be much less than the available bandwidth, so that it becomes very unlikely that any given portion of the presentation is not received in time; however, this strategy has the disadvantage that the media quality is necessarily much lower than what could in principle be supported by the available bandwidth. A presentation of lower quality than is possible can also be interpreted as a poor user experience. Thus, the designer of a block-request streaming system faces a choice in the design of the client procedures, client programming or hardware configuration: either request a version of the content with a data rate much lower than the available bandwidth, in which case the user may suffer poor media quality, or request a version of the content with a data rate close to the available bandwidth, in which case the user may suffer a high probability of pauses during the presentation as the available bandwidth changes.
To handle such situations, the block-streaming systems described herein can be configured to handle multiple scalability layers independently, so that the receiver can make layered requests and the transmitter can respond to layered requests.
In such embodiments, the encoded media data for each block can be partitioned into multiple disjoint pieces, referred to herein as 'layers', such that the combination of the layers comprises the whole media data for the block, and such that a client that has received certain subsets of the layers can perform decoding and present a representation of the content. In this approach, the ordering of the data in the stream is such that contiguous ranges are progressively increasing in quality, and the metadata reflects this.
An example of a technique that can be used to generate layers with the above property is the technique of scalable video coding, ITU-T standard H.264/SVC. Another example of a technique that can be used to generate layers with the above property is the technique of temporal scalability layers provided in the ITU-T standard H.264/AVC.
In these embodiments, metadata can be provided in the MPD or in the segment itself that permits the construction of requests for the individual layers of any given block and/or combinations of the layers and/or a given layer of multiple blocks and/or a combination of layers from multiple blocks. For example, the layers comprising a block may be stored within a single file, and metadata can be provided specifying the byte ranges within the file corresponding to the individual layers. A file download protocol capable of specifying byte ranges, for example HTTP 1.1, can then be used to request an individual layer or several layers. Furthermore, as will be apparent to one of skill in the art upon reviewing this disclosure, the techniques described above concerning the construction, requesting and downloading of blocks of variable size and variable combinations of blocks can be applied in this context as well.
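As one hedged illustration of requesting several layers with a single byte-range request, assuming that the layers of a block are stored contiguously in one file and that the metadata supplies a (first_byte, last_byte) pair per layer; the URL and range layout are assumptions:

    import urllib.request

    def fetch_layers(url, layer_ranges, upto_layer):
        """Fetch layers 0..upto_layer of a block with one HTTP/1.1 range request."""
        first = layer_ranges[0][0]
        last = layer_ranges[upto_layer][1]
        request = urllib.request.Request(
            url, headers={"Range": f"bytes={first}-{last}"})
        with urllib.request.urlopen(request) as response:
            return response.read()   # server should answer 206 Partial Content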
Several embodiments are now described which can advantageously be employed by a block-request streaming client to achieve, through the combination of media data organized into layers as described above, an improved user experience and/or reduced demands on the capacity of the serving infrastructure compared to existing techniques.
In a first embodiment, the known techniques of a block-request streaming system can be applied with the modification that different versions of the content are, in some cases, replaced by different combinations of the layers. That is, where an existing system might provide two distinct representations of the content, the enhanced system described here can provide two layers, where one representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the first layer in the enhanced system, and the second representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the combination of the two layers in the enhanced system. As a result, the storage capacity required within the enhanced system is reduced compared to that required in the existing system. Furthermore, whereas clients of the existing system can issue requests for blocks of one representation or the other, clients of the enhanced system can issue requests for either the first layer or both layers of a block. As a result, the user experience in the two systems is similar. Additionally, improved caching is provided, since common segments are used which are then cached with higher likelihood, even for different qualities.
In a second embodiment, a client in an enhanced block-request streaming system employing the method of layers just described can maintain a separate data buffer for each of the several layers of the media encoding. As will be clear to those of skill in the art of data management within client devices, these 'separate' buffers can be implemented by the allocation of physically or logically separate memory regions to the separate buffers, or by other techniques in which the buffered data is stored in one or several memory regions and the separation of the data of the different layers is achieved logically, through the use of data structures that contain references to the storage locations of the data of the separate layers; hence, in what follows, the term 'separate buffers' should be understood to include any method in which the data of the distinct layers can be separately identified.
The client issues requests for the individual layers of each block based on the occupancy of each buffer; for example, the layers can be assigned an order of priority such that a request for data from one layer may not be issued if the buffer occupancy for any lower layer in the priority order is below a threshold for that lower layer. In this method, priority is given to receiving data for the lower layers, such that if the available bandwidth falls below that required to also receive the higher layers, then only the lower layers are requested. Furthermore, the thresholds associated with the different layers can differ, such that, for example, the lower layers have higher thresholds. In the case that the available bandwidth changes such that the data for a higher layer cannot be received before the playout time of the block, then the data for the lower layers will necessarily already have been received, and so the presentation can continue with the lower layers alone. Thresholds for buffer occupancy can be defined in terms of bytes of data, playout duration of the data contained in the buffer, number of blocks, or any other suitable measure.
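A minimal sketch of this priority rule; the policy of refilling the highest-priority deficient layer first is one plausible reading, and the names are assumptions:

    def next_layer_to_request(buffer_levels, thresholds):
        """buffer_levels[i] and thresholds[i] refer to layer i, with layer 0
        the lowest (highest-priority) layer; units may be bytes, seconds of
        playout, or blocks, as long as they are consistent."""
        for layer, (level, threshold) in enumerate(zip(buffer_levels, thresholds)):
            if level < threshold:
                return layer           # a lower layer is deficient: refill it first
        return len(buffer_levels) - 1  # all thresholds met: top layer may be requested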
In a third embodiment, the methods of the first and second embodiments can be combined, such that multiple media representations are provided, each comprising a subset of the layers (as in the first embodiment), with the second embodiment applied to the subset of the layers within a representation.
In a fourth embodiment, the methods of the first, second and/or third embodiments can be combined with embodiments in which multiple independent representations of the content are provided, such that, for example, at least one of the independent representations comprises multiple layers to which the techniques of the first, second and/or third embodiments apply.
Extended Buffer Manager
In combination with buffer monitor 126 (see FIG. 2), an extended buffer manager can be used to optimize the client-side buffer. Block-request streaming systems want media playout to begin quickly and to continue smoothly, while simultaneously providing the maximum media quality to the user or destination device. This can require that the client request blocks which have the highest media quality but which can also begin quickly and be received in time thereafter, so that playout is not forced to pause.
In embodiments that use the extended buffer manager, the manager determines which blocks of media data to request and when to make those requests. The extended buffer manager can, for example, be provided with a set of metadata for the content to be presented, this metadata comprising a list of the representations available for the content and the metadata for each representation. The metadata for a representation can comprise information about the data rate of the representation and other parameters, such as video, audio or other codecs and codec parameters, video resolution, decoding complexity, audio language, and any other parameters that may influence the choice of representation at the client.
The metadata for a representation can also comprise identifiers for the blocks into which the representation has been segmented; these identifiers provide the information needed by the client to request a block. For example, where the request protocol is HTTP, the identifier can be an HTTP URL, possibly together with additional information identifying a byte range or a time span within the file identified by the URL, with this byte range or time span identifying the specific block within the file identified by the URL.
In a specific implementation, the extended buffer manager decides when the receiver should make a request for a new block, and can itself handle the sending of the requests. In a novel aspect, the extended buffer manager requests new blocks according to the value of a balancing ratio that balances between using too much bandwidth and running out of media during a streaming playout.
The information received by buffer monitor 126 from block buffer 125 can include indications of each event in which media data is received, how much has been received, when playout of media data has started or stopped, and the speed of the media playout. Based on this information, buffer monitor 126 can calculate a variable representing the current buffer size, Bcurrent. In these examples, Bcurrent represents the amount of media contained in the buffer or buffers of the client or other device, and can be measured in units of time, such that Bcurrent represents the amount of time it would take to play out all of the media represented by the blocks or partial blocks stored in the buffer or buffers if no further blocks or partial blocks were received. Thus, Bcurrent represents the 'playout duration', at normal playout speed, of the media data that is available at the client but has not yet been played.
As time passes, the value of Bcurrent decreases as media is played out and can increase each time new data for a block is received. Note that, for the purpose of this explanation, it is assumed that a block is received when the entirety of the data of that block is available at block requestor 124, but other measures can be used instead, for example to take into account the reception of partial blocks. In practice, the reception of a block can take place over a period of time.
FIG. 13 illustrates the evolution of the value of Bcurrent over time, as media is played out and blocks are received. As shown in FIG. 13, the value of Bcurrent is zero for times before t0, indicating that no data has been received. At t0, the first block is received and the value of Bcurrent increases until it is equal to the playout duration of the received block. At this point, playout has not yet started, and so the value of Bcurrent remains constant until time t1, at which point a second block arrives and Bcurrent increases by the size of this second block. At this point, playout begins and the value of Bcurrent begins to decrease linearly until time t2, at which point a third block arrives.
The evolution of Bcurrent continues in this 'sawtooth' manner, increasing step-wise each time a block is received (at times t2, t3, t4, t5 and t6) and decreasing smoothly as data is played out in between. Note that, in this example, playout proceeds at the normal playout speed for the content, and so the slope of the curve between block receptions is exactly -1, meaning that one second of media data is played for every one second of real time that elapses. With frame-based media played out at a given number of frames per second, for example 24 frames per second, the slope of -1 is approximated by small step functions indicating the playout of the individual frames of data, e.g., steps of -1/24 of a second as each frame is played out.
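The bookkeeping described above might be sketched as follows (names and structure are assumptions; a real client would also account for partial blocks and non-normal playout speeds):

    import time

    class BufferMonitor:
        """Track Bcurrent (seconds of unplayed media) and compute Bratio."""
        def __init__(self):
            self.b_current = 0.0
            self.playing = False
            self._last_tick = time.monotonic()
            self._received = []          # (wall_time, playout_duration) pairs

        def _advance(self):
            now = time.monotonic()
            if self.playing:             # playout drains the buffer linearly
                self.b_current = max(0.0, self.b_current - (now - self._last_tick))
            self._last_tick = now
            return now

        def on_block_received(self, playout_duration):
            now = self._advance()
            self.b_current += playout_duration
            self._received.append((now, playout_duration))

        def b_ratio(self, t):
            """Treceived / (Tnow - t), with Treceived measured in playout time."""
            now = self._advance()
            t_received = sum(d for (w, d) in self._received if w >= t)
            return t_received / (now - t) if now > t else 0.0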
FIG. 14 shows another example of the evolution of Bcurrent over time. In that example, the first block arrives at t0 and playout begins immediately. Block arrival and playout continue until t3, at which point the value of Bcurrent reaches zero. When that happens, no further media data is available for playout, forcing a pause of the media presentation. At time t4, a fourth block is received and playout can resume. This example therefore illustrates a case in which the reception of the fourth block was later than desired, resulting in a pause of playout and hence a poor user experience. Thus, a goal of the extended buffer manager and the other features is to reduce the probability of this event, while simultaneously maintaining a high media quality.
Buffer monitor 126 can also calculate another metric, Bratio(t), which is the ratio of the media received during a given period of time to the length of that period. More precisely, Bratio(t) is equal to Treceived / (Tnow - t), where Treceived is the amount of media (measured by its playout time) received in the period from t, some time earlier than the current time, until the current time Tnow.
Bratio(t) can be used to measure the rate of change of Bcurrent. Bratio(t) = 0 is the case in which no data has been received since time t; Bcurrent will have decreased by (Tnow - t) since that time, assuming that media is being played out. Bratio(t) = 1 is the case in which the same amount of media is received as is played out over the time (Tnow - t); Bcurrent will then have the same value at time Tnow as at time t. Bratio(t) > 1 is the case in which more data has been received than is needed for playout over the time (Tnow - t); Bcurrent will then have increased from time t to time Tnow.
Buffer monitor 126 additionally calculates a value State, which can take one of a discrete number of values. Buffer monitor 126 is further equipped with a function NewState(Bcurrent, Bratio) which, given the current value of Bcurrent and values of Bratio for t < Tnow, provides a new State value as output. Whenever Bcurrent and Bratio cause this function to return a value different from the current value of State, the new value is assigned to State and this new State value is indicated to block selector 123.
The function NewState can be evaluated with reference to the space of all possible values of the pair (Bcurrent, Bratio(Tnow - Tx)), where Tx can be a fixed (configured) value, or can be derived from Bcurrent, for example by a configuration table mapping values of Bcurrent to values of Tx, or can depend on the previous value of State. Buffer monitor 126 is provided with one or more partitionings of this space, where each partitioning comprises a set of disjoint regions, each region being annotated with a State value. The evaluation of the function NewState then comprises identifying a partitioning and determining the region into which the pair (Bcurrent, Bratio(Tnow - Tx)) falls. The return value is then the annotation associated with that region. In the simple case, only one partitioning is provided. In more complex cases, the partitioning can depend on the pair (Bcurrent, Bratio(Tnow - Tx)) at earlier evaluations of the NewState function, or on other factors.
In one specific embodiment, the partitioning described above can be based on a configuration table containing a number of threshold values for Bcurrent and a number of threshold values for Bratio. Specifically, let the threshold values for Bcurrent be Bthresh(0) = 0, Bthresh(1), ..., Bthresh(n1), Bthresh(n1+1) = ∞, where n1 is the number of non-zero threshold values for Bcurrent.
Let the threshold values for Bratio be Br-thresh(0) = 0, Br-thresh(1), ..., Br-thresh(n2), Br-thresh(n2+1) = ∞, where n2 is the number of threshold values for Bratio. These threshold values define a partitioning comprising a grid of (n1+1) x (n2+1) cells, where the i-th cell of the j-th row corresponds to the region in which Bthresh(i-1) ≤ Bcurrent < Bthresh(i) and Br-thresh(j-1) ≤ Bratio < Br-thresh(j). Each cell of the grid described above is annotated with a State value, for example by being associated with a particular value stored in memory, and the function NewState then returns the State value associated with the cell indicated by the values Bcurrent and Bratio(Tnow - Tx).
In a further embodiment, a hysteresis value can be associated with each threshold value. In this enhanced method, the evaluation of the function NewState can be based on a temporary partitioning, constructed using a set of temporarily modified threshold values, as follows. For each Bcurrent threshold value that is less than the Bcurrent range corresponding to the cell chosen at the last evaluation of NewState, the threshold is reduced by subtracting the hysteresis value associated with that threshold. For each Bcurrent threshold value that is greater than the Bcurrent range corresponding to the cell chosen at the last evaluation of NewState, the threshold is increased by adding the hysteresis value associated with that threshold. For each Bratio threshold value that is less than the Bratio range corresponding to the cell chosen at the last evaluation of NewState, the threshold is reduced by subtracting the hysteresis value associated with that threshold. For each Bratio threshold value that is greater than the Bratio range corresponding to the cell chosen at the last evaluation of NewState, the threshold is increased by adding the hysteresis value associated with that threshold. The modified threshold values are used to evaluate the value of NewState, and the thresholds are then returned to their original values.
Other ways of defining a partitioning of the space will be apparent to those of skill in the art upon reading this disclosure. For example, a partitioning can be defined using inequalities based on linear combinations of Bratio and Bcurrent, e.g., linear inequality thresholds of the form α1·Bratio + α2·Bcurrent ≤ α0, for given real values α0, α1 and α2, each defining a half-space within the overall space, with each disjoint region defined as the intersection of a number of such half-spaces.
The description above is illustrative of the basic process. As will be clear to those of skill in the art of real-time programming upon reading this disclosure, efficient implementations are possible. For example, each time new information is provided to buffer monitor 126, it is possible to calculate the future time at which NewState would transition to a new value if, for example, no further data for blocks were received. A timer is then set for this time, and, absent further input, expiry of the timer causes the new State value to be sent to block selector 123. As a result, the calculations need be performed only when new information is provided to buffer monitor 126 or when a timer expires, rather than continuously.
Suitable values of State might be 'Low', 'Stable' and 'Full'. An example of a suitable set of threshold values, with the resulting grid of cells, is shown in FIG. 15.
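A sketch of the grid-based NewState evaluation, omitting the hysteresis refinement for brevity; the threshold values and cell annotations below are illustrative configuration, not the values of FIG. 15:

    import bisect

    def new_state(b_current, b_ratio, b_thresh, br_thresh, annotations):
        """b_thresh / br_thresh are the finite interior thresholds in
        ascending order; annotations[j][i] is the State for the cell with
        Bthresh(i-1) <= Bcurrent < Bthresh(i) and
        Br-thresh(j-1) <= Bratio < Br-thresh(j)."""
        i = bisect.bisect_right(b_thresh, b_current)   # Bcurrent cell index
        j = bisect.bisect_right(br_thresh, b_ratio)    # Bratio cell index
        return annotations[j][i]

    b_thresh = [10_000, 30_000]       # Bcurrent thresholds, in milliseconds
    br_thresh = [0.9, 1.1]            # Bratio thresholds
    annotations = [["Low", "Low",    "Stable"],    # Bratio < 0.9
                   ["Low", "Stable", "Stable"],    # 0.9 <= Bratio < 1.1
                   ["Low", "Stable", "Full"]]      # Bratio >= 1.1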
In FIG. 15, the Bcurrent thresholds are shown on the horizontal axis in milliseconds, with the hysteresis values shown below them as '+/- value'. The Bratio thresholds are shown on the vertical axis in units of permille (i.e., multiplied by 1000), with the hysteresis values shown below them as '+/- value'. The State values are annotated in the grid cells as 'L', 'S' and 'F', representing 'Low', 'Stable' and 'Full' respectively.
Block selector 123 receives a notification from block requestor 124 whenever there is an opportunity to request a new block. As described above, block selector 123 is provided with information about the plurality of available blocks and metadata for those blocks, including, for example, information about the media data rate of each block.
Information about the media data rate of a block can comprise the actual media data rate of the specific block (i.e., the block size in bytes divided by the playout time in seconds), the average media data rate of the representation to which the block belongs, a measure of the available bandwidth required, on a sustained basis, to play out the representation to which the block belongs without pauses, or a combination of the above.
Block selector 123 selects blocks based on the State value most recently indicated by buffer monitor 126. When this State value is 'Stable', block selector 123 selects a block from the same representation as the previously selected block. The block selected is the first block (in playout order) containing media data for a period of time in the presentation for which no media data has previously been requested.
When the State value is 'Low', block selector 123 selects a block from a representation with a lower media data rate than that of the previously selected block. A number of factors can influence the exact choice of representation in this case. For example, block selector 123 can be provided with an indication of the aggregate rate of incoming data, and can choose a representation with a media data rate less than that value.
When the State value is 'Full', block selector 123 selects a block from a representation with a higher media data rate than that of the previously selected block. A number of factors can influence the exact choice of representation in this case. For example, block selector 123 can be provided with an indication of the aggregate rate of incoming data, and can choose a representation with a media data rate not greater than that value.
A number of additional factors can further influence the operation of block selector 123. In particular, the frequency with which the media data rate of the selected blocks is increased can be limited, even if buffer monitor 126 continues to indicate the 'Full' state. Further, it is possible that block selector 123 receives a 'Full' indication but no blocks of higher media data rate are available (for example because the most recently selected block was already of the highest available media data rate). In this case, block selector 123 can delay the selection of the next block by a time chosen such that the overall amount of media data buffered in block buffer 125 is bounded above.
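The State-driven selection rules might be sketched as follows, identifying representations by their media data rates; the tie-breaking heuristics are assumptions:

    def select_representation(state, rates, current_rate, incoming_rate):
        """rates: the available representations' media data rates, ascending."""
        if state == "Stable":
            return current_rate              # stay on the same representation
        if state == "Low":
            lower = [r for r in rates if r < current_rate and r <= incoming_rate]
            return max(lower) if lower else min(rates)
        if state == "Full":
            higher = [r for r in rates if r > current_rate and r <= incoming_rate]
            return min(higher) if higher else current_rate
        raise ValueError("unknown State: %r" % state)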
Additional factors may influence the set of blocks considered during the selection process. For example, the available blocks can be limited to those from representations whose encoding resolution falls within a specific range provided to block selector 123.
Block selector 123 may also receive input from other components monitoring other aspects of the system, such as the availability of computational resources for media decoding. If such resources become scarce, block selector 123 can choose blocks whose decoding is indicated within the metadata to be of lower computational complexity (for example, representations with lower resolution or frame rate generally have lower decoding complexity).
The embodiment described above brings a substantial advantage, in that using the value Bratio in the evaluation of the NewState function within buffer monitor 126 permits a faster increase in quality at the beginning of a presentation compared to a method that considers only Bcurrent. Without considering Bratio, a large amount of buffered data might accumulate before the system could select blocks with a higher media data rate, and hence higher quality. However, when the Bratio value is large, this indicates that the available bandwidth is much higher than the media data rate of the previously received blocks, and that even with relatively little buffered data (i.e., a low value for Bcurrent), it remains safe to request blocks of higher media data rate, and hence higher quality. Equally, if the Bratio value is low (for example, <1), this indicates that the available bandwidth has dropped below the media data rate of the previously requested blocks, and thus, even if Bcurrent is high, the system will switch to a lower media data rate, and hence lower quality, for example to avoid reaching the point, Bcurrent = 0, at which the playout of the media stalls. This improved behavior can be especially important in environments in which network conditions, and thus delivery rates, can vary rapidly and dynamically, for example users streaming to mobile devices.
Another advantage is conferred by the use of configuration data to specify the partitioning of the space of (Bcurrent, Bratio) values. Such configuration data can be provided to buffer monitor 126 as part of the presentation metadata or by other dynamic means. Since the behavior of user network connections can be highly variable both between users and over time for a single user, it can be difficult to predict partitionings that will work well for all users in practical deployments. The ability to provide such configuration information to users dynamically makes it possible to develop good configuration settings over time according to accumulated experience.
Variable Request Sizing
A high frequency of requests can be required if each request is for a single block and each block encodes a short media segment. If the media blocks are short, the video playout moves rapidly from block to block, which gives the receiver more frequent opportunities to adjust or change its selected data rate by changing representations, improving the probability that playout can continue without stalling.
However, a drawback of a high frequency of requests is that they may not be sustainable on certain networks in which the available bandwidth from the client network to the server network is constrained, for example in wireless WAN networks such as 3G and 4G wireless WANs, where the capacity of the data link from the client to the network is limited, or can become limited for short or long periods of time due to changes in radio conditions.
A high frequency of requests also implies a high load on the serving infrastructure, which brings associated costs in terms of capacity requirements. Thus, it would be desirable to have some of the benefits of a high frequency of requests without all of the drawbacks.
In some embodiments of a block-streaming system, the flexibility of a high request frequency is combined with less frequent requests. In these embodiments, blocks can be constructed as described above and aggregated into segments containing multiple blocks, also as described above. At the start of the presentation, the processes described above, in which each request references a single block, or in which multiple concurrent requests each request parts of a block, are applied to ensure a fast channel zapping time, and hence a good user experience, at the start of the presentation. Subsequently, when certain conditions, described below, are met, the client can issue requests that each encompass multiple blocks in a single request. This is possible because the blocks have been aggregated into larger files or segments and can be requested using byte or time ranges. Consecutive byte or time ranges can be aggregated into a single larger byte or time range, so that a single request can specify multiple blocks, and even discontinuous blocks can be requested in one request.
One basic configuration can be driven by determining whether to request a single block (or a partial block) or multiple consecutive blocks on the basis of whether the requested blocks are likely to be played out. For example, if it is likely that a change to another representation will soon be needed, then it is better for the client to issue requests for single blocks, i.e., for small amounts of media data. One reason for this is that, if a request for multiple blocks is issued while a switch to another representation may be imminent, the switch might be made before the last blocks of the request have been played out. Thus, the download of these last blocks could delay the delivery of the media data of the representation to which the switch is made, which could cause media playout to stall.
However, requests for single blocks do lead to a higher request frequency. On the other hand, if it is not likely that a change to another representation will soon be required, then it can be preferable to issue requests for multiple blocks, since all of these blocks are likely to be played out, and this results in a lower request frequency, which can substantially reduce the request overhead, particularly when changes of representation are typically not imminent.
In conventional block aggregation systems, the quantity requested in each request is not dynamically adjusted; that is, typically each request covers an entire file, or each request covers approximately the same amount of a representation's file (sometimes measured in time, sometimes in bytes). Thus, if each request is smaller, then the request overhead is high,
while if each request is larger, this increases the chance of media stall events and/or, when network conditions change, of a lower quality representation being selected in order to avoid having to change the representation quickly, resulting in lower quality media playout.
An example of a condition which, when met, can cause subsequent requests to reference multiple blocks, is a threshold on the buffer size Bcurrent. If Bcurrent is below the threshold, then each request issued references a single block. If Bcurrent is greater than or equal to the threshold, then each request issued references multiple blocks. When a request referencing multiple blocks is issued, the number of blocks requested in each single request can be determined in one of several possible ways. For example, the number can be constant, for example two. Alternatively, the number of blocks requested in a single request can depend on the buffer state, and in particular on Bcurrent. For example, a number of thresholds can be set, with the number of blocks requested in a single request derived from the highest of these thresholds that is less than Bcurrent.
Another example of a condition which, when met, can cause requests to reference multiple blocks, is the variable State value described above. For example, when State is 'Stable' or 'Full', requests can be issued for multiple blocks, whereas when State is 'Low', all requests can be for a single block.
Another embodiment is illustrated by the flow of steps 1300 through 1340. In this embodiment, when the next request is to be issued (determined in step 1300), the current State value and Bcurrent are used to determine the size of the next request. If the current State value is 'Low', or the current State value is 'Full' and the current representation is not the highest available (determined in step 1310, answer 'Yes'), then the next request is chosen to be short, e.g., for the next block only (block determined and request made in step 1320). The rationale behind this is that these are the conditions under which a change of representation is likely to happen very soon. If the current State value is 'Stable', or the current State value is 'Full' and the current representation is the highest available (determined in step 1310, answer 'No'), then the duration of consecutive blocks requested in the next request is chosen to be proportional to a fraction α of Bcurrent, for some fixed α < 1 (blocks determined in step 1330, request made in step 1340); for example, for α = 0.4, if Bcurrent = 5 seconds then the next request can cover about 2 seconds of blocks, while if Bcurrent = 10 seconds then the next request can cover about 4 seconds of blocks. One rationale for this is that, under these conditions, a switch to a new representation is unlikely to occur within an amount of time proportional to Bcurrent.
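A sketch of this request-sizing policy, using the α = 0.4 worked example from the text; the function signature is an illustrative assumption:

    def next_request_duration(state, b_current, on_highest_representation,
                              block_duration, alpha=0.4):
        """Return how many seconds of media the next request should cover."""
        switch_likely = (state == "Low" or
                         (state == "Full" and not on_highest_representation))
        if switch_likely:
            return block_duration              # request only the next block
        return max(block_duration, alpha * b_current)

    # With alpha = 0.4: Bcurrent = 5 s  -> request about 2 s of blocks;
    #                   Bcurrent = 10 s -> request about 4 s of blocks.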
However, some file transfer protocols, such as HTTP, do not provide a mechanism for canceling requests other than closing the transport layer connection completely, so that when a new connection is established instead of the old connection Start-up disadvantages occur. The issued request needs to be canceled if the available bandwidth has changed and it has been decided that a different media data rate is required instead, ie there is a decision to switch to a different representation. There is one thing. Another reason to cancel an issued request is when the user has requested that the media presentation be terminated and that a new presentation be initiated (possibly of the same content item or possibly a new content item at a different point in the presentation) be able to.As is known, the disadvantages of connection startup can be avoided by keeping the connection open and reusing the same connection for subsequent requests, as also known by multiple requests. A connection can be kept fully utilized if it is simultaneously issued on the same connection (a technique called "pipelining" in the context of HTTP). However, at the same time, or more generally, the drawback of issuing multiple requests in such a way that multiple requests are issued before the previous request is completed through the connection is that the connection carries responses to those requests. It is necessary to close the connection if it becomes necessary to cancel the request that has already been issued and is no longer desired if it becomes desirable to change the request that should have been issued. it can.The probability of needing to cancel an issued request is large when the time interval between issuing the request and the playout time of the requested block is large (the available bandwidth may change during that interval It may depend in part on the duration of this time interval in the sense that there is also a high probability that the request issued should be canceled due to some).As is known, some file download protocols have the property that a single underlying transport layer connection can be advantageously used for multiple download requests. For example, HTTP has this property because re-use of a single connection for multiple requests avoids the "connection launch penalty" described above for requests other than the first. However, one drawback of this approach is that the connection is put in to transfer the requested data in each issued request, thus closing the connection if the request or requests need to be canceled. May suffer from connection startup penalties when a substitute connection is established, or may wait for the client to receive data that is no longer needed, causing a delay in receiving subsequent data It is.We now describe an embodiment that retains the benefits of reusing connections without suffering this drawback and further improves the frequency with which connections can be reused.Embodiments of the block-streaming system described herein are configured to reuse connections for multiple requests without having to put the connection into a particular set of requests at startup. Basically, when a request already issued on an existing connection is not yet completed but is close to completion, a new request is issued on that connection. 
One reason not to wait for existing requests to complete is that the connection speed may degrade if the previous requests are completed, ie, the underlying TCP session may go to sleep Or the TCP cwnd variable may be substantially reduced, thereby substantially reducing the initial download rate of new requests issued on that connection. One reason to wait close to completion before issuing an additional request is that if a new request is issued long before the previous request completes, the new request will start for some substantial period of time It is not even possible, during this period before the new issued request starts, it may be that the decision to make the new request will no longer be valid, for example due to the presentation switching decision. Thus, client embodiments implementing this technique will issue new requests on the connection as late as possible without reducing the download capability of the connection.The method comprises monitoring the number of bytes received on the connection in response to a last request issued on the connection, and testing on the number. This can be done by configuring the receiver (or transmitter, if applicable) to perform monitoring and testing.If the test passes, you can issue further requests on the connection. An example of a suitable test is if the number of bytes received is larger than a fixed part of the size of the requested data. For example, this portion can be 80%. Another example of a suitable test is based on the following calculations, as illustrated in FIG. In the calculation, the estimated value of the connection data rate is R, and the estimated value of Round Trip Time ("RTT") is T, for example, set to a value between 0.5 and 2 Let X be a numerical coefficient that can be a constant, where the estimates of R and T are updated periodically (updated in step 1410). Let S be the size of the data requested in the last request, and B be the number of bytes of the requested data received (calculated in step 1420).One appropriate test is to have the receiver (or transmitter, if applicable) execute a routine to evaluate the inequality (S−B) <X · R · T (test at step 1430), “Yes If it is, it is to make the action happen. For example, a test can be performed to see if there are other requests ready on the connection (step 1440), and if yes, issue the request on the connection (Step 1450), if no, the process returns to step 1410 to continue updating and testing. If the result of the test in step 1430 is "No", the process returns to step 1410 to continue the update and test.The inequality test at step 1430 (performed by an appropriately programmed element, for example) is that the amount of remaining data to be received can be received at the current estimated reception rate within one RTT. Causes each subsequent request to be issued equal to X times the amount of. Several methods for estimating data rate R in step 1410 are known in the art. For example, the data rate may be estimated as Dt / t, where Dt is the number of bits received in the preceding t seconds, where t is, for example, 1 second or 0.5 It can be seconds or some other interval. Another method is an exponentially weighted average of incoming data rates, or a first-order finite impulse response (IIR) filter. 
Several methods for estimating RTT, T at step 1410 are known in the art.The test at step 1430 can be applied to the set of all active connections on the interface, as described in further detail below.The method further comprises constructing a candidate request list that associates each candidate request with an appropriate set of servers to which the request can be made, and setting the order of the candidate request list in priority order. Prepare. Several entries on the candidate request list may have the same priority. The servers on the list of appropriate servers associated with each candidate request are identified by host name. Each host name corresponds to a set of internet protocol addresses that can be obtained from the domain name system as is well known. Thus, each possible request on the candidate request list is a union of the set of internet protocol addresses, specifically the set of internet protocol addresses associated with the host name associated with the server associated with the candidate request. Associated with Whenever the test described in step 1430 passes for a connection, and no new requests have been issued on that connection, the highest destination on the list of candidate requests with which the connection's destination Internet Protocol address is associated A priority request is selected and this request is issued on the connection. The request is also deleted from the candidate request list.Candidate requests can be deleted (cancelled) from the candidate request list, and new requests can be added to the candidate list with higher priority than existing requests on the candidate list, and existing requests on the candidate list Requests can change their priority. The dynamic nature of the requirement being on the candidate requirement list, and the dynamic nature of their priority on the candidate list depend on when the type of test described in step 1430 passes. You can change whether you can issue a request next.For example, if the answer to the test described in step 1430 is "Yes" at some time t, then the next request issued will be request A, while the time to an answer to the test described in step 1430 will be If not yes until t '> t, then request A is removed from the candidate request list between times t and t', or request B is required between times t and t ' Due to being added to the candidate request list with higher priority than A, or request B is present on the candidate list at time t, but has lower priority than request A, The next request issued due to the priority of request B being higher than that of request A between time t and time t 'becomes request B instead.FIG. 18 shows an example of a list of requests on the candidate request list. In this example, there are three connections, and there are six requests labeled A, B, C, D, E and F on the candidate list. Each of the requests on the candidate list can be issued in a subset of connections according to the instructions, for example, request A can be issued on connection 1, while request F can be issued on connection 2 or connection 3. The priority of each request is also labeled in FIG. 18, with lower priority values indicating that the request is higher priority. Thus, requests A and B with priority 0 are the highest priority requests, while request F with a priority value of 3 has the lowest priority of the requests on the candidate list is there.At time t, if connection 1 passes the test described in step 1430, either request A or request B is issued at connection 1. 
Alternatively, if connection 3 passes the test described in step 1430 at this time t, then request D is issued at connection 3 because request D is the request with the highest priority that can be issued at connection 3. BeFor all connections, the answer to the test described in step 1430 from time t to some subsequent time t ′ is “No”, and between times t and t ′, request A has its priority of 0 Change from 5 to 5, assume that request B is removed from the candidate list and a new request G with priority 0 is added to the candidate list. Now, at time t ', the new candidate list is as shown in FIG.At time t ′, if connection 1 passes the test described in step 1430, then request C with priority 4 is the highest priority request on the candidate list that can be issued on connection 1 at this time. As it is, it is issued at connection 1.In this same situation, instead request A will be issued on connection 1 at time t (one of the two highest priority options for connection 1 at time t as shown in FIG. 18) It is assumed that Since the answer to the test described in step 1430 from time t to some subsequent time t ′ is “No” for all connections, connection 1 is still due to requests issued before time t. It will be delivering data at least until time t ', so request A will not have started at least until time t'. Since request C starts at the same time that request A after t 'would have started, and by that time request C has a higher priority than request A, so time t' Issuing a demand C at is a better decision than having had a demand A at time t.As another alternative, if a test of the type described in step 1430 is applied to the set of active connections, then the internet protocol address has the same priority as the first request or the first request on the candidate request list. A connection can be selected that has a destination associated with another request having a degree.Several methods are possible for the construction of the candidate request list. For example, the candidate list may include n requests representing requests for the next n parts of the data of the present presentation in chronological order, with the request for data in the earliest part having the highest priority And the last part of the data request has the lowest priority. In some cases, n can be one. The value of n can depend on the buffer size Bcurrent, or other measures such as a State variable or client buffer occupancy. For example, several threshold values can be set for Bcurrent, and the value associated with each threshold, and thus the value of n, is taken as the value associated with the highest threshold less than Bcurrent.The embodiment described above guarantees flexible assignment of requests to connections (due to the destination IP address of the connection not being assigned to any of the host names associated with the requests) Ensure that priority is given to reusing connections even if the highest priority request is not appropriate for the existing connection. The reliance on Bcurrent or State or other measures of client buffer occupancy can be used when the client urgently needs to issue and complete a request associated with the next portion of data to be played out in a time series. Ensure that no out-of-priority requests are issued.These methods can be advantageously combined with cooperative HTTP and FEC.Consistent Server Selection As is well known, files downloaded using a file download protocol are commonly identified by an identifier comprising a host name and a file name. 
For example, this is the case for the HTTP protocol, in which case the identifier is a Uniform Resource Identifier (URI). The host name can correspond to multiple hosts identified by the internet protocol address. For example, this is a common way of distributing the load of requests from multiple clients across multiple physical machines. In particular, this approach is commonly adopted by the Content Delivery Network (CDN). In this case, requests made on connections to any of the physical hosts are expected to succeed. Several methods are known that allow the client to select among the internet protocol addresses associated with the host name. For example, these addresses are typically provided to clients via the domain name system and are provided according to priority. The client can select the highest priority (first) internet protocol address. However, in general, there is no coordination between clients as to how this selection is made, so that different clients can request the same file from different servers. As a result, the same file may be stored in the caches of multiple nearby servers, which reduces the efficiency of the cache infrastructure.This can be handled by a system that advantageously increases the probability that two clients requesting the same block request this block from the same server. The novel method described herein is shaped such that different clients presented the same or similar choices of internet protocol address and file identifier make the same choice in a manner determined by the identifier of the file to be requested And selecting from among the available internet protocol addresses.A first embodiment of the method is described with reference to FIG. The client, as shown in step 1710, receives the internet protocol addresses IP1, IP2,. . . , Obtain the set of IPn first. If the determination at step 1720 indicates that there is a file for which a request is to be made, then the client determines the internet protocol address for making a request for the file, as determined at steps 1730 through 1770. Taking into account the set of internet protocol addresses and the identifier for the requested file, the method comprises arranging the order of internet protocol addresses in a manner determined by the file identifiers. For example, as shown in step 1730, for each internet protocol address, a byte string comprising the concatenation of internet protocol address and file identifier is constructed. As shown in step 1740, a hash function is applied to this byte string, and the resulting hash value is, as shown in step 1750, with fixed ordering, including ordering of Internet Protocol addresses. For example, they are arranged in ascending order of numbers. The same hash function can be used by all clients, thereby ensuring that all clients produce the same result by the hash function for a given input. The hash function may be statically configured in all clients in the set of clients, or all clients in the set of clients may have their hash function when they get a list of internet protocol addresses. A partial or complete description can be obtained, or all clients in the set of clients obtain a partial or complete description of the hash function when those clients obtain the file identifier Or the hash function can be determined by other means. 
As shown in steps 1760 and 1770, the Internet Protocol address that is the first in this ordering is selected, and this address is used to establish a connection and to issue a request for all or part of the file.The above method can be applied when a new connection is established to request a file. It can also be applied when several established connections are available and one of them can be selected to issue a new request.Furthermore, when the established connection is available and it is possible to select a request from among a set of candidate requests having equal priority, for example by means of the same hash value method described above ordering on the candidate requests is The candidate request that is derived and first appears in this ordering is selected. Those methods again compute the hashes for each combination of connection and request, set the order of these hash values with fixed ordering, and first in the derived ordering for the set of request and connection combinations By selecting the combination that occurs in, it is possible to combine to select both the connection and the candidate request from among the connection and the set of requirements of equal priority.This method has advantages for the following reasons. That is, the typical approach taken by the block serving infrastructure, such as that shown in FIG. 1 (BS 101) or FIG. 2 (BS 1s 101), and in particular the common approach by the CDN, receives client requests Provide multiple caching proxy servers. The caching proxy server can not provide the required file in a given request, in which case the server typically forwards the request to another server, typically the requested file. Receive an included response from that server and forward the response to the client. The caching proxy server may also store (caching) the file so that it can respond immediately to subsequent requests for the requested file. The common approach described above has the property that the set of files stored on a given caching proxy server is largely determined by the set of requests that the caching proxy server is receiving.The method described above has the following advantages. If all clients in the set of clients are provided the same list of internet protocol addresses, then these clients use the same internet protocol address for all requests issued for the same file. If there are two different lists of internet protocol addresses, and each client is provided with one of these two lists, then the client should at most two for all the requests issued for the same file. Use different internet protocol addresses. Generally, if the list of internet protocol addresses provided to the client is similar, the client uses a small set of internet protocol addresses provided for all requests issued for the same file. Because nearby clients tend to be provided with a similar list of internet protocol addresses, nearby clients may make file requests to only a small fraction of the caching proxy servers available to those clients. Thus, only a small portion of the caching proxy server caching files will be present, which advantageously minimizes the amount of caching resources used to cache files.Preferably, for a given set of internet protocol addresses, the proportion of files in which the given one of the internet protocol addresses is the first in the sorted list generated by step 1750 is approximately for all internet protocol addresses in the list. 
In order to be identical, the hash function has the property that very small parts of different inputs are mapped to the same output, and different inputs are mapped to essentially random outputs. On the other hand, for a given input, it is important that the hash function is decision oriented, in the sense that the output of the hash function is the same for all clients.Other advantages of the method described above are as follows. Suppose that all clients in the set of clients are provided with the same list of internet protocol addresses. Due to the nature of the hash function just described, the different file requests from these clients are evenly distributed across the set of internet protocol addresses, which distributes those requests evenly across the caching proxy server Means Thus, the caching resources used to store these files are evenly distributed across the caching proxy server, and file requests are evenly distributed across the caching proxy server. Thus, the method provides both storage balancing and load balancing across the caching infrastructure.Several variations of the above-described approach are known to those skilled in the art, and in many cases these variations are a set of files stored in a given proxy that the caching proxy server is receiving requests for It retains the property of being at least partially determined by the set. In the common case where a given host name resolves to multiple physical caching proxy servers, it may be the case that all these servers will eventually store a copy of the given file that is frequently requested. It becomes common. The replication may be undesirable because the storage resources on the caching proxy server are limited, and as a result, files may be occasionally deleted (purged) from the cache. The novel method described herein directs the request for a given file to the caching proxy server in such a manner that this duplication is reduced, thereby reducing the need for deleting the file from the cache and thereby giving the given file Ensures that the likelihood of being present in the proxy cache (ie not purged from the proxy cache) is increased. When the file exists in the proxy cache, the response sent to the client is faster, which causes the requested file to arrive late and result in media playout pauses, thus a bad user experience. It has the advantage of reducing the probability of causing it. Furthermore, when the file is not present in the proxy cache, requests may be sent to other servers, which may impose additional load on both the serving infrastructure and the network connection between the servers. In many cases, the server to which the request is sent may be located far away, and returning a file from this server to the caching proxy server may result in transmission costs. Thus, the novel method described herein results in reducing these transmission costs.One particular concern in the case where the probabilistic file-wide request HTTP protocol is used with the range request is the operation of the cache server commonly used to provide scalability in the serving infrastructure. While it is common for HTTP cache servers to support HTTP range headers, the exact behavior of different HTTP cache servers varies from implementation to implementation. Most cache server implementations handle range requests from the cache when the file is available in the cache. 
A common implementation of the HTTP cache server always forwards downstream HTTP requests containing range headers to the upstream node (cache server or origin server) unless the cache server has a copy of the file. In some implementations, the upstream response to a range request is the entire file, the entire file is cached, and responses to downstream range requests are extracted from this file and sent. However, in at least one implementation, the upstream response to the range request is just the data bytes within the range request itself, and these data bytes are not cached and instead are sent as a response to the downstream range request Only. As a result, the use of the range header by the client may have the consequence that the file itself is never brought into the cache and the desired scalability properties of the network are lost.In the above, the operation of the caching proxy server has been described, and a method of requesting blocks from a file, which is a collection of blocks, has also been described. For example, this can be accomplished by the use of an HTTP Range Request Header. The request is in the following called "partial request". Further embodiments will now be described which have advantages in the case where the block serving infrastructure 101 does not provide complete support for HTTP range headers. In common, servers in block serving infrastructures, such as content delivery networks, support partial requests but can not store responses to partial requests in local storage (cache). The server can fulfill a partial request by forwarding the request to another server, as long as the entire file is not stored in the local storage, in which case the response forwards the request to the other server It can be sent without anything.The block aggregations described above, as all requests that are partial requests are forwarded to other servers, and none of the requests are handled by the caching proxy server, frustrating the purpose of providing the caching proxy server in the first place Block-request streaming systems that take advantage of this new extension may become bad if the block serving infrastructure exhibits this behavior. As mentioned above, during the block-request streaming process, the client can request the block present at the beginning of the file at some point.The novel method described herein allows the request to be translated from the request of the first block in the file to the request of the entire file whenever certain conditions are met. When a request for an entire file is received by the caching proxy server, the proxy server typically stores the response. Thus, the use of these requests, regardless of whether subsequent requests are full files or partial requests, files in the cache of the local caching proxy server can be handled directly by the caching proxy server Let the The condition is in terms of at least a provided portion of these requests in a set of requests associated with a given file, eg, a set of requests generated by a set of clients viewing the content item of interest. The condition can be met.An example of a suitable condition is that the randomly chosen number is above the provided threshold. This threshold can be set so that the conversion of a single block request to an entire file's request occurs on average for the provided portion of those requests, eg, once every 10 If so, a random number can be selected from the interval [0, 1], and the threshold can be 0.9). 
Another example of a suitable condition is that the hash function calculated for some information associated with the block and some information associated with the client take one of the provided set of values. This method significantly changes the behavior of the block-request streaming system from the standard operation in which each request targets a single block, although the file is cached in the local proxy server for frequently requested files. It has the advantage of not being In many cases where conversion of a request from a single block request to an entire file request occurs, the client procedure proceeds to request other blocks in the file. If this is the case, the request can be suppressed as the target block is received in any case as a result of the request for the entire file.URL construction and segment list generation and seek segment list generation are specific to the start of the media presentation for the on-demand case or for a specific representation starting at some start time starttime represented by the actual elapsed time Address the issue of how the client can generate a segment list from the MPD at the client's local time NOW. The segment list may comprise a locator, for example, the URL of optional first presentation metadata, and a list of media segments. Each media segment can be assigned a start time, a duration and a locator. The start time typically represents an approximation of the media time of the contained media in the segment, however, the exact time of the sample does not necessarily. The start time is used by the HTTP streaming client to issue a download request at the appropriate time. The generation of a segment list that includes each start time can be done in different ways. The URLs can be provided as playlists or URL construction rules can be advantageously used for compact representation of segment lists.A segment list based on URL construction may be implemented, for example, when the MPD signals it with a particular attribute or element, eg FileDynamicInfo or equivalent signal. A general method of generating a segment list from URL construction is provided in the "URL Construction Overview" section below. The playlist based construction can, for example, be signaled by different signals. Seeking within the segment list to reach the correct media time is also advantageously implemented in this context.URL Builder Overview As described above, in one embodiment of the present invention, a metadata file can be provided that includes a URL building rule that allows client devices to build file identifiers for blocks of a presentation. . This time, changing the URL construction rules, changing the number of available encodings and metadata associated with the available encodings, eg bit rate, aspect ratio, resolution, audio or video codec or codec parameters or A further new extension of the block request streaming system is described which provides changes in the metadata file, including changes of other parameters.This new extension can provide additional data associated with each element of the metadata file that indicates time intervals within the overall presentation. Within this time interval, the element can be considered valid and outside the time interval the element can be ignored. In addition, the syntax of the metadata can be extended to allow multiple occurrences of elements previously allowed to appear only once or at most once. 
In this case, additional constraints can be applied which specify that for the element the specified time intervals must be separated from one another. Considering only those elements whose time intervals contain predetermined moments at any given moment results in a metadata file that matches the original metadata syntax. The time interval is called an effective interval. Thus, this method provides for signaling changes of the type described above in a single metadata file. Advantageously, the method can be used to provide a media presentation that supports the described type of change at a specified point in the media presentation.URL Constructer As described herein, one common feature of block-request streaming systems is needed by clients to identify available media encodings and request blocks from those encodings. Need to provide clients with “metadata” that provides For example, in the case of HTTP, this information may comprise the URL for the file containing the media block. A playlist file can be provided that describes URLs for blocks for a given encoding. A plurality of playlist files are provided, one for each encoding, with a master playlist of playlists describing playlists corresponding to different encodings. One disadvantage of this system is that the metadata can be quite large, so it takes some time for the client to request when it starts the stream. A further disadvantage of this system is that it generates "on-the-fly" from media streams where files corresponding to media data blocks are being captured in real time, eg live sporting events or news programs Notable in the case of live content being played. In this case, the playlist file can be updated as new blocks become available (e.g., every few seconds). The client device can repeatedly fetch the playlist file to determine if new blocks are available and obtain their URLs. This can impose a significant load on the serving infrastructure, and in particular means that metadata files can not be cached for longer than the update interval equal to the block size on the order of a few seconds.One important aspect of the block-request streaming system is the method used to inform the client of the file identifier, eg, URL, to be used with the file download protocol to request the block. For example, a method is provided wherein a playlist is provided that describes the URLs of files that contain blocks of media data for each presentation of the presentation. One disadvantage of this method is that at least a portion of the playlist file itself needs to be downloaded before playout can begin, increasing channel zapping time and thus creating a poor user experience . For long media presentations with several or many representations, the list of file URLs may be large, and thus the playlist file may be large, further increasing channel zapping time.Another drawback of this method arises in the case of live content. In this case, the complete list of URLs can not be made available in advance, and in order to receive the updated version, the playlist file becomes available for new blocks and the client schedules playlist files It is updated regularly as required. Because this file is updated frequently, it can not be stored for a long time in the caching proxy server. This means that a large number of requests for this file will be forwarded to other servers and ultimately to the server that generates the file. 
In the case of popular media presentations, this results in high load on this server and network, which may result in slow response times and thus long channel zapping times and poor user experience. is there. In the worst case, the server is overloaded and this results in some users not being able to view the presentation.In the design of block-request streaming systems, it is desirable to avoid imposing a constraint on the type of file identifier that can be used. The reason is that some considerations motivate the use of a particular type of identifier. For example, if the block serving infrastructure is a content delivery network, it can not be predicted at the time of file naming or storage conventions or system design associated with wishing to distribute storage or serving load across the network There may be other requirements that lead to specific forms of file identifiers.A further embodiment is now described which mitigates the above drawbacks while maintaining the flexibility of choosing the appropriate file identification convention. In this way, metadata can be provided for each representation of the media presentation comprising file identifier construction rules. The file identifier construction rules may comprise, for example, text strings. In order to determine the file identifier for a given block of the presentation, an interpretation method of the file identifier construction rules can be provided, which comprises determining the input parameters and evaluating the file identification construction rules with the input parameters Equipped with The input parameters may, for example, include the index of the file to be identified, where the first file has index 0, the second has index 1 and the third has index 2, The same applies to the following. For example, if all files span the same duration (or nearly the same duration), the index of the file associated with any given time in the presentation can be easily determined. Alternatively, the time in the presentation that each file spans can be provided in the presentation or version metadata.In one embodiment, the file identifier construction rules may comprise text strings that may include certain special identifiers corresponding to input parameters. The method of evaluating file identifier construction rules comprises determining the position of a special identifier within a text string and replacing each said special identifier with a string representation of the value of the corresponding input parameter.In another embodiment, the file identifier construction rules may comprise text strings conforming to the formula language. The expression language comprises a definition of syntax to which expressions in the language can conform, and a set of rules for evaluating strings in accordance with the syntax.Now, one embodiment is described with reference to FIG. 21, see below. An example of a syntactic definition for a suitable expression language, as defined in the Augmented Backus-Naur Form, is as shown in FIG. An example of a rule for evaluating a string conforming to <expression> (<expression>) production in FIG. 21 is a <literal string conforming to <expression> production (<expression>) as follows: > (<Literal>) comprising iteratively converting to a production compliant string.<Expression> conforming to <literal> production is immutable.The <expression> conforming to the <variable> (<variable>) production is replaced with the value of the variable identified by the <variable> generated <token> (<token>) string. 
An <expression> conforming to a <function> (<function>) production evaluates each of its arguments according to these rules and relies on the <token> element of a <function> production, as described below Are evaluated by applying transformations to these arguments.An <expression> conforming to the last alternative of the <expression> production evaluates two <expression> elements and the last alternative <operator> (<operator of the <expression> production, as described below >) Evaluated by applying operations to these arguments depending on the element. In the method described above, it is assumed that the evaluation takes place in a relationship where multiple variables can be defined. A variable is a (name, value) pair, where "name" is a string conforming to a <token> production, and "value" is a string conforming to a <literal> production. Several variables can be defined outside the evaluation process before the evaluation starts. Other variables can be defined within the evaluation process itself. All variables are "global" in the sense that only one variable exists with each possible "name".An example of a function is the "printf" function. This function accepts one or more arguments. The first argument may conform to <string> (<string>) production (hereinafter "string"). The printf function evaluates up to the translated version of its first argument. The transformations applied are the same as the "printf" function in C's standard library, and the <function> production contains additional arguments that supply the additional arguments expected by C's standard library printf function.Another example of a function is a "hash" function. This function accepts two arguments, the first of which can be a string, the second of which can conform to <number) productions (hereinafter "numbers"). The "hash" function applies a hash algorithm to the first argument and returns a result that is a non-negative integer smaller than the second argument. An example of a suitable hash function is given by the C function shown in FIG. 22 and their arguments are input strings (except for the enclosing quotation marks) and numeric input values. Other examples of hash functions are well known to those skilled in the art.Another example of a function is the "Subst" function which takes one, two or three string arguments. If one argument is provided, the result of the "Subst" function is the first argument. If two arguments are supplied, the result of the "Subst" function is modified by removing the presence of the second argument (except for the enclosing quotation marks) in the first argument and so Calculated by returning the first argument. If three arguments are supplied, the result of the “Subst” function is the third argument (quoting the presence of the second argument in the first argument, excluding the enclosing quotation marks) Calculated by replacing the) with the) and returning the first argument so modified.Some examples of operators are the addition, subtraction, division, multiplication and remainder operators, <operator> (<operator>) production, '+', '-', '/', '* Identified by ','% 'respectively. These operators require that <expression> productions on either side of the <operator> production be evaluated to numbers. 
The evaluation of the operators is based on applying the relevant arithmetic operations (addition, subtraction, division, multiplication and remainder respectively) to these two numbers in the usual way, and <number) (<number>) productions And returning the result in the form.Another example of an operator is the assignment operator identified by <operator> production '='. This operator requires that the left argument evaluate to a string whose content conforms to the <token> production. The content of the string is defined to be a character string within enclosing quotation marks. The equality operator causes a variable whose name is <token> equal to the content of the left argument to be assigned a value equal to the evaluation result of the right argument. This value is also the evaluation result of the operator expression.Another example of an operator is the sequence operator identified by <operator> production ';'. The evaluation result of this operator is the right argument. As with all operators, note that both arguments are evaluated and the left one is evaluated first.In one embodiment of the present invention, the identifier of the file can be obtained by evaluating the file identifier construction rules according to the above rules, using a specific set of input variables identifying the requested file. An example of an input variable is a variable having the name "index" and a value equal to the numeric index of the file in the presentation. Another example of an input variable is a variable having the name "bitrate" and a value equal to the average bit rate of the required version of the presentation.Figure 23 shows a few examples of file identifier construction rules, the input variables are "id" giving an identifier for the presentation of the desired presentation and "seq" giving a sequence number for the file.Many variations of the above-described method are possible, as will be apparent to those skilled in the art upon reading this disclosure. For example, not all functions and operators described above need be provided and additional functions or operators may be provided.URL Construction Rules and Timing This section provides basic URL construction rules and representations for assigning file or segment URIs and a start time for each segment in the media presentation.For this section, the availability of media presentation descriptions at the client is assumed.Suppose that the HTTP streaming client is playing out the media to be downloaded in the media presentation. The actual presentation time of the HTTP client can be defined as to when the presentation time is with respect to the beginning of the presentation. At the time of initialization, it is possible to assume a presentation time t = 0.At any point in time t, the HTTP client interacts with any data and user with a playback time tP that is at least MaximumClientPreBufferTime (and related to the beginning of the presentation) more than the actual presentation time t, eg seek, fast forward, etc. You can download any data required due to. 
In some embodiments, MaximumClientPreBufferTime may even not be specified in the sense that the client can download data prior to the current playback time tP without constraints.The HTTP client can avoid downloading unnecessary data, for example, typically not download any segments from a representation that is not expected to be played out.The basic process in providing the streaming service is the generation of the data by generating the corresponding request for downloading the whole file / segment or the file / segment subset, for example using an HTTP GET request or an HTTP partial GET request. It can be download. Although this description addresses how to access data for a particular playback time tP, in general, the client downloads data for a larger time range playback time to avoid inefficient requests. Can. The HTTP client can minimize the number / frequency of HTTP requests when providing streaming services.In order to access media data at a playback time tP or at least near the playback time tP in a particular representation, the client determines the URL of the file including this playback time, and the bytes in the file for accessing this playback time Determine the range further.The media presentation description can assign a representation id, r, to each representation, for example by use of the RepresentationID attribute. In other words, the content of the MPD is interpreted as an assignment exists when written by the capture system or read by the client. In order to download data on a particular playback time iP for a particular representation having id, r, the client can construct the corresponding URL for the file.The media presentation description may assign the following attributes to each file or segment of each representation r:(A) Sequence numbers i, i = 1, 2,. . . , Nr, (b) file index i relative to the relative start time and presentation time of the file having the expression idr, (c) for the file / segment with the expression idr and the file index i It is expressed as a file URL, FileURL (r, i).In one embodiment, the start time of the file and the file URL can be provided explicitly for representation. In another embodiment, a list of file URLs can be explicitly provided, each file URL being uniquely assigned an index i by its position in the list, and the start time of the segment is 1 to i-1 It is derived as the sum of all segment durations for the segment. The duration of each segment may be provided by any of the rules described above. For example, one skilled in the art of basic mathematics can use other methods to derive a starting time easily from the location / index of the file URL in a single element or attribute and representation.If dynamic URL construction rules are provided in the MPD, the start time of each file and each file URI may be provided by the construction rules, the index of the requested file, and potentially some additional ones provided in the media presentation description. It can be built dynamically by using parameters. Information can be provided, for example, in MPD attributes and elements, such as FileURIPattern and FileInfoDynamic. FileURLPattern provides information on how to construct a URI based on file index sequence number i and representation IDr. FileURIFormat is constructed as follows.FileURIFormat = sprintf (“% s% s% s% s% s.% S”, BaseURL, BaseFileName, RepresentationIDFormat, SeparatorFormat, FileSequenceIDFormat, FileExtension); and FileURL (r, i) are constructed as follows. 
FileURL (r, i) = sprintf (FileURIFormat, r, i); The relative start time ts (r, i) for each file / segment is something included in the MPD that describes the duration of the segment in this representation It can be derived by an attribute, for example, FileInfoDynamic attribute. The MPD may also include a sequence of FileInfoDynamic attributes that are global with respect to all representations in the media presentation or at least in a certain period as described above. If media data for a particular playback time tP in the representation r is required, the corresponding index i (r, tP) is derived as i (r, tp), so the playback time of this index is ts ( The start time of r, i (r, tP)) and the interval of ts (r, i (r, tP) +1) exist. Segment access may be further constrained by the above cases, eg, the segments can not be accessed.Access to the correct playback time tP when the corresponding segment index and URL are obtained depends on the actual segment type. In this example, it is assumed that the media segment has a local timeline starting at 0 without loss of generality. To access and present data at playback time iP, the client can access data corresponding to the local time from a file / segment that can be accessed through URLFileURI (r, i) where i = i (r, tp) It can be downloaded.In general, the client can download the entire file and can access the playback time tP. However, because 3GP files provide a structure for mapping local timing to byte ranges, it is not necessary to download all the information in 3GP files. Thus, as long as sufficient random access information is available, only a specific byte range for accessing the playback time tP may be sufficient to play the media. In addition, segment indexes can be used, for example, to provide sufficient information about byte-range structure and mapping and local timing of media segments in the first part of the segment. By having access to the first eg 1200 bytes of the segment, the client can have enough information to directly access the byte range needed for playback time tP.In a further example, it is assumed that a segment index, possibly specified as the "tidx" box below, can be used to identify the byte offset of the requested fragment or fragments. A partial GET request can be formed for the requested fragment or fragments. Other alternatives may exist, for example, the client may issue a standard request for the file and cancel it when the first "tidx" box is received.The seek client can attempt to seek to a particular presentation time tp in the representation. Based on the MPD, the client has access to the media segment start time and media segment URL of each segment in the representation. The client can obtain the segment index segment_index of the segment that is most likely to include the media sample for presentation time tp as the maximum segment index i, and the start time tS (r, i) is less than or equal to the presentation time tp That is, segment_index = max {i | tS (r, i) ≦ tp}. The segment URL is obtained as File URL (r, i).Note that the timing information in the MPD is approximate due to issues related to random access point placement, media track alignment and media timing drift. As a result, the segment identified by the above procedure can start at a time slightly after tp, and media data for presentation time tp can be present in the previous media segment . 
In the case of a seek, the seek time can be updated to be equal to the first sample time of the retrieved file, or the preceding file can be retrieved instead. However, during continuous playout, including the case where there is a switch between alternative representations / versions, the media data for the time between time tp and the beginning of the retrieved segment is available To pay attention.The HTTP streaming client needs to access a random access point (RAP) for accurate seek up to the presentation time tp. In order to determine a random access point in the media segment in the case of 3GPP adaptive HTTP streaming, the client is present, for example, the information in the 'tidx' or 'sidx' box for locating the random access point And the corresponding presentation time in the media presentation can be used. In the case where the segment is a 3GPP movie segment, 'moof' in order for the client to locate eg the RAP and obtain the required presentation time from the information in the movie fragment and the segment start time derived from the MPD. It is also possible to use the information in the 'mdat' box. If a RAP with a presentation time earlier than the requested presentation time tp is not available, the client may access the previous segment or may use the first random access point as the seek result. These procedures are simple when the media segment starts from RAP.Also note that it is not necessary to download all the information of the media segment to access the presentation time tp. The client can request the 'tidx' or 'sidx' box first from the beginning of the media segment, for example, using a byte range request. Segment timing can be mapped to the byte range of the segment by using the 'tidx' or 'sidx' box. By using partial HTTP requests sequentially, it is only necessary to access the relevant part of the media segment, for improved user experience and low start-up delay.Segment List Generation As described herein, a direct HTTP streaming client that uses the information provided by the MPD to generate a list of segments for a representation that has a signaled approximate segment duration of dur. It should be clear how to implement. In some embodiments, the client may index successive indices i = 1, 2, 3,. . . , The first media segment is assigned index i = 1, the second media segment is assigned index i = 2, and so on. Next, startTime [i] is assigned to the list of media segments having segment index i, and URL [i] is generated, for example, as follows. Initially, index i is set to one. The start time of the first media segment is obtained as 0, startTime [1] = 0. The URL of the media segment i, URL [i], is obtained as FileURL (r, i). The process is continued for all described media segments with index i, startTime [i] of media segment i is obtained as (i-1) * dur, and URL [i] is as FileURI (r, i) It is obtained.Concurrent HTTP / TCP Request Blocks-One concern in request streaming systems is the desire to always request the highest quality blocks that can be completely received in time for playout. However, the data arrival rate can not be known in advance, so the requested block may not arrive in time for playout. This results in the need to pause the playout of the media, resulting in a poor user experience. This problem requires lower quality (and hence smaller size) blocks that are more likely to be received in time, even if the data arrival rate is reduced during block reception. 
This can be mitigated by client algorithms that take a conservative approach of selecting which blocks to request. However, this conservative approach has the disadvantage of possibly delivering lower quality playout to the user or destination device, which is also a poor user experience. The problem is that multiple available HTTP connections are used simultaneously, as described below, as available network resources are shared among connections, and thus are simultaneously in use for blocks with different playout times. Sometimes it is amplified.It would be advantageous for clients to issue requests for multiple blocks concurrently. In this context, "concurrently" means that responses to requests occur at overlapping time intervals, which is not necessarily the case where requests are made at all or nearly simultaneously. In the case of the HTTP protocol, this approach can improve the utilization of available bandwidth due to the operation of the TCP protocol (as known). In this regard, when new content is first requested, the corresponding HTTP / TCP connection where the data for the block is requested can be slow to start, and so some HTTPs at this point Using TCP / TCP connections is especially important to improve content zapping time, as it can dramatically increase the speed of data delivery time for the first block. However, the request for blocks to be first played out is in conflict with the request for subsequent blocks, and competing HTTP / TCP downloads will have their delivery times fluctuate significantly and so the completion time of the requests will be significant It is generally not possible to control which HTTP / TCP downloads are completed quickly and which is slower, so at least some of the time the first few blocks of HTTP / TCP downloads Requesting different blocks or fragments over different HTTP / TCP connections may result in degraded performance, as it may end up in the end, resulting in large and variable channel zapping times.Each block or fragment of the segment is downloaded through a separate HTTP / TCP connection, and the number of parallel connections is n, the playout duration of each block is t seconds, and the streaming rate of the content associated with the segment Suppose that is S. When the client first starts streaming content, it can issue a request for the first n blocks representing n * t seconds of media data.As known to those skilled in the art, there are large fluctuations in data rates of TCP connections. However, to simplify this explanation, ideally all connections are in parallel, so the first block is ideally completely received almost simultaneously with the other n-1 blocks requested. Assume. To further simplify the description, it is assumed that the total bandwidth utilized by n download connections is fixed at value B for the entire download duration, and that the streaming rate S is constant throughout the representation. The playout of a block is such a structure that can be done when the whole block is available at the client, ie the playout of the block is for example due to the structure of the underlying video coding or The encryption is employed to encrypt the fragments or blocks separately, so the whole block is received as the block is regenerated, because the entire fragment or block needs to be received before it can be decrypted. Further assume that it can only start after being done. 
Thus, to simplify the discussion below, it is assumed that the entire block needs to be received before any part of the block can be played out. The time required before the first block has arrived and can be played out is then approximately n * t * S / B.

Since it is desirable to minimize the content zapping time, it is desirable to minimize n * t * S / B. The value of t may be determined by factors such as the structure of the underlying video coding and how the capture methods are used, so t can be made reasonably small, but very small values of t lead to overly complicated segment maps and may be incompatible with efficient video encoding and decoding, if used. The value of n may also affect the value of B, i.e., B may be larger for a larger number n of connections, so reducing the number n of connections has the adverse side effect of potentially reducing the amount of available bandwidth that is utilized, and thus may not be effective in achieving the goal of reducing the content zapping time. The value of S depends on which representation is selected for download and playout; ideally, S should be as close as possible to B in order to maximize the playout quality of the media for the given network conditions. Thus, to simplify this discussion, assume that S is approximately equal to B. Then the channel zapping time is proportional to n * t. Hence, using more connections to download different fragments can degrade the channel zapping time if, as is typically the case, the aggregate bandwidth used by those connections grows less than linearly with the number of connections.

As an example, suppose that t = 1 second, that B = 500 Kbps when n = 1, that B = 700 Kbps when n = 2, and that B = 800 Kbps when n = 3, and suppose that a representation with S = 700 Kbps is selected. Then the download time for the first block is 1 * 700/500 = 1.4 seconds for n = 1, 2 * 700/700 = 2 seconds for n = 2, and 3 * 700/800 = 2.625 seconds for n = 3. Furthermore, as the number of connections increases, the variability of the individual download speeds of those connections is likely to increase (although there can be significant variability even with a single connection). Thus, in this example, both the channel zapping time and the variability of the channel zapping time increase as the number of connections increases. Intuitively, the blocks being delivered have different priorities, i.e., the first block has the earliest delivery deadline, the second block has the second earliest deadline, and so on, whereas the download connections over which the blocks are delivered contend for network resources during the delivery, so that the block with the earliest deadline is delayed longer as more competing blocks are requested. On the other hand, even in this case, ultimately using more than one download connection makes it possible to sustainably support higher streaming rates; for example, with three connections a streaming rate of up to 800 Kbps can be supported in this example, whereas a single connection can support only a 500 Kbps stream.
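As a quick check of the arithmetic in this example, the following sketch evaluates the first-block download time n * t * S / B for the three connection counts; the B values are taken directly from the example above.

```python
t = 1.0     # playout duration of each block, in seconds
S = 700.0   # streaming rate of the selected representation, in Kbps
B_for_n = {1: 500.0, 2: 700.0, 3: 800.0}  # aggregate bandwidth per n, in Kbps

for n, B in B_for_n.items():
    # Approximate time before the first block has arrived and can play out.
    zap_time = n * t * S / B
    print(f"n = {n}: first block after {zap_time:.3f} seconds")
# Prints 1.400, 2.000, and 2.625 seconds, matching the example.
```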
In practice, as noted above, the data rate of a connection can be highly variable both within a connection and across connections over time, so the n requested blocks generally do not complete at the same time; indeed, it can commonly be the case that one block completes in half the time of another block. This results in unpredictable behavior: in some cases the first block may complete much sooner than the other blocks, and in other cases the first block may complete much later than the other blocks, so that the beginning of playout may occur relatively quickly in some cases and slowly in others. This unpredictable behavior can be frustrating to the user and can therefore be considered a poor user experience.

Thus, what is needed are methods that can use multiple TCP connections to improve the channel zapping time and the variability of the channel zapping time while simultaneously supporting the best possible streaming rate. Also needed are methods of making the proportion of the available bandwidth allocated to each block adjustable as the playout time of the block approaches, so that, if necessary, a larger share of the available bandwidth can be allocated toward the block with the nearest playout time.

Cooperative HTTP/TCP Requests
Methods for using concurrent HTTP/TCP requests cooperatively are now described. A receiver may employ multiple concurrent cooperative HTTP/TCP requests, e.g., using multiple HTTP byte range requests, where each such request targets a portion of a fragment in a source segment, an entire fragment of a source segment, a portion of a repair fragment in a repair segment, or an entire repair fragment of a repair segment.

The benefits of cooperative HTTP/TCP requests together with the use of FEC repair data can be especially important for consistently providing fast channel zapping times. For example, at channel zapping time the TCP connections are likely to have just been started or to have been dormant for some period of time, in which case the congestion window cwnd is at its minimum value for these connections; the delivery speed of these TCP connections will therefore take several round-trip times (RTTs) to ramp up, and during this ramp-up time there will be high variability in the delivery speeds across the different TCP connections.

An overview of the non-FEC method is now described: a cooperative HTTP/TCP request method in which only media data of the source blocks is requested using multiple concurrent HTTP/TCP connections, i.e., no FEC repair data is requested. With the non-FEC method, portions of the same fragment may be requested over different connections, e.g., using HTTP byte range requests for portions of the fragment; thus, for example, each HTTP byte range request targets a portion of the byte range indicated in the segment map for the fragment. An individual HTTP/TCP request ramps up its delivery rate over several RTTs (round-trip times) before fully utilizing the available bandwidth, so there can be a relatively long period of time during which the delivery rate is less than the available bandwidth; for example, the channel zapping time can be large if a single HTTP/TCP connection is used to download the first fragment of the content to be played out. With the non-FEC method, downloading different portions of the same fragment over different HTTP/TCP connections can significantly reduce the channel zapping time.

An overview of the FEC method is now described: a cooperative HTTP/TCP request method in which the media data of a source segment and FEC repair data generated from that media data are requested using multiple concurrent HTTP/TCP connections.
When using the FEC method, portions of the same fragment and FEC repair data generated from the fragment are requested over different connections using HTTP byte range requests for portions of fragments; thus, for example, each HTTP byte range request targets a portion of the byte range indicated in the segment map for the fragment. An individual HTTP/TCP request ramps up its delivery rate over several RTTs (round-trip times) before fully utilizing the available bandwidth, so there can be a relatively long period of time during which the delivery rate is less than the available bandwidth; for example, the channel zapping time can be large if a single HTTP/TCP connection is used to download the first fragment of the content to be played out. The FEC method has the same advantages as the non-FEC method, with the added benefit that not all of the requested data has to arrive before the fragment can be recovered, thereby further reducing the channel zapping time and further improving the variability of the channel zapping time. By making requests over different TCP connections, and by over-requesting, by additionally requesting FEC repair data on at least one of the connections, the time needed to deliver an amount of data sufficient to recover the first requested fragment, e.g., enough to enable media playout to begin, can be reduced significantly, and made much more consistent, compared to not using cooperative TCP connections and FEC repair data.

FIGS. 24(a) to 24(e) show an example of the delivery rate fluctuations of five TCP connections running over the same link of an emulated Evolution-Data Optimized (EVDO) network from the same HTTP web server to the same client. In FIGS. 24(a)-(e), the X-axis shows time in seconds, and the Y-axis shows the rate at which bits are received at the client over each of the five TCP connections, measured in 1-second intervals for each connection. In this particular emulation there were 12 TCP connections in total running over this link, so the load on the network during the time shown was relatively high, as would be typical when more than one client is streaming within the same cell of a mobile network. Note that although the delivery rates are somewhat correlated over time, there are large differences in the delivery rates of the five connections at many points in time.

FIG. 25 shows a possible request structure for a fragment that is 250,000 bits (about 31.25 kilobytes) in size, where 4 HTTP byte range requests are made in parallel for different parts of the fragment; that is, the first HTTP connection requests the first 50,000 bits, the second HTTP connection requests the next 50,000 bits, the third HTTP connection requests the next 50,000 bits, and the fourth HTTP connection requests the next 50,000 bits. If FEC is not used, i.e., with the non-FEC method, these are the only four requests for the fragment in this example. If FEC is used, i.e., with the FEC method, there is in this example one additional HTTP connection requesting an additional 50,000 bits of FEC repair data from the repair segment generated from the fragment.
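A minimal sketch of the FIG. 25 request structure is shown below, assuming the widely used third-party `requests` library; the URLs are hypothetical, error handling is omitted, and a real client would derive the byte ranges from the segment map rather than hard-coding them.

```python
import concurrent.futures

import requests  # third-party HTTP client

def fetch_range(url, first_byte, last_byte):
    # One HTTP byte range request; a fresh Session per call encourages a
    # separate underlying TCP connection for each request.
    headers = {"Range": f"bytes={first_byte}-{last_byte}"}
    with requests.Session() as session:
        return session.get(url, headers=headers).content

source_url = "http://example.com/seg1.m4s"         # hypothetical source segment
repair_url = "http://example.com/seg1.m4s.repair"  # hypothetical repair segment
part_bytes = 50_000 // 8  # each request covers 50,000 bits = 6,250 bytes

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    # Non-FEC method: four parallel requests for different parts of the fragment.
    futures = [
        pool.submit(fetch_range, source_url,
                    i * part_bytes, (i + 1) * part_bytes - 1)
        for i in range(4)
    ]
    # FEC method: one additional request for repair data of the same fragment.
    futures.append(pool.submit(fetch_range, repair_url, 0, part_bytes - 1))
    pieces = [f.result() for f in futures]
```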
FIG. 26 is an enlargement of the first few seconds of the five TCP connections shown in FIGS. 24(a)-(e), in which the X-axis shows time in 100-millisecond intervals and the Y-axis shows the rate at which bits are received at the client over each of the five TCP connections, measured in 100-millisecond intervals. One line indicates the total number of bits received at the client for the fragment over the first four HTTP connections (excluding the HTTP connection over which FEC data is requested), i.e., the data that arrives when the non-FEC method is used. The other line indicates the total number of bits received at the client for the fragment over all five HTTP connections (including the HTTP connection over which FEC data is requested), i.e., the data that arrives when the FEC method is used. For the FEC method, it is assumed that the fragment can be FEC decoded upon reception of any 200,000 of the 250,000 requested bits; this can be realized, for example, when a Reed-Solomon FEC code is used, and essentially realized when the RaptorQ code, described for example in Luby IV, is used. With the FEC method in this example, enough data is received after 1 second to recover the fragment using FEC decoding, enabling a channel zapping time of 1 second (assuming that the data for subsequent fragments can be requested and received before the first fragment is fully played out). With the non-FEC method in this example, all of the data for the four requests must be received before the fragment can be recovered, which occurs after 1.7 seconds, resulting in a channel zapping time of 1.7 seconds. Thus, in the example shown in FIG. 26, the non-FEC method is 70% worse than the FEC method in terms of channel zapping time. One reason for the advantage exhibited by the FEC method in this example is that, with the FEC method, reception of 80% of the requested data is enough to allow recovery of the fragment, whereas with the non-FEC method 100% of the requested data must be received. Thus, the non-FEC method has to wait for the slowest TCP connection to finish its delivery, and, because of the inherent variation in TCP delivery rates, the delivery speed of the slowest TCP connection tends to deviate significantly from that of an average TCP connection. With the FEC method in this example, one slow TCP connection does not determine when the fragment becomes recoverable; instead, with the FEC method, the delivery of enough data is much more a function of the average TCP delivery rate than of the worst-case TCP delivery rate.
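To make the 80%-versus-100% distinction concrete, the following sketch expresses the two recovery conditions under the assumptions of this example (four 50,000-bit source requests, one 50,000-bit repair request, and an ideal FEC code that recovers the fragment from any 200,000 received bits); the progress values at the end are illustrative inputs, not measured traces.

```python
SOURCE_REQUESTS = [50_000] * 4  # bits requested per source connection
REPAIR_REQUEST = 50_000         # bits requested on the repair connection
RECOVERY_THRESHOLD = 200_000    # any 200,000 of the 250,000 bits suffice

def non_fec_recovered(source_bits):
    # Non-FEC method: every source request must complete in full.
    return all(got >= need for got, need in zip(source_bits, SOURCE_REQUESTS))

def fec_recovered(source_bits, repair_bits):
    # FEC method: total received bits across all five connections only
    # need to reach the recovery threshold.
    total = sum(min(got, need) for got, need in zip(source_bits, SOURCE_REQUESTS))
    total += min(repair_bits, REPAIR_REQUEST)
    return total >= RECOVERY_THRESHOLD

# One slow source connection stalls the non-FEC method but not the FEC method.
progress = [50_000, 50_000, 50_000, 20_000]
print(non_fec_recovered(progress))      # False: the fourth request is incomplete
print(fec_recovered(progress, 50_000))  # True: 220,000 useful bits received
```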
There are many variations of the non-FEC method and the FEC method described above. For example, cooperative HTTP/TCP requests may be used only for the first few fragments after channel zapping has occurred, after which only single HTTP/TCP requests are used to download further fragments, multiple fragments, or entire segments. As another example, the number of cooperative HTTP/TCP connections used may be a function of both the urgency of the fragments being requested, i.e., how imminent the playout times of those fragments are, and the current network conditions. In some variations, multiple HTTP connections may be used to request repair data from repair segments. In other variations, different amounts of data may be requested over different HTTP connections, e.g., depending on the current size of the media buffer and the rate of data reception at the client. In still other variations, the source representations are not independent of one another but instead represent layered media coding, where, e.g., an enhanced source representation may depend on a base source representation. In this case, there may be a repair representation corresponding to the base source representation, and another repair representation corresponding to the combination of the base and enhanced source representations.

Additional overall elements enhance the advantages that can be realized with the methods disclosed above. For example, the number of HTTP connections used may be varied depending on the amount of media currently in the media buffer and/or the rate of reception into the media buffer. Cooperative HTTP requests using FEC, i.e., the FEC method described above and variants of that method, can be used aggressively when the media buffer is relatively empty; for example, more cooperative HTTP requests are made in parallel for different parts of the first fragment, requesting a relatively large fraction of the whole source fragment and of the repair data from the corresponding repair fragment. As the media buffer grows, the client may then transition to a reduced number of concurrent HTTP requests, requesting larger portions of the media data per request and smaller fractions of the repair data, e.g., transitioning to one, two, or three concurrent HTTP requests, transitioning to making requests for whole fragments or multiple consecutive fragments per request, and transitioning to requesting no repair data.

As another example, the amount of FEC repair data requested can be varied as a function of the media buffer size: more FEC repair data can be requested when the media buffer is small; as the media buffer grows, the amount of FEC repair data requested can be reduced; and whenever the media buffer is sufficiently large, no FEC repair data need be requested at all, with requests made only for data from the source segments of the source representation. The benefits of these enhanced techniques are faster and more consistent channel zapping times and higher resiliency against potential media stutters or stalls, while simultaneously minimizing the amount of bandwidth used beyond what would be consumed by delivering only the media in the source segments, by reducing both the request message traffic and the FEC repair data, and while simultaneously enabling support for the highest media rates possible for the given network conditions.

Additional Enhancements When Using Concurrent HTTP Connections
An HTTP/TCP request may be abandoned when suitable conditions are met, and another HTTP/TCP request may be made to download data that can replace the data requested in the abandoned request; the new HTTP/TCP request may request exactly the same data as the original request, e.g., the source data; or overlapping data, e.g., some of the same source data together with repair data that was not requested in the first request; or completely disjoint data, e.g., repair data that was not requested in the first request.
An example of a suitable condition is that the request has not been answered by the block server infrastructure (BSI) within a provided time, or that establishment of a transport connection to the BSI has failed, or that an explicit failure message has been received from the server, or that another failure condition has occurred. Another example of a suitable condition is that the reception of data is progressing unusually slowly, based on a measurement of the connection speed (the data arrival rate in response to the request in question) and a comparison with the expected connection speed, or with an estimate of the connection speed needed to receive the response before the playout time of the media data contained therein, or before some other time that depends on that playout time.

This approach has advantages in cases where the BSI occasionally exhibits failures or poor performance. In such cases, the above approach increases the probability that the client can continue to reliably play out the media data despite failures or poor performance within the BSI. In some cases, there may be an advantage in designing a BSI such that it does occasionally exhibit failures or poor performance; e.g., such a design may have a lower cost than an alternative design that does not exhibit failures or poor performance, or that exhibits them less often. In this case, the methods described herein have the further advantage that they allow the use of such a lower cost design for the BSI without a consequent degradation of the user experience.

In other embodiments, the number of requests issued for data corresponding to a given block can depend on whether a suitable condition for that block is met. If the condition is not met, the client may be constrained from making further requests for the block, provided that the successful completion of all currently incomplete data requests for the block would permit recovery of the block with high probability. If the condition is met, a larger number of requests for the block can be issued, i.e., the above constraint does not apply. An example of a suitable condition is that the time until the scheduled playout time of the block, or until some other time that depends on that time, falls below a provided threshold. This method has the advantage that additional data requests for a block are issued when the reception of the block becomes more urgent, because the playout time of the media data comprising the block is near. In the case of common transport protocols such as HTTP/TCP, these additional requests have the effect of increasing the share of the available bandwidth dedicated to data that contributes to the reception of the block in question. This reduces the time needed for the reception of enough data to recover the block, and thereby lowers the probability that the block cannot be recovered before the scheduled playout time of the media data comprising the block. As described above, if the block cannot be recovered before the scheduled playout time of the media data comprising the block, the playout may pause, resulting in a poor user experience; the methods described herein therefore advantageously reduce the probability of this poor user experience.

Throughout this specification, references to the scheduled playout time of a block should be understood to mean the time at which the encoded media data comprising the block may first be available at the client in order to achieve playout of the presentation without pausing.
As will be apparent to those skilled in the art of media presentation systems, this time may in practice be slightly earlier than the actual time at which the media comprising the block appears at the physical transducers used for playout (screen, speakers, etc.), since several transformation functions may need to be applied to the media data comprising the block in order to effect the actual playout of the block, and these functions may require a certain amount of time to complete. For example, media data is generally transported in compressed form, and a decompression transformation may be applied.

Methods for Generating File Structures Supporting Cooperative HTTP/FEC Methods
An embodiment for generating a file structure that can be used advantageously by a client employing the cooperative HTTP/FEC methods is now described. In this embodiment, for each source segment there is a corresponding repair segment, generated as follows. The parameter R indicates, on average, how much FEC repair data is generated for the source data in the source segments. For example, R = 0.33 indicates that if a source segment contains 1,000 kilobytes of data, the corresponding repair segment contains approximately 330 kilobytes of repair data. The parameter S indicates the symbol size, in bytes, used as the unit of FEC encoding and decoding. For example, S = 64 indicates that the source data and the repair data each comprise symbols 64 bytes in size for the purposes of FEC encoding and decoding.

The repair segment can be generated for a source segment as follows. Each fragment of the source segment is considered to be a source block for FEC encoding purposes, and thus each fragment is treated as a sequence of source symbols of a source block from which repair symbols are generated. The total number of repair symbols generated for the first i fragments is calculated as TNRS(i) = ceiling(R * B(i) / S), where ceiling(x) is the function that outputs the smallest integer whose value is at least x. Thus, the number of repair symbols generated for fragment i is NRS(i) = TNRS(i) - TNRS(i-1).

The repair segment comprises a concatenation of the repair symbols for the fragments, where the order of the repair symbols within the repair segment follows the order of the fragments from which they are generated, and, within a fragment, the repair symbols are in the order of their encoding symbol identifiers (ESIs). The repair segment structure corresponding to the source segment structure is shown in FIG. 27, including a repair segment generator 2700.

By defining the number of repair symbols for a fragment as described above, the total number of repair symbols for all previous fragments, and thus the byte index into the repair segment, depends only on R, S, B(i-1), and B(i), and does not depend on any of the previous or subsequent structure of the fragments within the source segment. This is advantageous because it allows a client to quickly compute the start position of a repair block within the repair segment, and to quickly compute the number of repair symbols within that repair block, using only local information about the structure of the corresponding fragment of the source segment from which the repair block is generated. Thus, if a client decides to start downloading and playing out a fragment from the middle of a source segment, it can also quickly generate and access the corresponding repair block from within the corresponding repair segment.
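A minimal sketch of these index computations might look as follows; it also covers the repair block placement formula RB(i) and the source symbol count NSS(i) given in the following paragraphs, and assumes B is a mapping from fragment index to the end byte offset of that fragment within the source segment, with B[0] = 0.

```python
import math

def tnrs(i, R, S, B):
    # TNRS(i) = ceiling(R * B(i) / S): total repair symbols for fragments 1..i.
    return math.ceil(R * B[i] / S)

def nrs(i, R, S, B):
    # NRS(i) = TNRS(i) - TNRS(i-1): repair symbols generated for fragment i.
    return tnrs(i, R, S, B) - tnrs(i - 1, R, S, B)

def nss(i, S, B):
    # NSS(i) = ceiling((B(i) - B(i-1)) / S): source symbols of fragment i.
    return math.ceil((B[i] - B[i - 1]) / S)

def rb(i, R, S, B):
    # RB(i) = S * ceiling(R * B(i) / S): end byte offset of the repair
    # symbols for fragment i within the repair segment.
    return S * math.ceil(R * B[i] / S)

# Check against the FIG. 28 example: fragment 2 spans byte offsets
# B(1) = 6,410 to B(2) = 6,770, with S = 64 and R = 0.5.
B = {0: 0, 1: 6_410, 2: 6_770}
print(nss(2, 64, B))       # 6 source symbols (ESIs 0..5)
print(rb(1, 0.5, 64, B))   # 3,264: start offset of fragment 2's repair block
print(rb(2, 0.5, 64, B))   # 3,392: end offset of fragment 2's repair block
print(nrs(2, 0.5, 64, B))  # 2 repair symbols (ESIs 6 and 7)
```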
The number of source symbols in the source block corresponding to fragment i is calculated as NSS(i) = ceiling((B(i) - B(i-1)) / S). If B(i) - B(i-1) is not a multiple of S, the last source symbol is padded out with zero bytes for the purposes of FEC encoding and decoding, i.e., the last source symbol is padded with zero bytes so that it is S bytes in size for the purposes of FEC encoding and decoding, but these zero padding bytes are not stored as part of the source segment. In this embodiment, the ESIs for the source symbols are 0, 1, . . . , NSS(i) - 1, and the ESIs for the repair symbols are NSS(i), . . . , NSS(i) + NRS(i) - 1.

The URL for a repair segment in this embodiment can be generated from the URL for the corresponding source segment, for example by simply appending ".repair" to the URL of the source segment.

As described herein, the repair indexing information and FEC information for a repair segment are implicitly defined by the indexing information for the corresponding source segment and by the values of R and S. The fragment structure of the repair segment, comprising the time offsets, is determined by the time offsets and structure of the corresponding source segment. The byte offset to the end of the repair symbols in the repair segment corresponding to fragment i can be calculated as RB(i) = S * ceiling(R * B(i) / S). The number of bytes in the repair segment corresponding to fragment i is then RB(i) - RB(i-1), and thus the number of repair symbols corresponding to fragment i is calculated as NRS(i) = (RB(i) - RB(i-1)) / S. The number of source symbols corresponding to fragment i can be calculated as NSS(i) = ceiling((B(i) - B(i-1)) / S). Thus, in this embodiment, the repair indexing information for a repair block within the repair segment and the corresponding FEC information can be implicitly derived from R, S, and the indexing information for the corresponding fragment of the corresponding source segment.

As an example, consider the example shown in FIG. 28 of a fragment 2 that starts at byte offset B(1) = 6,410 and ends at byte offset B(2) = 6,770. In this example, the symbol size is S = 64 bytes, and the dotted vertical lines indicate the byte offsets within the source segment that correspond to multiples of S. The overall repair segment size, as a fraction of the source segment size, is set to R = 0.5 in this example. The number of source symbols in the source block for fragment 2 is calculated as NSS(2) = ceiling((6,770 - 6,410) / 64) = ceiling(5.625) = 6, and these six source symbols have ESIs 0, . . . , 5, respectively, where the first source symbol is the first 64 bytes of fragment 2 starting at byte index 6,410 within the source segment, the second source symbol is the next 64 bytes of fragment 2 starting at byte index 6,474 within the source segment, and so on. The end byte offset of the repair block corresponding to fragment 2 is calculated as RB(2) = 64 * ceiling(0.5 * 6,770 / 64) = 64 * ceiling(52.89 . . .) = 64 * 53 = 3,392.
The start byte offset of the repair block corresponding to fragment 2 is calculated as RB(1) = 64 * ceiling(0.5 * 6,410 / 64) = 64 * ceiling(50.07 . . .) = 64 * 51 = 3,264. Thus, in this example, there are two repair symbols in the repair block corresponding to fragment 2, with ESIs 6 and 7, respectively, starting at byte offset 3,264 within the repair segment and ending at byte offset 3,392.

Note that, in the example shown in FIG. 28, even though R = 0.5 and there are six source symbols corresponding to fragment 2, the number of repair symbols is not three, as might be expected if the number of source symbols were simply used to calculate the number of repair symbols, but is instead two, in accordance with the methods described herein. As opposed to simply using the number of source symbols of a fragment to determine the number of repair symbols, the embodiment described above makes it possible to compute the placement of a repair block from within the repair segment using only the index information associated with the corresponding source block of the corresponding source segment. Furthermore, as the number K of source symbols in a source block grows, the number KR of repair symbols of the corresponding repair block is closely approximated by K * R, since, in general, KR is at most ceiling(K * R) and KR is at least floor((K - 1) * R), where floor(x) is the largest integer whose value is at most x.

As those skilled in the art will recognize, there are many variations of the above embodiment for generating a file structure that can be used advantageously by a client employing the cooperative HTTP/FEC methods. As an example of an alternative embodiment, an original segment for a representation may be partitioned into N > 1 parallel segments, where, for i = 1, . . . , N, a specified fraction Fi of the original segment is contained in the i-th parallel segment, and where the sum of Fi over i = 1, . . . , N is equal to 1. In this embodiment, there may be one master segment map that is used to derive the segment maps for all the parallel segments, similar to the way the repair segment map is derived from the source segment map in the embodiment described above. For example, the master segment map may indicate the fragment structure as if all the source media data were contained in one original segment rather than being partitioned into parallel segments; then, if the amount of media data in a prefix of the fragments of the original segment is L bytes, the total number of bytes of this prefix among the first i parallel segments is ceiling(L * Gi), where Gi is the sum of Fj over j = 1, . . . , i, and the segment map for the i-th parallel segment can thereby be derived from the master segment map. As another example of an alternative embodiment, a segment may consist of the combination of, for each fragment, the original source media data for that fragment followed immediately by the repair data for that fragment, resulting in a segment that contains a combination of the source media data and the repair data generated from that source media data using an FEC code. As another example of an alternative embodiment, a segment containing a combination of source media data and repair data may be partitioned into multiple parallel segments, each containing a combination of source media data and repair data.

After reading this disclosure, further embodiments can be envisioned by those skilled in the art. In other embodiments, combinations or sub-combinations of the inventions disclosed above can be advantageously made. It should be understood that the exemplary arrangements of components are shown for purposes of illustration, and that combinations, additions, rearrangements, and the like are contemplated in alternative embodiments of the present invention.
Thus, although the present invention has been described with respect to exemplary embodiments, those skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. In some cases, the software components may be provided on tangible, non-transitory media for execution on hardware that is provided with the media or that is separate from the media. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. However, it will be apparent that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims, and the invention is intended to cover all modifications and equivalents falling within the scope of the appended claims.
The invention relates to a method of forming a microelectronic device, and to related microelectronic devices and electronic systems. A method of forming a microelectronic device includes forming a first microelectronic device structure, where the first microelectronic device structure includes a first semiconductor structure, a control logic circuit at least partially overlying the first semiconductor structure, a first back-end-of-line (BEOL) structure over and in electrical communication with the control logic circuit, and a first isolation material covering the control logic circuit and the first BEOL structure. A second microelectronic device structure is bonded over the first BEOL structure to form a first assembly. The first assembly is vertically inverted. A third microelectronic device structure including a second semiconductor structure is bonded over the vertically inverted first assembly to form a second assembly. A memory cell including a portion of the second semiconductor structure is formed after forming the second assembly. A second BEOL structure is formed over the memory cell.
1. A method of forming a microelectronic device, comprising:
forming a first microelectronic device structure comprising a first semiconductor structure, control logic circuitry at least partially overlying the first semiconductor structure, a first back-end-of-line (BEOL) structure overlying and in electrical communication with the control logic circuitry, and a first isolation material covering the control logic circuitry and the first BEOL structure;
bonding a second microelectronic device structure over the first BEOL structure of the first microelectronic device structure to form a first assembly;
vertically inverting the first assembly;
bonding a third microelectronic device structure including a second semiconductor structure over the vertically inverted first assembly to form a second assembly;
forming memory cells comprising portions of the second semiconductor structure after forming the second assembly; and
forming a second BEOL structure over the memory cells.
2. The method of claim 1, wherein bonding a second microelectronic device structure over the first BEOL structure of the first microelectronic device structure comprises bonding a second isolation material of the second microelectronic device structure to the first isolation material of the first microelectronic device structure.
3. The method of claim 1, further comprising forming the first microelectronic device structure to further include conductive contact structures within a contact region horizontally offset from a region containing the control logic circuitry.
4. The method of claim 3, further comprising, after vertically inverting the first assembly:
thinning the first semiconductor structure to expose the first isolation material and the conductive contact structures;
forming conductive contact pad structures on the conductive contact structures; and
forming a second isolation material over the conductive contact pad structures and a remainder of the first semiconductor structure.
5. The method of claim 4, wherein bonding a third microelectronic device structure comprising a second semiconductor structure over the vertically inverted first assembly comprises bonding a third isolation material of the third microelectronic device structure to the second isolation material.
6. The method of claim 5, wherein forming the memory cells comprises:
forming access devices using the portions of the second semiconductor structure; and
forming storage node devices over and in electrical communication with the access devices to form the memory cells, each of the memory cells individually including one of the access devices and one of the storage node devices.
7. The method of claim 6, wherein forming the access devices comprises:
removing a segment of the second semiconductor structure after forming the second assembly;
patterning a remaining segment of the second semiconductor structure to form the portions of the second semiconductor structure;
forming word lines extending through the portions of the second semiconductor structure in a first horizontal direction; and
forming digit lines vertically overlying the word lines and the portions of the second semiconductor structure and extending horizontally in a second horizontal direction orthogonal to the first horizontal direction, first digit line contact structures extending vertically from the digit lines to the portions of the second semiconductor structure.
8. The method of claim 7, wherein forming the storage node devices over and in electrical communication with the access devices comprises:
forming additional contact structures on the portions of the second semiconductor structure;
forming conductive routing structures on the additional contact structures; and
forming the storage node devices on the conductive routing structures, at least partially horizontally offset from the additional contact structures.
9. The method of claim 7, further comprising:
forming second digit line contact structures extending vertically through the digit lines and to some of the conductive contact pad structures; and
forming word line contact structures extending vertically through the word lines and to some other of the conductive contact pad structures.
10. The method of claim 1, further comprising:
forming the first BEOL structure of the first microelectronic device structure to include conductive routing structures over and in electrical communication with the control logic circuitry, and conductive pad structures over and in electrical communication with the conductive routing structures; and
forming the second BEOL structure to include additional conductive routing structures over the memory cells and in electrical communication with the control logic circuitry, and additional conductive pad structures over and in electrical communication with the additional conductive routing structures.
11. The method of claim 10, further comprising:
forming the conductive routing structures and the additional conductive routing structures to each comprise copper; and
forming the conductive pad structures and the additional conductive pad structures to each comprise aluminum.
12. A method of forming a microelectronic device, comprising:
forming a semiconductor wafer comprising a semiconductor material, trenches within the semiconductor material, control logic devices overlying the semiconductor material, routing structures overlying the control logic devices, and contact structures extending from the semiconductor material to some of the routing structures;
attaching an additional wafer to the semiconductor wafer using oxide-oxide bonding to form an assembly;
vertically inverting the assembly;
removing portions of the semiconductor material to expose portions of the contact structures after vertically inverting the assembly;
forming contact pad structures on the exposed portions of the contact structures;
after forming the contact pad structures, attaching an additional semiconductor wafer comprising an additional semiconductor material to the assembly using additional oxide-oxide bonding;
forming access devices using portions of the additional semiconductor material;
forming word lines and digit lines operatively associated with the access devices;
forming additional contact structures extending through the word lines and the digit lines and to some of the contact pad structures;
forming further contact structures extending to some other of the contact pad structures;
forming storage node devices over and coupled to the access devices; and
forming additional routing structures over the storage node devices, at least some of the additional routing structures being coupled to the additional contact structures.
13. The method of claim 12, wherein forming access devices using portions of the additional semiconductor material comprises:
removing an upper region of the additional semiconductor material after attaching the additional semiconductor wafer to the assembly;
patterning a lower region of the additional semiconductor material to form discrete semiconductor structures; and
removing portions of the discrete semiconductor structures to form semiconductor pillars serving as channel structures for the access devices.
14. The method of claim 13, wherein forming word lines and digit lines operatively associated with the access devices comprises:
forming the word lines to be horizontally adjacent to the semiconductor pillars and to extend in a first horizontal direction; and
forming the digit lines to vertically overlie and be horizontally adjacent to the semiconductor pillars and to extend in a second horizontal direction perpendicular to the first horizontal direction, digit line contact structures extending from the discrete semiconductor structures to the digit lines.
15. The method of any one of claims 12-14, further comprising forming the routing structures of the semiconductor wafer to include:
tungsten routing structures over and in electrical communication with transistors of the control logic devices; and
copper routing structures over and in electrical communication with the tungsten routing structures.
16. The method of claim 15, further comprising forming the routing structures of the semiconductor wafer to further include aluminum pad structures over and in electrical communication with the copper routing structures.
17. The method of claim 15, further comprising forming the additional routing structures to include:
additional tungsten routing structures over the storage node devices;
additional copper routing structures over and in electrical communication with the additional tungsten routing structures; and
aluminum pad structures over and in electrical communication with the additional copper routing structures.
18. A microelectronic device, comprising:
array regions, individually comprising:
memory cells comprising access devices and storage node devices;
digit lines coupled to the access devices and extending in a first direction;
word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and
control logic devices vertically offset from and in electrical communication with the memory cells;
digit line exit regions horizontally alternating with the array regions along the first direction and individually comprising:
portions of the digit lines extending beyond the array regions adjacent thereto;
contact pad structures underlying the portions of the digit lines;
digit line contact structures extending through at least some of the portions of the digit lines to the contact pad structures;
routing structures underlying the contact pad structures and in electrical communication with some of the control logic devices; and
contact structures extending from the contact pad structures to the routing structures; and
word line exit regions horizontally alternating with the array regions along the second direction and individually comprising:
portions of the word lines extending beyond the array regions adjacent thereto;
additional contact pad structures underlying the portions of the word lines;
word line contact structures extending through at least some of the portions of the word lines to the additional contact pad structures;
additional routing structures underlying the additional contact pad structures and in electrical communication with some other of the control logic devices; and
additional contact structures extending from the additional contact pad structures to the additional routing structures.
19. The microelectronic device of claim 18, further comprising:
a first back-end-of-line (BEOL) structure overlying the memory cells and the control logic devices and in electrical communication with one or more deep contact structures, the one or more deep contact structures in electrical communication with one or more of the control logic devices; and
a second BEOL structure underlying the memory cells and the control logic devices and in electrical communication with the one or more deep contact structures.
20. The microelectronic device of claim 19, wherein:
the first BEOL structure comprises:
first routing structures comprising copper overlying the memory cells and the control logic devices; and
first pad structures comprising aluminum overlying and coupled to the first routing structures; and
the second BEOL structure comprises:
second routing structures comprising copper underlying the memory cells and the control logic devices; and
second pad structures comprising aluminum underlying and coupled to the second routing structures.
21. The microelectronic device of claim 19, further comprising socket regions horizontally offset from the array regions, the digit line exit regions, and the word line exit regions, the socket regions individually comprising the one or more deep contact structures.
22. The microelectronic device of claim 21, wherein the socket regions further individually comprise additional control logic devices having different configurations and operational functionality than the control logic devices.
23. The microelectronic device of claim 22, wherein the socket regions further individually comprise capacitors in electrical communication with one or more of: at least some of the control logic devices, and at least some of the additional control logic devices.
24. The microelectronic device of any one of claims 18-23, wherein the control logic devices within each of the array regions comprise:
sense amplifier devices within multiple sense amplifier regions positioned proximate corners of the array region that are diagonally opposite one another; and
sub-word-line driver devices within multiple sub-word-line driver regions positioned proximate additional corners of the array region that are diagonally opposite one another.
25. The microelectronic device of claim 24, wherein, for each sense amplifier region of the multiple sense amplifier regions within the array region:
some of the sense amplifier devices within the sense amplifier region are in electrical communication with some of the digit lines extending through the array region; and
some other of the sense amplifier devices within the sense amplifier region are in electrical communication with some of the digit lines extending through an additional one of the array regions adjacent to the array region.
26. The microelectronic device of claim 25, wherein:
the some of the sense amplifier devices are in electrical communication with the some of the digit lines extending through the array region by way of: some of the digit line contact structures, some of the contact pad structures, some of the contact structures, and some of the routing structures within one of the digit line exit regions interposed between the array region and the additional one of the array regions; and
the some other of the sense amplifier devices are in electrical communication with the some of the digit lines extending horizontally through the additional one of the array regions by way of: some other of the digit line contact structures, some other of the contact pad structures, some other of the contact structures, and some other of the routing structures within the one of the digit line exit regions.
27. The microelectronic device of claim 24, wherein, for each sub-word-line driver region of the multiple sub-word-line driver regions within the array region:
some of the sub-word-line driver devices within the sub-word-line driver region are in electrical communication with some of the word lines extending through the array region; and
some other of the sub-word-line driver devices within the sub-word-line driver region are in electrical communication with some of the word lines extending through an additional one of the array regions adjacent to the array region.
28. The microelectronic device of claim 27, wherein:
the some of the sub-word-line driver devices are in electrical communication with the some of the word lines extending through the array region by way of: some of the word line contact structures, some of the additional contact pad structures, some of the additional contact structures, and some of the additional routing structures within one of the word line exit regions interposed between the array region and the additional one of the array regions; and
the some other of the sub-word-line driver devices are in electrical communication with the some of the word lines extending horizontally through the additional one of the array regions by way of: some other of the word line contact structures, some other of the additional contact pad structures, some other of the additional contact structures, and some other of the additional routing structures within the one of the word line exit regions.
29. The microelectronic device of claim 18, wherein each of the contact pad structures and each of the additional contact pad structures comprise copper.
30. An electronic system, comprising:
an input device;
an output device;
a processor device operably coupled to the input device and the output device; and
a memory device operably coupled to the processor device and comprising:
memory array regions each comprising dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic devices vertically offset from and in electrical communication with the DRAM cells;
a digit line contact region between two of the memory array regions adjacent one another along a first direction, the digit line contact region comprising:
end portions of some of the digit lines extending beyond horizontal areas of the two of the memory array regions;
conductive pads vertically underlying the some of the digit lines;
digit line contacts extending vertically through the end portions of the some of the digit lines to the conductive pads;
conductive routing vertically underlying the conductive pads; and
conductive contacts extending vertically from the conductive pads to the conductive routing; and
a word line contact region between two other of the memory array regions adjacent one another along a second direction perpendicular to the first direction, the word line contact region comprising:
end portions of some of the word lines extending beyond horizontal areas of the two other of the memory array regions;
additional conductive pads vertically underlying the some of the word lines;
word line contacts extending completely vertically through the end portions of the some of the word lines to the additional conductive pads;
additional conductive routing vertically underlying the additional conductive pads; and
additional conductive contacts extending vertically from the additional conductive pads to the additional conductive routing.
Methods of Forming Microelectronic Devices, and Related Microelectronic Devices and Electronic Systems

Priority Claim
This application claims the benefit of the filing date of U.S. Patent Application Serial No. 17/364,429, filed June 30, 2021, entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS". The subject matter of this application is also related to: U.S. Patent Application Serial No. 17/364,281, filed June 30, 2021, listing Fatma Arzum Simsek-Ege, Kunal R. Parekh, Terrence B. McDaniel, and Beau D. Barry as inventors and entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS"; U.S. Patent Application Serial No. 17/364,335, filed June 30, 2021, listing Fatma Arzum Simsek-Ege and Kunal R. Parekh as inventors and entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS"; U.S. Patent Application Serial No. 17/364,377, filed June 30, 2021, listing Fatma Arzum Simsek-Ege as inventor and entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS"; U.S. Patent Application Serial No. 17/364,476, filed June 30, 2021, listing Fatma Arzum Simsek-Ege and Kunal R. Parekh as inventors and entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS"; and U.S. Patent Application Serial No. 17/364,379, filed June 30, 2021, listing Fatma Arzum Simsek-Ege as inventor and entitled "METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS". The disclosure of each of the above documents is incorporated herein by reference in its entirety.

Technical Field
In various embodiments, the present disclosure relates generally to the field of microelectronic device design and fabrication. More particularly, the present disclosure relates to methods of forming microelectronic devices and memory devices, and to related microelectronic devices, memory devices, and electronic systems.

Background
Microelectronic device designers often desire to increase the level of integration, or density, of features within a microelectronic device by reducing the size of the individual features and by reducing the separation distance between adjacent features. In addition, microelectronic device designers often desire to design architectures that are not only compact but also offer performance advantages, as well as simplified designs that are easier and less expensive to manufacture.

One example of a microelectronic device is a memory device. Memory devices are typically provided as internal integrated circuits within computers or other electronic devices. There are many types of memory devices, including, but not limited to, volatile memory devices. One type of volatile memory device is a dynamic random access memory (DRAM) device. A DRAM device may include a memory array including DRAM cells arranged in rows extending in a first horizontal direction and columns extending in a second horizontal direction. In one design configuration, an individual DRAM cell includes an access device (e.g., a transistor) and a storage node device (e.g., a capacitor) electrically connected to the access device.
The DRAM cells of a DRAM device are electrically accessible through digit lines and word lines arranged along the rows and columns of the memory array and in electrical communication with control logic devices within a base control logic structure of the DRAM device.

Control logic devices within a base control logic structure underlying the memory array of a DRAM device have been used to control operations on the DRAM cells of the DRAM device. The control logic devices of the base control logic structure may be placed in electrical communication with the digit lines and word lines coupled to the DRAM cells by way of routing structures and contact structures. Unfortunately, processing conditions (e.g., temperatures, pressures, materials) used to form the memory array over the base control logic structure can limit the configurations and performance of the control logic devices within the base control logic structure. In addition, the quantities, dimensions, and arrangements of the different control logic devices employed within the base control logic structure can also undesirably impede reductions to the size (e.g., horizontal footprint) of a memory device, and/or improvements in the performance of a DRAM device (for example, faster memory cell on/off speeds, lower threshold switching voltage requirements, faster data transfer rates, lower power consumption).

Summary
In some embodiments, a method of forming a microelectronic device comprises forming a first microelectronic device structure comprising a first semiconductor structure, control logic circuitry at least partially overlying the first semiconductor structure, a first back-end-of-line (BEOL) structure over and in electrical communication with the control logic circuitry, and a first isolation material covering the control logic circuitry and the first BEOL structure. A second microelectronic device structure is bonded over the first BEOL structure of the first microelectronic device structure to form a first assembly. The first assembly is vertically inverted. A third microelectronic device structure comprising a second semiconductor structure is bonded over the vertically inverted first assembly to form a second assembly. Memory cells comprising portions of the second semiconductor structure are formed after forming the second assembly. A second BEOL structure is formed over the memory cells.

In additional embodiments, a method of forming a microelectronic device comprises forming a semiconductor wafer comprising a semiconductor material, trenches within the semiconductor material, control logic devices overlying the semiconductor material, routing structures overlying the control logic devices, and contact structures extending from the semiconductor material to some of the routing structures. An additional wafer is attached to the semiconductor wafer using oxide-oxide bonding to form an assembly. The assembly is vertically inverted. After vertically inverting the assembly, portions of the semiconductor material are removed to expose portions of the contact structures. Contact pad structures are formed on the exposed portions of the contact structures. After forming the contact pad structures, an additional semiconductor wafer comprising an additional semiconductor material is attached to the assembly using additional oxide-oxide bonding. Access devices are formed using portions of the additional semiconductor material. Word lines and digit lines operatively associated with the access devices are formed.
Additional contact structures are formed to penetrate the word lines and the digit lines and to extend to some of the contact pad structures. Further contact structures are formed to extend to some other of the contact pad structures. Storage node devices are formed overlying and coupled to the access devices. Additional routing structures are formed over the storage node devices. At least some of the additional routing structures are coupled to the further contact structures.

In additional embodiments, a microelectronic device comprises array regions, digit line exit regions, and word line exit regions. The array regions individually comprise: memory cells comprising access devices and storage node devices; digit lines coupled to the access devices and extending in a first direction; word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and control logic devices vertically offset from and in electrical communication with the memory cells. The digit line exit regions horizontally alternate with the array regions along the first direction and individually comprise: portions of the digit lines extending beyond the array regions adjacent thereto; contact pad structures located below the portions of the digit lines; digit line contact structures extending through at least some of the portions of the digit lines to the contact pad structures; routing structures located below the contact pad structures and in electrical communication with some of the control logic devices; and contact structures extending from the contact pad structures to the routing structures. The word line exit regions horizontally alternate with the array regions along the second direction and individually comprise: portions of the word lines extending beyond the array regions adjacent thereto; additional contact pad structures located below the portions of the word lines; word line contact structures extending through at least some of the portions of the word lines to the additional contact pad structures; additional routing structures located below the additional contact pad structures and in electrical communication with some other of the control logic devices; and additional contact structures extending from the additional contact pad structures to the additional routing structures.

In yet additional embodiments, an electronic system comprises an input device, an output device, a processor device operatively connected to the input device and the output device, and a memory device operatively connected to the processor device. The memory device comprises: memory array regions; a digit line contact region between two of the memory array regions adjacent to each other along a first direction; and a word line contact region between two other of the memory array regions adjacent to each other along a second direction perpendicular to the first direction. The memory array regions each comprise dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic devices vertically offset from and in electrical communication with the DRAM cells.
The digit line contact region comprises: end portions of some of the digit lines extending beyond horizontal areas of the two of the memory array regions; conductive pads vertically below said some of the digit lines; digit line contacts extending vertically through the end portions of said some of the digit lines to the conductive pads; conductive routing vertically below the conductive pads; and conductive contacts extending vertically from the conductive pads to the conductive routing. The word line contact region comprises: end portions of some of the word lines extending beyond horizontal areas of the two other of the memory array regions; additional conductive pads vertically below said some of the word lines; word line contacts extending completely vertically through the end portions of said some of the word lines to the additional conductive pads; additional conductive routing vertically below the additional conductive pads; and additional conductive contacts extending vertically from the additional conductive pads to the additional conductive routing.

Brief Description of the Drawings

FIG. 1 is a simplified plan view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with embodiments of the disclosure.

FIGS. 2A through 2D are simplified partial longitudinal cross-sectional views of the array region (FIG. 2A), digit line exit region (FIG. 2B), word line exit region (FIG. 2C), and socket region (FIG. 2D) of the microelectronic device structure shown in FIG. 1.

FIGS. 3A through 3D are simplified partial longitudinal cross-sectional views of the array region (FIG. 3A), digit line exit region (FIG. 3B), word line exit region (FIG. 3C), and socket region (FIG. 3D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 2A through 2D.

FIGS. 4A through 4D are simplified partial longitudinal cross-sectional views of the array region (FIG. 4A), digit line exit region (FIG. 4B), word line exit region (FIG. 4C), and socket region (FIG. 4D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 3A through 3D.

FIGS. 5A through 5D are simplified partial longitudinal cross-sectional views of the array region (FIG. 5A), digit line exit region (FIG. 5B), word line exit region (FIG. 5C), and socket region (FIG. 5D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 4A through 4D.

FIGS. 6A through 6D are simplified partial longitudinal cross-sectional views of the array region (FIG. 6A), digit line exit region (FIG. 6B), word line exit region (FIG. 6C), and socket region (FIG. 6D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 5A through 5D.

FIG. 7 is a simplified partial longitudinal cross-sectional view of an additional microelectronic device structure at a processing stage of the method of forming the microelectronic device, in accordance with embodiments of the disclosure.
FIGS. 8A through 8D are simplified partial longitudinal cross-sectional views of the array region (FIG. 8A), digit line exit region (FIG. 8B), word line exit region (FIG. 8C), and socket region (FIG. 8D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 6A through 6D and the processing stage of FIG. 7.

FIGS. 9A through 9D are simplified partial longitudinal cross-sectional views of the array region (FIG. 9A), digit line exit region (FIG. 9B), word line exit region (FIG. 9C), and socket region (FIG. 9D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 8A through 8D.

FIGS. 10A through 10D are simplified partial longitudinal cross-sectional views of the array region (FIG. 10A), digit line exit region (FIG. 10B), word line exit region (FIG. 10C), and socket region (FIG. 10D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 9A through 9D.

FIGS. 11A through 11D are simplified partial longitudinal cross-sectional views of the array region (FIG. 11A), digit line exit region (FIG. 11B), word line exit region (FIG. 11C), and socket region (FIG. 11D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 10A through 10D.

FIGS. 12A through 12D are simplified partial longitudinal cross-sectional views of the array region (FIG. 12A), digit line exit region (FIG. 12B), word line exit region (FIG. 12C), and socket region (FIG. 12D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 11A through 11D.

FIGS. 13A through 13D are simplified partial longitudinal cross-sectional views of the array region (FIG. 13A), digit line exit region (FIG. 13B), word line exit region (FIG. 13C), and socket region (FIG. 13D) shown in FIGS. 2A through 2D at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 12A through 12D.

FIG. 14 is a simplified plan view of a microelectronic device, in accordance with embodiments of the disclosure.

FIG. 15 is a schematic block diagram of an electronic system, in accordance with embodiments of the disclosure.

Detailed Description

The following description provides specific details, such as material compositions, shapes, and sizes, in order to provide a thorough description of embodiments of the disclosure. However, a person of ordinary skill in the art will understand that the embodiments of the disclosure may be practiced without employing these specific details. Indeed, the embodiments of the disclosure may be practiced in conjunction with conventional microelectronic device fabrication techniques employed in industry. In addition, the description provided below does not form a complete process flow for manufacturing a microelectronic device (e.g., a memory device). The structures described below do not form a complete microelectronic device. Only those process acts and structures necessary to understand the embodiments of the disclosure are described in detail below. Additional acts to form a complete microelectronic device from the structures may be performed by conventional fabrication techniques.

The drawings presented herein are for illustrative purposes only, and are not meant to be actual views of any particular material, component, structure, device, or system. Variations from the shapes depicted in the drawings as a result, for example, of manufacturing techniques and/or tolerances, are to be expected.
Thus, the embodiments described herein should not be construed as being limited to the particular shapes or regions as illustrated, but are to include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as box-shaped may have rough and/or nonlinear features, and a region illustrated or described as round may include some rough and/or linear features. Moreover, sharp angles that are illustrated may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the precise shape of a region and do not limit the scope of the present claims. The drawings are not necessarily drawn to scale. Additionally, elements common between figures may retain the same numerical designation.

As used herein, a "memory device" means and includes a microelectronic device exhibiting, but not necessarily limited to, memory functionality. In other words, and by way of non-limiting example only, the term "memory device" includes not only conventional memory (e.g., conventional volatile memory; conventional non-volatile memory), but also an application-specific integrated circuit (ASIC) (e.g., a system on a chip (SoC)), a microelectronic device combining logic and memory, and a graphics processing unit (GPU) incorporating memory.

As used herein, the term "configured" refers to a size, shape, material composition, orientation, and arrangement of one or more of at least one structure and at least one apparatus facilitating operation of one or more of the structure and the apparatus in a predetermined way.

As used herein, the terms "vertical," "longitudinal," "horizontal," and "lateral" are in reference to a major plane of a structure and are not necessarily defined by Earth's gravitational field. A "horizontal" or "lateral" direction is a direction that is substantially parallel to the major plane of the structure, while a "vertical" or "longitudinal" direction is a direction that is substantially perpendicular to the major plane of the structure. The major plane of the structure is defined by a surface of the structure having a relatively large area compared to other surfaces of the structure. With reference to the figures, a "horizontal" or "lateral" direction may be perpendicular to an indicated "Z" axis, and may be parallel to an indicated "X" axis and/or parallel to an indicated "Y" axis; and a "vertical" or "longitudinal" direction may be parallel to an indicated "Z" axis, may be perpendicular to an indicated "X" axis, and may be perpendicular to an indicated "Y" axis.

As used herein, features (e.g., regions, structures, devices) described as "adjacent" to one another mean and include features of the disclosed identity (or identities) that are located most proximate (e.g., closest to) one another. Additional features (e.g., additional regions, additional structures, additional devices) not matching the disclosed identity (or identities) of the "adjacent" features may be disposed between the "adjacent" features. Put another way, the "adjacent" features may be positioned directly adjacent one another, such that no other feature intervenes between the "adjacent" features; or the "adjacent" features may be positioned indirectly adjacent one another, such that at least one feature having an identity other than that associated with at least one of the "adjacent" features is positioned between the "adjacent" features.
Accordingly, features described as "vertically adjacent" one another mean and include features of the disclosed identity (or identities) that are located most vertically proximate (e.g., vertically closest to) one another. In addition, features described as "horizontally adjacent" one another mean and include features of the disclosed identity (or identities) that are located most horizontally proximate (e.g., horizontally closest to) one another.

As used herein, spatially relative terms, such as "beneath," "below," "lower," "bottom," "above," "upper," "top," "front," "rear," "left," and "right," may be used for ease of description to describe one element's or feature's relationship to another element or feature, as illustrated in the figures. Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures. For example, if materials in the figures are inverted, elements described as "below" or "beneath" or "under" or "on bottom of" other elements or features would then be oriented "above" or "on top of" the other elements or features. Thus, the term "below" can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to a person of ordinary skill in the art. The materials may be otherwise oriented (e.g., rotated 90 degrees, inverted, flipped), and the spatially relative descriptors used herein are to be interpreted accordingly.

As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.

As used herein, the phrase "coupled to" refers to structures operatively connected with each other, such as electrically connected through a direct ohmic connection or through an indirect connection (e.g., by way of another structure).

As used herein, the term "substantially" in reference to a given parameter, property, or condition means and includes to a degree that a person of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0 percent met, at least 95.0 percent met, at least 99.0 percent met, at least 99.9 percent met, or even 100.0 percent met.

As used herein, "about" or "approximately" in reference to a numerical value for a particular parameter is inclusive of the numerical value, and a degree of variance from the numerical value that a person of ordinary skill in the art would understand is within acceptable tolerances for the particular parameter.
For example, "about" or "approximately" with respect to a value may include additional values that are within the range of 90.0% to 110.0% of the stated value, such as within the range of 95.0% to 105.0% of the stated value, within the stated Within the range of 97.5% to 102.5% of the stated value, within the range of 99.0% to 101.0% of the stated value, within the range of 99.5% to 100.5% of the stated value, or within the range of 99.9% to 100.1% of the stated value .As used herein, "conductive material" refers to and includes a conductive material such as one or more of the following: metals (e.g., tungsten (W), titanium (Ti), molybdenum (Mo), niobium (Nb), vanadium (V), hafnium (Hf), tantalum (Ta), chromium (Cr), zirconium (Zr), iron (Fe), ruthenium (Ru), osmium (Os), cobalt (Co), rhodium (Rh), iridium (Ir), nickel (Ni), palladium (Pa), platinum (Pt), copper (Cu), silver (Ag), gold (Au), aluminum (Al)); alloys (e.g., Co-based alloys, Alloys based on Fe, alloys based on Ni, alloys based on Fe and Ni, alloys based on Co and Ni, alloys based on Fe and Co, alloys based on Co and Ni and Fe, alloys based on Al, alloys based on Cu, alloys based on Magnesium (Mg) alloys, Ti-based alloys, steel, mild steel, stainless steel); conductive metal-containing materials (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides); and Conductively doped semiconductor material (eg, conductively doped polysilicon, conductively doped germanium (Ge), conductively doped silicon germanium (SiGe)). Additionally, "conductive structure" means and includes a structure formed of and including a conductive material.As used herein, "insulating material" means and includes an electrically insulating material such as one or more of the following: at least one dielectric oxide material (e.g., silicon oxide (SiOx), phosphosilicate glass, borosilicate Salt glass, borophosphosilicate glass, fluorosilicate glass, aluminum oxide (AlOx), hafnium oxide (HfOx), niobium oxide (NbOx), titanium oxide (TiOx), zirconium oxide (ZrOx), tantalum oxide ( TaOx) and magnesium oxide (MgOx)); at least one dielectric nitride material (e.g., silicon nitride (SiNy)); at least one dielectric oxynitride material (e.g., silicon oxynitride (SiOxNy) )); at least one dielectric oxycarbide material (eg, silicon oxycarbide (SiOxCy)); at least one hydrogenated dielectric oxycarbide material (eg, hydrogenated silicon oxycarbide (SiCxOyHz)); and at least one dielectric oxycarbide Nitride materials (eg, silicon oxycarbide (SiOxCzNy)). A chemical formula herein that includes one or more of "x," "y," and "z" (e.g., SiOx, AlOx, HfOx, NbOx, TiOx, SiNy, SiOxNy, SiOxCy, SiCxOyHz, SiOxCzNy) represents an element containing "x" atoms, "y" atoms of the other element, and "z" atoms of the additional element (if present) for each atom of the other element (e.g., Si, Al, Hf, Nb, Ti) Average ratio of materials. Since chemical formulas represent relative atomic ratios rather than strict chemical structures, insulating materials may include one or more stoichiometric compounds and/or one or more non-stoichiometric compounds, and "x", "y" and "z" ( If present, the value of ) may be an integer or may be non-integer. As used herein, the term "non-stoichiometric compound" means and includes a compound having a composition of an element that cannot be represented by a ratio of well-defined natural numbers and violates the law of definite ratio. 
In addition, an "insulating structure" refers to and includes a structure formed of and including an insulating material.As used herein, the term "uniform" means that the relative amounts of elements contained in a feature (e.g., material, structure) do not vary throughout different portions of the feature (e.g., different horizontal portions, different vertical portions) . Conversely, as used herein, the term "non-uniform" means that the relative amounts of elements contained in a feature (eg, material, structure) vary throughout different portions of the feature. If the feature is non-uniform, the amount of one or more elements contained in the feature may vary stepwise (e.g., abruptly), or may vary continuously (e.g., gradually, Such as linearly, parabolicly changing). A feature may, for example, be formed from and include a stack of at least two different materials.Unless the context dictates otherwise, the materials described herein may be formed by any suitable technique including, but not limited to, spin coating, blanket coating, chemical vapor deposition (CVD), plasma enhanced CVD (PECVD), Atomic layer deposition (ALD), plasma enhanced ALD (PEALD), physical vapor deposition (PVD) (eg, sputtering), or epitaxial growth. Depending on the particular material to be formed, the technique used to deposit or grow the material can be selected by one skilled in the art. Furthermore, unless the context dictates otherwise, the material removal described herein may be accomplished by any suitable technique including, but not limited to, etching (e.g., dry etching, wet etching, vapor phase etching), ion milling, abrasive planarization (eg, chemical mechanical planarization (CMP)) or other known methods.1-14 are various views (described in further detail below) illustrating different processing stages of a method of forming a microelectronic device (eg, a memory device, such as a DRAM device) according to an embodiment of the present disclosure. Considering the description provided below, it will be apparent to those of ordinary skill in the art that the methods described herein can be used to form a variety of devices. In other words, the methods of the present disclosure can be used whenever it is desired to form microelectronic devices. From the description provided below, it will be apparent to those of ordinary skill in the art that the methods and structures described herein can be used to form various devices and electronic systems.1 shows a simplified plan view of a first microelectronic device structure 100 (eg, a first wafer) at an early processing stage in a method of forming a microelectronic device (eg, a memory device, such as a DRAM device) according to an embodiment of the present disclosure. . As shown in FIG. 1, a first microelectronic device structure 100 may be formed to include an array region 102, a digit line exit region interposed between pairs of array regions 102 that are horizontally adjacent to each other along a first horizontal direction (eg, Y direction). 104 (also referred to as "digit line contact slot area"), interposed between additional pairs of array areas 102 that are horizontally adjacent to each other along a second horizontal direction (eg, the X direction) that is orthogonal to the first horizontal direction. 
word line exit regions 106 (also referred to as "word line contact socket regions") interposed between additional pairs of the array regions 102 that are horizontally adjacent to one another along a second horizontal direction (e.g., the X-direction) orthogonal to the first horizontal direction; and one or more socket regions 108 (also referred to as "back end of line (BEOL) contact socket regions") horizontally adjacent to some of the array regions 102 along one or more of the first horizontal direction and the second horizontal direction. The array regions 102, the digit line exit regions 104, the word line exit regions 106, and the socket regions 108 are each described in further detail below.

The array regions 102 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have arrays of memory cells (e.g., arrays of DRAM cells) subsequently formed within horizontal boundaries thereof, as described in further detail below. In addition, the array regions 102 may also be configured and positioned to have desired arrangements of control logic devices subsequently formed within the horizontal boundaries thereof, as also described in further detail below. The control logic devices to be formed within the horizontal boundaries of the array regions 102 may be vertically offset (e.g., in the Z-direction) from the memory cells to be formed within the horizontal boundaries of the array regions 102.

The first microelectronic device structure 100 may be formed to include a desired quantity of the array regions 102. For clarity and ease of understanding of the drawings and related description, FIG. 1 depicts the first microelectronic device structure 100 as formed to include four (4) array regions 102: a first array region 102A, a second array region 102B, a third array region 102C, and a fourth array region 102D. As shown in FIG. 1, the second array region 102B may be horizontally adjacent to the first array region 102A along the Y-direction and horizontally adjacent to the fourth array region 102D along the X-direction; the third array region 102C may be horizontally adjacent to the first array region 102A along the X-direction and horizontally adjacent to the fourth array region 102D along the Y-direction; and the fourth array region 102D may be horizontally adjacent to the third array region 102C along the Y-direction and horizontally adjacent to the second array region 102B along the X-direction. In additional embodiments, the first microelectronic device structure 100 is formed to include a different quantity of the array regions 102. For example, the first microelectronic device structure 100 may be formed to include more than four (4) of the array regions 102, such as greater than or equal to eight (8), greater than or equal to sixteen (16), greater than or equal to thirty-two (32), greater than or equal to sixty-four (64), greater than or equal to one hundred twenty-eight (128), greater than or equal to two hundred fifty-six (256), greater than or equal to five hundred twelve (512), or greater than or equal to one thousand twenty-four (1024) of the array regions 102.

In addition, the first microelectronic device structure 100 may be formed to include a desired distribution of the array regions 102. As shown in FIG. 1, in some embodiments, the first microelectronic device structure 100 is formed to include rows 103 of the array regions 102 extending along the X-direction, and columns 105 of the array regions 102 extending along the Y-direction.
The rows 103 of the array regions 102 may, for example, include a first row including the first array region 102A and the third array region 102C, and a second row including the second array region 102B and the fourth array region 102D. The columns 105 of the array regions 102 may, for example, include a first column including the first array region 102A and the second array region 102B, and a second column including the third array region 102C and the fourth array region 102D.

With continued reference to FIG. 1, the digit line exit regions 104 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have at least some subsequently formed digit lines (e.g., bit lines, data lines) horizontally terminate therein. For an individual digit line exit region 104, at least some subsequently formed digit lines operatively associated with the array regions 102 flanking the digit line exit region 104 (e.g., at opposing boundaries thereof in the Y-direction) may have ends within the horizontal boundaries of the digit line exit region 104. In addition, the digit line exit regions 104 may also be configured and positioned to include, within the horizontal boundaries thereof, contact structures and routing structures operatively associated with at least some of the subsequently formed digit lines. As described in further detail below, some of the contact structures to be formed within the digit line exit regions 104 may couple the subsequently formed digit lines to control logic circuitry of some control logic devices (e.g., sense amplifier (SA) devices) to subsequently be formed within the array regions 102. As shown in FIG. 1, in some embodiments, the digit line exit regions 104 horizontally extend along the X-direction and are horizontally interposed, along the Y-direction, between horizontally adjacent rows of the array regions 102. The digit line exit regions 104 may, for example, horizontally alternate with the rows of the array regions 102 along the Y-direction.

An individual digit line exit region 104 may be partitioned into multiple subregions. For example, as shown in FIG. 1, an individual digit line exit region 104 may include first digit line exit subregions 104A and second digit line exit subregions 104B. In some embodiments, the first digit line exit subregions 104A horizontally alternate with the second digit line exit subregions 104B along the X-direction. A pair (e.g., two (2)) of horizontally adjacent array regions 102 within an individual column of the array regions 102 may include one (1) first digit line exit subregion 104A and one (1) second digit line exit subregion 104B horizontally positioned therebetween along the Y-direction. By way of non-limiting example, the first array region 102A and the second array region 102B within the first column of the array regions 102 may include one (1) first digit line exit subregion 104A and one (1) second digit line exit subregion 104B horizontally therebetween. The one (1) first digit line exit subregion 104A and the one (1) second digit line exit subregion 104B may be at least partially (e.g., substantially) horizontally
bounded by horizontal boundaries, in the X-direction, of the first array region 102A and the second array region 102B.

As described in further detail below, an individual first digit line exit subregion 104A may be configured and positioned to facilitate electrical connection between a group of digit lines (e.g., an odd group of digit lines, or an even group of digit lines) and a group of control logic devices (e.g., an odd group of SA devices, or an even group of SA devices) operatively associated with a portion (e.g., a half along the X-direction) of one (1) array region 102 (e.g., the first array region 102A) of a pair of horizontally adjacent array regions 102, and also to facilitate electrical connection between an additional group of digit lines (e.g., an additional odd group of digit lines, or an additional even group of digit lines) and an additional group of control logic devices (e.g., an additional odd group of SA devices, or an additional even group of SA devices) operatively associated with a corresponding portion (e.g., a corresponding half along the X-direction) of the additional array region 102 (e.g., the second array region 102B) of the pair of horizontally adjacent array regions 102. Furthermore, as also described in further detail below, an individual second digit line exit subregion 104B may be configured and positioned to facilitate electrical connection between a further group of digit lines and a further group of control logic devices operatively associated with another portion (e.g., another half along the X-direction) of the one (1) array region 102 (e.g., the first array region 102A), and also to facilitate electrical connection between a still further group of digit lines and a still further group of control logic devices operatively associated with a corresponding other portion (e.g., a corresponding other half along the X-direction) of the additional array region 102 (e.g., the second array region 102B).

Still with reference to FIG. 1, the word line exit regions 106 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have at least some subsequently formed word lines (e.g., access lines) horizontally terminate therein. For an individual word line exit region 106, at least some subsequently formed word lines operatively associated with the array regions 102 flanking the word line exit region 106 (e.g., at opposing boundaries thereof along the X-direction) may have ends within the horizontal boundaries of the word line exit region 106. In addition, the word line exit regions 106 may also be configured and positioned to include, within the horizontal boundaries thereof, contact structures and routing structures operatively associated with the subsequently formed word lines. As described in further detail below, some of the contact structures to be formed within the word line exit regions 106 may couple the subsequently formed word lines to control logic circuitry of additional control logic devices (e.g., sub-word-line driver (SWD) devices) to subsequently be formed within the array regions 102. As shown in FIG. 1, in some embodiments, the word line exit regions 106 horizontally extend along the Y-direction and are horizontally interposed, along the X-direction, between horizontally adjacent columns of the array regions 102. The word line exit regions 106 may, for example, horizontally alternate with the columns of the array regions 102 along the X-direction, as schematically illustrated in the sketch below.
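As a visualization aid only (a minimal sketch; the function name and the single-character encoding are illustrative and not part of the disclosure), the tiling depicted in FIG. 1 may be modeled as follows, with array regions 102 ("A") alternating with digit line exit regions 104 ("D") along the Y-direction and with word line exit regions 106 ("W") along the X-direction:

    # Illustrative sketch only: prints a plan-view map of the FIG. 1 tiling.
    # "A" = array region 102, "D" = digit line exit region 104 (between rows,
    # along the Y-direction), "W" = word line exit region 106 (between columns,
    # along the X-direction), "." = intersection of exit regions.
    def tile_plan(rows: int, cols: int) -> str:
        lines = []
        for r in range(2 * rows - 1):          # r indexes the Y-direction
            chars = []
            for c in range(2 * cols - 1):      # c indexes the X-direction
                if r % 2 == 0 and c % 2 == 0:
                    chars.append("A")          # array region 102
                elif r % 2 == 1 and c % 2 == 0:
                    chars.append("D")          # digit line exit region 104
                elif r % 2 == 0 and c % 2 == 1:
                    chars.append("W")          # word line exit region 106
                else:
                    chars.append(".")
            lines.append(" ".join(chars))
        return "\n".join(lines)

    print(tile_plan(2, 2))
    # A W A
    # D . D
    # A W A

For the two-row, two-column arrangement of FIG. 1, the printed map places one digit line exit region between each vertically adjacent pair of array regions and one word line exit region between each horizontally adjacent pair, consistent with the description above.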
An individual word line exit region 106 may be partitioned into multiple subregions. For example, as shown in FIG. 1, an individual word line exit region 106 may include first word line exit subregions 106A and second word line exit subregions 106B. In some embodiments, the first word line exit subregions 106A horizontally alternate with the second word line exit subregions 106B along the Y-direction. A pair (e.g., two (2)) of horizontally adjacent array regions 102 within an individual row of the array regions 102 may include one (1) first word line exit subregion 106A and one (1) second word line exit subregion 106B horizontally positioned therebetween along the X-direction. By way of non-limiting example, the first array region 102A and the third array region 102C within the first row of the array regions 102 may include one (1) first word line exit subregion 106A and one (1) second word line exit subregion 106B horizontally therebetween. The one (1) first word line exit subregion 106A and the one (1) second word line exit subregion 106B may be at least partially (e.g., substantially) horizontally bounded by horizontal boundaries, in the Y-direction, of the first array region 102A and the third array region 102C.

As described in further detail below, an individual first word line exit subregion 106A may be configured and positioned to facilitate electrical connection between a group of word lines (e.g., an odd group of word lines, or an even group of word lines) and a group of control logic devices (e.g., an odd group of SWD devices, or an even group of SWD devices) operatively associated with a portion (e.g., a half along the Y-direction) of one (1) array region 102 (e.g., the first array region 102A) of a pair of horizontally adjacent array regions 102, and also to facilitate electrical connection between an additional group of word lines (e.g., an additional odd group of word lines, or an additional even group of word lines) and an additional group of control logic devices (e.g., an additional odd group of SWD devices, or an additional even group of SWD devices) operatively associated with a corresponding portion (e.g., a corresponding half along the Y-direction) of the other array region 102 (e.g., the third array region 102C) of the pair of horizontally adjacent array regions 102. In addition, as also described in further detail below, an individual second word line exit subregion 106B may be configured and positioned to facilitate electrical connection between a further group of word lines and a further group of control logic devices operatively associated with another portion (e.g., another half along the Y-direction) of the one (1) array region 102 (e.g., the first array region 102A), and also to facilitate electrical connection between a still further group of word lines and a still further group of additional control logic devices operatively associated with a corresponding other portion (e.g., a corresponding other half along the Y-direction) of the further array region 102 (e.g., the third array region 102C).

With continued reference to FIG.
1, the socket regions 108 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to facilitate electrical connection (e.g., by way of contact structures and routing structures) between subsequently formed control logic circuitry and additional subsequently formed structures (e.g., BEOL structures), as described in further detail below. The socket regions 108 may be horizontally adjacent to one or more peripheral horizontal boundaries (e.g., along the Y-direction, along the X-direction) of one or more groups of the array regions 102. For clarity and ease of understanding of the drawings and related description, FIG. 1 depicts the first microelectronic device structure 100 as formed to include one (1) socket region 108 horizontally adjacent to a shared horizontal boundary of the second array region 102B and the fourth array region 102D. However, the first microelectronic device structure 100 may be formed to include a different quantity of socket regions 108 and/or one or more socket regions 108 at different horizontal positions. By way of non-limiting example, the socket region 108 may be horizontally adjacent to a shared horizontal boundary of a different group of the array regions 102 (e.g., a shared horizontal boundary of the third array region 102C and the fourth array region 102D, a shared horizontal boundary of the first array region 102A and the third array region 102C, or a shared horizontal boundary of the first array region 102A and the second array region 102B). As another non-limiting example, the first microelectronic device structure 100 may be formed to include multiple (e.g., more than one) socket regions 108 horizontally adjacent to different groups of the array regions 102 than one another. In some embodiments, multiple socket regions 108 in combination substantially horizontally surround (e.g., substantially horizontally circumscribe) the array regions 102.

FIGS. 2A through 2D show simplified partial longitudinal cross-sectional views of different regions of the first microelectronic device structure 100 previously described with reference to FIG. 1. FIG. 2A shows a simplified partial longitudinal cross-sectional view, from a perspective along the Y-direction (so as to depict the XZ-plane), of one of the array regions 102 (e.g., the first array region 102A) of the first microelectronic device structure 100 shown in FIG. 1. FIG. 2B shows a simplified partial longitudinal cross-sectional view, from a perspective along the Y-direction (so as to depict the XZ-plane), of one of the digit line exit regions 104 of the first microelectronic device structure 100 shown in FIG. 1. FIG. 2C shows a simplified partial longitudinal cross-sectional view, from a perspective along the X-direction (so as to depict the YZ-plane), of one of the word line exit regions 106 of the first microelectronic device structure 100 shown in FIG. 1. FIG. 2D shows a simplified partial longitudinal cross-sectional view, from a perspective along the X-direction (so as to depict the YZ-plane), of one of the socket regions 108 of the first microelectronic device structure 100 shown in FIG. 1.

Referring collectively to FIGS. 2A through 2D, the first microelectronic device structure 100 may be formed to include a first base semiconductor structure 110, filled trenches 112, transistors 114 (FIGS. 2A and 2D), a first isolation material 116, first contact structures 118 (FIGS. 2A and 2D), second contact structures 120 (FIGS. 2A and 2D), third contact structures 122 (FIGS.
2B through 2D), and at least one first routing level 124 including first routing structures 126. The filled trenches 112 vertically extend (e.g., along the Z-direction) into the first base semiconductor structure 110. The transistors 114 at least partially vertically overlie the first base semiconductor structure 110 and the filled trenches 112. The first contact structures 118 and the second contact structures 120 contact the transistors 114. The third contact structures 122 vertically extend through the filled trenches 112 within the digit line exit regions 104 (FIG. 2B), the word line exit regions 106 (FIG. 2C), and the socket regions 108 (FIG. 2D) and contact the first base semiconductor structure 110. Some of the first routing structures 126 contact some of the first contact structures 118 (FIGS. 2A and 2D), some other of the first routing structures 126 contact some of the second contact structures 120 (FIGS. 2A and 2D), and still other of the first routing structures 126 contact some of the third contact structures 122 (FIGS. 2B, 2C, and 2D). The first isolation material 116 may substantially cover and surround the first base semiconductor structure 110, the transistors 114, the first contact structures 118, the second contact structures 120, the third contact structures 122, and the first routing structures 126.

The first base semiconductor structure 110 comprises a base material or construction upon which additional features (e.g., materials, structures, devices) of the first microelectronic device structure 100 are formed. The first base semiconductor structure 110 may comprise a semiconductor structure (e.g., a semiconductor wafer), or a base semiconductor material on a supporting structure. For example, the first base semiconductor structure 110 may comprise a conventional silicon substrate (e.g., a conventional silicon wafer), or another bulk substrate comprising a semiconductor material. In some embodiments, the first base semiconductor structure 110 comprises a silicon wafer. The first base semiconductor structure 110 may include one or more layers, structures, and/or regions formed therein and/or thereon.

The filled trenches 112 may comprise trenches (e.g., openings, vias, apertures) within the first base semiconductor structure 110 that are at least partially (e.g., substantially) filled with the first isolation material 116. The filled trenches 112 may, for example, serve as shallow trench isolation (STI) structures within the first base semiconductor structure 110. The filled trenches 112 may be formed to vertically extend partially (e.g., less than completely) through the first base semiconductor structure 110. Each of the filled trenches 112 may be formed to exhibit substantially the same dimensions and shape as each other of the filled trenches 112, or at least one of the filled trenches 112 may be formed to exhibit one or more of different dimensions and a different shape than at least one other of the filled trenches 112. As a non-limiting example, each of the filled trenches 112 may be formed to exhibit substantially the same vertical dimension and substantially the same vertical cross-sectional shape as each other of the filled trenches 112; or at least one of the filled trenches 112 may be formed to exhibit one or more of a different vertical dimension and a different vertical cross-sectional shape than at least one other of the filled trenches 112.
In some embodiments, the filled trenches 112 are all formed to vertically extend into and terminate at substantially the same depth within the first base semiconductor structure 110. In additional embodiments, at least one of the filled trenches 112 is formed to vertically extend into and terminate at a relatively deeper depth within the first base semiconductor structure 110 than at least one other of the filled trenches 112. As another non-limiting example, each of the filled trenches 112 may be formed to exhibit substantially the same horizontal dimensions and substantially the same horizontal cross-sectional shape as each other of the filled trenches 112; or at least one of the filled trenches 112 may be formed to exhibit one or more of different horizontal dimensions (e.g., relatively larger horizontal dimensions, relatively smaller horizontal dimensions) and a different horizontal cross-sectional shape than at least one other of the filled trenches 112. In some embodiments, at least one of the filled trenches 112 is formed to have one or more different horizontal dimensions (e.g., along the X-direction and/or along the Y-direction) than at least one other of the filled trenches 112.

Referring collectively to FIGS. 2A and 2D, the transistors 114 may individually be formed to include conductively doped regions 128, a channel region 130, a gate structure 132, and a gate dielectric material 134. For an individual transistor 114, the conductively doped regions 128 may be formed within the first base semiconductor structure 110 (e.g., within relatively elevated portions of the first base semiconductor structure 110 horizontally adjacent to at least one of the filled trenches 112); the channel region 130 may be within the first base semiconductor structure 110 and horizontally interposed between the conductively doped regions 128 thereof; the gate structure 132 may vertically overlie the channel region 130; and the gate dielectric material 134 (e.g., a dielectric oxide) may be vertically interposed (e.g., in the Z-direction) between the gate structure 132 and the channel region 130. The conductively doped regions 128 of an individual transistor 114 may include a source region 128A and a drain region 128B.

For the individual transistors 114, the conductively doped regions 128 may comprise semiconductor material of the first base semiconductor structure 110 doped with one or more desired conductivity-enhancing dopants. In some embodiments, the conductively doped regions 128 of a transistor 114 comprise semiconductor material (e.g., silicon) doped with at least one N-type dopant (e.g., one or more of phosphorus, arsenic, antimony, and bismuth). In some of such embodiments, the channel region 130 of the transistor 114 comprises semiconductor material doped with at least one P-type dopant (e.g., one or more of boron, aluminum, and gallium). In some other of such embodiments, the channel region 130 of the transistor 114 comprises substantially undoped semiconductor material (e.g., substantially undoped silicon). In additional embodiments, for an individual transistor 114, the conductively doped regions 128 thereof comprise semiconductor material (e.g., silicon) doped with at least one P-type dopant (e.g., one or more of boron, aluminum, and gallium). In some of such additional embodiments, the channel region 130 of the transistor 114 comprises semiconductor material doped with at least one N-type dopant (e.g., one or more of phosphorus, arsenic, antimony, and bismuth).
In some other of such additional embodiments, the channel region 130 of the transistor 114 comprises substantially undoped semiconductor material (e.g., substantially undoped silicon).

Still referring collectively to FIGS. 2A and 2D, the gate structures 132 (e.g., gate electrodes) may individually horizontally extend (e.g., along the X-direction) between and be employed by multiple transistors 114. The gate structures 132 may be formed of and include conductive material. The gate structures 132 may individually be substantially uniform, or the gate structures 132 may individually be non-uniform. In some embodiments, the gate structures 132 are each substantially uniform. In additional embodiments, the gate structures 132 are each non-uniform. Individual gate structures 132 may, for example, be formed of and include a stack of at least two different conductive materials.

Still referring to FIGS. 2A and 2D, the first contact structures 118 may individually be formed to vertically extend between, and couple, the gate structures 132 (and, hence, the transistors 114) and one or more of the first routing structures 126 of the first routing level 124. The first contact structures 118 may individually be formed of and include conductive material. By way of non-limiting example, the first contact structures 118 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the first contact structures 118 are formed of and include W. In additional embodiments, the first contact structures 118 are formed of and include Cu.

As also shown in FIGS. 2A and 2D, the second contact structures 120 may be formed to vertically extend between, and couple, the conductively doped regions 128 (e.g., the source regions 128A, the drain regions 128B) of the transistors 114 and some of the first routing structures 126 of the first routing level 124. The second contact structures 120 may individually be formed of and include conductive material. By way of non-limiting example, the second contact structures 120 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). A material composition of the second contact structures 120 may be substantially the same as a material composition of the first contact structures 118, or the material composition of one or more of the second contact structures 120 may be different than the material composition of one or more of the first contact structures 118. In some embodiments, the second contact structures 120 are formed of and include W. In additional embodiments, the second contact structures 120 are formed of and include Cu.

Referring collectively to FIGS. 2B through 2D, within (e.g., inside of) horizontal boundaries (e.g., along the X-direction and the Y-direction) of some of the filled trenches 112, at least some of the third contact structures 122 may vertically extend (e.g., along the Z-direction) between some other of the first routing structures 126 and some portions (e.g., relatively vertically recessed portions) of the first base semiconductor structure 110, such as portions underlying some of the filled trenches 112 within the digit line exit regions 104 (FIG.
2B), the word line exit regions 106 (FIG. 2C), and the socket regions 108 (FIG. 2D) of the first microelectronic device structure 100. As shown in FIGS. 2B through 2D, in some embodiments, at least some of the third contact structures 122 vertically extend from some of the first routing structures 126, through one or more of the filled trenches 112, and, within horizontal boundaries of the one or more of the filled trenches 112, to one or more relatively vertically recessed surfaces of the first base semiconductor structure 110. As described in further detail below, following subsequent processing (e.g., subsequent thinning) of the first base semiconductor structure 110, at least some of the third contact structures 122 may be employed to facilitate electrical connection between some of the first routing structures 126 and one or more features (e.g., structures, materials, devices) to be formed at an opposing side (e.g., a back side, a lower surface) of the first base semiconductor structure 110. The third contact structures 122 may individually be formed of and include conductive material. By way of non-limiting example, the third contact structures 122 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the third contact structures 122 are formed of and include W. In additional embodiments, the third contact structures 122 are formed of and include Cu.

Referring collectively to FIGS. 2A through 2D, the first routing structures 126 of the first routing level 124 may be formed of and include conductive material. By way of non-limiting example, the first routing structures 126 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the first routing structures 126 are formed of and include W. In additional embodiments, the first routing structures 126 are formed of and include Cu. At least some of the first routing structures 126 may be employed as local routing structures of a microelectronic device (e.g., a memory device, such as a DRAM device) of the disclosure.

While FIGS. 2A through 2D depict the first microelectronic device structure 100 as formed to include a single (e.g., only one) first routing level 124 including the first routing structures 126, the first microelectronic device structure 100 may be formed to include multiple (e.g., more than one) first routing levels 124 individually including a desired arrangement (e.g., pattern) of first routing structures 126. By way of non-limiting example, the first microelectronic device structure 100 may be formed to include two or more (e.g., three or more) first routing levels 124, wherein different first routing levels 124 are vertically offset from one another and each individually include a desired arrangement of first routing structures 126 therein. At least some of the first routing structures 126 within at least one of the first routing levels 124 may be coupled, by way of conductive interconnect structures, to at least some of the first routing structures 126 within at least one other of the first routing levels 124.

Continuing to refer collectively to FIGS.
2A through 2D, the transistors 114, the first contact structures 118, the second contact structures 120, and some of the first routing structures 126 may form control logic circuitry of various control logic devices 136 (FIGS. 2A and 2D) configured to control various operations of various features (e.g., memory cells) of a microelectronic device (e.g., a memory device, such as a DRAM device) to be formed through the methods of the disclosure. In some embodiments, the control logic circuitry comprises CMOS circuitry. As a non-limiting example, the control logic devices 136 may include one or more (e.g., each) of: charge pumps (e.g., VCCP charge pumps, VNEGWL charge pumps, DVC2 charge pumps); delay-locked loop (DLL) circuitry (e.g., ring oscillators); Vdd regulators; drivers (e.g., main word line drivers, sub-word-line drivers (SWDs)); page buffers; decoders (e.g., local deck decoders, column decoders, row decoders); sense amplifiers (e.g., equalization (EQ) amplifiers, isolation (ISO) amplifiers, NMOS sense amplifiers (NSAs), PMOS sense amplifiers (PSAs)); repair circuitry (e.g., column repair circuitry, row repair circuitry); I/O devices (e.g., local I/O devices); memory test devices; array multiplexers (MUX); error checking and correction (ECC) devices; self-refresh/wear-leveling devices; and other chip/deck control circuitry. Different regions (e.g., the array regions 102 (FIG. 2A), the socket regions 108 (FIG. 2D)) may have different control logic devices 136 formed within the horizontal boundaries thereof.

Still referring collectively to FIGS. 2A through 2D, the first isolation material 116 may be formed on or over surfaces of the first base semiconductor structure 110 inside and outside of the horizontal boundaries of the filled trenches 112. In addition, the first isolation material 116 may be formed on or over surfaces of the transistors 114, the first contact structures 118 (FIGS. 2A and 2D), the second contact structures 120 (FIGS. 2A and 2D), the third contact structures 122 (FIGS. 2B through 2D), and the first routing structures 126. An uppermost vertical boundary (e.g., an uppermost surface) of the first isolation material 116 may vertically overlie uppermost vertical boundaries (e.g., uppermost surfaces) of the first routing structures 126. As described in further detail below, the first isolation material 116 may be employed to attach the first microelectronic device structure 100 to a second microelectronic device structure (e.g., a second wafer). The first isolation material 116 may be formed of and include at least one insulating material. By way of non-limiting example, the first isolation material 116 may be formed of and include one or more of: at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx); at least one dielectric nitride material (e.g., SiNy); at least one dielectric oxynitride material (e.g., SiOxNy); at least one dielectric carboxynitride material (e.g., SiOxCzNy); and amorphous carbon. In some embodiments, the first isolation material 116 is formed of and includes SiOx (e.g., SiO2). The first isolation material 116 may be substantially uniform, or the first isolation material 116 may be non-uniform. In some embodiments, the first isolation material 116 is substantially uniform. In additional embodiments, the first isolation material 116 is non-uniform. The first isolation material 116 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS.
3A to 3D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 3A), the digit line exit region 104 (FIG. 3B), the word line exit region 106 (FIG. 3C), and the socket region 108 (FIG. 3D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 1 and 2A to 2D. As collectively depicted in FIGS. 3A to 3D, a BEOL structure may be formed over the first routing level 124. For example, at least one second routing level 138 including second routing structures 140 may be formed over the first routing level 124; at least one third routing level 142 including third routing structures 144 may be formed over the second routing level 138; and, optionally, at least one fourth routing level 146 including fourth routing structures 148 may be formed over the third routing level 142. One or more of the second routing structures 140 of the second routing level 138 may be coupled to one or more of the first routing structures 126 of the first routing level 124 by way of fourth contact structures 150 (FIGS. 3A and 3D). In addition, one or more of the third routing structures 144 in the third routing level 142 may be coupled to one or more of the second routing structures 140 in the second routing level 138 by way of fifth contact structures 152 (FIGS. 3A and 3D). Furthermore, if formed, one or more of the fourth routing structures 148 (e.g., one or more conductive pad structures) in the fourth routing level 146 may be coupled to one or more of the third routing structures 144 in the third routing level 142 by way of sixth contact structures 154 (FIG. 3D). In additional embodiments, at least some (e.g., all) of the sixth contact structures 154 (FIG. 3D) are omitted (e.g., not formed), and one or more of the fourth routing structures 148 in the fourth routing level 146 are formed to directly physically contact one or more of the third routing structures 144 in the third routing level 142. In further embodiments, the fourth routing level 146 including the fourth routing structures 148 is not formed at the processing stage described with reference to FIGS. 3A to 3D, and may instead be formed at a subsequent processing stage, as described in further detail below.

The second routing structures 140, the third routing structures 144, the fourth routing structures 148 (if present), the fourth contact structures 150 (FIGS. 3A and 3D), the fifth contact structures 152 (FIGS. 3A and 3D), and the sixth contact structures 154 (FIG. 3D) (if present) may each be formed of and include conductive material. By way of non-limiting example, the second routing structures 140, the third routing structures 144, the fourth routing structures 148, the fourth contact structures 150 (FIGS. 3A and 3D), the fifth contact structures 152 (FIGS. 3A and 3D), and the sixth contact structures 154 (FIG. 3D) may individually be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, the second routing structures 140 are each formed of and include W; the third routing structures 144 are each formed of and include Cu; the fourth routing structures 148 are each formed of and include Al; and the fourth contact structures 150 (FIGS. 3A and 3D), the fifth contact structures 152 (FIGS. 3A and 3D), and the sixth contact structures 154 (FIG.
3D) are each formed of and include W.

Still referring collectively to FIGS. 3A to 3D, the second isolation material 156 may be formed on or over at least portions of the first isolation material 116, the second routing structures 140, the third routing structures 144, the fourth routing structures 148 (if present), the fourth contact structures 150 (FIGS. 3A and 3D), the fifth contact structures 152 (FIGS. 3A and 3D), and the sixth contact structures 154 (FIG. 3D) (if present). The second isolation material 156 may be formed of and include at least one insulating material. In some embodiments, the second isolation material 156 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The second isolation material 156 may be substantially uniform, or the second isolation material 156 may be non-uniform. In some embodiments, the second isolation material 156 is substantially uniform. In additional embodiments, the second isolation material 156 is non-uniform. The second isolation material 156 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS. 4A to 4D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 4A), the digit line exit region 104 (FIG. 4B), the word line exit region 106 (FIG. 4C), and the socket region 108 (FIG. 4D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 3A to 3D. As collectively depicted in FIGS. 4A to 4D, a second microelectronic device structure 158 (e.g., a second wafer) including an additional base structure 160 and a third isolation material 162 may be attached to the second isolation material 156 of the first microelectronic device structure 100 to form a first microelectronic device structure assembly 164.

The additional base structure 160 of the second microelectronic device structure 158 includes a base material or construction upon which additional features (e.g., materials, structures, devices) may be formed. In some embodiments, the additional base structure 160 includes a wafer. The additional base structure 160 may be formed of and include one or more of: a semiconductor material (e.g., one or more of a silicon material, such as monocrystalline or polycrystalline silicon; silicon-germanium; germanium; gallium arsenide; gallium nitride; gallium phosphide; indium phosphide; and indium gallium nitride), a glass material (e.g., one or more of silicate glass, alkaline earth boroaluminosilicate glass, quartz, titania silicate glass, and soda-lime glass), and a ceramic material (e.g., one or more of p-AlN, SOPAN, AlN, aluminum oxide (e.g., sapphire; α-Al2O3), and silicon carbide). By way of non-limiting example, the additional base structure 160 may comprise a semiconductor wafer (e.g., a silicon wafer), a glass wafer, or a ceramic wafer. The additional base structure 160 may include one or more layers, structures, and/or regions formed therein and/or thereon.

The third isolation material 162 of the second microelectronic device structure 158 may be formed of and include at least one insulating material. The material composition of the third isolation material 162 may be substantially the same as the material composition of the second isolation material 156 of the first microelectronic device structure 100; or the material composition of the third isolation material 162 may be different from the material composition of the second isolation material 156.
In some embodiments, the third isolation material 162 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The third isolation material 162 may be substantially uniform, or the third isolation material 162 may be non-uniform. In some embodiments, the third isolation material 162 is substantially uniform. In additional embodiments, the third isolation material 162 is non-uniform. The third isolation material 162 may, for example, be formed of and include a stack of at least two different dielectric materials.

To attach the second microelectronic device structure 158 to the second isolation material 156 of the first microelectronic device structure 100, the second microelectronic device structure 158 may be vertically inverted (e.g., flipped in the Z direction), its third isolation material 162 may be provided in physical contact with the second isolation material 156, and the third isolation material 162 and the second isolation material 156 may be exposed to annealing conditions to form bonds (e.g., oxide-to-oxide bonds) between the third isolation material 162 and the second isolation material 156. By way of non-limiting example, the third isolation material 162 and the second isolation material 156 may be exposed to a temperature greater than or equal to about 400°C (e.g., within a range of from about 400°C to about 800°C, greater than about 800°C) to form oxide-to-oxide bonds between the second isolation material 156 and the third isolation material 162. In some embodiments, the second isolation material 156 and the third isolation material 162 are exposed to at least one temperature greater than about 800°C to form oxide-to-oxide bonds between the second isolation material 156 and the third isolation material 162.

As shown in FIGS. 4A to 4D, bonding the third isolation material 162 to the second isolation material 156 may form a first connected isolation structure 166. In FIGS. 4A to 4D, the third isolation material 162 and the second isolation material 156 of the first connected isolation structure 166 are distinguished from one another by way of dashed lines. However, the third isolation material 162 and the second isolation material 156 may be integral and continuous with one another. In other words, the first connected isolation structure 166 may be a substantially monolithic structure including the third isolation material 162 as a first region thereof and the second isolation material 156 as a second region thereof. For the first connected isolation structure 166, the third isolation material 162 thereof may be attached to the second isolation material 156 thereof without a bond line.

Referring next to FIGS. 5A to 5D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 5A), the digit line exit region 104 (FIG. 5B), the word line exit region 106 (FIG. 5C), and the socket region 108 (FIG. 5D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 4A to 4D. As collectively depicted in FIGS. 5A to 5D, the first microelectronic device structure assembly 164 may be vertically inverted (e.g., flipped in the Z direction), and an upper portion of the first base semiconductor structure 110 (FIGS. 4A to 4D) may be removed to expose (e.g., uncover) the first isolation material 116 filling the trenches 112 (FIGS. 4A to 4D) and to form a first semiconductor level 168 including first semiconductor structures 170 (FIGS. 5A and 5D). As shown in FIGS. 5B to 5D, the material removal process may also partially expose the third contact structures 122.
Upper surfaces of the third contact structures 122 may be formed to be substantially coplanar with upper surfaces of the first semiconductor structures 170.

After vertically inverting the first microelectronic device structure assembly 164, the upper portion of the first base semiconductor structure 110 (FIGS. 4A to 4D) vertically overlying the filled trenches 112 (FIGS. 4A to 4D) may be removed using at least one conventional wafer thinning process (e.g., a conventional CMP process; a conventional etching process, such as a conventional dry etching process or a conventional wet etching process). By way of the material removal process, the first semiconductor structures 170 may be formed to exhibit a desired vertical height (e.g., in the Z direction). The material removal process may also remove portions of the first isolation material 116 (e.g., upper portions thereof following the vertical inversion of the first microelectronic device structure assembly 164). In addition, the material removal process may remove portions of the third contact structures 122 (FIGS. 5B to 5D) (e.g., upper portions thereof following the vertical inversion of the first microelectronic device structure assembly 164).

Referring next to FIGS. 6A to 6D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 6A), the digit line exit region 104 (FIG. 6B), the word line exit region 106 (FIG. 6C), and the socket region 108 (FIG. 6D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 5A to 5D. As collectively depicted in FIGS. 6A to 6D, a fourth isolation material 172 may be formed on or over surfaces of the first isolation material 116, the first semiconductor structures 170 (FIGS. 6A and 6D), and the third contact structures 122 (FIGS. 6B to 6D); portions of the fourth isolation material 172 may be removed, and contact pad structures 174 may be formed on the third contact structures 122 (FIGS. 6B to 6D); and a fifth isolation material 176 may then be formed on or over surfaces of the fourth isolation material 172 and the contact pad structures 174.

The fourth isolation material 172 may be formed of and include at least one insulating material. The material composition of the fourth isolation material 172 may be substantially the same as the material composition of the first isolation material 116, or the material composition of the fourth isolation material 172 may be different from the material composition of the first isolation material 116. In some embodiments, the fourth isolation material 172 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The fourth isolation material 172 may be substantially uniform, or the fourth isolation material 172 may be non-uniform. In some embodiments, the fourth isolation material 172 is substantially uniform. In additional embodiments, the fourth isolation material 172 is non-uniform. The fourth isolation material 172 may, for example, be formed of and include a stack of at least two different dielectric materials. An upper surface of the fourth isolation material 172 may be formed to be substantially planar.

Referring to FIGS. 6B to 6D, the contact pad structures 174 may be formed to physically contact the third contact structures 122. Geometric configurations, horizontal positions, and horizontal spacing of the contact pad structures 174 may depend at least in part on geometric configurations, horizontal positions, and horizontal spacing of the third contact structures 122.
Individual contact pad structures 174 may be formed to at least partially horizontally overlap individual third contact structures 122. In some embodiments, each contact pad structure 174 is formed to substantially cover an upper surface of the third contact structure 122 with which it is in physical contact. Individual contact pad structures 174 may be formed to have horizontal dimensions (e.g., in the X direction and in the Y direction) greater than or equal to corresponding horizontal dimensions of the individual third contact structures 122 with which they are in physical contact.

The contact pad structures 174 may be formed of and include conductive material. By way of non-limiting example, the contact pad structures 174 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). The material composition of each of the contact pad structures 174 may be substantially the same as the material composition of each of the third contact structures 122 (FIGS. 6B to 6D), or the material composition of one or more of the contact pad structures 174 may be different from the material composition of one or more of the third contact structures 122 (FIGS. 6B to 6D). In some embodiments, the contact pad structures 174 are each individually formed of and include Cu. In additional embodiments, the contact pad structures 174 are each individually formed of and include W. Each of the contact pad structures 174 may be substantially uniform, or one or more of the contact pad structures 174 may individually be non-uniform. In some embodiments, each of the contact pad structures 174 is substantially uniform. In additional embodiments, each of the contact pad structures 174 is non-uniform. Each contact pad structure 174 may, for example, be formed of and include a stack of at least two different conductive materials.

The fifth isolation material 176 (FIGS. 6B to 6D) formed to cover the fourth isolation material 172 and the contact pad structures 174 may be formed of and include at least one insulating material. The material composition of the fifth isolation material 176 may be substantially the same as the material composition of the fourth isolation material 172, or the material composition of the fifth isolation material 176 may be different from the material composition of the fourth isolation material 172. In some embodiments, the fifth isolation material 176 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The fifth isolation material 176 may be substantially uniform, or the fifth isolation material 176 may be non-uniform. In some embodiments, the fifth isolation material 176 is substantially uniform. In additional embodiments, the fifth isolation material 176 is non-uniform. The fifth isolation material 176 may, for example, be formed of and include a stack of at least two different dielectric materials. As shown in FIGS. 6A to 6D, an upper surface of the fifth isolation material 176 may be formed to be substantially planar.

Referring next to FIG. 7, there is shown a simplified partial longitudinal cross-sectional view, from the Y direction (so as to depict the XZ plane), of a third microelectronic device structure 178 (e.g., a third wafer) that may be formed to include a second base semiconductor structure 180 and a sixth isolation material 182 formed on, over, or within the second base semiconductor structure 180.
The third microelectronic device structure 178 may be formed separately from the first microelectronic device structure assembly 164 (FIGS. 6A to 6D). After being separately formed, the third microelectronic device structure 178 may be attached to the first microelectronic device structure assembly 164 (FIGS. 6A to 6D), as described in further detail below with reference to FIGS. 8A to 8D.

The second base semiconductor structure 180 of the third microelectronic device structure 178 includes a base material or construction upon which additional features (e.g., materials, structures, devices) may be formed. In some embodiments, the second base semiconductor structure 180 includes a wafer. The second base semiconductor structure 180 may be formed of and include a semiconductor material (e.g., one or more of a silicon material, such as monocrystalline or polycrystalline silicon; silicon-germanium; germanium; gallium arsenide; gallium nitride; gallium phosphide; indium phosphide; indium gallium nitride; and aluminum gallium nitride). By way of non-limiting example, the second base semiconductor structure 180 may include a semiconductor wafer (e.g., a silicon wafer). The second base semiconductor structure 180 may include one or more layers, structures, and/or regions formed therein and/or thereon.

As shown in FIG. 7, the second base semiconductor structure 180 may optionally include therein at least one separation region 184 configured to enable or facilitate separation of a portion 180A of the second base semiconductor structure 180 relatively more proximate (e.g., adjacent) the sixth isolation material 182 from an additional portion 180B of the second base semiconductor structure 180 relatively more distal from the sixth isolation material 182. By way of non-limiting example, the separation region 184 may include one or more of dopants (e.g., hydrogen), void spaces, and/or structural features (e.g., defects, damage) enabling or promoting subsequent separation of the portion 180A from the additional portion 180B, as described in further detail below. A vertical depth D1 (e.g., in the Z direction) of the separation region 184 within the second base semiconductor structure 180 may correspond to a desired vertical height of the portion 180A of the second base semiconductor structure 180. The vertical height of the portion 180A may be selected based at least in part on a desired configuration of additional features (e.g., structures, materials, devices) to be formed using the portion 180A of the second base semiconductor structure 180 after the portion 180A is separated from the additional portion 180B of the second base semiconductor structure 180. In some embodiments, the vertical depth D1 of the separation region 184 (and, hence, the vertical height of the portion 180A of the second base semiconductor structure 180) is within a range of from about 400 nanometers (nm) to about 800 nm. In additional embodiments, the separation region 184 is not present in the second base semiconductor structure 180. In some of such embodiments, the additional portion 180B of the second base semiconductor structure 180 may subsequently be removed relative to the portion 180A of the second base semiconductor structure 180 by way of a different process (e.g., a non-separation-based process, such as a conventional grinding process).
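By way of illustration only, the correspondence described above between the vertical depth D1 of the separation region 184 and the desired vertical height of the portion 180A may be expressed as a simple parameter check. The following minimal Python sketch assumes the about 400 nm to about 800 nm range described above; the function name and error handling are illustrative conveniences introduced here, not features of the disclosure.

    def select_separation_depth_nm(target_portion_height_nm: float) -> float:
        """Return a separation-region depth D1 (in nm) matching the desired
        vertical height of the portion 180A, constrained to the described
        range of from about 400 nm to about 800 nm."""
        if not 400.0 <= target_portion_height_nm <= 800.0:
            raise ValueError("described embodiments use about 400 nm to about 800 nm")
        # D1 corresponds to the desired vertical height of the portion 180A.
        return target_portion_height_nm

    # Example: a desired 600 nm portion 180A implies a depth D1 of about 600 nm.
    assert select_separation_depth_nm(600.0) == 600.0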
The sixth isolation material 182 of the third microelectronic device structure 178 may be formed of and include at least one insulating material. The material composition of the sixth isolation material 182 of the third microelectronic device structure 178 may be substantially the same as the material composition of the fifth isolation material 176 (FIGS. 6A to 6D) of the first microelectronic device structure assembly 164 (FIGS. 6A to 6D), or the material composition of the sixth isolation material 182 may be different from the material composition of the fifth isolation material 176 (FIGS. 6A to 6D). In some embodiments, the sixth isolation material 182 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The sixth isolation material 182 may be substantially uniform, or the sixth isolation material 182 may be non-uniform. In some embodiments, the sixth isolation material 182 is substantially uniform. In additional embodiments, the sixth isolation material 182 is non-uniform. The sixth isolation material 182 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS. 8A to 8D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 8A), the digit line exit region 104 (FIG. 8B), the word line exit region 106 (FIG. 8C), and the socket region 108 (FIG. 8D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stages previously described with reference to FIGS. 6A to 6D and FIG. 7. Although the different regions shown in FIGS. 8A to 8D were previously described as regions of the first microelectronic device structure 100 (FIGS. 1 and 2A to 2D) and of the first microelectronic device structure assembly 164 (FIGS. 6A to 6D) formed by processing the first microelectronic device structure 100 according to the methods of the present disclosure, it should be understood that these regions become regions of the microelectronic device of the present disclosure formed using the first microelectronic device structure assembly 164 and the third microelectronic device structure 178, as described in further detail below. Accordingly, these distinct regions are not limited to features (e.g., structures, materials, devices) and/or portions of features of the first microelectronic device structure 100 and the first microelectronic device structure assembly 164. Rather, these regions evolve through the methods of the present disclosure to encompass and include additional features (e.g., additional structures, additional materials, additional devices), portions of additional features, and/or modified features.

Referring collectively to FIGS. 8A to 8D, the third microelectronic device structure 178 may be vertically inverted (e.g., flipped in the Z direction), and its sixth isolation material 182 may be attached (e.g., bonded, such as by way of oxide-to-oxide bonding) to the fifth isolation material 176 of the first microelectronic device structure assembly 164 to form a second microelectronic device structure assembly 186.
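By way of illustration only, the annealing window described above for oxide-to-oxide bonding (a temperature greater than or equal to about 400°C and, in some embodiments, greater than about 800°C) may be summarized as a simple predicate. The Python sketch below is an illustrative aid; the function name and the require_high_temp flag are assumptions introduced here, not parameters of the disclosure.

    def anneal_within_bonding_window(temp_c: float, require_high_temp: bool = False) -> bool:
        """Return True if temp_c satisfies the described bonding conditions."""
        if require_high_temp:
            # Embodiments employing at least one temperature greater than about 800 C.
            return temp_c > 800.0
        # General condition: greater than or equal to about 400 C.
        return temp_c >= 400.0

    assert anneal_within_bonding_window(450.0)
    assert not anneal_within_bonding_window(450.0, require_high_temp=True)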
Attaching (e.g., bonding) the sixth isolation material 182 of the third microelectronic device structure 178 to the fifth isolation material 176 of the first microelectronic device structure assembly 164 may form a second connected isolation structure 188 of the second microelectronic device structure assembly 186. Alternatively, the first microelectronic device structure assembly 164 may be vertically inverted (e.g., flipped in the Z direction) and attached to the third microelectronic device structure 178 to form the second microelectronic device structure assembly 186.

To form the second connected isolation structure 188 of the second microelectronic device structure assembly 186, after the fifth isolation material 176 of the first microelectronic device structure assembly 164 is provided in physical contact with the sixth isolation material 182 of the third microelectronic device structure 178, the first microelectronic device structure assembly 164 and the third microelectronic device structure 178 may be exposed to annealing conditions to form bonds (e.g., oxide-to-oxide bonds) between the fifth isolation material 176 and the sixth isolation material 182. By way of non-limiting example, the fifth isolation material 176 and the sixth isolation material 182 may be exposed to a temperature greater than or equal to about 400°C (e.g., within a range of from about 400°C to about 800°C, greater than about 800°C) to form oxide-to-oxide bonds between the fifth isolation material 176 and the sixth isolation material 182. In some embodiments, the fifth isolation material 176 and the sixth isolation material 182 are exposed to at least one temperature greater than about 800°C to form oxide-to-oxide bonds between the fifth isolation material 176 and the sixth isolation material 182 and to attach the first microelectronic device structure assembly 164 to the third microelectronic device structure 178.

Although the fifth isolation material 176 and the sixth isolation material 182 of the second connected isolation structure 188 of the second microelectronic device structure assembly 186 are distinguished from one another in FIGS. 8A to 8D by way of dashed lines, the fifth isolation material 176 and the sixth isolation material 182 may be integral and continuous with one another. In other words, the second connected isolation structure 188 may be a substantially monolithic structure including the fifth isolation material 176 as a first region (e.g., a vertically lower region) thereof and the sixth isolation material 182 as a second region (e.g., a vertically upper region) thereof. For the second connected isolation structure 188, the fifth isolation material 176 thereof may be attached to the sixth isolation material 182 thereof without a bond line.

Still referring to FIGS. 8A to 8D, attaching the third microelectronic device structure 178 to the first microelectronic device structure assembly 164 in the manner described above to form the second microelectronic device structure assembly 186 may facilitate forming individual socket regions 108 (FIG. 8D) to have relatively reduced horizontal areas as compared to conventional microelectronic device configurations.
For example, by effectuating the attachment of the third microelectronic device structure 178 to the first microelectronic device structure assembly 164 prior to forming the various devices (e.g., access devices, storage node devices) and associated additional interconnect features (e.g., contact structures, routing structures) of the microelectronic device of the present disclosure, various alignment considerations may be reduced, along with the horizontal footprint that would otherwise need to account for such alignment considerations. The horizontal area of an individual socket region 108 (FIG. 8D) may, for example, be about 40% to about 60% smaller than the horizontal area of a conventional socket region of a conventional microelectronic device configuration. Such socket region size reductions may facilitate relatively enhanced areal density for sub-20 nanometer (nm) technology nodes.

Referring next to FIGS. 9A to 9D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 9A), the digit line exit region 104 (FIG. 9B), the word line exit region 106 (FIG. 9C), and the socket region 108 (FIG. 9D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 8A to 8D. As shown in FIGS. 9A to 9D, the additional portion 180B (FIGS. 8A to 8D) of the second base semiconductor structure 180 (FIGS. 8A to 8D) may be removed while at least partially maintaining the portion 180A (FIGS. 8A to 8D) of the second base semiconductor structure 180, and the at least partially maintained portion 180A may then be patterned to form a second semiconductor level 190 (FIGS. 9A and 9D) including second semiconductor structures 192 (FIGS. 9A and 9D). The second semiconductor structures 192 may be used to subsequently form additional features (e.g., structures; devices, such as transistors), as described in further detail below. In addition, a seventh isolation material 194 may be formed horizontally adjacent the second semiconductor structures 192 of the second semiconductor level 190.

The additional portion 180B (FIGS. 8A to 8D) of the second base semiconductor structure 180 (FIGS. 8A to 8D) may be removed using conventional processes. By way of non-limiting example, in some embodiments in which the second base semiconductor structure 180 (FIGS. 8A to 8D) includes the separation region 184 (FIGS. 8A to 8D) including one or more of dopants (e.g., hydrogen), void spaces, and/or structural features (e.g., defects, damage) enabling or promoting subsequent separation of the portion 180A (FIGS. 8A to 8D) from the additional portion 180B (FIGS. 8A to 8D), the second base semiconductor structure 180 (FIGS. 8A to 8D) may be acted upon to effectuate such separation at or proximate the separation region 184 (FIGS. 8A to 8D). Furthermore, the portion 180A (FIGS. 8A to 8D) may be further processed (e.g., polished, patterned) using conventional processes (e.g., conventional CMP processes, conventional masking processes, conventional etching processes) and conventional processing equipment, which are not described in detail herein, to form the second semiconductor structures 192 of the second semiconductor level 190. The vertical height (e.g., in the Z direction) of the second semiconductor structures 192 may be less than or equal to the vertical height of the portion 180A (FIGS. 8A to 8D) of the second base semiconductor structure 180 (FIGS. 8A to 8D).
In some embodiments, the vertical height of the second semiconductor structures 192 is formed to be less than the vertical height of the portion 180A (FIGS. 8A to 8D) of the second base semiconductor structure 180 (FIGS. 8A to 8D). For example, the vertical height of the second semiconductor structures 192 may be formed to be within a range of from about 100 nm to about 300 nm, such as from about 150 nm to about 250 nm, or about 200 nm.

As collectively depicted in FIGS. 9A to 9D, after the processing of the additional portion 180B (FIGS. 8A to 8D) of the second base semiconductor structure 180 (FIGS. 8A to 8D), some of the regions (e.g., the array region 102 shown in FIG. 9A, the socket region 108 shown in FIG. 9D) contain resulting second semiconductor structures 192, and some others of the regions (e.g., the digit line exit region 104 shown in FIG. 9B, the word line exit region 106 shown in FIG. 9C) are substantially free of resulting second semiconductor structures 192. For example, the array region 102 shown in FIG. 9A may include some of the second semiconductor structures 192, wherein horizontally adjacent second semiconductor structures 192 are separated from one another by the seventh isolation material 194. As another example, each of the digit line exit region 104 shown in FIG. 9B and the word line exit region 106 shown in FIG. 9C may be substantially free of the second semiconductor structures 192. As collectively shown in FIGS. 9A to 9D, in some embodiments, upper surfaces of the seventh isolation material 194 are formed to be substantially coplanar with upper surfaces of the second semiconductor structures 192 of the second semiconductor level 190.

The seventh isolation material 194 may be formed of and include at least one insulating material. The material composition of the seventh isolation material 194 may be substantially the same as the material composition of the second connected isolation structure 188, or the material composition of the seventh isolation material 194 may be different from the material composition of the second connected isolation structure 188. In some embodiments, the seventh isolation material 194 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The seventh isolation material 194 may be substantially uniform, or the seventh isolation material 194 may be non-uniform. In some embodiments, the seventh isolation material 194 is substantially uniform. In additional embodiments, the seventh isolation material 194 is non-uniform. The seventh isolation material 194 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS. 10A to 10D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 10A), the digit line exit region 104 (FIG. 10B), the word line exit region 106 (FIG. 10C), and the socket region 108 (FIG. 10D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 9A to 9D. As collectively depicted in FIGS. 10A to 10D, access devices 196 (FIG. 10A), such as access transistors, may be formed within the array region 102 (FIG. 10A). In addition, digit lines 198 (FIGS. 10A and 10B) (e.g., data lines, bit lines) may be formed to couple to the access devices 196 (FIG. 10A) and to horizontally extend in the Y direction through the array region 102 (FIG. 10A). At least some of the digit lines 198 (FIGS.
10A and 10B) may terminate (e.g., end) within the digit line exit region 104 (FIG. 10B). In addition, word lines 200 (e.g., access lines) may be formed to couple to the access devices 196 (FIG. 10A) and to horizontally extend in the X direction through the array region 102 (FIG. 10A). At least some of the word lines 200 (FIGS. 10A and 10C) may terminate within the word line exit region 106 (FIG. 10C).

Referring to FIG. 10A, the access devices 196 formed within the array region 102 may be employed as components of memory cells (e.g., DRAM cells) to be formed within the array region 102. By way of non-limiting example, each access device 196 may individually be formed to include: a channel region comprising a portion of one of the second semiconductor structures 192; a source region and a drain region each individually comprising one or more of at least one conductively doped portion of one of the second semiconductor structures 192 and/or at least one conductive structure formed in, on, or over one of the second semiconductor structures 192; and at least one gate structure comprising a portion of at least one of the word lines 200. Each access device 196 may also include a gate dielectric material (e.g., a dielectric oxide material) formed to be interposed between its channel region and its gate structure.

The digit lines 198 may exhibit horizontally elongated shapes extending in parallel in the Y direction; and the word lines 200 may exhibit horizontally elongated shapes extending in parallel in the X direction, orthogonal to the Y direction. As used herein, the term "parallel" means substantially parallel. The digit lines 198 and the word lines 200 may each individually be formed of and include conductive material. By way of non-limiting example, the digit lines 198 and the word lines 200 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, the digit lines 198 and the word lines 200 are each individually formed of and include one or more of W, Ru, Mo, and titanium nitride (TiNy). Each of the digit lines 198 and each of the word lines 200 may individually be substantially uniform, or one or more of the digit lines 198 and/or one or more of the word lines 200 may individually be substantially non-uniform. In some embodiments, each of the digit lines 198 and each of the word lines 200 is formed to be substantially uniform.

Still referring to FIG. 10A, within the array region 102, additional features (e.g., structures, materials) may also be formed on, over, and/or between the access devices 196, the digit lines 198, and the word lines 200. For example, as shown in FIG. 10A, seventh contact structures 202 (e.g., digit line contact structures, also referred to as so-called "bitcon" structures) may be formed to vertically extend between the access devices 196 and the digit lines 198 and to couple the access devices 196 to the digit lines 198; eighth contact structures 204 (e.g., cell contact structures, also referred to as so-called "cellcon" structures) may be formed in contact with the access devices 196 and may be configured and positioned to couple the access devices 196 to subsequently formed storage node devices (e.g., capacitors); dielectric cap structures 206 may be formed on or over the digit lines 198; and additional dielectric cap structures 208 may be formed on or over the word lines 200.
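By way of illustration only, the roles of the word lines 200 (gating the access devices 196) and the digit lines 198 (carrying data to and from the accessed cells) may be sketched as a toy row/column addressing model. The Python class below is a hypothetical bookkeeping aid; the array dimensions and names are illustrative assumptions introduced here, not features of the disclosure.

    class ToyCellArray:
        """Toy model: one storage bit per (word line, digit line) intersection."""

        def __init__(self, word_lines: int, digit_lines: int) -> None:
            self.bits = [[0] * digit_lines for _ in range(word_lines)]

        def write(self, word_line: int, digit_line: int, bit: int) -> None:
            # Asserting a word line activates the access devices of that row;
            # the digit line then sets the selected storage node.
            self.bits[word_line][digit_line] = bit & 1

        def read(self, word_line: int, digit_line: int) -> int:
            # Sense circuitry on the digit line resolves the stored state.
            return self.bits[word_line][digit_line]

    cells = ToyCellArray(word_lines=4, digit_lines=4)
    cells.write(word_line=2, digit_line=1, bit=1)
    assert cells.read(2, 1) == 1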
The seventh contact structures 202 and the eighth contact structures 204 may individually be formed of and include at least one conductive material. In some embodiments, the seventh contact structures 202 and the eighth contact structures 204 are individually formed of and include one or more of: at least one metal (e.g., W), at least one alloy, at least one conductive metal silicide (e.g., one or more of titanium silicide (TiSix), cobalt silicide (CoSix), tungsten silicide (WSix), tantalum silicide (TaSix), molybdenum silicide (MoSix), and nickel silicide (NiSix)), and at least one conductive metal nitride (e.g., one or more of titanium nitride (TiNy), tungsten nitride (WNy), tantalum nitride (TaNy), cobalt nitride (CoNy), molybdenum nitride (MoNy), and nickel nitride (NiNy)). In addition, the dielectric cap structures 206 and the additional dielectric cap structures 208 may individually be formed of and include at least one insulating material. In some embodiments, the dielectric cap structures 206 and the additional dielectric cap structures 208 are individually formed of and include a dielectric nitride material (e.g., SiNy, such as Si3N4).

Referring to FIG. 10B, within the digit line exit region 104, at least some of the digit lines 198 may horizontally terminate (e.g., end) in the Y direction. Each of the digit lines 198 horizontally extending through the array region 102 (FIG. 10A) and horizontally terminating within the digit line exit region 104 may be formed to terminate at substantially the same horizontal position in the Y direction; or at least one of the digit lines 198 horizontally terminating within the digit line exit region 104 may be formed to terminate at a different horizontal position in the Y direction within the digit line exit region 104 than at least one other of the digit lines 198 horizontally terminating within the digit line exit region 104. In some embodiments, at least some of the digit lines 198 horizontally neighboring one another in the X direction have terminating ends (e.g., terminating surfaces) horizontally offset from one another in the Y direction. A horizontal offset of the terminating ends of some of the digit lines 198 from the terminating ends of some others of the digit lines 198 within the digit line exit region 104 may, for example, enable or facilitate a desired contact structure arrangement within the digit line exit region 104.

Referring next to FIG. 10C, within the word line exit region 106, at least some of the word lines 200 may horizontally terminate (e.g., end) in the X direction. Each of the word lines 200 horizontally extending through the array region 102 (FIG. 10A) and horizontally terminating within the word line exit region 106 may be formed to terminate at substantially the same horizontal position in the X direction; or at least one of the word lines 200 horizontally terminating within the word line exit region 106 may be formed to terminate at a different horizontal position in the X direction within the word line exit region 106 than at least one other of the word lines 200 horizontally terminating within the word line exit region 106. In some embodiments, at least some of the word lines 200 horizontally neighboring one another in the Y direction have terminating ends (e.g., terminating surfaces) horizontally offset from one another in the X direction.
A horizontal offset of the terminating ends of some of the word lines 200 from the terminating ends of some others of the word lines 200 within the word line exit region 106 may, for example, enable or facilitate a desired contact structure arrangement within the word line exit region 106.

Referring collectively to FIGS. 10A to 10D, an eighth isolation material 210 may be formed on or over at least portions of the access devices 196 (FIG. 10A), the digit lines 198 (FIGS. 10A and 10B), the word lines 200 (FIGS. 10A and 10C), the eighth contact structures 204, and the seventh isolation material 194. The eighth isolation material 210 may be formed of and include at least one insulating material. The material composition of the eighth isolation material 210 may be substantially the same as the material composition of the seventh isolation material 194, or the material composition of the eighth isolation material 210 may be different from the material composition of the seventh isolation material 194. In some embodiments, the eighth isolation material 210 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The eighth isolation material 210 may be substantially uniform, or the eighth isolation material 210 may be non-uniform. In some embodiments, the eighth isolation material 210 is substantially uniform. In additional embodiments, the eighth isolation material 210 is non-uniform. The eighth isolation material 210 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS. 11A to 11D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 11A), the digit line exit region 104 (FIG. 11B), the word line exit region 106 (FIG. 11C), and the socket region 108 (FIG. 11D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 10A to 10D. As collectively depicted in FIGS. 11B to 11D, ninth contact structures 212 may be formed within each of the digit line exit region 104 (FIG. 11B), the word line exit region 106 (FIG. 11C), and the socket region 108 (FIG. 11D). The ninth contact structures 212 may be formed to vertically extend (e.g., in the Z direction) to, and contact, the contact pad structures 174. In addition, as described in further detail below, some of the ninth contact structures 212 may be formed in contact with portions of the digit lines 198 (FIG. 11B) within the digit line exit region 104 (FIG. 11B), and some others of the ninth contact structures 212 may be formed in contact with portions of the word lines 200 (FIG. 11C) within the word line exit region 106 (FIG. 11C).

Referring to FIG. 11B, within the digit line exit region 104, a first group 212A of the ninth contact structures 212 may be formed to contact at least some of the digit lines 198 horizontally extending (e.g., in the Y direction) into the digit line exit region 104. Each ninth contact structure 212 of the first group 212A of the ninth contact structures 212 may be considered a digit line contact structure (e.g., a so-called "array edge" digit line contact structure). As shown in FIG. 11B, each ninth contact structure 212 of the first group 212A of the ninth contact structures 212 may be formed to physically contact, and vertically extend completely through, an individual digit line 198.
For example, within the digit line exit region 104, each ninth contact structure 212 of the first group 212A may be formed to physically contact and vertically extend through each of: the eighth isolation material 210, one of the digit lines 198, the seventh isolation material 194, and the second connected isolation structure 188. Outer sidewalls of each ninth contact structure 212 of the first group 212A of the ninth contact structures 212 may physically contact inner sidewalls of an individual digit line 198. Furthermore, each ninth contact structure 212 of the first group 212A may be formed to vertically terminate on or within one of the contact pad structures 174 located within the digit line exit region 104. Accordingly, each ninth contact structure 212 of the first group 212A may be formed to couple to one of the digit lines 198 and one of the contact pad structures 174.

Referring to FIG. 11C, within the word line exit region 106, a second group 212B of the ninth contact structures 212 may be formed to contact at least some of the word lines 200 horizontally extending (e.g., in the X direction) into the word line exit region 106. Each ninth contact structure 212 of the second group 212B of the ninth contact structures 212 may be considered a word line contact structure (e.g., a so-called "array edge" word line contact structure). As shown in FIG. 11C, each ninth contact structure 212 of the second group 212B of the ninth contact structures 212 may be formed to physically contact, and vertically extend completely through, an individual word line 200. For example, within the word line exit region 106, each ninth contact structure 212 of the second group 212B may be formed to physically contact and vertically extend through each of: the eighth isolation material 210, the seventh isolation material 194, one of the word lines 200, and the second connected isolation structure 188. Outer sidewalls of each ninth contact structure 212 of the second group 212B of the ninth contact structures 212 may physically contact inner sidewalls of an individual word line 200. Furthermore, each ninth contact structure 212 of the second group 212B may be formed to vertically terminate on or within one of the contact pad structures 174 located within the word line exit region 106. Accordingly, each ninth contact structure 212 of the second group 212B may be formed to couple to one of the word lines 200 and one of the contact pad structures 174.

Referring next to FIG. 11D, within the socket region 108, a third group 212C of the ninth contact structures 212 may be formed to vertically extend to the contact pad structures 174 located within the socket region 108. Each ninth contact structure 212 of the third group 212C of the ninth contact structures 212 may be considered a deep contact structure (e.g., a deep contact structure to be electrically connected to one or more additional BEOL structures to be subsequently formed). Within the socket region 108, each ninth contact structure 212 of the third group 212C may be formed to physically contact and vertically extend through each of: the eighth isolation material 210, the seventh isolation material 194, and the second connected isolation structure 188; and may vertically terminate on or within one of the contact pad structures 174 located within the socket region 108.

Referring again collectively to FIGS. 11B to 11D, the ninth contact structures 212, including the first group 212A (FIG. 11B), the second group 212B (FIG. 11C), and the third group 212C (FIG. 11D), may be formed of and include conductive material.
By way of non-limiting example, the ninth contact structures 212 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, the ninth contact structures 212 are each individually formed of and include W. Each of the ninth contact structures 212 may be substantially uniform, or one or more of the ninth contact structures 212 may individually be non-uniform. In some embodiments, each of the ninth contact structures 212 is substantially uniform. In additional embodiments, each of the ninth contact structures 212 is non-uniform. Each ninth contact structure 212 may, for example, be formed of and include a stack of at least two different conductive materials.

Referring next to FIGS. 12A to 12D, there are shown simplified partial longitudinal cross-sectional views, from the previously described orientations, of the array region 102 (FIG. 12A), the digit line exit region 104 (FIG. 12B), the word line exit region 106 (FIG. 12C), and the socket region 108 (FIG. 12D) at another processing stage of the method of forming a microelectronic device, subsequent to the processing stage previously described with reference to FIGS. 11A to 11D. As collectively depicted in FIGS. 12A to 12D, at least one fifth routing level 214 including fifth routing structures 216 may be formed over the access devices 196 (FIG. 12A) and the ninth contact structures 212 (FIGS. 12B to 12D); storage node devices 218 (e.g., capacitors) may be formed within the array region 102 (FIG. 12A) over, and in electrical communication with, at least some of the fifth routing structures 216; tenth contact structures 220 may be formed within the socket region 108 (FIG. 12D) over, and in electrical communication with, at least some of the ninth contact structures 212; and at least one sixth routing level 222 including sixth routing structures 224 may be formed over the storage node devices 218 and the tenth contact structures 220.

Continuing to refer collectively to FIGS. 12A to 12D, the fifth routing structures 216 of the fifth routing level 214 may be employed to facilitate electrical communication between additional features (e.g., structures, materials, devices) coupled thereto. The fifth routing structures 216 may each individually be formed of and include conductive material. By way of non-limiting example, the fifth routing structures 216 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, the fifth routing structures 216 are formed of and include W.

Referring to FIG. 12A, within the array region 102, at least some of the fifth routing structures 216 may be formed and configured to couple the access devices 196 to the storage node devices 218 (e.g., capacitors) to form memory cells 226 (e.g., DRAM cells) within the array region 102. Each memory cell 226 may individually include: one of the access devices 196; one of the storage node devices 218; one of the eighth contact structures 204 interposed between the access device 196 and the storage node device 218; and one of the fifth routing structures 216 interposed between the eighth contact structure 204 and the storage node device 218.
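By way of illustration only, the constituents of an individual memory cell 226 enumerated above may be recorded as a simple data structure. The Python dataclass below is a hypothetical bookkeeping aid introduced here for clarity; it is not a structure of the disclosure.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MemoryCell226:
        access_device_196: str       # e.g., an access transistor
        cell_contact_204: str        # "cellcon" between access device and routing
        routing_structure_216: str   # fifth routing structure (e.g., RDM structure)
        storage_node_218: str        # e.g., a capacitor storing the logic state

    cell = MemoryCell226(
        access_device_196="access transistor",
        cell_contact_204="cellcon",
        routing_structure_216="RDM structure",
        storage_node_218="capacitor",
    )
    assert cell.storage_node_218 == "capacitor"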
At least some of the fifth routing structures 216 within the array region 102 may, for example, be configured and employed as redistribution material (RDM) structures (also referred to as "redistribution layer (RDL)" structures) to effectively shift (e.g., stagger, adjust, modify) lateral positions of semiconductor pillars of the access devices 196 to accommodate a desired arrangement (e.g., a hexagonal close-packed arrangement) of the storage node devices 218 vertically above, and in electrical communication with, the access devices 196.

Although FIGS. 12A to 12D illustrate the formation of a single (e.g., only one) fifth routing level 214 (FIG. 12A) including the fifth routing structures 216 (FIG. 12A), multiple (e.g., more than one) fifth routing levels 214 may be formed, each individually including a desired arrangement (e.g., pattern) of fifth routing structures 216. By way of non-limiting example, two or more (e.g., three or more) of the fifth routing levels 214 may be formed, wherein different fifth routing levels 214 are vertically offset from one another and each individually includes a desired arrangement of fifth routing structures 216 therein. At least some of the fifth routing structures 216 within at least one of the fifth routing levels 214 may be coupled to at least some of the fifth routing structures 216 within at least one other of the fifth routing levels 214 by way of conductive interconnect structures.

Referring to FIG. 12A, within the array region 102, the storage node devices 218 may individually be formed and configured to store a charge representative of a programmable logic state of the memory cell 226 including the storage node device 218. In some embodiments, the storage node devices 218 include capacitors. During use and operation, a charged capacitor may represent a first logic state, such as a logic one; and an uncharged capacitor may represent a second logic state, such as a logic zero. Each of the storage node devices 218 may, for example, be formed to include a first electrode (e.g., a bottom electrode), a second electrode (e.g., a top electrode), and a dielectric material between the first electrode and the second electrode.

Referring next to FIG. 12D, within the socket region 108, at least some of the tenth contact structures 220 may be formed to contact at least some of the ninth contact structures 212 of the third group 212C of the ninth contact structures 212. For example, one or more of the tenth contact structures 220 may be formed to vertically extend to, and terminate on or within, one or more of the ninth contact structures 212 located within the socket region 108. The tenth contact structures 220 may individually be formed of and include conductive material. By way of non-limiting example, the tenth contact structures 220 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, each of the tenth contact structures 220 is formed of and includes W. Each of the tenth contact structures 220 may be substantially uniform, or one or more of the tenth contact structures 220 may individually be non-uniform. In some embodiments, each of the tenth contact structures 220 is substantially uniform. In additional embodiments, each of the tenth contact structures 220 is non-uniform.
Each tenth contact structure 220 may, for example, be formed of and include a stack of at least two different conductive materials.

As shown in FIG. 12D, within the socket region 108, one or more groups of the storage node devices 218 (e.g., capacitors) may also optionally be formed. If formed within the socket region 108, the storage node devices 218 may be coupled to at least some of the sixth routing structures 224 positioned within the socket region 108. If formed, such storage node devices 218 may be employed to enhance the performance of the microelectronic device formed by the methods of the present disclosure. The storage node devices 218 may, for example, be coupled to, and used to power, additional devices (e.g., control logic devices, access devices) of the microelectronic device of the present disclosure. In some embodiments, the storage node devices 218 are coupled to, and used to power, at least some of the control logic devices 136 (FIG. 12A). The storage node devices 218 formed within the socket region 108 may be coupled to the BEOL structures of the microelectronic device of the present disclosure, as described in further detail below.

Continuing to refer collectively to FIGS. 12A to 12D, the sixth routing structures 224 of the sixth routing level 222 may be employed to facilitate electrical communication between additional features (e.g., structures, materials, devices) coupled thereto. In some embodiments, one or more of the sixth routing structures 224 are formed to horizontally extend between at least some of the storage node devices 218 (and, hence, the memory cells 226) (FIG. 12A) within the array region 102 (FIG. 12A) and to couple to one or more of the tenth contact structures 220 (FIG. 12D) within the socket region 108 (FIG. 12D). The sixth routing structures 224 may each be formed of and include conductive material. By way of non-limiting example, the sixth routing structures 224 may be formed of and include one or more of: at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., conductive metal nitrides, conductive metal silicides, conductive metal carbides, conductive metal oxides). In some embodiments, each of the sixth routing structures 224 of the sixth routing level 222 is formed of and includes W.

Continuing to refer to FIGS. 12A to 12D, a ninth isolation material 228 may be formed on or over at least portions of the eighth isolation material 210, the fifth routing structures 216 (FIG. 12A), the storage node devices 218 (FIGS. 12A and 12D), the tenth contact structures 220 (FIG. 12D), and the sixth routing structures 224. The ninth isolation material 228 may be formed of and include at least one insulating material. The material composition of the ninth isolation material 228 may be substantially the same as the material composition of the eighth isolation material 210, or the material composition of the ninth isolation material 228 may be different from the material composition of the eighth isolation material 210. In some embodiments, the ninth isolation material 228 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The ninth isolation material 228 may be substantially uniform, or the ninth isolation material 228 may be non-uniform. In some embodiments, the ninth isolation material 228 is substantially uniform. In additional embodiments, the ninth isolation material 228 is non-uniform. The ninth isolation material 228 may, for example, be formed of and include a stack of at least two different dielectric materials.

Referring next to FIGS.
13A through 13D, illustrated are, from the previously described orientations, the array region 102 (FIG. 13A), the digit line exit region 104 (FIG. 13B), the word line exit region 106 (FIG. 13C), and the socket region 108 (FIG. 13D) at another processing stage of the method of forming the microelectronic device, following the processing stage previously described with reference to FIGS. 12A through 12D. As collectively depicted in FIGS. 13A through 13D, additional BEOL structures may be formed over the sixth routing level 222. For example, at least one seventh routing level 230 including seventh routing structures 231 may be formed over the sixth routing level 222; and at least one eighth routing level 232 including eighth routing structures 233 may be formed over the seventh routing level 230. One or more of the seventh routing structures 231 of the seventh routing level 230 may be coupled to one or more of the sixth routing structures 224 of the sixth routing level 222 by way of eleventh contact structures 234 (FIG. 13D). In addition, one or more of the eighth routing structures 233 of the eighth routing level 232 (e.g., one or more conductive pad structures) may be coupled to one or more of the seventh routing structures 231 of the seventh routing level 230 by way of twelfth contact structures 235 (FIG. 13D). In additional embodiments, at least some (e.g., all) of the twelfth contact structures 235 (FIG. 13D) are omitted (e.g., not formed), and one or more of the eighth routing structures 233 of the eighth routing level 232 are formed to directly physically contact one or more of the seventh routing structures 231 of the seventh routing level 230.

The seventh routing structures 231, the eighth routing structures 233, the eleventh contact structures 234 (FIG. 13D), and the twelfth contact structures 235 (FIG. 13D), if present, may each be formed of and include conductive material. By way of non-limiting example, the seventh routing structures 231, the eighth routing structures 233, the eleventh contact structures 234 (FIG. 13D), and the twelfth contact structures 235 (FIG. 13D), if present, may each individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the seventh routing structures 231 are each formed of and include Cu; the eighth routing structures 233 are each formed of and include Al; and the eleventh contact structures 234 (FIG. 13D) and the twelfth contact structures 235 (FIG. 13D), if present, are each formed of and include W.

Still referring collectively to FIGS. 13A through 13D, a tenth isolation material 236 may be formed on or over at least portions of the ninth isolation material 228, the seventh routing structures 231, the eighth routing structures 233, the eleventh contact structures 234 (FIG. 13D), and the twelfth contact structures 235 (FIG. 13D), if present. The tenth isolation material 236 may be formed of and include at least one insulating material. A material composition of the tenth isolation material 236 may be substantially the same as a material composition of the ninth isolation material 228, or the material composition of the tenth isolation material 236 may be different than that of the ninth isolation material 228.
In some embodiments, the tenth isolation material 236 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The tenth isolation material 236 may be substantially uniform, or the tenth isolation material 236 may be non-uniform. In some embodiments, the tenth isolation material 236 is substantially uniform. In additional embodiments, the tenth isolation material 236 is non-uniform. The tenth isolation material 236 may, for example, be formed of and include a stack of at least two different dielectric materials.

With continued reference to FIGS. 13A through 13D, following the formation of the eighth routing level 232 including the eighth routing structures 233, the second microelectronic device structure assembly 186 may be subjected to additional processing. By way of non-limiting example, after forming the eighth routing level 232 including the eighth routing structures 233, at least one additional microelectronic device structure assembly may be attached to the second microelectronic device structure assembly 186 to form a relatively larger microelectronic device structure assembly. In some embodiments, the at least one additional microelectronic device structure assembly exhibits a configuration substantially similar to that of the second microelectronic device structure assembly 186 following the processing stage of FIGS. 13A through 13D. As a non-limiting example, a relatively larger microelectronic device structure assembly may be formed by horizontally aligning and bringing into physical contact one or more of the eighth routing structures 233 (e.g., conductive pad structures) with one or more additional routing structures (e.g., additional conductive pad structures) of the additional microelectronic device structure assembly, and then performing at least one thermocompression process to form one or more interconnect structures from the one or more eighth routing structures 233 and the one or more additional routing structures. As another non-limiting example, in combination with or as an alternative to the preceding non-limiting example, a relatively larger microelectronic device structure assembly may be formed by at least partially removing the additional base structure 160 and the first connecting isolation structure 166 to expose at least some of the fourth routing structures 148 (e.g., conductive pad structures) of the fourth routing level 146; horizontally aligning and bringing into physical contact one or more of the fourth routing structures 148 (e.g., conductive pad structures) with one or more additional routing structures (e.g., additional conductive pad structures) of the additional microelectronic device structure assembly; and then performing at least one thermocompression process to form one or more interconnect structures from the one or more fourth routing structures 148 and the one or more additional routing structures. Moreover, any desired quantity of additional microelectronic device structure assemblies may be attached to the relatively larger microelectronic device structure assembly by way of substantially similar processing.

While the method of forming the microelectronic device described above with reference to FIGS. 1 and 2A through 13D describes forming the fourth routing level 146, including the fourth routing structures 148 (e.g., conductive pad structures), at the processing stage previously described with reference to FIGS. 3A through 3D, the disclosure is not so limited. In additional embodiments, the fourth routing level 146 including the fourth routing structures 148 is formed after the second microelectronic device structure 158 (FIGS.
4A through 4D) is attached to the first microelectronic device structure 100 (FIGS. 4A through 4D). For example, the fourth routing level 146 including the fourth routing structures 148 may be formed during and/or after the formation of the eighth routing level 232 including the eighth routing structures 233 (e.g., additional conductive pad structures). By way of non-limiting example, after forming the eighth routing level 232 including the eighth routing structures 233, the additional base structure 160 and the first connecting isolation structure 166 may be at least partially removed, and the fourth routing level 146 may then be formed to include at least some fourth routing structures 148 in electrical communication with at least some of the third routing structures 144 of the third routing level 142. Thereafter, the resulting microelectronic device structure assembly, having both the eighth routing level 232 including the eighth routing structures 233 and the fourth routing level 146 including the fourth routing structures 148, may be coupled to one or more additional microelectronic device structure assemblies to form a relatively larger microelectronic device structure assembly contemplated by the present disclosure.

Still referring to FIGS. 13A through 13D, the methods described above with reference to FIGS. 1 and 2A through 13D facilitate the formation of a microelectronic device 238 (e.g., a memory device, such as a DRAM device). In some embodiments, the third routing structures 144 and the fourth routing structures 148 serve as global routing structures for the microelectronic device 238; and/or the seventh routing structures 231 and the eighth routing structures 233 serve as global routing structures for the microelectronic device 238. The combination of the third routing structures 144 and the fourth routing structures 148, and/or the combination of the seventh routing structures 231 and the eighth routing structures 233, may, for example, be configured to receive global signals from an external bus and to relay the global signals to other features (e.g., structures, devices) of the microelectronic device 238. In addition, referring to FIG. 13D, in some embodiments at least some of the third routing structures 144, the fourth routing structures 148, the seventh routing structures 231, and the eighth routing structures 233 are in electrical communication, by way of at least one deep contact assembly extending to the seventh routing level 230, with at least some of the sixth routing structures 224 coupled to the memory cells 226 (FIG. 13A) within the array region 102 (FIG. 13A). The deep contact assembly may, for example, include some of the contact structures located within the socket region 108 (e.g., at least one of the eleventh contact structures 234, at least one of the tenth contact structures 220, at least one of the ninth contact structures 212, at least one of the third contact structures 122, at least one of the fourth contact structures 150, at least one of the fifth contact structures 152), as well as routing structures within the socket region 108 coupled to some of the contact structures.

Accordingly, in accordance with an embodiment of the present disclosure, a method of forming a microelectronic device comprises forming a first microelectronic device structure comprising a first semiconductor structure, control logic
at least partially overlying the first semiconductor structure, a first back end of line (BEOL) structure over and in electrical communication with the control logic, and a first isolation material overlying the control logic and the first BEOL structure. A second microelectronic device structure is bonded over the first BEOL structure of the first microelectronic device structure to form a first assembly. The first assembly is vertically inverted. A third microelectronic device structure comprising a second semiconductor structure is bonded over the vertically inverted first assembly to form a second assembly. Memory cells comprising portions of the second semiconductor structure are formed after forming the second assembly. A second BEOL structure is formed over the memory cells.

Referring next to FIG. 14, depicted is a simplified plan view of the microelectronic device 238 illustrating the arrangement of different control logic sections (described in further detail below) within various regions of the microelectronic device 238 (e.g., the array regions 102, such as a first array region 102A, a second array region 102B, a third array region 102C, and a fourth array region 102D; the socket regions 108), as well as the routing arrangements of the different control logic devices (e.g., corresponding to the control logic devices 136 (FIG. 13A)) within the different control logic sections. The different control logic devices within the different control logic sections may be vertically offset (e.g., in the Z-direction) from the memory cells 226 (FIG. 13A) of the microelectronic device 238. In some embodiments, the microelectronic device 238 is oriented such that the different control logic devices within the different control logic sections vertically overlie (e.g., in the Z-direction) the memory cells 226 (FIG. 13A). For example, the orientation of the microelectronic device 238 may be vertically inverted (e.g., flipped) relative to the orientation depicted in FIGS. 13A through 13D. In additional embodiments, the microelectronic device 238 is oriented such that the different control logic devices within the different control logic sections are positioned vertically below the memory cells 226 (FIG. 13A), as depicted in FIGS. 13A through 13D. At least some of the different control logic devices may be coupled to the memory cells 226 (FIG. 13A) in the manner previously described with reference to FIGS. 13A through 13D. For clarity and ease of understanding the description, not all of the features (e.g., structures, materials, devices) of the microelectronic device 238 previously described with reference to FIGS. 13A through 13D are shown in FIG. 14.

As shown in FIG. 14, within the horizontal area of each array region 102, the control logic devices of the microelectronic device 238 may be formed in a desired arrangement including sense amplifier (SA) sections 240 and sub-word line driver (SWD) sections 242. The SA sections 240 may include SA devices coupled to the digit lines 198 of the microelectronic device 238, as described in further detail below. In some embodiments, the digit lines 198 are positioned vertically below (e.g., in the Z-direction) the SA devices of the SA sections 240 within the microelectronic device 238. In additional embodiments, the digit lines 198 vertically overlie (e.g., in the Z-direction) the SA devices of the SA sections 240 within the microelectronic device 238. The SWD sections 242 may include SWD devices coupled to the word lines 200 of the microelectronic device 238, as described in further detail below.
In some embodiments, the word lines 200 are positioned vertically below (e.g., in the Z-direction) the SWD devices of the SWD sections 242 within the microelectronic device 238. In additional embodiments, the word lines 200 vertically overlie (e.g., in the Z-direction) the SWD devices of the SWD sections 242 within the microelectronic device 238.

The SA sections 240 within the horizontal area of an individual array region 102 (e.g., the first array region 102A, the second array region 102B, the third array region 102C, or the fourth array region 102D) may include a first SA section 240A and a second SA section 240B. For an individual array region 102, the first SA section 240A and the second SA section 240B may be positioned at or near corners of the array region 102 that are opposite one another (e.g., diagonally opposite one another). For example, as shown in FIG. 14, for an individual array region 102, the first SA section 240A may be positioned at or near a first corner 246A of the array region 102, and the second SA section 240B may be positioned at or near a second corner 246B of the array region 102 that is opposite (e.g., diagonally opposite) the first corner 246A.

For each SA section 240 (e.g., the first SA section 240A, the second SA section 240B) within an individual array region 102, the SA devices of the SA section 240 may be coupled, by way of digit line routing and contact structures 248, to a group of the digit lines 198 extending horizontally (e.g., in the Y-direction) through the array region 102. The digit line routing and contact structures 248 may, for example, correspond to some of the previously described routing structures (e.g., some of the first routing structures 126 (FIGS. 13A and 13B)), some of the previously described contact pad structures (e.g., some of the contact pad structures 174 (FIG. 13B)), and some of the previously described contact structures (e.g., some of the ninth contact structures 212 (FIG. 13B) of the first group 212A (FIG. 13B), some of the third contact structures 122 (FIG. 13B)).

The SA devices within the SA sections 240 of array regions 102 horizontally neighboring one another in the Y-direction (e.g., the first array region 102A and the second array region 102B; the third array region 102C and the fourth array region 102D) may be coupled to different groups of the digit lines 198 than one another. For example, each of the SA sections 240 of the first array region 102A (e.g., each of the first SA section 240A and the second SA section 240B) may contain so-called "even" SA devices coupled, by way of the digit line routing and contact structures 248 associated with the SA section 240, to even digit lines 198B of the microelectronic device 238; and each of the SA sections 240 of the second array region 102B (e.g., each of the first SA section 240A and the second SA section 240B) may contain so-called "odd" SA devices coupled, by way of the digit line routing and contact structures 248 associated with the SA section 240, to odd digit lines 198A of the microelectronic device 238; or vice versa. The even digit lines 198B of the microelectronic device 238 may horizontally alternate with the odd digit lines 198A of the microelectronic device 238 in the X-direction. The SA devices of each of the SA sections 240 of the first array region 102A may not be coupled to any of the odd digit lines 198A; and the SA devices of each of the SA sections 240 of the second array region 102B may not be coupled to any of the even digit lines 198B; or vice versa.
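The even/odd digit line assignment just described, and extended to the X-direction neighbors below, can be summarized with a short illustrative sketch. The grid coordinates and helper names in the sketch are assumptions made purely for illustration; they are not part of the disclosure.

    # Illustrative sketch only: models the even/odd digit line parity of the
    # SA sections of the four array regions 102A-102D. Grid positions are
    # assumed: 102A/102B neighbor in Y, 102A/102C neighbor in X.
    REGIONS = {"102A": (0, 0), "102B": (0, 1), "102C": (1, 0), "102D": (1, 1)}

    def sa_digit_line_parity(region):
        """Digit lines run in the Y-direction, so SA parity alternates between
        Y-direction neighbors and is shared between X-direction neighbors."""
        _, y = REGIONS[region]
        return "even digit lines 198B" if y % 2 == 0 else "odd digit lines 198A"

    for name in sorted(REGIONS):
        print(f"SA sections of array region {name} couple to {sa_digit_line_parity(name)}")
    # 102A -> even, 102B -> odd, 102C -> even (additional even), 102D -> odd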
Similarly, each of the SA sections 240 of the third array region 102C, horizontally neighboring the first array region 102A in the X-direction (e.g., each of the first SA section 240A and the second SA section 240B), may contain additional even SA devices coupled, by way of the digit line routing and contact structures 248 associated with the SA section 240, to additional even digit lines 198B of the microelectronic device 238; and each of the SA sections 240 of the fourth array region 102D, horizontally neighboring the second array region 102B in the X-direction (e.g., each of the first SA section 240A and the second SA section 240B), may contain additional odd SA devices coupled, by way of the digit line routing and contact structures 248 associated with the SA section 240, to additional odd digit lines 198A of the microelectronic device 238; or vice versa.

As shown in FIG. 14, the SA devices (e.g., odd SA devices or even SA devices) within an individual SA section 240 of an individual array region 102 may be coupled to digit lines (e.g., odd digit lines 198A or even digit lines 198B) extending horizontally through the array region 102, and may also be coupled to additional digit lines (e.g., additional odd digit lines 198A or additional even digit lines 198B) extending horizontally in the Y-direction through another array region 102 horizontally neighboring the array region 102. For example, some odd SA devices within the first SA section 240A of the second array region 102B may be coupled, by way of some digit line routing and contact structures 248 extending to and through the first digit line exit sub-region 104A horizontally neighboring the second array region 102B, to odd digit lines 198A extending horizontally through the second array region 102B; and some additional odd SA devices within the first SA section 240A of the second array region 102B may be coupled, by way of some additional digit line routing and contact structures 248 extending to and through the first digit line exit sub-region 104A, to additional odd digit lines 198A extending horizontally through the first array region 102A. As another example, some even SA devices within the second SA section 240B of the first array region 102A may be coupled, by way of some digit line routing and contact structures 248 extending horizontally in the Y-direction to and through the second digit line exit sub-region 104B horizontally neighboring the first array region 102A, to even digit lines 198B extending horizontally through the first array region 102A; and some additional even SA devices within the second SA section 240B of the first array region 102A may be coupled, by way of some additional digit line routing and contact structures 248 extending to and through the second digit line exit sub-region 104B, to additional even digit lines 198B extending horizontally through the second array region 102B.

As also shown in FIG. 14, the SWD sections 242 within the horizontal area of an individual array region 102 (e.g., the first array region 102A, the second array region 102B, the third array region 102C, or the fourth array region 102D) may include a first SWD section 242A and a second SWD section 242B. For an individual array region 102, the first SWD section 242A and the second SWD section 242B may be positioned at or near different corners of the array region 102 than the first SA section 240A and the second SA section 240B.
In addition, the corner of the array region 102 associated with the first SWD section 242A may be opposite (e.g., diagonally opposite) the corner of the array region 102 associated with the second SWD section 242B. For example, as shown in FIG. 14, for an individual array region 102, the first SWD section 242A may be positioned at or near a third corner 246C of the array region 102, and the second SWD section 242B may be positioned at or near a fourth corner 246D of the array region 102 that is opposite (e.g., diagonally opposite) the third corner 246C.

For each SWD section 242 (e.g., the first SWD section 242A, the second SWD section 242B) within an individual array region 102, the SWD devices of the SWD section 242 may be coupled, by way of word line routing and contact structures 250, to a group of the word lines 200 extending horizontally (e.g., in the X-direction) through the array region 102. The word line routing and contact structures 250 may, for example, correspond to some of the previously described routing structures (e.g., some of the first routing structures 126 (FIGS. 13A and 13C)), some of the previously described contact pad structures (e.g., some of the contact pad structures 174 (FIG. 13C)), and some of the previously described contact structures (e.g., some of the ninth contact structures 212 (FIG. 13C) of the second group 212B (FIG. 13C), some of the third contact structures 122 (FIG. 13C)).

The SWD devices within the SWD sections 242 of array regions 102 horizontally neighboring one another in the X-direction (e.g., the first array region 102A and the third array region 102C; the second array region 102B and the fourth array region 102D) may be coupled to different groups of the word lines 200 than one another. For example, each of the SWD sections 242 of the first array region 102A (e.g., each of the first SWD section 242A and the second SWD section 242B) may contain so-called "even" SWD devices coupled, by way of the word line routing and contact structures 250 associated with the SWD section 242, to even word lines 200B of the microelectronic device 238; and each of the SWD sections 242 of the third array region 102C (e.g., each of the first SWD section 242A and the second SWD section 242B) may contain so-called "odd" SWD devices coupled, by way of the word line routing and contact structures 250 associated with the SWD section 242, to odd word lines 200A of the microelectronic device 238; or vice versa. The even word lines 200B of the microelectronic device 238 may horizontally alternate with the odd word lines 200A of the microelectronic device 238 in the Y-direction. The SWD devices of each of the SWD sections 242 of the first array region 102A may not be coupled to any of the odd word lines 200A; and the SWD devices of each of the SWD sections 242 of the third array region 102C may not be coupled to any of the even word lines 200B; or vice versa.
Similarly, each of the SWD sections 242 of the second array region 102B, horizontally neighboring the first array region 102A in the Y-direction (e.g., each of the first SWD section 242A and the second SWD section 242B), may contain additional even SWD devices coupled, by way of the word line routing and contact structures 250 associated with the SWD section 242, to additional even word lines 200B of the microelectronic device 238; and each of the SWD sections 242 of the fourth array region 102D, horizontally neighboring the third array region 102C in the Y-direction (e.g., each of the first SWD section 242A and the second SWD section 242B), may contain additional odd SWD devices coupled, by way of the word line routing and contact structures 250 associated with the SWD section 242, to additional odd word lines 200A of the microelectronic device 238; or vice versa.

As shown in FIG. 14, the SWD devices (e.g., odd SWD devices or even SWD devices) within an individual SWD section 242 of an individual array region 102 may be coupled to word lines (e.g., odd word lines 200A or even word lines 200B) extending horizontally through the array region 102, and may also be coupled to additional word lines (e.g., additional odd word lines 200A or additional even word lines 200B) extending horizontally in the X-direction through another array region 102 horizontally neighboring the array region 102. For example, some odd SWD devices within the first SWD section 242A of the third array region 102C may be coupled, by way of some word line routing and contact structures 250 extending to and through the second word line exit sub-region 106B horizontally neighboring the third array region 102C, to odd word lines 200A extending horizontally through the third array region 102C; and some additional odd SWD devices within the first SWD section 242A of the third array region 102C may be coupled, by way of some additional word line routing and contact structures 250 extending to and through the second word line exit sub-region 106B, to additional odd word lines 200A extending horizontally through the first array region 102A. As another example, some even SWD devices within the second SWD section 242B of the first array region 102A may be coupled, by way of some word line routing and contact structures 250 extending horizontally in the X-direction to and through the first word line exit sub-region 106A horizontally neighboring the first array region 102A, to even word lines 200B extending horizontally through the first array region 102A; and some additional even SWD devices within the second SWD section 242B of the first array region 102A may be coupled, by way of some additional word line routing and contact structures 250 extending to and through the first word line exit sub-region 106A, to additional even word lines 200B extending horizontally through the third array region 102C.

Still referring to FIG. 14, within the horizontal area of each array region 102, the microelectronic device 238 may contain additional control logic sections individually containing additional control logic devices (e.g., control logic devices other than the SA devices and the SWD devices). For example, for each array region 102, additional control logic sections 252 may be positioned horizontally between the SA sections 240 and the SWD sections 242 (e.g., at relatively more horizontally centered locations within the array region 102).
The additional control logic sections 252 may include, but are not limited to, column decoder sections including column decoder devices, and main word line (MWD) sections including MWD devices.

Still referring to FIG. 14, within the horizontal area of each socket region 108, the microelectronic device 238 may contain further control logic sections 254 individually containing further control logic devices (e.g., control logic devices other than those located within the horizontal areas of the array regions 102). For example, for each socket region 108, one or more further control logic sections 254 may be positioned horizontally between deep contact assemblies within the socket region 108 (e.g., deep contact assemblies extending vertically between one or more of the seventh routing structures 231 (FIG. 13D) and one or more of the third routing structures 144 (FIG. 13D)) and the array regions 102 horizontally neighboring the socket region 108. At least some of the further control logic devices within the further control logic sections 254 may have different configurations and different operational functions than the control logic devices located within the horizontal areas of the array regions 102. By way of non-limiting example, the further control logic sections 254 may include bank logic sections including bank logic devices.

Accordingly, in accordance with an embodiment of the present disclosure, a method of forming a microelectronic device comprises forming a semiconductor wafer comprising a semiconductor material, trenches within the semiconductor material, control logic devices overlying the semiconductor material, routing structures overlying the control logic devices, and contact structures extending from the semiconductor material to some of the routing structures. An additional die is attached to the semiconductor wafer using oxide-oxide bonding to form an assembly. The assembly is vertically inverted. After vertically inverting the assembly, portions of the semiconductor material are removed to expose portions of the contact structures. Contact pad structures are formed on the exposed portions of the contact structures. After forming the contact pad structures, an additional semiconductor die comprising additional semiconductor material is attached to the assembly using additional oxide-oxide bonding. Access devices are formed using portions of the additional semiconductor material. Word lines and digit lines operatively associated with the access devices are formed. Additional contact structures are formed to extend through the word lines and the digit lines and to some of the contact pad structures. Further contact structures are formed to extend to some other of the contact pad structures. Storage node devices are formed overlying and coupled to the access devices. Additional routing structures are formed over the storage node devices. At least some of the additional routing structures are coupled to the further contact structures.

Furthermore, in accordance with an embodiment of the present disclosure, a microelectronic device comprises array regions, digit line exit regions, and word line exit regions. The array regions individually comprise memory cells comprising access devices and storage node devices; digit lines coupled to the access devices and extending in a first direction; word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and control logic devices vertically offset from and in electrical communication with the memory cells.
The digit line exit regions horizontally alternate with the array regions in the first direction and individually comprise: portions of the digit lines extending beyond the array regions horizontally neighboring them; contact pad structures located below the portions of the digit lines; digit line contact structures extending through at least some of the portions of the digit lines and to the contact pad structures; routing structures located below the contact pad structures and in electrical communication with some of the control logic devices; and contact structures extending from the contact pad structures to the routing structures. The word line exit regions horizontally alternate with the array regions in the second direction and individually comprise: portions of the word lines extending beyond the array regions horizontally neighboring them; additional contact pad structures located below the portions of the word lines; word line contact structures extending through at least some of the portions of the word lines and to the additional contact pad structures; additional routing structures located below the additional contact pad structures and in electrical communication with some other of the control logic devices; and additional contact structures extending from the additional contact pad structures to the additional routing structures.

Microelectronic devices in accordance with embodiments of the present disclosure, such as the microelectronic device 238 (FIGS. 13A through 13D and 14), may be used in embodiments of electronic systems of the present disclosure. For example, FIG. 15 is a block diagram illustrating an electronic system 300 in accordance with embodiments of the present disclosure. The electronic system 300 may comprise, for example, a computer or computer hardware component, a server or other networked hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), a portable media (e.g., music) player, a Wi-Fi or cellular-enabled tablet computer, an electronic book, a navigation device, etc. The electronic system 300 includes at least one memory device 302. The memory device 302 may include, for example, a microelectronic device previously described herein (e.g., the microelectronic device 238 (FIGS. 13A through 13D and 14)). The electronic system 300 may further include at least one electronic signal processor device 304 (often referred to as a "microprocessor"). The electronic signal processor device 304 may, optionally, include a microelectronic device previously described herein (e.g., the microelectronic device 238 (FIGS. 13A through 13D and 14)). While the memory device 302 and the electronic signal processor device 304 are depicted as two (2) separate devices in FIG. 15, in additional embodiments a single (e.g., only one) memory/processor device is included in the electronic system 300. In such embodiments, the memory/processor device may include a microelectronic device previously described herein (e.g., the microelectronic device 238 (FIGS. 13A through 13D and 14)). The electronic system 300 may further include one or more input devices 306 for inputting information into the electronic system 300 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 300 may further include one or more output devices 308 for outputting information (e.g., visual or audio output) to a user, such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc.
In some embodiments, the input device 306 and the output device 308 comprise a single touchscreen device that can be used both to input information into the electronic system 300 and to output visual information to a user. The input device 306 and the output device 308 may be in electrical communication with one or more of the memory device 302 and the electronic signal processor device 304.

Therefore, in accordance with an embodiment of the present disclosure, an electronic system comprises an input device, an output device, a processor device operably coupled to the input device and the output device, and a memory device operably coupled to the processor device. The memory device comprises memory array regions, a digit line contact region between two of the memory array regions neighboring one another in a first direction, and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction. The memory array regions each comprise dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic devices vertically offset from and in electrical communication with the DRAM cells. The digit line contact region comprises: end portions of some of the digit lines extending beyond horizontal areas of the two of the memory array regions; conductive pads located vertically below the some of the digit lines; digit line contacts extending vertically through the end portions of the some of the digit lines and to the conductive pads; conductive routing located vertically below the conductive pads; and conductive contacts extending vertically from the conductive pads to the conductive routing. The word line contact region comprises: end portions of some of the word lines extending beyond horizontal areas of the two other of the memory array regions; additional conductive pads located vertically below the some of the word lines; word line contacts extending completely vertically through the end portions of the some of the word lines and to the additional conductive pads; additional conductive routing located vertically below the additional conductive pads; and additional conductive contacts extending vertically from the additional conductive pads to the additional conductive routing.

The structures, devices, and methods of the present disclosure advantageously facilitate one or more of improved microelectronic device performance, reduced costs (e.g., manufacturing costs, material costs), increased miniaturization of components, and greater packaging density as compared to conventional structures, conventional devices, and conventional methods.
The structures, devices, and methods of the present disclosure may also improve scalability, efficiency, and simplicity as compared to conventional structures, conventional devices, and conventional methods.

Additional, non-limiting example embodiments of the present disclosure are set forth below.

Embodiment 1: A method of forming a microelectronic device, comprising: forming a first microelectronic device structure comprising a first semiconductor structure, control logic circuitry at least partially overlying the first semiconductor structure, a first back end of line (BEOL) structure over and in electrical communication with the control logic circuitry, and a first isolation material overlying the control logic circuitry and the first BEOL structure; bonding a second microelectronic device structure over the first BEOL structure of the first microelectronic device structure to form a first assembly; vertically inverting the first assembly; bonding a third microelectronic device structure comprising a second semiconductor structure over the vertically inverted first assembly to form a second assembly; forming memory cells comprising portions of the second semiconductor structure after forming the second assembly; and forming a second BEOL structure over the memory cells.

Embodiment 2: The method of Embodiment 1, wherein bonding a second microelectronic device structure over the first BEOL structure of the first microelectronic device structure comprises bonding a second isolation material of the second microelectronic device structure to the first isolation material of the first microelectronic device structure.

Embodiment 3: The method of one of Embodiments 1 and 2, further comprising forming the first microelectronic device structure to further comprise conductive contact structures within a contact region horizontally offset from a region comprising the control logic circuitry.

Embodiment 4: The method of Embodiment 3, further comprising, after vertically inverting the first assembly: thinning the first semiconductor structure to expose the first isolation material and the conductive contact structures; forming conductive contact pad structures on the conductive contact structures; and forming a second isolation material over the conductive contact pad structures and a remainder of the first semiconductor structure.

Embodiment 5: The method of Embodiment 4, wherein bonding a third microelectronic device structure comprising a second semiconductor structure over the vertically inverted first assembly comprises bonding a third isolation material of the third microelectronic device structure to the second isolation material.

Embodiment 6: The method of Embodiment 5, wherein forming memory cells comprises: forming access devices using the portions of the second semiconductor structure; and forming storage node devices over and in electrical communication with the access devices to form the memory cells, each of the memory cells individually comprising one of the access devices and one of the storage node devices.

Embodiment 7: The method of Embodiment 6, wherein forming access devices comprises: removing a segment of the second semiconductor structure after forming the second assembly; patterning a remainder of the segment of the second semiconductor structure to form the portions of the second semiconductor structure; forming word lines
extending through the portions of the second semiconductor structure in a first horizontal direction; and forming digit lines vertically overlying the word lines and the portions of the second semiconductor structure and extending in a second horizontal direction orthogonal to the first horizontal direction, first digit line contact structures extending vertically from the digit lines and to the portions of the second semiconductor structure.

Embodiment 8: The method of Embodiment 7, wherein forming storage node devices over and in electrical communication with the access devices comprises: forming additional contact structures on the portions of the second semiconductor structure; forming conductive routing structures on the additional contact structures; and forming the storage node devices on the conductive routing structures, the storage node devices at least partially horizontally offset from the additional contact structures.

Embodiment 9: The method of one of Embodiments 7 and 8, further comprising: forming second digit line contact structures extending vertically through the digit lines and to some of the conductive contact pad structures; and forming word line contact structures extending vertically through the word lines and to some other of the conductive contact pad structures.

Embodiment 10: The method of any one of Embodiments 1 through 9, further comprising: forming the first BEOL structure of the first microelectronic device structure to comprise conductive routing structures over and in electrical communication with the control logic circuitry, and conductive pad structures over and in electrical communication with the conductive routing structures; and forming the second BEOL structure to comprise additional conductive routing structures over the memory cells and in electrical communication with the control logic circuitry, and additional conductive pad structures over and in electrical communication with the additional conductive routing structures.

Embodiment 11: The method of Embodiment 10, further comprising: forming the conductive routing structures and the additional conductive routing structures to each comprise copper; and forming the conductive pad structures and the additional conductive pad structures to each comprise aluminum.

Embodiment 12: A method of forming a microelectronic device, comprising: forming a semiconductor wafer comprising a semiconductor material, trenches within the semiconductor material, control logic devices overlying the semiconductor material, routing structures overlying the control logic devices, and contact structures extending from the semiconductor material to some of the routing structures; attaching an additional die to the semiconductor wafer using oxide-oxide bonding to form an assembly; vertically inverting the assembly; after vertically inverting the assembly, removing portions of the semiconductor material to expose portions of the contact structures; forming contact pad structures on the exposed portions of the contact structures; after forming the contact pad structures, attaching an additional semiconductor die comprising additional semiconductor material to the assembly using additional oxide-oxide bonding; forming access devices using portions of the additional semiconductor material; forming word lines and digit lines operatively associated with the access devices; forming additional contact structures extending through the word lines and the digit lines and to some of the contact pad structures; forming further contact
structures extending to some other of the contact pad structures; forming storage node devices over and coupled to the access devices; and forming additional routing structures over the storage node devices, at least some of the additional routing structures coupled to the further contact structures.

Embodiment 13: The method of Embodiment 12, wherein forming access devices using portions of the additional semiconductor material comprises: removing an upper region of the additional semiconductor material after attaching the additional semiconductor die to the assembly; patterning a lower region of the additional semiconductor material to form discrete semiconductor structures; and removing portions of the discrete semiconductor structures to form semiconductor pillars serving as channel structures of the access devices.

Embodiment 14: The method of Embodiment 13, wherein forming word lines and digit lines operatively associated with the access devices comprises: forming the word lines horizontally neighboring the semiconductor pillars and extending in a first horizontal direction; and forming the digit lines vertically overlying and horizontally neighboring the semiconductor pillars and extending in a second horizontal direction perpendicular to the first horizontal direction, digit line contact structures extending from the discrete semiconductor structures to the digit lines.

Embodiment 15: The method of any one of Embodiments 12 through 14, further comprising forming the routing structures of the semiconductor wafer to comprise: tungsten routing structures over and in electrical communication with transistors of the control logic devices; and copper routing structures over and in electrical communication with the tungsten routing structures.

Embodiment 16: The method of Embodiment 15, further comprising forming the routing structures of the semiconductor wafer to further comprise aluminum pad structures over and in electrical communication with the copper routing structures.

Embodiment 17: The method of one of Embodiments 15 and 16, further comprising forming the additional routing structures to comprise: additional tungsten routing structures over the storage node devices; additional copper routing structures over and in electrical communication with the additional tungsten routing structures; and aluminum pad structures over and in electrical communication with the additional copper routing structures.

Embodiment 18: A microelectronic device, comprising: array regions individually comprising: memory cells comprising access devices and storage node devices; digit lines coupled to the access devices and extending in a first direction; word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and control logic devices vertically offset from and in electrical communication with the memory cells; digit line exit regions horizontally alternating with the array regions in the first direction and individually comprising: portions of the digit lines extending beyond the array regions neighboring them; contact pad structures below the portions of the digit lines; digit line contact structures extending through at least some of the portions of the digit lines and to the contact pad structures; routing structures below the contact pad structures
and in electrical communication with some of the control logic devices; and contact structures extending from the contact pad structures to the routing structures; and word line exit regions horizontally alternating with the array regions in the second direction and individually comprising: portions of the word lines extending beyond the array regions neighboring them; additional contact pad structures below the portions of the word lines; word line contact structures extending through at least some of the portions of the word lines and to the additional contact pad structures; additional routing structures below the additional contact pad structures and in electrical communication with some other of the control logic devices; and additional contact structures extending from the additional contact pad structures to the additional routing structures.

Embodiment 19: The microelectronic device of Embodiment 18, further comprising: a first back end of line (BEOL) structure overlying the memory cells and the control logic devices and in electrical communication with one or more deep contact assemblies, the one or more deep contact assemblies in electrical communication with one or more of the control logic devices; and a second BEOL structure underlying the memory cells and the control logic devices and in electrical communication with the one or more deep contact assemblies.

Embodiment 20: The microelectronic device of Embodiment 19, wherein: the first BEOL structure comprises: first routing structures comprising copper overlying the memory cells and the control logic devices; and first pad structures comprising aluminum overlying and coupled to the first routing structures; and the second BEOL structure comprises: second routing structures comprising copper underlying the control logic devices; and second pad structures comprising aluminum underlying and coupled to the second routing structures.

Embodiment 21: The microelectronic device of one of Embodiments 19 and 20, further comprising socket regions horizontally offset from the array regions, the digit line exit regions, and the word line exit regions, the socket regions individually comprising the one or more deep contact assemblies.

Embodiment 22: The microelectronic device of Embodiment 21, wherein the socket regions further comprise additional control logic devices having different configurations and operational functionalities than the control logic devices.

Embodiment 23: The microelectronic device of Embodiment 22, wherein the socket regions further comprise capacitors in electrical communication with one or more of: at least some of the control logic devices, and at least some of the additional control logic devices.

Embodiment 24: The microelectronic device of any one of Embodiments 18 through 23, wherein the control logic devices within each of the array regions comprise: sense amplifier devices within a plurality of sense amplifier regions located at or near diagonally opposite corners of the array region; and sub-word line driver devices within a plurality of sub-word line driver regions located at or near other diagonally opposite corners of the array region.

Embodiment 25: The microelectronic device of Embodiment 24, wherein, for each sense amplifier region of the plurality of sense amplifier regions within the array region: some of the sense amplifier devices within the sense amplifier region are in electrical communication with some of the digit lines extending through the array region; and some other of the sense amplifier devices within the sense amplifier region are in electrical
communication with some of the digit lines extending through an additional one of the array regions neighboring the array region.

Embodiment 26: The microelectronic device of Embodiment 25, wherein: the some of the sense amplifier devices are in electrical communication with the some of the digit lines extending through the array region by way of some of the digit line contact structures, some of the contact pad structures, some of the contact structures, and some of the routing structures within one of the digit line exit regions interposed between the array region and the additional one of the array regions; and the some other of the sense amplifier devices are in electrical communication with the some of the digit lines extending through the additional one of the array regions by way of some other of the digit line contact structures, some other of the contact pad structures, some other of the contact structures, and some other of the routing structures within the one of the digit line exit regions.

Embodiment 27: The microelectronic device of Embodiment 24, wherein, for each sub-word line driver region of the plurality of sub-word line driver regions within the array region: some of the sub-word line driver devices within the sub-word line driver region are in electrical communication with some of the word lines extending through the array region; and some other of the sub-word line driver devices within the sub-word line driver region are in electrical communication with some of the word lines extending through an additional one of the array regions neighboring the array region.

Embodiment 28: The microelectronic device of Embodiment 27, wherein: the some of the sub-word line driver devices are in electrical communication with the some of the word lines extending through the array region by way of some of the word line contact structures, some of the additional contact pad structures, some of the additional contact structures, and some of the additional routing structures within one of the word line exit regions interposed between the array region and the additional one of the array regions; and the some other of the sub-word line driver devices are in electrical communication with the some of the word lines extending through the additional one of the array regions by way of some other of the word line contact structures, some other of the additional contact pad structures, some other of the additional contact structures, and some other of the additional routing structures within the one of the word line exit regions.

Embodiment 29: The microelectronic device of any one of Embodiments 18 through 28, wherein each of the contact pad structures and each of the additional contact pad structures comprises copper.

Embodiment 30: An electronic system, comprising: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled to the processor device and comprising: memory array regions each comprising dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic devices vertically offset from and in electrical communication with the DRAM cells; a digit line contact region between two of the memory array regions neighboring one another in a first direction, the digit line contact region comprising: end portions of some of the digit lines extending beyond horizontal areas of the two of the memory array regions; conductive pads located vertically below the some of the digit lines; digit line contacts extending vertically through the end portions of the some of the digit lines and to the conductive pads; conductive routing located
vertically below the conductive pads; and conductive contacts extending vertically from the conductive pads to the conductive routing; and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction, the word line contact region comprising: end portions of some of the word lines extending beyond horizontal areas of the two other of the memory array regions; additional conductive pads located vertically below the some of the word lines; word line contacts extending completely vertically through the end portions of the some of the word lines and to the additional conductive pads; additional conductive routing located vertically below the additional conductive pads; and additional conductive contacts extending vertically from the additional conductive pads to the additional conductive routing.

While the disclosure is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, the disclosure is not intended to be limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the following appended claims and their legal equivalents. For example, elements and features disclosed in relation to one embodiment may be combined with elements and features disclosed in relation to other embodiments of the disclosure.
An integrated circuit with overclocked embedded logic circuitry is described. In an example, a programmable logic device includes programmable logic blocks operable using a first clock signal having a first frequency. A dedicated logic circuit embedded within the programmable logic device is operable using a second clock signal synchronized with the first clock signal and having a second frequency, the second frequency being a multiple of the first frequency. An interface coupled between one or more of the programmable logic blocks and the dedicated logic circuit includes multiplexer circuitry to multiplex output signals produced by the one or more programmable logic blocks among input terminals of the dedicated logic circuit. |
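As a rough behavioral illustration of the abstract above, the sketch below models one fabric clock cycle in which a dedicated circuit running at a multiple of the fabric frequency services several multiplexed operand sets. It is a simplified, hypothetical model; the 3x ratio, the operand format, and the function names are illustrative assumptions, not taken from the claims.

    # Hypothetical behavioral model: a dedicated multiply-accumulate circuit
    # clocked at N times the programmable-fabric frequency consumes N operand
    # pairs per fabric cycle via input multiplexing. N = 3 is an assumption.
    OVERCLOCK_RATIO = 3  # second frequency = 3 x first frequency (illustrative)

    def fabric_cycle_outputs(operand_pairs, acc=0):
        """One fabric-clock cycle: the fabric presents OVERCLOCK_RATIO operand
        pairs; the dedicated circuit consumes one pair per fast-clock cycle."""
        assert len(operand_pairs) == OVERCLOCK_RATIO
        for a, b in operand_pairs:   # multiplexer selects one pair per fast cycle
            acc += a * b             # multiply-accumulate at the fast clock rate
        return acc                   # registered back into the slow clock domain

    # Example: three MAC operations complete in the time the fabric sees one edge.
    print(fabric_cycle_outputs([(2, 3), (4, 5), (6, 7)]))  # -> 68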
1. A device having programmable interconnections, the device further comprising:
programmable function blocks operable using a first clock signal having a first frequency;
a dedicated circuit, comprising at least in part hardwired circuitry configured to perform a specific function, operable using a second clock signal synchronized with the first clock signal and having a second frequency, the second frequency being a multiple of the first frequency; and
an interface coupled between one or more of the programmable function blocks and the dedicated circuit, the interface configured to multiplex output signals produced by the one or more programmable function blocks among input terminals of the dedicated circuit.
2. The device of claim 1, further comprising:
a first bank of registers coupled between the dedicated circuit and the one or more programmable function blocks, the first bank of registers operable using the first clock signal; and
a second bank of registers coupled between the dedicated circuit and the one or more programmable function blocks, the second bank of registers operable using the second clock signal.
3. The device of claim 1, wherein the dedicated circuit includes one or more multiply-accumulate circuits operable using the second clock signal.
4. The device of claim 3, wherein the dedicated circuit further comprises a clock management circuit configured to receive the first clock signal and provide the second clock signal.
5. A programmable logic device, comprising:
programmable logic blocks operable using a first clock signal having a first frequency;
a first bank of registers coupled to one or more of the programmable logic blocks, the first bank of registers operable using the first clock signal;
a plurality of multiplexers in communication with the first bank of registers;
a multiplier in communication with each of the plurality of multiplexers, the multiplier operable using a second clock signal synchronized with the first clock signal and having a second frequency, the second frequency being a multiple of the first frequency;
an adder/subtracter having a first input bus in communication with the multiplier, a second input bus, and an output bus, the adder/subtracter operable using the second clock signal;
a second bank of registers coupled to the output bus of the adder/subtracter, the second bank of registers operable using the second clock signal; and
an accumulation circuit coupled between the output bus and the second input bus of the adder/subtracter.
6. The programmable logic device of claim 5, wherein the accumulation circuit comprises:
a third bank of registers coupled to the output bus of the adder/subtracter, the third bank of registers operable using the second clock signal; and
a multiplexer in communication with the third bank of registers and coupled to the second input bus of the adder/subtracter.
7. The programmable logic device of claim 6, wherein the third bank of registers includes a shift input bus.
8. The programmable logic device of claim 6, further comprising a shifter coupled between the multiplexer and the second input bus of the adder/subtracter.
9. The programmable logic device of claim 5, further comprising a clock management circuit configured to receive the first clock signal and provide the second clock signal.
10.
10. The programmable logic device of claim 5, further comprising a pattern generator for providing control signals to the plurality of multiplexers, the adder/subtracter, the second bank of registers, and the accumulation circuit responsive to an operation control code.
11. The programmable logic device of claim 5, wherein the first bank of registers includes a shift input bus and a shift output bus.
12. The programmable logic device of claim 5, wherein the second frequency is an integer multiple of the first frequency.
13. A programmable logic device, comprising:
programmable logic blocks operable using a first clock signal having a first frequency;
a first bank of registers coupled to one or more programmable logic blocks, the first bank of registers operable using the first clock signal;
a first multiplexer in communication with the first bank of registers;
a second multiplexer in communication with the first bank of registers;
a third multiplexer in communication with the first bank of registers;
a first adder/subtracter having a first input bus in communication with the first multiplexer, a second input bus in communication with the second multiplexer, and an output bus, the first adder/subtracter operable using a second clock signal synchronized with the first clock signal and having a second frequency, the second frequency being a multiple of the first frequency;
a multiplier in communication with the output bus of the first adder/subtracter, the multiplier operable using the second clock signal;
a second adder/subtracter having a first input bus in communication with the multiplier, a second input bus, and an output bus, the second adder/subtracter operable using the second clock signal;
a second bank of registers coupled to the output bus of the second adder/subtracter, the second bank of registers operable using the second clock signal; and
an accumulation circuit coupled between the output bus and the second input bus of the second adder/subtracter.
14. The programmable logic device of claim 13, wherein the second frequency is three-times the first frequency.
15. A method of operating a dedicated circuit within a programmable device, comprising:
providing a plurality of programmable function blocks;
providing a first clock signal having a first frequency to the plurality of programmable function blocks;
coupling one or more of the plurality of programmable function blocks to the dedicated circuit;
multiplying the first clock signal having the first frequency by a predetermined value to provide a second clock signal having a second frequency, the second clock signal being synchronized with the first clock signal; and
operating the dedicated circuit using the second clock signal having the second frequency.
16. An integrated circuit, comprising:
programmatically configurable logic circuitry, the programmatically configurable logic circuitry operable in a first time domain; and
dedicated application specific circuitry in electrical communication with the programmatically configurable logic circuitry, the dedicated application specific circuitry operable in a second time domain, wherein frequency of the second time domain is a multiple of frequency of the first time domain, the first time domain and the second time domain sufficiently phase aligned with respect to corresponding timing signal transitions for synchronous operation between the programmatically configurable logic circuitry and the dedicated application specific circuitry.
17. The integrated circuit of claim 16, wherein the dedicated application specific circuitry and the programmatically configurable logic circuitry are in electrical communication via multiplexer circuitry coupled therebetween to multiplex output signals produced by the programmatically configurable logic circuitry among input terminals of the dedicated application specific circuitry.
18. The integrated circuit of claim 16, further comprising:
a first bank of registers coupled between the dedicated application specific circuitry and the programmatically configurable logic circuitry, the first bank of registers operable using a first clock signal in the first time domain; and
a second bank of registers coupled between the dedicated application specific circuitry and the programmatically configurable logic circuitry, the second bank of registers operable using a second clock signal in the second time domain.
19. The integrated circuit of claim 18, wherein the dedicated application specific circuitry includes at least one multiply-accumulate circuit operable using the second clock signal.
20. A programmable multiply-accumulator circuit (MAC) comprising:
a multiplier circuit coupled to an adder/subtractor circuit, the multiplier circuit receiving a first input signal selected from a plurality of first input signals by a first control signal;
an accumulator circuit receiving a second input from the adder/subtractor circuit, wherein an output signal of the accumulator circuit is sent back to the adder/subtractor circuit depending on a value of a second control signal; and
a pattern generator configured to produce the first and second control signals responsive to at least one operational (OP) code.
21. The programmable multiply-accumulator circuit of claim 20, further comprising:
a clock multiplier circuit providing a first clock signal to the pattern generator, wherein a first frequency of the first clock signal is proportional to a second frequency of a second clock signal.
22. The programmable multiply-accumulator circuit of claim 21, wherein the clock multiplier circuit further provides the first clock signal to the multiplier circuit, the adder/subtractor circuit, and the accumulator circuit.
23. The programmable multiply-accumulator circuit of claim 21, further comprising:
a plurality of programmable function blocks interconnected via programmable interconnections;
wherein the at least one OP code is created by the plurality of programmable function blocks; and
wherein the plurality of programmable function blocks is responsive to the second clock signal.
24. An integrated circuit (IC) having a programmable multiply-accumulator circuit (MAC), the MAC comprising:
a multiplier circuit coupled to an adder/subtractor circuit, the multiplier circuit receiving a first input signal selected from a plurality of first input signals by a first control signal, the multiplier circuit comprising at least in part hardwired circuitry to perform a specific function;
a plurality of programmable function blocks interconnected via programmable interconnections, the plurality of programmable function blocks configured to provide the plurality of first input signals;
an accumulator circuit receiving a second input from the adder/subtractor circuit, wherein an output signal of the accumulator circuit is sent back to the adder/subtractor circuit depending on a value of a second control signal;
a pattern generator configured to produce the first and second control signals responsive to at least one operational (OP) code; and
a clock multiplier circuit providing a first clock signal to the pattern generator, wherein a first frequency of the first clock signal is a multiple of a second frequency of a second clock signal.
25. The IC of claim 24, wherein the OP code is produced by software executed on an embedded processor in the IC.
26. The IC of claim 24, wherein the multiple is one (1). |
FIELD OF THE INVENTION
One or more aspects of the invention relate generally to programmable devices and, more particularly, to programmable devices with overclocked dedicated circuitry and/or to a programmable multiply-accumulator circuit.
BACKGROUND OF THE INVENTION
Programmable logic devices (PLDs) exist as a well-known type of integrated circuit (IC) that may be programmed by a user to perform specified logic functions. There are different types of programmable logic devices, such as programmable logic arrays (PLAs) and complex programmable logic devices (CPLDs). One type of programmable logic device, known as a field programmable gate array (FPGA), is very popular because of a superior combination of capacity, flexibility, time-to-market, and cost.
An FPGA typically includes an array of configurable logic blocks (CLBs) surrounded by a ring of programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. The CLBs, IOBs, and interconnect structure are typically programmed by loading a stream of configuration data (bitstream) into internal configuration memory cells that define how the CLBs, IOBs, and interconnect structure are configured. The configuration bitstream may be read from an external memory, conventionally an external integrated circuit memory such as an EEPROM, EPROM, PROM, or the like, though other types of memory may be used. The collective states of the individual memory cells then determine the function of the FPGA.
Dedicated logic circuits configured to perform specific functions are commonly embedded into PLDs. For example, the Virtex(R)-II family of FPGAs manufactured by Xilinx, Inc., includes dedicated multipliers. Current approaches for embedding dedicated logic circuitry within a PLD are fundamentally limited in input/output (I/O) capacity by the routing and data-path processing capabilities of the programmable fabric to which the dedicated logic circuitry is attached. For example, a design in an FPGA may have a data path clocked at 300 MHz. A multiplier attached to the data path of the FPGA is also clocked at 300 MHz, even though a multiplier fabricated with the same process used for the FPGA is capable of being clocked at a higher rate.
Notably, conventional dedicated logic circuitry within an FPGA that operates using a higher or lower clock frequency than that of the FPGA fabric is clocked asynchronously from the FPGA fabric. For example, embedded microprocessors, boundary scan circuitry, and I/O transceivers all operate asynchronously with respect to the FPGA fabric. Data transfer between the FPGA fabric and such dedicated logic circuits is effectuated using asynchronous communication between the FPGA fabric and the dedicated logic circuit (e.g., a first-in, first-out (FIFO) interface), or a handshaking mechanism (e.g., a processor bus and peripheral management).
Therefore, it would be desirable and useful to have an integrated circuit with embedded dedicated logic circuitry capable of operating at a higher clock rate and at least partially synchronous with the programmatically configurable logic of such an integrated circuit.
SUMMARY OF THE INVENTION
An aspect of the invention is a programmable logic device with overclocked dedicated logic circuitry. In an embodiment, the programmable logic device includes programmable logic blocks operable using a first clock signal having a first frequency.
A dedicated logic circuit embedded within the programmable logic device is operable using a second clock signal synchronized with the first clock signal and having a second frequency, the second frequency being a multiple of the first frequency. An interface coupled between one or more of the programmable logic blocks and the dedicated logic circuit includes multiplexer circuitry to multiplex output signals produced by the one or more programmable logic blocks among input terminals of the dedicated logic circuit.
Another aspect of the invention is an integrated circuit comprising programmatically configurable logic circuitry in electrical communication with dedicated application specific circuitry. The programmatically configurable logic circuitry is operable in a first time domain, and the dedicated application specific circuitry is operable in a second time domain. The frequency of the second time domain is a multiple of the frequency of the first time domain, and the first time domain and the second time domain are sufficiently phase aligned with respect to corresponding timing signal transitions for synchronous operation between the programmatically configurable logic circuitry and the dedicated application specific circuitry.
Another embodiment of the present invention includes a programmable multiply-accumulator circuit (MAC). The MAC includes: a multiplier circuit coupled to an adder/subtractor circuit, where the multiplier circuit receives a first input signal selected from a plurality of first input signals by a first control signal; an accumulator circuit receiving a second input from the adder/subtractor circuit, where an output signal of the accumulator circuit is sent back to the adder/subtractor circuit depending on a value of a second control signal; and a pattern generator configured to produce the first and second control signals responsive to at least one operational (OP) code. In addition, the MAC may also include a clock multiplier circuit providing a first clock signal to the pattern generator, where a first frequency of the first clock signal is proportional to a second frequency of a second clock signal. In one aspect of the present invention, where the multiplier circuit is a dedicated hardwired circuit and the MAC is embedded in a PLD, for example an FPGA, the clock supplied to the multiplier is a multiple of the clock for at least some part of the rest of the PLD. Hence the multiplier circuit can be overclocked with respect to the rest of the PLD.
BRIEF DESCRIPTION OF THE DRAWINGS
Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the invention; however, the accompanying drawing(s) should not be taken to limit the invention to the embodiment(s) shown, but are for explanation and understanding only.
FIG. 1 depicts a block diagram of an exemplary embodiment of a field programmable gate array (FPGA) coupled to a program memory;
FIG. 2 depicts a block diagram showing an exemplary embodiment of a portion of an FPGA having embedded dedicated logic circuitry;
FIG. 3 depicts a block diagram showing an exemplary embodiment of a two-times overclocked multiply-accumulate circuit embedded within an FPGA; and
FIG. 4 depicts a block diagram showing an exemplary embodiment of a three-times overclocked multiply-accumulate circuit embedded within an FPGA.
DETAILED DESCRIPTION OF THE DRAWINGS
A programmable logic device with overclocked dedicated logic circuitry is described.
One or more aspects of the invention relate to dedicated multiply-accumulate circuits embedded within an FPGA. Those skilled in the art will appreciate, however, that the invention may be employed with other types of dedicated logic circuits that are capable of operating in a time-multiplexed mode, such circuits being embedded within other types of programmable logic devices, such as a CPLD, or other integrated circuits with programmatically configurable logic.
FIG. 1 depicts a block diagram of an exemplary embodiment of a field programmable gate array (FPGA) 100 coupled to a program memory 112. FPGA 100 illustratively includes CLBs 107, I/O routing ring 106A ("programmable interconnect"), memory 111, such as random access memory, delay lock loop (DLL) blocks 109, multiply/divide/de-skew clock circuits 110, and programmable IOBs 106B. DLL blocks 109 and clock circuits 110 collectively provide digital clock management (DCM) circuits for managing clock signals within FPGA 100. FPGA 100 may include other types of logic blocks and circuits in addition to those described herein.
CLBs 107 are programmably connectable to each other, and to I/O routing ring 106A, for performing various types of logic functions. Each of CLBs 107 may include one or more "slices" and programmable interconnect circuitry (not shown). Each CLB slice in turn includes various circuits, such as flip-flops, function generators (e.g., look-up tables (LUTs)), logic gates, memory, and like well-known circuits.
Programmable IOBs 106B (and multi-gigabit transceivers (MGTs), not shown) are configured to provide input to, and receive output from, one or more of CLBs 107. Configuration information for CLBs 107, I/O routing ring 106A, and programmable IOBs 106B is stored in memory 111. Briefly stated, a configuration bitstream produced from program memory 112 is coupled to a configuration port of FPGA 100 to implement a desired circuit therein.
A programmable function block includes circuitry that can be programmed to perform one or more functions. An example of a programmable function block is a programmable logic block, which in the context of an FPGA may include the CLBs 107, I/O routing ring 106A, and programmable IOBs 106B (and MGTs).
FIG. 2 depicts a block diagram showing an exemplary embodiment of a portion 200 of an FPGA having embedded dedicated hardwired circuitry. Portion 200 includes programmable logic blocks 202, a bank of registers ("register bank" 204), interface circuitry 203, a dedicated circuit 208, and a clock multiplier 210. Alternatively, the clock multiplier 210 may be part of interface circuitry 203 or part of dedicated circuit 208.
Programmable logic blocks 202 include CLBs, IOBs (MGTs), and I/O routing, which may be configured to implement a user-defined circuit. Synchronous circuit elements within programmable logic blocks 202 operate in response to a clock signal, CLK, provided by a clock 212. Dedicated circuit 208 includes hardwired circuitry embedded within portion 200 configured to perform a specific function, such as, for example, an arithmetic function or a digital signal processing (DSP) function. Synchronous circuit elements within dedicated circuit 208 operate in response to a clock signal, M_CLK, provided by clock multiplier 210. Clock signal M_CLK is synchronized with clock signal CLK, but has a frequency that is a multiple, M, of the frequency of clock signal CLK, where M in one embodiment is an integer.
In an alternative embodiment, M is any number, including a fractional number, i.e., the frequency of M_CLK is proportional to the frequency of CLK, and M_CLK need not be synchronized with clock signal CLK on every cycle of CLK, but M_CLK may be synchronized on a number K of CLK clock cycles, where K is any positive number, including a whole or fractional number. Dedicated circuit 208 is thus "overclocked" with respect to programmable logic blocks 202.
Interface circuitry 203 effectuates data transfer between programmable logic blocks 202 and dedicated circuit 208 such that, to the FPGA fabric, dedicated circuit 208 appears to operate at the data rate of the FPGA fabric using the clock signal that is used by the logic in programmable logic blocks 202. Notably, for each cycle of clock signal CLK, programmable logic blocks 202 provide input data to interface circuitry 203 and receive output data from interface circuitry 203. However, for each cycle of clock signal CLK, dedicated circuit 208 processes the input data to produce the output data in accordance with multiple cycles of clock signal M_CLK.
Interface circuitry 203 effectively isolates dedicated circuit 208 from programmable logic blocks 202 such that, from the viewpoint of programmable logic blocks 202, dedicated circuit 208 operates in accordance with clock signal CLK. At the same time, interface circuitry 203 allows dedicated circuit 208 to operate using a clock frequency higher than that of programmable logic blocks 202. For example, dedicated circuit 208 may be configured to perform multiple, e.g., arithmetic and/or DSP, operations to provide multiple output data using a single set of input data from programmable logic blocks 202.
In particular, an input bus of register bank 204 receives input data from programmable logic blocks 202. The input bus of register bank 204 has a data width of X, where X is an integer greater than zero. In an embodiment, the input bus of register bank 204 receives the input data from the programmable interconnect circuitry within programmable logic blocks 202. Registers within register bank 204 load data in response to clock signal CLK. An output bus of register bank 204 provides the input data to multiplexer circuitry 216. A control bus of multiplexer circuitry 216 receives control signal(s) from control circuit 214. Control circuit 214 operates in response to clock signal M_CLK. Control circuit 214 causes multiplexer circuitry 216 to selectively multiplex the input data from register bank 204 to provide output data to dedicated circuit 208. The output bus of multiplexer circuitry 216 has a data width of Y, where Y is an integer greater than zero.
Since register bank 204 is operated in response to clock signal CLK, data on the output bus of register bank 204 is static for M clock cycles of clock signal M_CLK. In an embodiment, programmable logic blocks 202 may be configured to provide control information to control circuit 214. For example, dedicated circuit 208 may have several modes of operation that are selectable using the control information.
Notably, clock signal CLK is coupled to an input of clock multiplier 210, which multiplies clock signal CLK by M. An output of clock multiplier 210 provides clock signal M_CLK. In an embodiment, clock signal M_CLK has a frequency that is an integer multiple of the frequency of clock signal CLK. For example, clock signal M_CLK may have a frequency that is two, three, or four times that of clock signal CLK.
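To make the interface timing concrete, the following is a minimal behavioral sketch in Python; it is not part of the patent, and names such as run_interface and dedicated_op are illustrative assumptions. It models register bank 204 latching once per CLK cycle, control circuit 214 and multiplexer circuitry 216 steering operands on each of the M sub-cycles of M_CLK, and the output register bank (register bank 206, described below) capturing each result, so the fabric sees M results per CLK cycle.

    # Behavioral sketch of the overclocked interface of FIG. 2 (illustrative only).
    # 'dedicated_op' stands in for dedicated circuit 208; 'M' is the clock multiple.
    def run_interface(input_stream, dedicated_op, M):
        outputs = []
        for clk_inputs in input_stream:          # one entry per CLK cycle
            latched = tuple(clk_inputs)          # register bank 204: static for M sub-cycles
            captured = []
            for sub in range(M):                 # M cycles of M_CLK per CLK cycle
                # control circuit 214 steers latched operands through multiplexer 216
                captured.append(dedicated_op(latched, sub))
            outputs.append(captured)             # output register bank captures M results
        return outputs

    # Example: a two-times overclocked multiplier computing a0*b0 then a1*b1.
    mul = lambda operands, sub: operands[2 * sub] * operands[2 * sub + 1]
    print(run_interface([(3, 4, 5, 6)], mul, M=2))   # [[12, 30]]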
An input bus of register bank 206 receives output data from dedicated circuit 208. The input bus of register bank 206 has a data width of Z, where Z is an integer greater than zero. Registers within register bank 206 load data in response to clock signal M_CLK. An output bus of register bank 206 provides the output data to programmable logic blocks 202.
Since dedicated circuit 208 is operated in response to clock signal M_CLK, the output data coupled to the input bus of register bank 206 may change M times for each cycle of clock signal CLK. Register bank 206 "captures" the output data of dedicated circuit 208 such that the entire output data may be provided to programmable logic blocks 202.
FIG. 3 depicts a block diagram showing an exemplary embodiment of a two-times overclocked multiply-accumulate circuit (MAC) 300 embedded within an FPGA. MAC 300 includes multiplexers 302A and 302B (collectively referred to as multiplexers 302), multiplier 304, adder/subtracter 306, accumulator circuitry 308, and pattern generator 310. Accumulator circuitry 308 includes registers 316₀ and 316₁ (collectively referred to as registers 316), multiplexer 318, and an optional shifter 320. Multiplier 304 may be a very large scale integrated (VLSI) pipelined multiplier, and adder/subtracter 306 may be a VLSI pipelined adder/subtracter. MAC 300 may be fabricated with the same process used to fabricate the FPGA in which MAC 300 is embedded.
MAC 300 receives input data from registers 312A₀, 312A₁, 312B₀, and 312B₁ (collectively referred to as registers 312). MAC 300 provides output data to registers 314₀ and 314₁ (collectively referred to as registers 314). In an embodiment, registers 312 and registers 314 are dedicated circuit elements formed within the FPGA. For example, registers 312 and registers 314 may be fabricated as part of MAC 300. Alternatively, registers 312 and registers 314 may be part of the programmable logic of the FPGA. For example, registers 312 and registers 314 may be part of CLBs within the FPGA.
A clock signal CLK that is used to clock synchronous circuit elements within the programmable logic of the FPGA is coupled to a clock multiplier 322. Clock multiplier 322 multiplies clock signal CLK to produce a clock signal M_CLK having a frequency twice that of clock signal CLK. Operation of clock multiplier 322 is well-known in the art. In an embodiment, clock multiplier 322 is fabricated as part of MAC 300. Alternatively, clock multiplier 322 may be part of a DCM circuit within the FPGA.
Registers 312 load data in accordance with clock signal CLK. Registers 314 load data in accordance with clock signal M_CLK. Notably, input buses of register 312A₀ and register 312A₁ are respectively configured to receive signals a0 and a1. Input buses of register 312B₀ and register 312B₁ are respectively configured to receive signals b0 and b1. Signals a0, a1, b0, and b1 are provided by programmable logic blocks of the FPGA. Each of signals a0, a1, b0, and b1 has a data width of N bits.
Output buses of registers 314₀ and 314₁ respectively provide signals q0 and q1. Signals q0 and q1 are provided to the programmable logic blocks of the FPGA. In addition, an operational code (OP code) signal, in one embodiment of the present invention, may be provided by programmable logic blocks of the FPGA to pattern generator 310.
The OP code may be used to select a specific one of a plurality of operations that may be performed by MAC 300. Pattern generator 310 produces an output pattern in response to clock signal M_CLK.
In an alternative embodiment of the present invention, the OP code signal is supplied from the decoding of a software program instruction. The software program may be executed by an embedded microprocessor, such as in the Virtex-II Pro FPGA by Xilinx, Inc. of San Jose, Calif. In yet another embodiment, the OP code may be stored in an FPGA's configuration memory and may be changed by partially reconfiguring the FPGA.
Output buses of registers 312A₀ and 312A₁ are respectively coupled to inputs of multiplexer 302A. Output buses of registers 312B₀ and 312B₁ are respectively coupled to inputs of multiplexer 302B. Control terminals of multiplexer 302A and multiplexer 302B are respectively configured to receive signals A_SEL and B_SEL. Signals A_SEL and B_SEL are generated by pattern generator 310 in accordance with the OP code. The logical value of signal A_SEL determines which of signals a0 and a1 is selected by multiplexer 302A. Likewise, the logical value of signal B_SEL determines which of signals b0 and b1 is selected by multiplexer 302B.
Output buses of multiplexer 302A and multiplexer 302B are respectively coupled to multiplier 304. Multiplier 304 multiplies the signals provided by multiplexers 302 in response to clock signal M_CLK. Multiplier 304 provides an output signal having a data width of 2N bits to adder/subtracter 306. Adder/subtracter 306 adds or subtracts the output signal of multiplier 304 and an output signal of multiplexer 318 in response to clock signal M_CLK. A control terminal of adder/subtracter 306 is configured to receive signal ADD_MODE. Signal ADD_MODE is generated by pattern generator 310 in accordance with the OP code. The logical value of signal ADD_MODE determines whether adder/subtracter 306 performs an addition operation or a subtraction operation. Adder/subtracter 306 provides an output signal having a data width of 2N bits.
Input buses of registers 314₀ and 314₁ are respectively coupled to the output of adder/subtracter 306. The input buses of registers 314 have a data width of 2N bits. Control terminals of register 314₀ and register 314₁ are configured to respectively receive signals CAPT0_EN and CAPT1_EN. Signals CAPT0_EN and CAPT1_EN are generated by pattern generator 310 in accordance with the OP code. The logical value of signal CAPT0_EN determines whether register 314₀ is enabled. Likewise, the logical value of signal CAPT1_EN determines whether register 314₁ is enabled.
Input buses of register 316₀ and register 316₁ are respectively coupled to the output of adder/subtracter 306. Control terminals of register 316₀ are respectively configured to receive signals ACC0_CLR and ACC0_EN. Control terminals of register 316₁ are respectively configured to receive signals ACC1_CLR and ACC1_EN. Signals ACC0_CLR, ACC0_EN, ACC1_CLR, and ACC1_EN are generated by pattern generator 310 in accordance with the OP code. The logic values of signals ACC0_CLR and ACC1_CLR respectively determine whether registers 316₀ and 316₁ are cleared. The logic values of signals ACC0_EN and ACC1_EN respectively determine whether registers 316₀ and 316₁ are enabled.
Input buses of multiplexer 318 are respectively coupled to output buses of register 316₀ and register 316₁. A control terminal of multiplexer 318 is configured to receive signal ACC_SEL.
Signal ACC_SEL is generated by pattern generator 310 in accordance with the OP code. The logical value of signal ACC_SEL determines which of the output buses of registers 316₀ and 316₁ is selected by multiplexer 318. As described above, an output bus of multiplexer 318 is coupled to adder/subtracter 306.
In general, MAC 300 is capable of performing two multiply-accumulate operations for each cycle of clock signal CLK. The type of multiply-accumulate operation is determined by the OP code. During each cycle of clock signal CLK, MAC 300 will clock in an OP code as well as input data (i.e., a0, a1, b0, and b1). The OP code may be arbitrarily selected by the programmable logic of the FPGA during operation of MAC 300, allowing MAC 300 to be used for many purposes. Alternatively, the OP code may be loaded during configuration of the FPGA.
Operation of MAC 300 may be understood with reference to the following two examples. In a first example, MAC 300 is configured via the OP code(s) to perform two multiplication operations such that output signal q0 is equal to the product of input signal a0 and input signal b0, and output signal q1 is equal to the product of input signal a1 and input signal b1. In a first cycle of clock signal M_CLK, MAC 300 produces output signal q0. Multiplexers 302 are configured to select input signals a0 and b0. Signals a0 and b0 are multiplied by multiplier 304. Accumulator circuitry 308 is configured to provide an output of zero such that the output of adder/subtracter 306 is the product of signals a0 and b0. Accumulator circuitry 308 may be configured to provide an output of zero by disabling both of registers 316. Register 314₀ is enabled to capture the output of adder/subtracter 306, and register 314₁ is disabled. In a second cycle of clock signal M_CLK, MAC 300 produces output signal q1. Multiplexers 302 are configured to select input signals a1 and b1, which are multiplied by multiplier 304 and passed through adder/subtracter 306. In the second cycle, however, register 314₁ is enabled to capture the output of adder/subtracter 306, and register 314₀ is disabled.
In a second example, MAC 300 is configured to perform two multiplication-accumulation operations such that output signal q0 is equal to the product of signal a0 and signal b0 summed with the product of signal a1 and signal b1 (i.e., q0 = a0b0 + a1b1). In a first clock cycle of clock signal M_CLK, MAC 300 computes the product between signals a0 and b0. Multiplexers 302 are configured to select input signals a0 and b0, which are multiplied by multiplier 304 and passed through adder/subtracter 306. One of registers 316 is enabled to capture the output of adder/subtracter 306. In a second clock cycle of clock signal M_CLK, MAC 300 computes the product between signals a1 and b1, which is summed with the previously computed product between signals a0 and b0. Multiplexers 302 are configured to select input signals a1 and b1, which are multiplied by multiplier 304. Adder/subtracter 306 sums the output of multiplier 304 with the output of the one of registers 316 that captured the previously computed product between signals a0 and b0, which is provided by multiplexer 318. The output of adder/subtracter 306 is captured by register 314₀.
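The second example corresponds to a simple two-entry control schedule from pattern generator 310. The following Python sketch is illustrative only: the control encodings and the helper name mac300_mac_mode are assumptions, not the patent's specification. It walks the FIG. 3 datapath through the two M_CLK cycles that compute q0 = a0·b0 + a1·b1.

    # Illustrative two-cycle schedule for MAC 300 computing q0 = a0*b0 + a1*b1.
    # Control-signal names follow the text; their values/encodings are assumed.
    def mac300_mac_mode(a0, a1, b0, b1):
        acc = 0                               # register 316_0, cleared via ACC0_CLR
        q0 = None                             # output register 314_0
        # (A_SEL, B_SEL, ADD_MODE, ACC0_EN, CAPT0_EN) for each M_CLK cycle
        schedule = [(0, 0, '+', True, False), # cycle 1: acc = a0*b0
                    (1, 1, '+', False, True)] # cycle 2: q0 = a1*b1 + acc
        a, b = (a0, a1), (b0, b1)
        for a_sel, b_sel, add_mode, acc_en, capt0_en in schedule:
            product = a[a_sel] * b[b_sel]     # multiplexers 302 feed multiplier 304
            total = product + acc if add_mode == '+' else product - acc
            if acc_en:
                acc = total                   # accumulator register 316_0 captures
            if capt0_en:
                q0 = total                    # register 314_0 captures the result
        return q0

    assert mac300_mac_mode(2, 3, 5, 7) == 2 * 5 + 3 * 7   # 31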
It should be appreciated that MAC 300 may perform many operations, including six modes whose defining formulas are given in the original document [mathematical formulas - see original document], where one or more OP codes may perform one or more of the above modes. In modes 3 and 4 above, if it is desirable to produce an output signal q0 that is the product between signals a0 and b0 summed with signal a1, register 312B₁ may be cleared using an optional signal B2_CLR provided by pattern generator 310. This obviates the need to load register 312B₁ with the value of one. Modes 5 and 6 above are useful for complex arithmetic and require the input data and OP code to be static for two cycles of clock signal CLK, since a total of four multiply-accumulate operations are performed.
In an embodiment, registers 316 in accumulator circuitry 308 may include shift input terminals and shift output terminals. For purposes of clarity by example, only register 316₀ is shown as having a shift input terminal and a shift output terminal. It is to be understood, however, that both of registers 316 may have shift input and output terminals. The shift input terminals of registers 316 may be configured to receive input data from the programmable logic blocks of the FPGA, or from another dedicated MAC. The shift output terminals of registers 316 may be configured to provide output data to the programmable logic blocks of the FPGA, or to yet another dedicated MAC. The shift input and output terminals of registers 316 allow MAC 300 to support addition operations involving additional signals c0 and c1, such as providing a signal q0 equal to the product between signals a0 and b0 summed with signal c0, and a signal q1 equal to the product between signals a1 and b1 summed with signal c1.
In an embodiment, the output bus of multiplexer 318 is coupled to a shifter 320. Shifter 320 is capable of providing a selectable shift to the output of multiplexer 318 in accordance with a signal SHIFT_SEL. The signal SHIFT_SEL may be generated using pattern generator 310 in response to the OP code. Shifter 320 allows MAC 300 to support double-precision multiplication-accumulation operations, as illustrated in the sketch below.
In an embodiment, registers 312 may support shift-style loading in addition to, or in place of, parallel-style loading. For purposes of clarity by example, only register 312A₀ is shown as having a shift input terminal and a shift output terminal. It is to be understood, however, that one or more of registers 312 may have shift input and shift output terminals. Such shift-style loading allows MAC 300 to be cascaded with other MACs. For example, the shift input terminals of registers 312 may be coupled to shift output terminals of respective registers of another MAC. Likewise, the shift output terminals of registers 312 may be coupled to shift input terminals of respective registers of yet another MAC. Alternatively, the shift input and output terminals of registers 312 may be coupled to the programmable logic blocks of the FPGA.
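As a concrete illustration of the double-precision support enabled by shifter 320, the following Python sketch accumulates four N-bit partial products, shifting the accumulator feedback path before each add. The operand-splitting scheme and the helper name double_precision_mul are assumptions for illustration, not the patent's prescribed sequence.

    # Illustrative double-precision multiply built from N-bit partial products
    # and a selectable shift on the accumulator feedback (cf. shifter 320).
    def double_precision_mul(a, b, N=8):
        mask = (1 << N) - 1
        a_lo, a_hi = a & mask, a >> N        # split each 2N-bit operand in half
        b_lo, b_hi = b & mask, b >> N
        acc = 0
        # (multiplicand, multiplier, SHIFT_SEL) per multiply-accumulate step
        for x, y, shift in [(a_hi, b_hi, 0), (a_hi, b_lo, N),
                            (a_lo, b_hi, 0), (a_lo, b_lo, N)]:
            acc = (acc << shift) + x * y     # shift the feedback, then accumulate
        return acc

    assert double_precision_mul(0xBEEF, 0xCAFE, N=8) == 0xBEEF * 0xCAFE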
MAC 300 may be converted into a four-times overclocked circuit. Notably, multiplexers 302 may be configured to have four input busses and clock signal M_CLK may have a frequency that is four times that of clock signal CLK. Additional registers 312 may be used to provide input data to the additional input busses of multiplexers 302. If MAC 300 is configured as a four-times overclocked circuit, MAC 300 operates substantially as described above, but allows for four parallel multiply-accumulate operations per cycle of clock signal CLK. This allows for the computation of a complex product in a single clock cycle of clock signal CLK. It is noted that, in the case of a complex product, adder/subtracter 306 will be idle for two cycles of clock signal M_CLK. As such, two additional summing operations may be performed for each cycle of clock signal CLK (i.e., one complex product and two real additions per cycle of clock signal CLK).
MAC 300 may also be converted into a three-times overclocked circuit. Notably, multiplexers 302 may be configured to have three input busses and clock signal M_CLK may have a frequency that is three times that of clock signal CLK. Additional registers may be used to provide input data to the additional input busses of the multiplexers. If MAC 300 is configured as a three-times overclocked circuit, MAC 300 operates substantially as described above, but allows for three parallel multiply-accumulate operations per cycle of clock signal CLK. It is noted that, in the case of a complex product and a three-times overclocked MAC circuit, specialized embodiments may be used. Notably, it is possible to perform a complex product operation with three real multiplications and five real additions/subtractions. Writing the complex product (ar + j·ai)(br + j·bi) = qr + j·qi, the usual four-multiplication formulation is

qr = ar·br - ai·bi
qi = ar·bi + ai·br

which may be rewritten as

qr = ar(br + bi) - bi(ar + ai)
qi = ar(br + bi) + br(ai - ar)

and in the second formulation, the partial product, ar(br + bi), is repeated, allowing the complex product to be computed with a total of three additions, two subtractions, and three multiplications. Such formulations are well known in the art.
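The identity just given can be checked mechanically. The following Python snippet verifies the reconstructed three-multiplication formulation and is provided for illustration only (the function name complex_mul_3m is an assumption); it also mirrors the three M_CLK cycles a three-times overclocked MAC would use, one multiplication per cycle.

    # Check of the three-multiplication complex product (partial product reused).
    def complex_mul_3m(ar, ai, br, bi):
        p = ar * (br + bi)              # multiplication 1 (shared partial product)
        qr = p - bi * (ar + ai)         # multiplication 2
        qi = p + br * (ai - ar)         # multiplication 3
        return qr, qi                   # three multiplies, five additions/subtractions

    # Agrees with the four-multiplication form qr = ar*br - ai*bi, qi = ar*bi + ai*br.
    assert complex_mul_3m(3, 4, 5, 6) == (3 * 5 - 4 * 6, 3 * 6 + 4 * 5)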
FIG. 4 depicts a block diagram showing an exemplary embodiment of a three-times overclocked MAC 400 embedded within an FPGA. In operation, MAC 400 is optimized to perform a complex product in a single cycle of clock signal CLK. MAC 400 includes multiplexers 402₁ through 402₃ (collectively referred to as multiplexers 402), an adder/subtracter 404, a multiplier 406, an adder/subtracter 408, registers 410₁ through 410₃ (collectively referred to as registers 410), a multiplexer 411, and a pattern generator 412. Multiplier 406 may be a VLSI pipelined multiplier, and adder/subtracters 404 and 408 may be VLSI pipelined adder/subtracters. MAC 400 may be fabricated with the same process used to fabricate the FPGA in which MAC 400 is embedded.
MAC 400 receives input data from registers 414₁ through 414₄ (collectively referred to as registers 414). MAC 400 provides output data to registers 416₁ and 416₂ (collectively referred to as registers 416). In an embodiment, registers 414 and registers 416 are dedicated circuit elements formed within the FPGA. For example, registers 414 and registers 416 may be fabricated as part of MAC 400. Alternatively, registers 414 and registers 416 may be part of the programmable logic of the FPGA. For example, registers 414 and registers 416 may be part of CLBs within the FPGA.
A clock signal CLK that is used to clock synchronous circuit elements within the programmable logic of the FPGA is coupled to a clock multiplier 418. Clock multiplier 418 multiplies clock signal CLK to produce a clock signal M_CLK having a frequency three times that of clock signal CLK. In an embodiment, clock multiplier 418 is fabricated as part of MAC 400. Alternatively, clock multiplier 418 may be part of a DCM circuit within the FPGA.
Registers 414 load data in accordance with clock signal CLK. Registers 416 load data in accordance with clock signal M_CLK. Notably, input buses of register 414₁ and register 414₂ are respectively configured to receive signals ai and bi. Input buses of register 414₃ and register 414₄ are respectively configured to receive signals ar and br. Signals ai, ar, bi, and br are provided by programmable logic blocks of the FPGA. Each of signals ai, ar, bi, and br has a data width of N bits.
Output buses of registers 416₁ and 416₂ respectively provide signals qr and qi. Signals qr and qi are provided to the programmable logic blocks of the FPGA. In addition, an OP code signal may be provided by programmable logic blocks of the FPGA to pattern generator 412. The OP code may be used to select a specific one of a plurality of operations that may be performed by MAC 400. Pattern generator 412 produces an output pattern in response to clock signal M_CLK.
Output buses of registers 414₁ and 414₂ are respectively coupled to inputs of multiplexer 402₁. Output buses of registers 414₃ and 414₄ are respectively coupled to inputs of multiplexer 402₂. Control terminals of multiplexer 402₁ and multiplexer 402₂ are respectively configured to receive signals S1 and S2. Signals S1 and S2 are generated by pattern generator 412 in accordance with the OP code. The logical value of signal S1 determines which of signals ai and bi is selected by multiplexer 402₁. Likewise, the logical value of signal S2 determines which of signals ar and br is selected by multiplexer 402₂.
Input buses of multiplexer 402₃ are respectively coupled to output buses of register 414₂, register 414₃, and register 414₄. A control terminal of multiplexer 402₃ is configured to receive a signal S3. Signal S3 is generated by pattern generator 412 in accordance with the OP code. The logical value of signal S3 determines which of signals bi, ar, and br is selected by multiplexer 402₃.
Output buses of multiplexer 402₁ and multiplexer 402₂ are respectively coupled to adder/subtracter 404. Adder/subtracter 404 adds or subtracts the output signal of multiplexer 402₁ and the output signal of multiplexer 402₂ in response to clock signal M_CLK. A control terminal of adder/subtracter 404 is configured to receive signal M1. Signal M1 is generated by pattern generator 412 in accordance with the OP code. The logical value of signal M1 determines whether adder/subtracter 404 performs an addition operation or a subtraction operation.
Multiplier 406 multiplies the output signal of adder/subtracter 404 and the output signal of multiplexer 402₃ in response to clock signal M_CLK. Multiplier 406 provides an output signal having a data width of 2N bits to adder/subtracter 408. Adder/subtracter 408 adds or subtracts the output signal of multiplier 406 and an output signal of multiplexer 411 in response to clock signal M_CLK. A control terminal of adder/subtracter 408 is configured to receive signal M2. Signal M2 is generated by pattern generator 412 in accordance with the OP code. The logical value of signal M2 determines whether adder/subtracter 408 performs an addition operation or a subtraction operation. Adder/subtracter 408 provides an output signal having a data width of 3N bits.
Input buses of registers 416₁ and 416₂ are respectively coupled to the output of adder/subtracter 408. The input buses of registers 416 have a data width of 2N bits.
Control terminals of register 416₁ and register 416₂ are configured to respectively receive signals E2 and E3. Signals E2 and E3 are generated by pattern generator 412 in accordance with the OP code. The logical value of signal E2 determines whether register 416₁ is enabled. Likewise, the logical value of signal E3 determines whether register 416₂ is enabled.
Input buses of registers 410 are coupled to the output of adder/subtracter 408. Control terminals of registers 410₁ through 410₃ are configured to receive signals R1 through R3 and E1 through E3, respectively. Signals R1 through R3 and E1 through E3 are generated by pattern generator 412 in accordance with the OP code. The logic value of each of signals R1 through R3 determines whether a respective one of registers 410 is cleared. The logic value of each of signals E1 through E3 determines whether a respective one of registers 410 is enabled.
Input buses of multiplexer 411 are respectively coupled to output buses of registers 410. A control terminal of multiplexer 411 is configured to receive signal S4. Signal S4 is generated by pattern generator 412 in accordance with the OP code. The logical value of signal S4 determines which of the output buses of registers 410 is selected by multiplexer 411. As described above, an output bus of multiplexer 411 is coupled to adder/subtracter 408.
A programmable logic device with overclocked dedicated logic circuitry has been described. Dedicated logic circuitry embedded within a PLD operates at speeds in excess of the data rates supported by data paths through the PLD fabric. For example, multiply-accumulate circuits may operate at two, three, or four times the clock frequency of the PLD. This may advantageously be used to improve multiply-accumulate operations, such as the running convolutions computed for a finite impulse-response (FIR) filter, since the throughput of a MAC grows linearly with the factor by which the MAC is overclocked. In addition, the I/O limitations of known embedded logic circuits are advantageously avoided by broadening the interface between the FPGA fabric and the dedicated logic circuit and embedding control circuitry to maintain flexibility of function within the dedicated logic circuit.
While the foregoing describes exemplary embodiment(s) in accordance with one or more aspects of the invention, other and further embodiment(s) in accordance with the one or more aspects of the invention may be devised without departing from the scope thereof, which is determined by the claim(s) that follow and equivalents thereof. Claim(s) listing steps do not imply any order of the steps. Trademarks are the property of their respective owners. |
Disclosed herein are edge-to-edge display devices and related methods. An example display device includes a display screen and a backlight including a light guide frame defining a cavity therein. The example display device includes an integrated circuit coupled to the display screen. The example display device includes a flexible printed circuit in communication with the integrated circuit and including an electrical component coupled thereto. The electrical component is at least partially disposed in the cavity of the light guide frame. |
1. A display device comprising:
a display screen;
a backlight comprising a light guide frame having a cavity defined therein;
an integrated circuit coupled to the display screen; and
a flexible printed circuit in communication with the integrated circuit and including an electronic component coupled to the flexible printed circuit, the electronic component being at least partially disposed in the cavity of the light guide frame.
2. The display device of claim 1, further comprising a bezel disposed around the display screen, the bezel comprising a first side wall, a second side wall, a third side wall, and a fourth side wall, each of the first side wall, the second side wall, the third side wall, and the fourth side wall having the same width.
3. The display device of claim 1, wherein the flexible printed circuit is a first flexible printed circuit, and the backlight comprises a second flexible printed circuit placed adjacent to the first flexible printed circuit and the light guide frame.
4. The display device of claim 1, wherein the integrated circuit is a timing controller embedded driver integrated circuit.
5. The display device of claim 1, wherein the backlight comprises a first light source, a second light source, and a spacer disposed between the first light source and the second light source, the spacer comprising a reflective coating.
6. The display device of claim 1, wherein a portion of the flexible printed circuit is external to the cavity.
7. The display device of claim 1, wherein the electronic component comprises one or more of a resistor or a capacitor.
8. The display device of claim 1 or 4, wherein the integrated circuit is coupled to the display screen via a chip-on-glass bonding technique.
9. A display device comprising:
a display screen;
a cover including a cavity defined in the cover, the display screen being coupled to the cover;
a flexible printed circuit;
an integrated circuit coupled to the flexible printed circuit, a first portion of the flexible printed circuit being disposed in the cavity; and
a bezel disposed around the display screen to cover a second portion of the flexible printed circuit.
10. The display device of claim 9, wherein the second portion of the flexible printed circuit is external to the cavity.
11. The display device of claim 9, wherein the cover is coupled to a base of a user device via a hinge to secure the display screen to the base, the cavity being placed near the hinge when the display screen is coupled to the base.
12. The display device of claim 9, wherein the integrated circuit is coupled to the flexible printed circuit via a chip-on-flex bonding technique.
13. The display device of claim 9 or 12, wherein the integrated circuit is a timing controller embedded driver integrated circuit.
14. The display device of claim 9, further comprising a backlight, the backlight comprising a first light source, a second light source, and a spacer disposed between the first light source and the second light source, the spacer including a reflective coating.
15. An electronic device comprising:
a base;
a display screen;
a housing for supporting the display screen, the housing being coupled to the base;
a flexible printed circuit placed in the housing;
an integrated circuit coupled to one of the display screen or the flexible printed circuit; and
a bezel disposed about the display screen, the bezel being used to cover at least a portion of the flexible printed circuit.
16. The electronic device of claim 15, wherein the housing includes a cavity defined in the housing, at least a portion
of the flexible printed circuit is placed in the housing, and wherein the integrated circuit is coupled to the flexible printed circuit.
17. The electronic device of claim 16, further comprising a hinge for coupling the housing to the base, the cavity being placed near the hinge when the housing is coupled to the base.
18. The electronic device of claim 16, further comprising a cable extending between the flexible printed circuit and the base.
19. The electronic device of claim 15, wherein the integrated circuit is coupled to the display screen and the bezel is for covering the integrated circuit.
20. The electronic device of claim 19, further comprising a backlight, the backlight being placed in the housing, the backlight comprising a light guide frame, at least a portion of an electronic component of the flexible printed circuit being placed in the light guide frame. |
Edge to edge display device and related methods
Technical field
The present disclosure relates generally to display screens, and more particularly to edge-to-edge display devices and related methods.
Background
A personal computing (PC) device, such as a notebook computer or electronic tablet, includes a display screen that enables a user to interact with content displayed on the display via a graphical user interface. A display screen of a PC device typically includes a bezel or frame placed around the edge of the display screen. The bezel, along with the cover or lid of the PC device that houses the display, structurally supports the display when, for example, the display is coupled to another component, such as a notebook keyboard. The bezel also protects electronic components (e.g., printed circuit boards, source drivers, etc.) associated with the display and placed near the display such that they are not exposed to the outside.
The screen-to-body ratio represents the ratio of the amount of the display surface to the amount of the body surface of the PC device. PC devices having a high screen-to-body ratio typically include a bezel that defines a narrow boundary around the display of the device, thereby defining a larger display area than a device having a smaller screen-to-body ratio.
Summary of the invention
An aspect of the present disclosure provides a display device including: a display screen; a backlight including a light guide frame in which a cavity is defined; an integrated circuit coupled to the display screen; and a flexible printed circuit in communication with the integrated circuit and including an electronic component coupled to the flexible printed circuit, the electronic component being at least partially disposed in the cavity of the light guide frame.
Another aspect of the present disclosure provides a display device including: a display screen; a cover including a cavity defined in the cover, the display screen being coupled to the cover; a flexible printed circuit; an integrated circuit coupled to the flexible printed circuit, a first portion of the flexible printed circuit being disposed in the cavity; and a bezel disposed around the display screen, the bezel being used to cover a second portion of the flexible printed circuit.
Yet another aspect of the present disclosure provides an electronic device including: a base; a display screen; a housing for supporting the display screen, the housing being coupled to the base; a flexible printed circuit disposed in the housing; an integrated circuit coupled to one of the display screen or the flexible printed circuit; and a bezel disposed about the display screen, the bezel for covering at least a portion of the flexible printed circuit.
DRAWINGS
FIG. 1 is a schematic diagram of a known display device of a PC device.
FIG. 2 is a schematic diagram of an example display device of a PC device constructed in accordance with the teachings of the present disclosure.
FIG. 3 is a side elevational view of the example display device of FIG. 2 taken along line 3-3 of FIG. 2.
FIG. 4 is a partial cross-sectional view of the example display device of FIGS. 2 and 3 taken along line 4-4 of FIG. 2.
FIG. 5 is a schematic diagram of another example display device of a PC device constructed in accordance with the teachings of the present disclosure.
FIG. 6 is a schematic illustration of a known backlight of a PC device.
FIG. 7 is a schematic diagram of an example backlight of a PC device constructed in accordance with the teachings of the present disclosure.
FIG. 8 is a schematic diagram of another example backlight of a PC device constructed in accordance with the teachings of the present disclosure.
FIG. 9 is a flow chart of an example method of fabricating the example display device of FIGS. 2 through 4.
FIG. 10 is a flow chart of an example method of making the example display device of FIG. 5.
These drawings are not to scale. Instead, the thickness of a layer or region may be enlarged in the drawings. The same reference numbers will be used throughout the drawings and the claims.
As used herein, "including" and "comprising" (and all their forms and tenses) are open-ended terms. Therefore, whenever a claim employs any form of "include" or "comprise" (e.g., includes, including, comprises, comprising, having, containing, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as a transition term, for example in the preamble of a claim, it is open-ended in the same manner as the terms "including" and "comprising" are open-ended. The term "and/or," when used in a form such as A, B, and/or C, refers to any combination or subset of A, B, C, for example: (1) A alone; (2) B alone; (3) C alone; (4) A and B; (5) A and C; (6) B and C; and (7) A and B and C.
Detailed description
A display screen of a PC device, such as a notebook computer or tablet, enables a user to interact with content, such as a user application installed on the PC device or media accessed via the Internet, via a graphical user interface presented on the display screen. A display screen (e.g., an LCD panel) is generally surrounded by a frame, border, or bezel that defines the perimeter of the display screen. For example, when the display is coupled to a PC device (e.g., including a notebook keyboard), the bezel helps to structurally support the display. In some examples, the bezel covers one or more electronic components associated with the display screen, such as one or more timing controller (TCON) chips, backlight drivers disposed on a printed circuit board (PCB) near the display screen, and source drivers. The bezel therefore helps protect the electronic components of the display from being exposed to the outside, thereby preventing debris from damaging the electronic components and the like.
The bezel includes side walls along the right side of the display screen (e.g., relative to a user viewing the display screen), the left side of the display screen, the upper portion of the display screen, and the lower portion of the display screen. The screen-to-body ratio represents the ratio of the amount of the display surface to the amount of the body surface of the PC device. The screen-to-body ratio can be increased by reducing the width of one or more side walls of the bezel surrounding the display screen, thereby increasing the amount of the display that is visible to the user.
For example, the widths of the upper, right, and left side walls of the bezel may be reduced to give the display an edge-to-edge appearance to the user, or to otherwise substantially reduce the border around the display.
As noted above, the lower sidewall of the bezel can cover electronic components of the display screen, such as, for example, a TCON PCB and source driver integrated circuits (ICs) that can be bonded to the glass substrate of the display panel. The TCON PCB can be placed adjacent the lower sidewall of the bezel to facilitate efficient communicative coupling with, for example, a notebook motherboard (which may be located at the base of the notebook computer) or a portion including the keyboard. A bezel with a narrow lower sidewall may not be sufficient to cover the electronic components of the display. Thus, some known bezels include narrow upper, right, and left side walls, but have a wider lower side wall to cover the electronic components. The wider lower sidewall limits the screen-to-body ratio that can be achieved with such a bezel.
In some known PC devices, the TCON PCB is placed on the motherboard of the PC device, which may be located at the base of the PC device (e.g., in the example of a notebook computer). Removing the TCON PCB from the portion of the device that includes the display can enable the use of a narrower border around the display. However, moving the TCON PCB introduces the complexity of routing signals between the TCON PCB and the source driver IC(s). For example, it may be necessary to route a cable through a hinge of the PC device that couples the TCON PCB to the display screen and/or the source driver(s). This routing method can affect signal integrity and can introduce electromagnetic interference problems.
Another known product design for increasing the screen-to-body ratio and the edge-to-edge appearance of the display includes adjusting the size of the TCON PCB and/or moving the TCON PCB behind the display. A flexible PCB can be used to adjust the size and/or configuration of the TCON PCB, for example by mounting the TCON PCB behind the display or by reducing the width of the TCON PCB placed near the lower sidewall of the bezel. However, although the size and/or configuration of the flexible TCON PCB can be adjusted, the number of components coupled to the TCON PCB does not change. Therefore, it may be necessary to increase the thickness of the TCON PCB to accommodate the active components (e.g., TCON chip(s)) and passive components (e.g., resistors, capacitors) that are coupled to the PCB. An increase in the thickness of the PCB may increase the thickness of the side profile of the display, thereby affecting the physical size of the PC device.
Disclosed herein are example bezels that are disposed around a display screen of a display device, have a reduced width, and increase the screen-to-body ratio. In some examples disclosed herein, an integrated circuit (IC) including a timing controller and a source driver (i.e., a TCON embedded driver (TED) IC) is mounted on a glass substrate of the display screen using a chip-on-glass (COG) bonding technique. The use of a TED IC reduces the number of PCB components of the display device compared to display devices including a TCON PCB. In some examples, a flexible printed circuit (FPC) including passive components (e.g., resistors, capacitors) can be placed within the display device without increasing the thickness of the side profile of the display device. Thus, a bezel having a narrow lower sidewall can be used to increase the visual display area without adversely affecting the physical size of the PC device and/or the electrical operation of the device.
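To see how bezel width drives the screen-to-body ratio discussed above, the following Python sketch computes the ratio from a panel's active area and the four bezel widths. It is an illustration only; the dimensions and the function name screen_to_body_ratio are hypothetical, not figures from this disclosure (the 7 mm and 18 mm lower-sidewall widths echo the range mentioned below for known bezels).

    # Illustrative screen-to-body ratio calculation; dimensions are hypothetical
    # example values in millimeters, not measurements from this disclosure.
    def screen_to_body_ratio(display_w, display_h, left, right, top, bottom):
        body_w = display_w + left + right    # body width = display plus side bezels
        body_h = display_h + top + bottom    # body height = display plus top/bottom bezels
        return (display_w * display_h) / (body_w * body_h)

    # Narrowing only the lower sidewall from 18 mm to 7 mm raises the ratio:
    print(round(screen_to_body_ratio(294, 166, 5, 5, 5, 18), 3))   # 0.849
    print(round(screen_to_body_ratio(294, 166, 5, 5, 5, 7), 3))    # 0.902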
Thus, a bezel having a narrow lower sidewall can be used to increase the visible display area without adversely affecting the physical size of the PC device and/or the electrical operation of the device.
In other examples disclosed herein, an IC (e.g., a TED IC) is mounted to an FPC by a chip-on-flex (COF) bonding technique. In such examples, at least a portion of the FPC is placed in a portion of a lid or cover of the PC device that receives a hinge for coupling the display screen to a base of the PC device (e.g., a portion of a notebook computer including a motherboard, keyboard, etc.). Thus, the amount of the FPC covered by the lower sidewall of the bezel is reduced relative to known display devices including a TCON PCB. As a result, the width of the lower side wall of the bezel can be reduced, and the screen ratio of the PC device can be increased.
Also disclosed herein are example backlights that enhance the uniform distribution of light from, for example, LED sources while reducing (e.g., minimizing) the distance traveled by light from one source before it combines with light from other sources. When the gap between the light source and the display screen is reduced, the thickness of the side profile of the display device can be reduced. Thus, a bezel having a reduced thickness along the side portion that covers or partially covers the side profile of the display device can be used.
Although the examples disclosed herein are discussed in the context of a PC device such as a notebook computer, the display devices disclosed herein can be used in other applications, such as televisions, smart phones, and the like. Accordingly, the discussion of display devices for PC devices is for illustrative purposes only and does not limit the disclosure to PC devices.
FIG. 1 shows a display device 100 as known in the art. The known display device 100 can be used in a PC device such as a notebook computer or an electronic tablet. The known display device 100 includes a display screen 102 (e.g., an LCD panel). A bezel 104 is disposed around the display screen 102. The bezel 104 includes a first or right side wall 106 (in the orientation shown in FIG. 1, relative to a user viewing the display screen 102), a second or left side wall 108, a third or upper side wall 110, and a fourth or lower side wall 112.
The display device 100 includes a plurality of source drivers 114 that are mounted to a glass substrate 115 of the display screen 102 by chip-on-glass bonding techniques. As shown in FIG. 1, the source drivers 114 are placed adjacent a portion 116 of the display screen 102 that is covered by the lower sidewall 112 of the bezel 104. The display device 100 includes a timing controller (TCON) PCB 118. The TCON PCB 118 is communicatively coupled to the source drivers 114 by one or more flexible printed circuits (FPCs) 120. The TCON PCB 118 is communicatively coupled by a cable to the motherboard of the PC device that includes the display device 100.
As shown in FIG. 1, the side walls 106, 108, 110, 112 of the bezel 104 cover the portion 116 of the display screen that includes the source drivers 114, as well as the right, left, and upper edges of the display screen 102. In addition, as shown in FIG. 1, the lower sidewall 112 has a width w greater than the widths of the right side wall 106, the left side wall 108, and the upper side wall 110, and thus the lower sidewall 112 covers a larger portion of the display screen 102 than the other side walls 106, 108, 110.
The width of the lower sidewall 112 is increased compared to the other sidewalls 106, 108, 110 to cover the source drivers 114, the FPC(s) 120, the TCON PCB 118, and the like. As shown in FIG. 1, the width w of the lower sidewall 112 is between 7 mm and 18 mm. The thickness t of the side portion 118 of the bezel 104 (i.e., the portion that at least partially covers the side profile of the display device) is approximately 4.5 mm. The side portion(s) 118 at least partially cover the TCON PCB 118 and a backlight (e.g., including an LED light source, a light pipe, and optical films) of the known display device 100.
FIG. 2 illustrates an example PC device 200 that includes an example display device 201 constructed in accordance with the teachings of the present disclosure. The example PC device 200 can be implemented as a notebook computer or an electronic tablet (e.g., an electronic tablet that can be coupled to one or more keyboards or docking stations, etc.). The example display device 201 of FIG. 2 includes a display screen 202 (e.g., an LCD panel). A bezel 204 is placed around the display screen 202. The example bezel 204 of FIG. 2 includes a first or right side wall 206 (i.e., in the orientation shown in FIG. 2, relative to a user viewing the display screen 202), a second or left side wall 208, a third or upper side wall 210, and a fourth or lower side wall 212. As shown in FIG. 2, the side walls 206, 208, 210, 212 form a frame around the display screen 202. The bezel 204 includes a side portion 209 that extends along at least a portion of a side profile of the display device 201.
In the example of FIG. 2, the bezel 204 can be coupled (e.g., by mechanical fasteners) to a lid or cover that houses the display device 201 and functions as a protective cover for the display device 201. The example display device 201 can be coupled to a base 211 (e.g., of a notebook computer) via one or more hinges 213. As shown in FIG. 2, the bezel 204 includes an opening that receives the hinge 213. In other examples, the bezel 204 does not include an opening for the hinge (e.g., when used with some electronic tablets).
In the example of FIG. 2, one or more integrated circuits (ICs) 214 are mounted on a glass substrate 215 of the display screen 202. In the example of FIG. 2, the IC(s) 214 are TCON embedded driver (TED) IC(s), each including a timing controller and a source driver. As shown in the example of FIG. 2, the TED IC(s) 214 are coupled to a portion 216 of the glass substrate 215 adjacent the lower sidewall 212 of the bezel 204. In the example of FIG. 2, the TED IC(s) 214 are mounted or bonded to the glass substrate 215 using a chip-on-glass (COG) bonding technique. For example, the TED IC(s) 214 can be bonded to the glass substrate 215 using an anisotropic conductive film.
The example display device 201 of FIG. 2 includes one or more flexible printed circuits (FPCs) 218. In the example of FIG. 2, the FPC(s) 218 include passive components such as resistors and capacitors. These passive components may provide any functionality, such as power delivery, power conditioning, and communication between the PC device motherboard and the TED IC(s) and/or other electronic components of the display device 201. Thus, in comparison to the TCON PCB 118 of the known display device 100 of FIG. 1, the FPC(s) 218 in the example of FIG. 2 do not include a TCON chip. Thus, the size of the FPC(s) 218 of FIG. 2 is reduced compared to the TCON PCB 118 of FIG. 1.
Thus, the width w of the lower sidewall 212 of the bezel 204 is less than the width of the lower sidewall 112 of the bezel 104 in the known display device 100 of FIG. 1, because the electronic components to be covered by the lower sidewall 212 are fewer in number and/or reduced in size compared to those covered by the lower sidewall 112 of FIG. 1. For example, the width w of the lower sidewall 212 of the bezel 204 can be approximately 5 mm. In the example of FIG. 2, the COG bonding of the TED IC(s) 214 to the display glass substrate 215 reduces the number of PCB components, which enables the width of the lower side wall 212 of the bezel 204 to be reduced compared to the bezel 104 of the known display device 100 of FIG. 1. Thus, an increased amount of the display screen 202 relative to the bezel 204 of FIG. 2 is visible to the user. The reduction in the width of the lower sidewall 212 of the bezel 204 of FIG. 2 increases the screen ratio of the PC device 200 and improves the appearance of an edge-to-edge display.
FIG. 3 is a side elevational view of the example display device 201 of FIG. 2 taken along line 3-3 of FIG. 2. The bezel 204 is not shown in FIG. 3 for illustrative purposes. As shown in FIG. 3, the example display device 201 includes a backlight 302 disposed adjacent the back side 304 of the display screen 202 (i.e., the side that is not visible to the user). Additionally, as shown in FIG. 3, one of the TED ICs 214 of FIG. 2 is coupled to the display screen 202 by a COG bonding technique.
As discussed above, in the example of FIG. 2, the FPC(s) 218 include passive components such as resistors and capacitors, but do not include TCON chips (which are instead included in the TED IC(s) 214). The thickness of the side profile of the display device 201 may be affected by the location and/or size of the PCB(s), FPC(s), and the like. In the examples of FIGS. 2 and 3, the FPC(s) 218 are placed adjacent to the light guide frame of the backlight 302 to reduce the thickness of the side profile of the display device. In some examples, a portion of the bezel 204 extends along a side of the display device 201. Thus, reducing the thickness of the side profile of the display device can reduce the thickness of the side portion 209 of the bezel 204.
FIG. 4 is a partial cross-sectional view of a portion of the example display device 201 of FIGS. 2 and 3 taken along line 4-4 of FIG. 2 and including a portion of the backlight 302 of FIG. 3. The example backlight 302 includes a light guide frame 400 that houses a light source 402 (e.g., an LED). The backlight 302 includes a light guide tube 404 and a plurality of optical films 406 that illuminate the display screen 202 by distributing the light emitted by the light source 402.
In the example of FIG. 4, the light guide frame 400 includes a cavity 408 defined in the light guide frame. The cavity 408 may be formed in the light guide frame 400 during fabrication of the light guide frame 400, for example, by a die casting process. As shown in FIG. 4, the FPC 218 is disposed relative to the backlight 302 such that one or more PCB components 410 (e.g., resistors, capacitors) of the FPC 218 are at least partially placed in the cavity 408. Thus, the PCB components 410 of the FPC 218 are substantially stored within the example backlight 302 without occupying other space of the display device 201. Thus, the thickness t of the side profile of the display device is reduced compared to the known display device 100 of FIG. 1.
Accordingly, the thickness required for the bezel 204 to cover components of the display device 201, such as the FPC 218, is reduced as compared to the bezel 104 of the known display device 100 of FIG. 1. For example, with respect to the side portion 209 of the bezel 204, the bezel 204 can have a thickness of 2 mm or less. Thus, the example display device 201 of FIG. 2 provides a reduction in the width of the lower sidewall 212 of the bezel 204 and a reduction in the thickness of the side portion 209 of the bezel 204 that at least partially covers the side profile of the display device 201. The reduction in the width of the lower sidewall 212 increases the area of the display screen 202 that is viewable to the user. In addition, the reduction in the width of the lower sidewall 212 and the reduction in the thickness of the side portion 209 of the bezel 204 reduce the physical size of the display device.
In some examples, the light source 402 is coupled to an FPC 412. As shown in FIG. 4, the FPC 412 can be an FPC separate from the FPC 218 that includes the passive PCB components 410. In the example of FIG. 4, the FPC 412 is placed relative to the backlight 302 to position the light source 402 adjacent to the light guide tube 404 to illuminate the display screen 202. In other examples, the light source 402 is coupled to the FPC 218 to which the passive PCB components 410 are coupled. In such examples, the length of the FPC 218 can be extended relative to the example shown in FIG. 4 to couple the light source 402 to the FPC 218 adjacent to the light guide tube 404.
FIG. 5 illustrates an example PC device 500 including another example display device 501 in accordance with the teachings of the present disclosure. In the example of FIG. 5, the PC device can be implemented as a notebook computer. The example display device 501 of FIG. 5 includes a display screen 502 (e.g., an LCD panel). A bezel 504 is placed around the display screen 502. The bezel 504 of FIG. 5 can be substantially similar to the example bezel 204 of FIG. 2. For example, the bezel 504 of FIG. 5 includes a first or right side wall 506 (i.e., in the orientation shown in FIG. 5, relative to a user viewing the display screen 502), a second or left side wall 508, a third or upper side wall 510, and a fourth or lower side wall 512. As shown in FIG. 5, the side walls 506, 508, 510, 512 form a frame around the display screen 502.
In the example of FIG. 5, the bezel 504 is coupled (e.g., by mechanical fasteners) to a lid or cover 511 of the PC device 500 that houses the display device 501 and serves as a protective cover for the display device 501 (e.g., when the notebook computer is in a closed state). The cover 511 includes a hinge link region 513. In FIG. 5, the display device 501 is coupled to a base 514 of the PC device (e.g., a portion of the PC device that includes the motherboard) via a hinge 515. At least a portion of the hinge 515 is received in the hinge link region 513 of the cover 511 to couple the display device 501 to the base 514.
In the example of FIG. 5, one or more integrated circuits (ICs) 515 are mounted on an FPC 516 of the display device 501. In the example of FIG. 5, the IC(s) 515 are TCON embedded driver (TED) IC(s), each including a timing controller and a source driver. In the example of FIG. 5, the TED IC(s) 515 are mounted on the FPC 516 using a chip-on-flex (COF) bonding technique.
For example, the COF bonding technique can include a die attach process that attaches the TED IC 515 to the flexible substrate of the FPC 516 and a wire bond process that provides an electrical connection between the TED IC 515 and the FPC 516.
As shown in FIG. 5, the FPC 516 is placed adjacent the lower sidewall 512 of the bezel 504. In the example of FIG. 5, at least a portion of the FPC 516 is placed in a cavity 518 of the hinge link region 513 of the display device cover 511. A cable 520 extends from the cavity 518 of the hinge link region 513 of the cover 511 to communicatively couple the FPC 516 to the motherboard of the PC device 500 located at the base 514 of the PC device 500.
Because a portion of the FPC 516 is placed in the hinge link region 513 of the cover 511, the amount of the FPC 516 that is covered by the lower sidewall 512 of the bezel 504 is reduced as compared to a case in which all or substantially all of the FPC 516 is covered by the lower sidewall 512 of the bezel 504. For example, the lower sidewall 512 covers only the portion of the FPC 516 that is external to, or not placed in, the cavity 518 of the hinge link region 513. Thus, the width w of the lower sidewall 512 of FIG. 5 is reduced compared to the width of the lower sidewall 112 of the bezel 104 of the known display device 100 of FIG. 1. In some examples, the width w of the lower sidewall 512 of FIG. 5 is approximately 3 mm. Thus, the example display device 501 of FIG. 5 provides an increased screen ratio of the PC device 500.
Thus, the example display devices 201, 501 of FIGS. 2 through 5 include bezels having substantially reduced lower sidewalls as compared to known bezels. Due to the use of the TED ICs 214, 515 and bonding processes such as COG and COF, the number of PCB components of the display devices 201, 501 is reduced, which increases the flexibility to adjust the position of the FPC and correspondingly reduces the width of the lower sidewalls 212, 512 of the bezels 204, 504. In some examples, as in the example display device 501 of FIG. 5, a component of the PC device, such as the notebook computer cover 511, is employed to help reduce the width of the lower sidewall 512 of the bezel 504 by storing a portion of the FPC 516. In other examples, such as in the display device 201 of FIGS. 2 through 4, components of the display device itself (e.g., the light guide frame 400) are used to position the FPC components and reduce the thickness of the display profile. The example display devices 201, 501 facilitate edge-to-edge display designs.
In some examples, in addition to or as an alternative to the PCB component adjustments that reduce the width of the lower sidewall of the bezel discussed in connection with FIGS. 2 through 5 (e.g., using a TED IC, positioning the FPC), the light source of the backlight may be modified to further reduce the size of the bezel (e.g., the thickness of the side portion of the bezel that at least partially covers the side profile of the display device). As shown in FIG. 6, a known backlight 600 includes a gap 602 disposed between a light guide panel 603 (including LED light sources 604) and a display screen 606. The gap 602 provides a region in which light from the LED light sources 604 merges together and becomes uniform across the gap 602 before illuminating the effective area 608 of the display screen 606 (e.g., to reduce irregularities or hot spots in the light distribution).
In the examples disclosed herein, the width of the gap can be reduced compared to the width w of the gap 602 of the known backlight of FIG. 6. As discussed herein, the reduction in gap size helps to reduce the size of the bezel by reducing the thickness of the display profile. Additionally, in the examples disclosed herein, the reduced gap size and the LED light sources increase the effective area of the LCD display.
FIG. 7 shows an example backlight 700 that includes a light guide panel 704 and a plurality of LED light sources 702. The example backlight 700 can include more or fewer LED light sources 702 than shown in FIG. 7. The backlight 700 includes a gap 706 disposed between the light guide panel 704 and a display screen 708 (e.g., an LCD panel). In the example of FIG. 7, spacers 710 are placed between the LED light sources 702. The example spacers 710 include a reflective coating (e.g., white lacquer) to help spread the light and thereby increase the uniformity of the light emitted by the LED light sources 702. Thus, the width w of the gap 706 is reduced compared to a backlight that does not include the spacers 710 that help distribute light (as shown in the example of FIG. 6). Additionally, as the size of the gap 706 is reduced, the effective area 712 of the display screen 708 is increased. In particular, the distribution of light at the edge 714 of the display screen 708 is improved compared to placing the light sources at a location that is farther from the display screen (as in the known backlight of FIG. 6). The example LED light sources 702 of FIG. 7 increase the optical density at the edge 714. In examples where the bezel of the PC device is narrow (e.g., in an edge-to-edge screen), the enlarged effective area of the display screen 708 improves the viewing of the display screen.
FIG. 8 shows an example backlight 800 that includes a light guide panel 804 and a plurality of 2-chip (or n-chip) LED light sources 802. The example backlight 800 can include more or fewer LED light sources 802 than shown in FIG. 8. The backlight 800 of FIG. 8 includes a gap 806 disposed between the light guide panel 804 and a display screen (e.g., an LCD panel) 808. In the example of FIG. 8, spacers 810 including a reflective material (e.g., white lacquer) are placed between the LED light sources 802. As discussed above in connection with FIG. 7, the spacers 810 help to spread light and increase the uniformity of the light emitted by the LED light sources 802. As a result, the width w of the gap 806 is reduced compared to a backlight that does not include the spacers 810. Additionally, because the use of 2-chip LED light sources increases the optical density at the edge 814, the effective area 812 of the display screen 808 extends to near the edge 814 of the display screen 808.
The example backlights of FIGS. 7 and/or 8 are not limited to the light sources 702, 802 and/or the spacers 710, 810 of FIGS. 7 and 8. Other light sources, such as custom small LED strings, can be used to improve light distribution, reduce the gap width, and thereby reduce the display device profile.
FIG. 9 is a flow diagram of an example method 900 for fabricating a display device such as the example display device 201 of FIGS. 2 through 4. At block 902, the IC(s) 214 (e.g., TCON embedded driver (TED) IC(s)) are coupled to the glass substrate 215 of the display screen 202 of the example display device 201. In the example method 900 of FIG. 9, the TED IC(s) 214 are bonded to the glass substrate 215 using a chip-on-glass bonding technique.
At block 904, passive PCB components 410, such as resistors and capacitors, are coupled to the flexible printed circuit (FPC) 218.
The FPC 218 and the passive PCB components 410 facilitate communication between the TED IC(s) 214 and the motherboard of the PC device that includes the example display device 201.
At block 906, the cavity 408 is formed in the light guide frame 400 of the backlight 302 of the example display device 201 of FIGS. 2 through 4. In some examples, the cavity 408 of the light guide frame 400 is formed prior to coupling, for example, the light source 402 to the light guide frame 400. The cavity 408 may be formed in the light guide frame 400 by, for example, a die casting process during the manufacture of the light guide frame.
At block 908, the passive PCB components 410 of the FPC 218 are placed in the cavity 408, thereby reducing the profile (e.g., side profile) and/or physical dimensions of the example display device 201.
At block 910, the bezel 204 is placed around the display screen 202 to frame the display screen 202 (e.g., by coupling the bezel to the lid or cover of the PC device 200). As discussed above, the width of the lower sidewall 212 of the bezel 204 is reduced compared to the bezels of known display devices, thereby increasing the viewable area of the display screen 202 and the screen ratio of the PC device that includes the display device 201.
FIG. 10 is a flow diagram of an example method 1000 for fabricating a display device, such as the example display device 501 of FIG. 5. At block 1002, the IC(s) 515 (e.g., TCON embedded driver (TED) IC(s)) are coupled to the flexible printed circuit (FPC) 516 of the example display device 501. In the example method 1000 of FIG. 10, the TED IC(s) 515 are bonded to the FPC 516 using a chip-on-flex bonding technique.
At block 1004, at least a portion of the FPC 516 is placed in the cavity 518 of the hinge link region 513 of the cover 511 of the PC device 500. At block 1006, the cable 520 is routed through the hinge link region 513 to communicatively couple the FPC 516 to the motherboard of the PC device 500.
At block 1008, the bezel 504 can be placed around the display screen 502 to frame the display screen 502 (e.g., by coupling the bezel 504 to the cover 511 of the PC device 500). As discussed above, the width of the lower sidewall 512 of the bezel 504 is reduced compared to the bezels of known display devices, thereby increasing the viewable area of the display screen 502.
Although the example methods 900 and 1000 are described with reference to the flowcharts illustrated in FIGS. 9 and 10, many other methods of fabricating the example display device 201 of FIGS. 2 through 4 and/or the example display device 501 of FIG. 5 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, removed, or combined. Similarly, additional operations may be included in the manufacturing process before, during, or after the blocks illustrated in FIGS. 9 and/or 10.
It will be apparent from the foregoing that example devices, articles, and methods are disclosed herein that reduce the width of a bezel or frame around a display screen of a PC device, thereby increasing the screen ratio of the PC device. In the examples disclosed herein, the TED IC(s) are mounted on a substrate, such as a flexible printed circuit or a glass substrate of a display screen (e.g., an LCD panel), using bonding techniques such as COG and COF bonding. The examples disclosed herein reduce the number of PCB components of a display device and reduce the effect of the PCB components on the width of the border of the frame covering the display screen (e.g., the lower border of the bezel).
Because the border sizes that can be used with the example display devices disclosed herein are reduced, an edge-to-edge display is substantially achieved with respect to each border of the bezel.
The following paragraphs provide various examples disclosed herein.
Example 1 includes a display device including a display screen and a backlight including a light guide frame defining a cavity therein. The example display device includes an integrated circuit mounted to the display screen. The example display device includes a flexible printed circuit in communication with the integrated circuit. At least a portion of the flexible printed circuit is placed in the cavity of the light guide frame.
Example 2 includes the display device of Example 1, further comprising a bezel placed around the display screen.
Example 3 includes a display device including a display screen and a cover. The display screen is coupled to the cover. The cover includes a cavity. The display device includes a flexible printed circuit. The display device includes an integrated circuit mounted to the flexible printed circuit. At least a portion of the flexible printed circuit is placed in the cavity.
Example 4 includes the display device of Example 3, further comprising a bezel covering a remaining portion of the flexible printed circuit that is external to the cavity.
Although certain example methods, apparatus, and articles of manufacture are disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims. |
The invention discloses a multimodal interface. Particular embodiments described herein provide for an electronic device that can receive data from an operating system in the electronic device, where the data is related to hardware that is in communication with the electronic device through a multimodal interface, and that can communicate the data and/or related data to a local policy manager, where the local policy manager is in communication with the multimodal interface. The multimodal interface can be configured to support power transfers, directionality, and multiple input/output (I/O) protocols on the same interface. |
1. A method for communicating through a multi-modal interface, comprising:
receiving, at a shared memory of an electronic device, data from an operating system in the electronic device, wherein the received data is associated with hardware, and the hardware communicates with the electronic device through the multi-modal interface, wherein the hardware is abstracted from the operating system by firmware;
transmitting a message that the data is in the shared memory to the firmware;
reading the data from the shared memory by the firmware;
transmitting the data and/or instructions related to the data by the firmware to a platform policy manager, wherein the received data passes through the firmware and is written by the firmware to an operating area before the data is received by the platform policy manager, wherein the platform policy manager communicates with a plurality of local policy managers, wherein each of the plurality of local policy managers manages a separate multi-modal interface and communicates with a different type of hardware; and
transmitting, by the platform policy manager, the data and/or one or more commands related to the data to a specific local policy manager, wherein the specific local policy manager communicates with the hardware through the multi-modal interface, wherein the specific local policy manager performs command-level communication with the hardware, and wherein the shared memory has a data structure based on a specification associated with the type of the multi-modal interface.
2. The method of claim 1, wherein the multi-modal interface is configured to support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface.
3. The method of claim 1, wherein the message that the data is in the shared memory is transmitted to the firmware after the received data is stored in the shared memory.
4. The method of any one of claims 1 to 2, further comprising:
receiving response data from the local policy manager, wherein the response data is related to the hardware; and
transferring the response data and/or related response data to the shared memory.
5. The method of any one of claims 1 to 2, wherein the communication between the operating system and the hardware is two-way communication and the shared memory is partitioned into two areas.
6. An apparatus for communicating with a multi-modal interface, the apparatus comprising:
one or more processors;
an operating system;
a shared memory, wherein the operating system stores hardware-related data in the shared memory and transmits a message that the hardware-related data is in the shared memory to firmware, wherein the hardware communicates with the operating system through a multi-modal interface;
the firmware, wherein the firmware abstracts the hardware from the operating system and reads the data from the shared memory; and
a platform policy manager, wherein the platform policy manager is configured to:
receive the data and/or instructions regarding the data from the firmware, wherein the received data passes through the firmware and is written by the firmware to an operating area before the data is received by the platform policy manager, wherein the platform policy manager communicates with a plurality of local policy managers, wherein each local policy manager of the plurality of local policy managers manages a separate multi-modal interface and communicates with a different type of hardware; and
transmit the data and/or one or more commands related to the data
to a specific local policy manager, wherein the specific local policy manager communicates with the hardware through the multi-modal interface, wherein the specific local policy manager performs command-level communication with the hardware, and wherein the shared memory has a data structure based on a specification associated with the type of the multi-modal interface.
7. The apparatus of claim 6, wherein the multi-modal interface is configured to support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface.
8. The apparatus of any one of claims 6 to 7, wherein the platform policy manager is further configured to:
receive response data from the local policy manager, wherein the response data is related to the hardware; and
transfer the response data and/or related response data to the shared memory.
9. A system for communicating through a multi-modal interface, comprising:
means for receiving, at a shared memory of an electronic device, data from an operating system in the electronic device, wherein the received data is related to hardware, and the hardware communicates with the electronic device through the multi-modal interface, wherein the hardware is abstracted from the operating system by firmware;
means for transmitting a message that the data is in the shared memory to the firmware;
means for reading the data from the shared memory by the firmware;
means for transmitting the data and/or instructions related to the data by the firmware to a platform policy manager, wherein the received data passes through the firmware and is written by the firmware to an operating area before the data is received by the platform policy manager, wherein the platform policy manager communicates with a plurality of local policy managers, wherein each local policy manager of the plurality of local policy managers manages a separate multi-modal interface and communicates with a different type of hardware; and
means for transmitting, by the platform policy manager, the data and/or one or more commands related to the data to a specific local policy manager, wherein the specific local policy manager communicates with the hardware through the multi-modal interface, wherein the specific local policy manager performs command-level communication with the hardware, and wherein the shared memory has a data structure based on a specification associated with the type of the multi-modal interface.
10. The system of claim 9, wherein the multi-modal interface is configured to support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface.
11. The system of any one of claims 9 to 10, further comprising:
means for receiving response data from the local policy manager, wherein the response data is related to the hardware; and
means for transferring the response data and/or related response data to the shared memory.
12. A multi-modal interface system for communicating with different types of hardware, the system comprising:
an operating system;
a shared memory, wherein the operating system stores hardware-related data in the shared memory and transmits a message that the hardware-related data is in the shared memory to firmware, wherein the hardware communicates with the operating system through a multi-modal interface;
the firmware, wherein the firmware abstracts the hardware from the operating system and reads the data from the shared memory; and
a platform policy manager, wherein the
platform policy manager is configured to:
receive the data and/or instructions regarding the data from the firmware, wherein the received data passes through the firmware and is written by the firmware to an operating area before the data is received by the platform policy manager, wherein the platform policy manager communicates with a plurality of local policy managers, wherein each local policy manager of the plurality of local policy managers manages a separate multi-modal interface and communicates with a different type of hardware; and
transmit the data and/or one or more commands related to the data to a specific local policy manager, wherein the specific local policy manager communicates with the hardware through the multi-modal interface, wherein the specific local policy manager performs command-level communication with the hardware, and wherein the shared memory has a data structure based on a specification associated with the type of the multi-modal interface.
13. The multi-modal interface system of claim 12, wherein the multi-modal interface is configured to support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface.
14. The multi-modal interface system of any one of claims 12 to 13, wherein the platform policy manager is further configured to:
receive response data from the local policy manager, wherein the response data is related to the hardware; and
transfer the response data and/or related response data to the shared memory.
15. A computer-readable storage medium comprising one or more instructions that, when executed by at least one processor, cause the at least one processor to perform the method according to any one of claims 1-5. |
Multimodal interface
Cross-reference to related applications
This application claims the benefit of and priority to Indian non-provisional patent application number 2004/CHE/2015, entitled "MULTIMODAL INTERFACE" and filed on April 18, 2015, which is hereby incorporated by reference in its entirety.
Technical field
The present disclosure relates generally to the field of electronic devices, and more specifically to multi-modal interfaces.
Technical background
End users have more choices of electronic equipment than ever before. Many prominent technological development trends are currently underway (for example, more computing devices, more detachable displays, more peripheral devices, etc.), and these trends are changing the landscape of electronic devices. One of these trends is the connection of a large number of devices to electronic devices. In many cases, each of these devices has a single-purpose unique connector to connect the device to the electronic device, and one connector may not operate or function the same as another connector. For example, a universal serial bus (USB) connector is different from a high-definition multimedia interface (HDMI) display connector. Therefore, there is a challenge in providing an electronic device that allows a unified connector that can support multiple devices.
Description of the drawings
In order to provide a more comprehensive understanding of the present disclosure and its features and advantages, reference is made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals indicate like components:
FIG. 1 is a simplified block diagram of an electronic device showing an embodiment of a communication system according to an embodiment of the present disclosure;
FIG. 2 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 3 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIGS. 4A and 4B are simplified sequence flowcharts showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 5 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 6 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 7 is a simplified sequence flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 8 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 9 is a simplified flowchart showing potential operations that may be associated with the communication system according to an embodiment;
FIG. 10 is a simplified data structure table showing potential details that may be associated with the communication system according to an embodiment;
FIG. 11 is a block diagram showing an example computing system arranged in a point-to-point configuration according to an embodiment;
FIG. 12 is a simplified block diagram associated with an example ARM ecosystem system-on-chip (SOC) of the present disclosure; and
FIG. 13 is a block diagram showing an example processor core according to an embodiment.
The figures of the drawings are not necessarily drawn to scale, as their dimensions can vary significantly without departing from the scope of the present disclosure.
Detailed description of exemplary embodiments
Example embodiments
FIG. 1 is a simplified block diagram of an embodiment of a communication system 100 with a multi-modal interface according to an embodiment of the present disclosure. The communication system 100 may include an electronic device 102 and one or more auxiliary devices 104a-104d. Each auxiliary device 104a-104c may include an interface 128a-128c, respectively. In an example, one or more of the auxiliary devices may include a wireless module for enabling wireless communication. For example, the auxiliary device 104d is shown as including a wireless module 130a.
The electronic device 102 may include an operating system (OS) 108, a processor 110, a memory 112, an OS policy manager (OSPM) 114, an operating area 116, firmware 118, a platform policy manager (PPM) 124, one or more local policy managers (LPMs) 126a-126d, and one or more multi-modal interfaces 106a-106d. The memory 112 may include a mailbox 122. The mailbox 122 may be a buffer in the memory 112. The PPM 124 may include a system host controller 120. In an example, one or more of the multi-modal interfaces may include a wireless module for enabling wireless communication. For example, the multi-modal interface 106d is shown as including a wireless module 130b. In another example, the wireless module 130b is included in the electronic device 102, and its resources can be shared by multiple multi-modal interfaces. The auxiliary device 104d may use the wireless modules 130a and 130b to wirelessly communicate with the electronic device 102.
In an embodiment, each multi-modal interface 106a-106d may have a corresponding LPM 126a-126d. For example, the multi-modal interface 106a may correspond to the LPM 126a, the multi-modal interface 106b may correspond to the LPM 126b, the multi-modal interface 106c may correspond to the LPM 126c, and the multi-modal interface 106d may correspond to the LPM 126d. Each multi-modal interface 106a-106d can support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface. The multiple protocols can be automatically configured to run simultaneously or independently on a multi-modal interface or connector without user intervention. For example, each auxiliary device 104a-104d may be a different electronic device, and a single multi-modal interface (e.g., one of the multi-modal interfaces 106a-106d) may be able to support each different auxiliary device 104a-104d.
The elements of FIG. 1 may be coupled to one another through one or more interfaces using any suitable connection (wired or wireless) that provides a viable path for communication. In addition, any one or more of these elements of FIG. 1 may be combined with or removed from the architecture based on particular configuration needs. The communication system 100 may include a configuration capable of Transmission Control Protocol/Internet Protocol (TCP/IP) communication for transmitting or receiving packets in a network. Where appropriate and based on particular needs, the communication system 100 may also operate in conjunction with User Datagram Protocol/IP (UDP/IP) or any other suitable protocol.
For the purpose of illustrating certain example techniques of the communication system 100, it is important to understand the communications that may traverse the communication environment.
The following basic information may be viewed as a basis from which the present disclosure may be properly explained.
Current systems typically have single-purpose connectors that support only a specific function, such as display connectors, audio connectors, power connectors, and so on. Because users have more choices of electronic devices than before and need to connect those devices, what is needed is a unified connector that supports multiple functions on a single connector. It would be beneficial if the unified connector could be configured automatically and the functions of the multi-modal connector could be used on a platform independently of the OS or I/O protocol.
To realize a solution in which the OS can communicate with connected hardware, communications to each piece of connected hardware must be transmitted to the underlying hardware through a specific protocol. However, this is highly undesirable because the software driver needs to be specific to the OS and also requires customization of the software driver for platform ports and device implementations. To implement a common solution for all OSs, a common firmware-based interface that allows the OS to interact with different types of hardware needs to be defined. The interface should not require OS-specific development, which would increase the burden on original equipment manufacturers (OEMs), thereby negatively affecting the time to market of new technology.
As outlined in FIG. 1, a communication system that includes a multi-modal interface can resolve these issues (and others). In the communication system 100 of FIG. 1, each multi-modal interface 106a-106d can be a multi-modal interface or connector that can support power transmission, directivity, and multiple input/output (I/O) protocols on the same interface or connector. The multiple protocols can be automatically configured to run simultaneously or independently on the multi-modal interface without user intervention. The communication system 100 may include a set of data structures, commands, notifications, and state transitions (software and hardware) to efficiently configure and operate a platform with one or more multi-modal interfaces or connectors. The communication system 100 can also convey the capabilities of the platform to an OS that implements a multi-modal interface. The I/O protocols can be configured to operate on the same or different pins of the multi-modal interface.
This methodology can be defined in a manner that is agnostic of the OS (for example, Android™, iOS™, Linux™, OSX™, Windows™, etc.) and of the I/O interconnect (for example, I2C™, I2S™, PCIe™, etc.), and defines the data structures used to interface with the multi-modal interfaces in the electronic device 102. As a result, hardware component designers, system builders, and device driver (software) developers can develop platforms and devices with multi-modal interfaces that seamlessly interoperate with each other. For example, the PPM 124 may include a combination of hardware, firmware, and OS support software provided by any vendor that provides an interface to all multi-modal interfaces on the platform or electronic device. No specific interface (for example, PCIe™, I2C™, etc.) is required to interface with the PPM 124.
In addition, the interface between the OSPM 114 and the PPM 124 is defined so that it can be easily implemented using any of the above-mentioned interconnects or any other interconnects not mentioned above.
Using the PPM 124 and the OSPM 114, the communication system 100 can define a general firmware-based interface that abstracts the underlying hardware (for example, Type-C hardware), thereby allowing the OS to access the hardware without knowing or understanding its specifications or complexity. The OSPM 114 is an interaction layer from the OS 108 that abstracts the underlying platform-specific hardware and is the platform agent that performs the actual physical transactions with the hardware. Communication from the OS 108 with hardware (e.g., the auxiliary devices 104a-104d) starts at the OSPM 114 and passes through an LPM (e.g., the LPM 126a) before reaching the hardware (e.g., the auxiliary device 104a).
The firmware 118 may be a firmware-based device policy manager that can be implemented by OEMs regardless of the OS or platform they need to support or the number of specific ports and devices on the electronic device 102. This allows OEMs to reduce development effort and allows faster time to market for new products, devices, and features. The firmware 118 and the PPM 124 can abstract the underlying hardware from the OS 108 by interacting with the LPMs 126a-126d. The LPMs 126a-126d conduct the actual command-level communication with the auxiliary devices (e.g., the auxiliary devices 104a-104d) connected to the multi-modal interfaces (e.g., the multi-modal interfaces 106a-106d).
Each multi-modal interface 106a-106d may be an interface or connector with some minimal detection logic to interact with the respective LPM 126a-126d. The electronic device 102 can understand, control, or communicate with the auxiliary devices 104a-104d through the connected LPMs 126a-126d. Each LPM 126a-126d may be physically part of the SOC, or may be located separately on the platform, hidden behind other microcontrollers (e.g., an embedded controller or an integrated sensor hub). Because the PPM 124 provides another abstraction layer for communication between the OS 108 and each auxiliary device 104a-104d, workarounds and defect fixes can be implemented at the platform level alone, without upper-layer changes (such as OS-layer changes).
The OSPM 114 is mainly responsible for transmitting any OS 108-based requests or communications for the auxiliary devices 104a-104d to the firmware 118, and vice versa. The firmware 118 can abstract the underlying hardware from the OS 108 and the OSPM 114 by interacting with the PPM 124. The PPM 124 is a platform-specific component that maintains all (or almost all) platform- and system-level information on the different auxiliary devices 104a-104d and communicates directly with each LPM 126a-126d that manages a separate multi-modal interface 106a-106d. The communication mechanism between the OSPM 114 and the firmware 118 can be used with almost any specific PPM 124. The firmware 118 can be implemented in a variety of ways (including ACPI, SMM, etc.) so that the firmware 118 can enable runtime communication with the OSPM 114.
The communication between the OSPM 114 and the PPM 124 can be through the mailbox 122. The mailbox 122 may be a shared buffer in the memory 112 or some other location in the memory 112 of the electronic device 102. The mailbox 122 may be a shared-memory general data structure defined based on the format of the interface specification registers for the auxiliary devices 104a-104d, which describes the registers and modes of access to the auxiliary devices 104a-104d.
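To make the mailbox organization concrete, the following is a minimal C sketch of one way such a shared data structure could be laid out. It is a speculative illustration only: the struct names, field widths, and the MBOX_DATA_BYTES constant are invented here and are not taken from FIG. 10 or from any interface specification.

```c
#include <stdint.h>

/* Hypothetical layout of a shared-memory mailbox such as the mailbox 122.
 * All names and sizes are illustrative; a real implementation would follow
 * the register format of the governing interface specification.           */
#define MBOX_DATA_BYTES 16u

struct mbox_region {
    volatile uint32_t doorbell; /* writer sets to 1; reader clears to 0 */
    volatile uint32_t opcode;   /* command or response identifier       */
    volatile uint32_t length;   /* number of valid bytes in data[]      */
    volatile uint8_t  data[MBOX_DATA_BYTES];
};

/* Separate command and response regions keep OSPM-initiated requests and
 * firmware/PPM-initiated notifications from racing on the same buffer.   */
struct mailbox {
    struct mbox_region command;  /* written by the OSPM, read by the firmware/PPM */
    struct mbox_region response; /* written by the firmware/PPM, read by the OSPM */
};
```

In this arrangement, the two-region split mirrors the command/response partitioning described below for avoiding race conditions between consecutive commands.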
In a specific example, the mailbox 122 may be a shared memory defined based on the format of the USB Type-C software interface specification registers, which describes the registers and modes of access to the Type-C hardware.
To avoid possible race conditions between two consecutive commands from the OS 108 and an auxiliary device (for example, one of the auxiliary devices 104a-104d), when the communication between the OS 108 and the auxiliary device is two-way communication, the mailbox 122 may be divided into two regions, where each region can be accessed by only the firmware 118 or only the OSPM 114. For example, the mailbox 122 may be divided into command and response regions. In an example, the OSPM 114 may write a request or command from the OS 108 to the auxiliary device in the OSPM 114 portion of the mailbox 122. The PPM 124 can read the mailbox 122 to receive the request or command from the OSPM 114. In the event that there is a communication alert from the auxiliary device to the OSPM 114, the PPM 124 may update the mailbox 122 and alert the OSPM 114, which then reads the mailbox 122 to determine the cause of the alert.
The system host controller 120 can be an embedded controller (EC) used in a client system, a baseboard management (BM) controller used in a server system, an I2C™ controller used in a tablet computer system, an integrated sensor hub (ISH) that may be a possible long-term solution, or some other similar system that helps reach and communicate with the PPM 124. The firmware 118 can establish the operating area 116 as a communication channel that is independent of the type of the PPM 124. For example, when a request is detected from the OSPM 114, the firmware 118 may update the operating area 116 with the request and alert the PPM 124. In an example, the operating area 116 can help facilitate access to shared areas such as the mailbox 122. More specifically, the operating area 116 may define the location of the shared area in the electronic device 102, so that the firmware 118 can use the access information provided by the operating area 116 to update or retrieve data in the shared area. In an illustrative example, the PPM 124 may retrieve a request (e.g., a request or communication from the OS 108), process the request, and transmit the request to an auxiliary device (e.g., the auxiliary device 104a) using an LPM (e.g., the LPM 126a) and a multi-modal interface (e.g., the multi-modal interface 106a) connected to the auxiliary device. Similarly, any request or communication from the auxiliary device may pass through the multi-modal interface (e.g., the multi-modal interface 106a) and the LPM (e.g., the LPM 126a) and be received by the PPM 124. The PPM 124 can update the mailbox 122 with the request or communication, and the OSPM 114 can be notified of the update. The OSPM 114 can retrieve the request or communication using the firmware 118, which in turn uses the operating area 116 and the mailbox 122 to communicate with the PPM 124. In an example, any request or communication from the auxiliary device (or interface or LPM) is first placed by the PPM 124 in the operating area 116 before the firmware 118 writes it to the mailbox 122 and the OSPM 114 reads it.
In a specific example, when the OSPM 114 is in an idle state, the OSPM 114 may wait for a notification or request from the OS 108, the PPM 124, or some other element. From the idle state, the OSPM 114 can send a command and enter a state of waiting for the completion of the command.
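One possible C encoding of the OSPM states just described (and of the busy/cancel handling elaborated in the following paragraphs) is sketched below. The enumerator and function names paraphrase the description and are assumptions, not identifiers from any specification.

```c
/* Hypothetical encoding of the OSPM command states; names are paraphrases. */
enum ospm_state {
    OSPM_IDLE,                     /* waiting for a notification or request   */
    OSPM_WAIT_CMD_COMPLETION,      /* command sent; awaiting the PPM response */
    OSPM_CANCEL_CURRENT_CMD,       /* busy response; canceling the command    */
    OSPM_PROCESS_CONNECTOR_CHANGE, /* connector-change pending indication set */
    OSPM_WAIT_ACK_CMD_INDICATOR    /* acknowledgment sent; awaiting indicator */
};

/* One step of the machine on a "busy" response: either keep waiting for
 * completion or move through the cancel state, per the described behavior. */
static enum ospm_state ospm_on_busy(enum ospm_state s, int cancel_needed)
{
    if (s != OSPM_WAIT_CMD_COMPLETION)
        return s; /* busy indicators only matter while a command is pending */
    return cancel_needed ? OSPM_CANCEL_CURRENT_CMD : OSPM_WAIT_CMD_COMPLETION;
}
```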
For example, from the idle state, the OSPM 114 may send a connector status update command to the PPM 124 and wait for a response from the PPM 124. If the command is a reset-PPM command, the command may generate a notification that the OSPM 114 is disabled, and the PPM 124 may send a reset indicator to indicate that the reset instruction was received.
While in the wait-for-command-completion state, if the PPM 124 responds with a busy indicator, the OSPM 114 may delay the completion of the command and return to the wait-for-command-completion state. If the PPM 124 responds with a busy indicator and the OSPM 114 determines that the command needs to be canceled, the OSPM 114 may enter an intermediate state of canceling the current command, send a cancel-command message, and return to the wait-for-command-completion state.
Also, from the idle state, the OSPM 114 can wait for a notification, in which case the OSPM 114 remains in the idle state. If the connector-change-pending indication is set, the OSPM 114 can enter an intermediate state of processing the connector change. From this state, the OSPM 114 can send related commands and enter the wait-for-command-completion state, or acknowledge the connector change, send an acknowledge command, and enter a state of waiting for the acknowledge-command indicator. After the PPM 124 sends the acknowledge-command indicator to indicate that the connector change is complete, the OSPM 114 may move from the wait-for-acknowledge-command-indicator state to the idle state.
Using the communication system 100, OS vendors can correctly interact with systems that support multi-modal interfaces or connectors (such as the multi-modal interfaces 106a-106d). This can allow OS vendors and operational hardware vendors (OHVs) to develop software and hardware that can interoperate seamlessly. This can also enable original equipment manufacturers (OEMs) or OHVs to deliver products to users and to produce OS-agnostic standardized hardware that can work seamlessly with various OSs and attached third-party peripherals, and even improve the ability of products to communicate with each other.
Turning to the infrastructure of FIG. 1, a communication system 100 according to an example embodiment is shown. The terms "command" and "communication" as used herein refer to the transfer of data. As used herein, the term "data" refers to any type of binary, numeric, voice, video, text, or script data, or any type of source or object code, or any suitable information in any appropriate format that can be communicated from one point to another in electronic devices and/or networks. In an example implementation, the electronic device 102 and the auxiliary devices 104a-104d are electronic elements, which are intended to encompass any suitable hardware, software, components, modules, or objects that facilitate their operations, as well as suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information. This may include appropriate algorithms and communication protocols that allow for the effective exchange of data or information.
As regards the internal structure associated with the communication system 100, each of the electronic device 102 and the auxiliary devices 104a-104d may include memory elements for storing information to be used in the operations outlined herein.
Each of the electronic device 102 and the auxiliary devices 104a-104d can keep information in any suitable memory element (for example, random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), application specific integrated circuit (ASIC), etc.), software, hardware, or firmware, or, where appropriate and based on particular needs, in any other suitable component, device, element, or object. Any of the memory items discussed herein should be construed as being encompassed within the broad term 'memory element'. Moreover, the information being used, tracked, sent, or received in the communication system 100 can be provided in any database, register, queue, table, cache, control list, or other storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term 'memory element' as used herein.
In some example implementations, the functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an ASIC, digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor or other similar machine, etc.), which may include non-transitory computer-readable media. In some of these instances, memory elements may store data used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein.
In an example implementation, the electronic device 102 and the auxiliary devices 104a-104d may include software modules to implement or facilitate operations as outlined herein. These modules can be suitably combined in any appropriate manner, which may be based on particular configuration and/or provisioning needs. In example embodiments, such operations may be carried out by hardware, implemented externally to these elements, or included in some other network device to achieve the intended functionality. Furthermore, the modules can be implemented as software, hardware, firmware, or any suitable combination thereof. These elements may also include software (or reciprocating software) that can coordinate with other network elements in order to achieve the operations outlined herein.
Additionally, each of the electronic device 102 and the auxiliary devices 104a-104d may include a processor that can execute software or an algorithm to perform the activities discussed herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, a processor can transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor), and the elements identified herein could be some type of a programmable processor, programmable digital logic (for example, a field programmable gate array (FPGA), an EPROM, or an EEPROM), or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.
Any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term 'processor'. The electronic device 102 may be an electronic component such as, for example, a desktop computer, a laptop computer, a mobile device, a personal digital assistant, a smartphone, a tablet, or other similar device. The auxiliary devices 104a-104d may be auxiliary hardware, such as peripheral devices that communicate with the electronic device 102. The term "peripheral device" as used herein is generally defined as any auxiliary device that connects to and works with an electronic device (such as a computer) in some way, for example a universal serial bus (USB) flash drive, computer mouse, keyboard, speakers, microphone, etc. One or more auxiliary devices 104a-104d may be the same type of device, or each auxiliary device 104a-104d may be a different device.

Turning to FIG. 2, FIG. 2 is an example flowchart illustrating possible operations of a flow 200 that may be associated with a multi-modal interface, in accordance with an embodiment. In an embodiment, one or more operations of flow 200 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 202, data is transferred to a shared memory. For example, the OS policy manager 114 may transfer data to the mailbox 122. In another example, the data can be transferred to a register interface implemented in hardware, where a data write operation can be used to trigger other operations. At 204, a message that the data is in the shared memory is communicated to an interface. For example, the OS policy manager 114 may communicate a message that the data is in the mailbox 122 to the firmware 118. At 206, the interface reads the data and communicates one or more instructions related to the data to a platform policy manager. For example, the firmware 118 can read the data in the mailbox 122 and can communicate one or more instructions related to the data to the PPM 124. At 208, the platform policy manager receives and interprets the instructions and, based on the instructions, communicates one or more commands to a local policy manager. For example, the PPM 124 may interpret the instructions and communicate one or more instructions or commands to the LPM 126a. At 210, the local policy manager communicates the data to the auxiliary device. For example, the LPM 126a may use the interface 128a and the multi-modal interface 106a to communicate the data to the auxiliary device 104a (this chain of handoffs is sketched in the code following this passage).

Turning to FIG. 3, FIG. 3 is an example flowchart illustrating possible operations of a flow 300 that may be associated with a multi-modal interface, in accordance with an embodiment. In an embodiment, one or more operations of flow 300 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 302, data related to an auxiliary device is communicated to the platform policy manager. For example, OSPM 114 may communicate data to PPM 124. The data can be communicated using the mailbox 122 and the firmware 118, or some other means of communicating the data to the PPM 124 may be used. In addition, the data may be related to the auxiliary device 104a and include instructions, queries, commands, etc. for the auxiliary device 104a. At 304, the platform policy manager communicates the data and/or related data to the local policy manager. For example, data can be transferred from PPM 124 to LPM 126a.
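Flows 200 and 300 describe the same downstream handoff chain: the OS policy manager writes into shared memory, the firmware is notified and forwards the content to the platform policy manager, which commands a local policy manager. A minimal C sketch of that chain follows; the mailbox size, the function names, and the example command string are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

static char mailbox[256];                     /* shared memory / mailbox    */

static void lpm_transmit(const char *data)    /* LPM -> auxiliary device    */
{ printf("LPM -> device: %s\n", data); }

static void ppm_dispatch(const char *data)    /* PPM interprets, picks LPM  */
{ lpm_transmit(data); }

static void firmware_notify(void)             /* doorbell rung by the OSPM  */
{
    char cmd[256];
    strcpy(cmd, mailbox);                     /* firmware reads the mailbox */
    ppm_dispatch(cmd);                        /* and forwards to the PPM    */
}

int main(void)
{
    strcpy(mailbox, "GET_CONNECTOR_STATUS");  /* 202: OSPM writes the data  */
    firmware_notify();                        /* 204: message to interface  */
    return 0;
}
```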
At 306, the local policy manager transmits the data and/or related data to the interface communicating with the auxiliary device. For example, the data may be transmitted to the interface 128a in the auxiliary device 104a through the LPM 126a and the multi-modal interface 106a.

Turning to FIGS. 4A and 4B, FIGS. 4A and 4B are example timing flowcharts showing possible operations that may be associated with a multi-modal interface, according to an embodiment. The sequence flowcharts show the regular synchronization process from OSPM 114 to LPM 126a. The LPM 126a can communicate with the auxiliary device 104a. In an embodiment, the OSPM 114 may write a control command to the mailbox 122. The OSPM 114 can also send a notification of a mailbox write command to the firmware 118. The firmware 118 can read the mailbox 122 and obtain the command details. The firmware 118 may transfer the command details to the PPM 124. In an example, the operation area 116 is used to transfer the command details to the PPM 124. The PPM 124 may transmit the command to the LPM 126a. The LPM 126a can execute the command and send a response back to the PPM 124. For example, the command may be a communication or query to the auxiliary device 104a. The PPM 124 can copy the response to the firmware 118. In an example, the operation area 116 can be used to copy the response to the firmware 118. The firmware 118 may receive the response and copy the response to the mailbox 122. The firmware 118 may also notify the OSPM 114 that the response is in the mailbox 122 and that the command has been completed. The OSPM 114 may write the command completion confirmation to the mailbox 122 and notify the firmware 118 that the command completion confirmation is in the mailbox 122. For the command completion confirmation, the firmware 118 can read the mailbox 122 and obtain the command completion confirmation details. The firmware 118 may transmit the command completion confirmation details to the PPM 124. In an example, the firmware 118 may use the operation area 116 to transfer the command completion confirmation details to the PPM 124. The PPM 124 may transmit the command completion confirmation to the LPM 126a. The LPM 126a may process the command completion confirmation and send a confirmation response back to the PPM 124. The PPM 124 may copy the confirmation response to the firmware 118. In an example, the PPM 124 can use the operation area 116 to copy the confirmation response to the firmware 118. The firmware 118 can obtain the confirmation response and copy the confirmation response to the mailbox 122. The firmware 118 may also notify the OSPM 114 that the confirmation response is in the mailbox 122.

Turning to FIG. 5, FIG. 5 is an example flowchart showing possible operations of a flow 500 that may be associated with a multi-modal interface, according to an embodiment. In an embodiment, one or more operations of flow 500 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 502, the auxiliary electronic device transmits data to the electronic device through an interface. For example, the auxiliary device 104a may use the interface 128a and the multi-modal interface 106a to transmit data to the electronic device. At 504, the local policy manager in communication with the interface receives the data and transmits at least a portion of the data and/or related data to the platform policy manager.
For example, the LPM 126a may receive the data from the multi-modal interface 106a and transmit the data to the PPM 124. At 506, the platform policy manager transmits at least a portion of the data and/or related data to the interface. For example, the PPM 124 may transfer at least a part of the data to the firmware 118. At 508, the interface stores at least a portion of the data and/or related data in a buffer, and transmits a message that the data and/or related data has been stored in the buffer to the operating system policy manager. For example, the firmware 118 may store the data and/or related data in the mailbox 122, and transmit a message that the data and/or related data has been stored in the mailbox 122 to the OSPM 114. At 510, the operating system policy manager accesses the buffer and retrieves the data and/or related data. For example, OSPM 114 may retrieve the data in the mailbox 122 and transfer the data and/or related data to the OS 108 for execution or processing.

Turning to FIG. 6, FIG. 6 is an example flowchart showing possible operations of a flow 600 that may be associated with a multi-modal interface, according to an embodiment. In an embodiment, one or more operations of flow 600 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 602, the interface communicating with the auxiliary electronic device transmits data to the local policy manager. For example, the multi-modal interface 106a can transmit data to the LPM 126a. The data may have been received from the auxiliary device 104a through the interface 128a, and may be the result of a disconnection between the interface 128a and the multi-modal interface 106a or of some other communication or event. At 604, the local policy manager transmits the data and/or related data to the platform policy manager. At 606, the platform policy manager transmits the data and/or related data to the operating system policy manager. For example, the PPM 124 may transmit the data and/or related data directly to the OSPM 114, the firmware 118 and the mailbox 122 may be used to transmit the data and/or related data to the OSPM 114, or some other path or means may be used to transmit the data and/or related data to the OSPM 114.

Turning to FIG. 7, FIG. 7 is an example timing flowchart showing possible operations that may be associated with a multi-modal interface, according to an embodiment. In an embodiment, the LPM 126a sends an alarm notification to the PPM 124. The PPM 124 copies the alarm to the firmware 118. In an example, the PPM 124 may use the operation area 116 to copy the alarm to the firmware 118. The firmware 118 copies the alarm to the mailbox 122, and the firmware 118 notifies the OSPM 114 of the alarm. The OSPM 114 reads the alarm in the mailbox 122 to obtain information, processes the alarm, and sends an acknowledgement to the mailbox 122. The OSPM 114 also sends a notification to the firmware 118 that the OSPM 114 has placed the data in the mailbox 122. The firmware 118 reads the mailbox 122 to obtain the confirmation details and transmits the confirmation details to the PPM 124. In an example, the firmware 118 may use the operation area 116 to transfer the confirmation details to the PPM 124. The PPM 124 transmits the confirmation details to the LPM 126a, and the LPM 126a processes the confirmation details and provides a response to the PPM 124. The PPM 124 copies the confirmation response to the firmware 118.
In an example, the PPM 124 can use the operation area 116 to copy the confirmation response to the firmware 118. The firmware 118 puts the confirmation response into the mailbox 122. The firmware 118 notifies the OSPM 114 that the confirmation response is in the mailbox 122.

Turning to FIG. 8, FIG. 8 is an example flowchart showing possible operations of a flow 800 that may be associated with a multi-modal interface, according to an embodiment. In an embodiment, one or more operations of flow 800 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 802, a command is sent from the device. At 804, the system determines whether the command is a reset command. If the command is a reset command, as in 806, the device receives a reset complete indicator. If the command is not a reset command, as in 808, the system determines whether a busy indicator is received in response to the command. If a busy indicator is not received in response to the command, as in 810, the device receives a command completion notification. At 812, command completion is confirmed. If a busy indicator is received in response to the command, as in 814, the system determines whether the command should be canceled. If the command should be canceled, as in 810, a cancel command message is sent. If the command should not be canceled, as in 816, the device waits for a response to the command. At 810, a command completion notification is received, and as in 812, the command completion is confirmed.

Turning to FIG. 9, FIG. 9 is an example flowchart showing possible operations of a flow 900 that may be associated with a multi-modal interface, according to an embodiment. In an embodiment, one or more operations of flow 900 may be performed by one or more of the OSPM 114, the PPM 124, and the LPMs 126a-126d. At 902, a command is received at the device. At 904, the system determines whether the command is a reset command. If the command is a reset command, then as in 906, the device is reset. At 908, a reset indicator is sent. If the command is not a reset command, as in 910, the system determines whether the device is busy. If the device is not busy, then as in 922, the command is completed, and as in 924, a complete command notification is sent. If the device is busy, as in 912, the system determines whether the command is a cancel current command message. If the command is a cancel current command message, as in 914, the current command is canceled, and as in 916, a cancel command confirmation is sent. If the command is not a cancel current command message, as in 918, a busy indicator is sent. At 920, the system determines whether the device is busy. If the device is busy, as in 920, the system checks again to see whether the device is busy. If the device is not busy, then as in 922, the command is completed, and as in 924, a complete command notification is sent (flow 900 is condensed into the code sketch following this passage).

Turning to FIG. 10, FIG. 10 is an example simplified data structure table 1000 showing possible details that may be associated with a multi-modal interface, according to an embodiment; the table 1000 of FIG. 10 illustrates an example structure of the memory locations used to transfer information between the OSPM 114 and the PPM 124. In an embodiment, the data structure table 1000 may include an offset column 1002, a name column 1004, a memory location column 1006, a direction column 1008, and a size column 1010.
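On the receiving side, flow 900 reduces to a reset/busy/cancel dispatch. The sketch below mirrors those branches in C; the busy flag, the indicator strings, and the function names are hypothetical, and a real implementation would block rather than spin while busy.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { CMD_RESET, CMD_CANCEL, CMD_OTHER } cmd_t;

static bool device_busy = false;   /* hypothetical busy flag */

static void send_indicator(const char *ind) { printf("PPM -> OSPM: %s\n", ind); }

void ppm_receive(cmd_t cmd)
{
    if (cmd == CMD_RESET) {             /* 904/906/908 */
        device_busy = false;            /* reset the device           */
        send_indicator("RESET_COMPLETE");
        return;
    }
    if (device_busy) {                  /* 910 */
        if (cmd == CMD_CANCEL) {        /* 912/914/916 */
            device_busy = false;        /* cancel the current command */
            send_indicator("CANCEL_COMPLETE");
            return;
        }
        send_indicator("BUSY");         /* 918 */
        while (device_busy)             /* 920: re-check until free   */
            ;                           /* (real code would block)    */
    }
    send_indicator("COMMAND_COMPLETE"); /* 922/924 */
}

int main(void) { ppm_receive(CMD_OTHER); return 0; }
```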
The name column 1004 may include the name of the data structure, such as version 1012, reserved 1014, connector change indicator (CCI) 1016, control 1018, message input 1020, and message output 1022. The direction column 1008 can indicate the direction in which each memory location is used. For example, PPM->OPM indicates that the PPM 124 uses the memory location to pass information to the OSPM 114; to the OSPM 114, that location is read-only (RO). Similarly, OPM->PPM indicates that the OSPM 114 uses the memory location to pass information to the PPM 124; to the PPM 124, that location is RO. The name column 1004 may include the names of example data structures that can be used in the communication system 100. For example, version 1012 may include a version data structure. The version data structure may include the binary coded decimal (BCD) version that the PPM 124 follows. Reserved 1014 may be a reserved area of the memory. CCI 1016 may include a CCI data structure. The CCI data structure may include the responses from the PPM 124 to commands sent by the OSPM 114 to the PPM 124, as well as the asynchronous state change notifications that occur on the multi-modal interfaces 106a-106d. Control 1018 may include a control data structure. The control data structure may indicate the command to be executed by the PPM 124. Message input 1020 may include a message input data structure. The message input data structure may include data that the PPM 124 wants to send to the OSPM 114. Message output 1022 may include a message output data structure. The message output data structure may include data to be sent to the PPM 124. The format of the message input and message output data structures can be command-specific (an illustrative layout of these structures is sketched in the code following this passage).

In addition, the OSPM 114 can send various commands to the PPM 124 to configure and operate the multi-modal interfaces 106a-106d. In one illustrative example, the PPM reset command can be used to reset the PPM 124. The PPM reset command can be sent by the OSPM 114 to the PPM 124 at any time. After receiving the PPM reset command, the PPM 124 can perform a hard reset on each of the multi-modal interfaces 106a-106d communicating with the PPM 124. The cancel command can be used to cancel a command previously sent to the PPM 124. The connector reset command can be used to reset the multi-modal interfaces 106a-106d. The confirm command completion and/or change indicator command may be used to convey to the PPM 124 that the OSPM 114 has received and processed a command completion and/or a connector change indication. The set notification enable command can be used to set the list of asynchronous events for which the PPM 124 sends notifications to the OSPM 114. The get capability command can be used to acquire the capabilities of the PPM 124. The get connector capability command can be used to acquire the capabilities of the multi-modal interfaces 106a-106d. The set operating mode command can be used to set the operating mode that the OSPM 114 requires for the multi-modal interfaces 106a-106d. For example, the set operating mode command may be used to set the multi-modal interface 106a to a specific USB operating mode. The get alternative modes command may be used to determine the alternative modes supported by an auxiliary device (e.g., auxiliary device 104a). The get connector alternative mode support command can be used to determine the list of alternative modes supported on the multi-modal interfaces 106a-106d.
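Assuming the usual register-style packing of such shared regions, the structures of table 1000 could be laid out as in the following C sketch. The field widths here are assumptions: the text names the structures and their directions but does not give their sizes.

```c
#include <stdint.h>

/* Illustrative layout of the OSPM <-> PPM shared memory locations of
 * table 1000.  Direction comments follow the table's direction column. */
struct opm_ppm_region {
    uint16_t version;         /* BCD version the PPM follows   (PPM -> OPM) */
    uint16_t reserved;        /* reserved area of the memory                */
    uint32_t cci;             /* connector change indicator    (PPM -> OPM) */
    uint64_t control;         /* command to be executed        (OPM -> PPM) */
    uint8_t  message_in[16];  /* data from the PPM to the OSPM (PPM -> OPM) */
    uint8_t  message_out[16]; /* data to be sent to the PPM    (OPM -> PPM) */
};
```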
The get current connector alternative mode command can be used to determine the current alternative mode in which a multi-modal interface (e.g., multi-modal interface 106a) is operating. The set new connector alternative mode command can be used to set the new alternative mode in which the OSPM 114 desires a multi-modal interface (e.g., multi-modal interface 106a) to operate. The get power delivery object (PDO) command can be used to obtain a sink or source PDO associated with a multi-modal interface (e.g., multi-modal interface 106a). The set power role command can be used to set the power transmission direction (source or sink) of a multi-modal interface (e.g., multi-modal interface 106a). The get cable characteristics command can be used to get the characteristics of a cable attached to a multi-modal interface (e.g., multi-modal interface 106a) or to some other type of communication or data connector. The get connector status command can be used to get the current status (e.g., power role, data role, configured alternative mode, etc.) of a multi-modal interface (e.g., multi-modal interface 106a).

When the PPM 124 is initialized, it is expected that the PPM 124 will function without any OS 108 interaction. When internal initialization is completed, the PPM 124 may be in the PPM idle (notifications disabled) state. The PPM 124 may not notify the OSPM 114 of any activity until the OSPM 114 enables one or more notifications via the set notification enable command. Upon successful completion of the set notification enable command, the PPM 124 may transition to the PPM idle (notifications enabled) state. The only commands the PPM 124 needs to process in the PPM idle (notifications disabled) state are the set notification enable command and the PPM reset command.

In the example operating model, the OSPM 114 can send at most one command to the PPM 124 at a time. Once a command is sent, the OSPM 114 must wait until the PPM 124 completes the current command before sending the next command. If the command completion notification is enabled, the PPM 124 may notify the OSPM 114 when it completes the command. The only exceptions to the one-command rule are the cancel command and the PPM reset command. The OSPM 114 can send a PPM reset command at any time. The OSPM 114 should send a cancel command only when it wants to cancel an outstanding command for which it has previously received a PPM busy response. In addition, the PPM 124 may send only one notification to the OSPM 114 at a time. The PPM 124 may wait until the OSPM 114 acknowledges the notification (due to an asynchronous event) before sending the next notification.

When receiving a command that is not a cancel command or a PPM reset command, the PPM 124 can execute the command or, if the PPM 124 is busy or will take more than a predetermined time (for example, 10 ms) to complete the command, set the busy indicator in the CCI data structure. When executing the command, the PPM 124 can set the CCI data structure and optionally update the status and message input data structures. If the command completion notification is enabled by the OSPM 114, the PPM 124 may notify the OSPM 114 that the command has been completed. Upon receiving a cancel command, the PPM 124 may cancel the current operation it is performing or, if the PPM 124 is not currently processing a command, discard or ignore the cancel request.
If the PPM 124 can successfully complete the cancel command, the PPM 124 can update the CCI data structure, setting the cancel completed indicator to 1. If the command completion notification is enabled by the OSPM 114, the PPM 124 may notify the OSPM 114 that the command has been completed. Upon receiving the PPM reset command, the PPM 124 can disable all notifications, reset itself, set a reset completion indicator in the CCI data structure, and transition to the PPM idle (notifications disabled) state. The OSPM 114 can poll the reset completion indicator in the CCI data structure. When an asynchronous event occurs on one or more connectors, the PPM 124 may update the CCI and status data structures and notify the OSPM 114 if the corresponding notification is enabled by the OSPM 114. Once the OSPM 114 is notified of a command completion and/or asynchronous event, the OSPM 114 can read the CCI and, optionally, the status data structure, and confirm the notification using the confirm command completion and/or change indicator command. If the event is an asynchronous event, the OSPM 114 can send any other commands it needs in order to obtain details about the asynchronous event.

FIG. 11 illustrates a computing system 1100 arranged in a point-to-point (PtP) configuration according to an embodiment. Specifically, FIG. 11 shows a system in which processors, memories, and input/output devices are interconnected through a number of point-to-point interfaces. Generally speaking, one or more of the network elements of the communication system 100 may be configured in the same or similar manner as the computing system 1100. As shown in FIG. 11, the system 1100 may include several processors, of which only two, processors 1170 and 1180, are shown for clarity. Although two processors 1170 and 1180 are shown, it should be understood that embodiments of the system 1100 may also include only one such processor. Each of the processors 1170 and 1180 may include a set of cores (i.e., the processor cores 1174A and 1174B and the processor cores 1184A and 1184B) to execute multiple threads of a program. The cores may be configured to execute instruction code in a manner similar to that discussed above with reference to FIGS. 2-10. Each processor 1170, 1180 may include at least one shared cache 1171, 1181. The caches 1171 and 1181 may store data (for example, instructions) utilized by one or more components of the processors 1170 and 1180 (for example, the processor cores 1174 and 1184). The processors 1170 and 1180 may each include integrated memory controller logic (MC) 1172 and 1182 for communicating with the memory elements 1132 and 1134. The memory elements 1132 and/or 1134 may store various data used by the processors 1170 and 1180. In an alternative embodiment, the memory controller logic 1172 and 1182 may be discrete logic separate from the processors 1170 and 1180. The processors 1170 and 1180 may be any type of processors, and may exchange data via a point-to-point (PtP) interface 1150 using point-to-point (PtP) interface circuits 1178 and 1188, respectively. The processors 1170 and 1180 can each exchange data with control logic 1190 via individual point-to-point interfaces 1152 and 1154 using point-to-point interface circuits 1176, 1186, 1194, and 1198. The control logic 1190 may also use the interface circuit 1192 (which may be a PtP interface circuit) to exchange data with the high-performance graphics circuit 1138 via the high-performance graphics interface 1139.
In an alternative embodiment, any or all of the PtP links shown in FIG. 11 may be implemented as a multi-drop bus instead of PtP links. The control logic 1190 may communicate with the bus 1120 via the interface circuit 1196. The bus 1120 may have one or more devices in communication with it, such as a bus bridge 1118 and an I/O device 1116. Through the bus 1110, the bus bridge 1118 can communicate with other devices, such as a keyboard/mouse 1112 (or other input devices, such as touch screens, trackballs, etc.), communication devices 1126 (such as a modem, a network interface device, or other types of communication devices that may communicate through a computer network 1160), audio I/O devices 1114, and/or data storage devices 1128. The data storage device 1128 may store code 1130 that may be executed by the processor 1170 and/or 1180. In alternative embodiments, any part of the bus architecture could be implemented with one or more PtP links.

The computer system depicted in FIG. 11 is a schematic illustration of an embodiment of a computing system that can be used to implement the various embodiments discussed herein. It should be understood that the various components depicted in FIG. 11 may be combined in a system-on-chip (SoC) architecture or in any other suitable configuration. For example, the various embodiments disclosed herein may be incorporated into systems including mobile devices such as smart cellular phones, tablet computers, personal digital assistants, portable gaming devices, and so on. It can be understood that, in at least some embodiments, these mobile devices may be provided with an SoC architecture.

Turning to FIG. 12, FIG. 12 is a simplified block diagram associated with an exemplary ARM ecosystem SOC 1200 of the present disclosure. At least one example implementation of the present disclosure may include the multi-modal interface features and ARM components discussed herein. For example, the example of FIG. 12 can be associated with any ARM core (e.g., A-9, A-15, etc.). In addition, the architecture can be part of any type of tablet computer, smart phone (including AndroidTM phones and i-PhonesTM), i-PadTM, Google NexusTM, Microsoft SurfaceTM, personal computer, server, video processing component, laptop computer (including any type of notebook computer), UltrabookTM system, any type of touch-enabled input device, etc. In this example of FIG. 12, the ARM ecosystem SOC 1200 may include multiple cores 1206-1207, L2 cache control 1208, a bus interface unit 1209, an L2 cache 1210, a graphics processing unit (GPU) 1215, an interconnect 1202, a video codec 1220, and an LCD I/F 1225, which can be associated with a mobile industry processor interface (MIPI)/high-definition multimedia interface (HDMI) link coupled to a liquid crystal display (LCD). The ARM ecosystem SOC 1200 can also include a subscriber identity module (SIM) I/F 1230, a boot read only memory (ROM) 1235, a synchronous dynamic random access memory (SDRAM) controller 1240, a flash controller 1245, a serial peripheral interface (SPI) master controller 1250, suitable power control 1255, dynamic RAM (DRAM) 1260, and flash memory 1265. In addition, one or more embodiments include one or more communication capabilities, interfaces, and features, such as instances of Bluetooth™ 1270, a 3G modem 1275, a global positioning system (GPS) 1280, and 802.11 Wi-Fi 1285. In operation, the example of FIG.
12 can provide processing capabilities, along with relatively low power consumption, to enable computing of various types (e.g., mobile computing, high-end digital home, servers, wireless infrastructure, etc.). In addition, such an architecture can enable many software applications (e.g., AndroidTM, AdobeTM FlashTM Player, Java Platform Standard Edition (Java SE), JavaFX, Linux, embedded Microsoft Windows, Symbian, Ubuntu, etc.). In at least one embodiment, the core processor may implement an out-of-order superscalar pipeline with a coupled low-latency level-2 cache.

FIG. 13 shows a processor core 1300 according to an embodiment. The processor core 1300 may be a core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 1300 is shown in FIG. 13, a processing element may alternatively include more than one of the processor core 1300 illustrated in FIG. 13. For example, the processor core 1300 represents an embodiment of the processor cores 1174a, 1174b, 1184a, and 1184b shown and described with reference to the processors 1170 and 1180 of FIG. 11. The processor core 1300 may be a single-threaded core, or, for at least one embodiment, the processor core 1300 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 13 also shows a memory 1302 coupled to the processor core 1300 according to an embodiment. The memory 1302 may be any of a wide variety of memories (including various layers of the memory hierarchy), as are known or otherwise available to those skilled in the art. The memory 1302 may include code 1304 to be executed by the processor core 1300, and the code 1304 may be one or more instructions. The processor core 1300 can follow a program sequence of instructions indicated by the code 1304. Each instruction enters a front-end portion 1306 and is processed by one or more decoders 1308. A decoder may generate, as its output, a micro-operation such as a fixed-width micro-operation in a predefined format, or may generate other instructions, micro-instructions, or control signals that reflect the original code instruction. The front-end logic 1306 also includes register renaming logic 1310 and scheduling logic 1312, which generally allocate resources and queue the operations corresponding to the instructions for execution.

The processor core 1300 may also include execution logic 1314 having a set of execution units 1316-1 through 1316-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The execution logic 1314 performs the operations specified by the code instructions. After completion of execution of the operations specified by the code instructions, back-end logic 1318 can retire the instructions of the code 1304. In one embodiment, the processor core 1300 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 1320 may take a variety of known forms (e.g., reorder buffers and the like).
In this way, the processor core 1300 is transformed during execution of the code 1304, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 1310, and any registers (not shown) modified by the execution logic 1314. Although not illustrated in FIG. 13, a processor may include other elements on a chip with the processor core 1300, at least some of which were shown and described herein with reference to FIG. 11. For example, as shown in FIG. 11, a processor may include memory control logic along with the processor core 1300. The processor may include I/O control logic and/or may include I/O control logic integrated with memory control logic.

Note that with the examples provided herein, interaction may be described in terms of two, three, or more network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by referencing only a limited number of network elements. It should be appreciated that the communication system 100 and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the communication system 100, as it is potentially applicable to a myriad of other architectures.

It is also important to note that the operations in the preceding flowcharts (i.e., FIGS. 2, 3, 5, 6, 8, and 9) illustrate only some of the possible correlating scenarios and patterns that may be executed by, or within, the communication system 100. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present invention. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. The communication system 100 provides substantial flexibility in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.

Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Moreover, certain components may be combined, separated, eliminated, or added based on particular needs and implementations. Additionally, although the communication system 100 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture, protocols, and/or processes that achieve the intended functionality of the communication system 100.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by those skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.
In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, the applicant wishes to note that the applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of filing hereof unless the words "means for..." or "step for..." are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Other notes and examples

Example M1 is a method that includes: receiving data from an operating system in an electronic device; and transmitting the data and/or related data to a local policy manager. The local policy manager may communicate with a multi-modal interface, and the data may be related to hardware that communicates with the electronic device through the multi-modal interface. In Example M2, the subject matter described in Example M1 may optionally include: wherein the multi-modal interface is configured to support power transmission, directionality, and multiple input/output (I/O) protocols. In Example M3, the subject matter described in any of Examples M1 and M2 may optionally include: wherein the received data is transferred through firmware before it is received. In Example M4, the subject matter described in any one of Examples M1 to M3 may optionally include: wherein the received data is written to the operating area by the firmware before it is received. In Example M5, the subject matter described in any one of Examples M1 to M4 may optionally include: wherein the received data is stored in a shared memory before it is received. In Example M6, the subject matter described in any one of Examples M1 to M5 may optionally include: receiving response data from the local policy manager; and transmitting the response data and/or related response data to a shared memory, wherein the response data is related to the hardware. In Example M7, the subject matter described in any one of Examples M1 to M6 may optionally include: wherein the data is received at a platform policy manager, and the platform policy manager communicates with multiple local policy platform managers, wherein each of the plurality of local policy platform managers communicates with a different type of hardware.

In Example A1, an apparatus for communicating with a multi-modal interface may include a platform policy manager configured to: receive data from an operating system of an electronic device, wherein the data is related to hardware, and the hardware communicates with the electronic device through a multi-modal interface; and transmit the data and/or related data to a local policy manager, wherein the local policy manager communicates with the multi-modal interface. In Example A2, the subject matter described in Example A1 may optionally include: wherein the multi-modal interface is configured to support power transmission, directionality, and multiple input/output (I/O) protocols. In Example A3, the subject matter described in any one of Examples A1 and A2 may optionally include: wherein the received data is transferred through firmware. In Example A4, the subject matter described in any one of Examples A1 to A3 may optionally include: wherein the received data is written to the operating area by the firmware before it is received. In Example A5, the subject matter as described in any one of
Examples A1 to A4 may optionally include: wherein the received data is stored in a shared memory before it is received. In Example A6, the subject matter described in any one of Examples A1 to A5 may optionally include: wherein the platform policy manager may be further configured to receive response data from the local policy manager and to transfer the response data and/or related response data to the shared memory, wherein the response data is related to the hardware.

Example C1 is at least one machine-readable medium having one or more instructions that, when executed by at least one processor, cause the at least one machine-readable medium to: receive data from an operating system of an electronic device, wherein the data is related to hardware, and the hardware communicates with the electronic device through a multi-modal interface; and transmit the data and/or related data to a local policy manager, wherein the local policy manager communicates with the multi-modal interface. In Example C2, the subject matter described in Example C1 may optionally include: wherein the multi-modal interface is configured to support power transmission, directionality, and multiple input/output (I/O) protocols. In Example C3, the subject matter described in any of Examples C1 and C2 may optionally include: wherein the received data is transferred through firmware. In Example C4, the subject matter described in any one of Examples C1 to C3 may optionally include: wherein the received data is written to the operating area by the firmware before it is received. In Example C5, the subject matter described in any one of Examples C1 to C4 may optionally include: wherein the received data is stored in a shared memory before it is received. In Example C6, the subject matter described in any one of Examples C1 to C5 may optionally include one or more further instructions that, when executed by the at least one processor, cause the machine-readable medium to: receive response data from the local policy manager and transfer the response data and/or related response data to a shared memory, wherein the response data is related to the hardware.

Example S1 is a multi-modal interface system for communicating with different types of hardware.
The system includes a platform policy manager configured to: receive data from an operating system of an electronic device, wherein the data is related to hardware, and the hardware communicates with the electronic device through a multi-modal interface; and transmit the data and/or related data to a local policy manager, wherein the local policy manager communicates with the multi-modal interface. In Example S2, the subject matter described in Example S1 may optionally include: wherein the multi-modal interface is configured to support power transmission, directionality, and multiple input/output (I/O) protocols. In Example S3, the subject matter described in any one of Examples S1 and S2 may optionally include: wherein the received data is transferred through firmware. In Example S4, the subject matter described in any one of Examples S1 to S3 may optionally include: wherein the received data is written to the operating area by the firmware before it is received. In Example S5, the subject matter described in any one of Examples S1 to S4 may optionally include: wherein the received data is stored in a shared memory before it is received.

Example X1 is a machine-readable storage medium including machine-readable instructions for implementing a method or realizing an apparatus as described in any one of Examples A1-A6 or M1-M7. Example Y1 is an apparatus that includes means for performing any of the example methods M1-M7. In Example Y2, the subject matter described in Example Y1 may optionally include: the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 may optionally include: the memory comprising machine-readable instructions. |
The invention manages the timing of a protocol stack in a tunneling interconnect. A communication in a protocol stack coupled to the tunneling interconnect is received and a determination is made as to whether the communication type is subject to altered timing to accommodate a delay associated with the tunneling interconnect. If so, the timing of at least one stack logic is adjusted to accommodate the delay and the communication is handled using the adjusted timing. The method may be used for a Peripheral Component Interconnect Express (PCIe) protocol stack. |
Claims 1. An apparatus comprising: a first protocol stack to handle data according to a first protocol, the first protocol stack including an interface logic to interface the first protocol stack to a tunneling interconnect; and the tunneling interconnect to couple the first protocol stack to a link and having a timing delay associated therewith, wherein the interface logic is to control at least one timer of the first protocol stack based at least in part on the timing delay. 2. The apparatus of claim 1, wherein the interface logic is to map timing delay information associated with the timing delay to a timing requirement of at least one stack logic of the first protocol stack. 3. The apparatus of claim 2, wherein the interface logic is to determine whether to alter a timing view of the first protocol stack based at least in part on the mapping. 4. The apparatus of claim 2, wherein the interface logic is to dynamically map the timing delay information to the timing requirement, wherein the first protocol stack can be dynamically bound to the tunneling interconnect or to a second physical layer. 5. The apparatus of claim 1, wherein the interface logic is to disable a first clock of the first protocol stack for a predetermined time, the first clock to provide a first clock signal to a first stack logic, such that the first stack logic meets a link timing requirement of the first protocol. 6. The apparatus of claim 5, wherein the tunneling interconnect is to tunnel packets of the first protocol to the link via a protocol of the tunneling interconnect. 7. The apparatus of claim 1, wherein the link is a converged interconnect to be shared by the first protocol stack and a second protocol stack, wherein the first protocol stack is a Peripheral Component Interconnect ExpressTM (PCIe) stack. 8. The apparatus of claim 7, wherein the tunneling interconnect is to allocate first and second slots to the first protocol stack and a third slot to the second protocol stack. 9. The apparatus of claim 8, further comprising a receiver coupled to the link to receive the tunneled packets, wherein the receiver is to account for the allocated first and second slots via an interface logic coupled to the link. 10. A method comprising: receiving a communication in an interface logic of a first protocol stack coupled to a tunneling interconnect; determining whether a communication type of the communication is subject to altered timing to accommodate a delay associated with the tunneling interconnect; adjusting a timing of at least one stack logic of the first protocol stack to accommodate the delay; and handling the communication in the first protocol stack using the adjusted timing. 11. The method of claim 10, further comprising accessing a table based on the communication type to obtain timing delay information for the tunneling interconnect associated with the communication type. 12. The method of claim 11, wherein the table is stored in non-volatile memory, and includes a first portion including mappings between the tunneling interconnect and the first protocol stack and a second portion including mappings between the tunneling interconnect and a second protocol stack coupled to the tunneling interconnect. 13.
The method of claim 12, further comprising sharing the tunneling interconnect between the first protocol stack and the second protocol stack, and providing a slot of the first protocol stack to the second protocol stack if the first protocol stack does not have information to communicate during the slot. 14. The method of claim 10, wherein the timing of the at least one stack logic is adjusted by turning off a clock coupled to the at least one stack logic. 15. The method of claim 14, further comprising delaying a clock for a second stack logic based at least in part on the delay. 16. The method of claim 15, further comprising delaying the clock to prevent an error signal to indicate non-receipt of an acknowledgement from a receiver until after a predetermined time. 17. A system comprising: a transmitter including a physical layer coupled to a link and a protocol stack coupled to the physical layer; a receiver coupled to the transmitter via the link and including a first protocol stack to handle data according to a first protocol, the first protocol stack including a first interface logic to interface the first protocol stack to the link via a tunneling physical layer having a timing delay associated therewith, wherein the first interface logic is to alter timing of at least one first stack logic of the first protocol stack based at least in part on the timing delay; and a dynamic random access memory (DRAM) coupled to the receiver. 18. The system of claim 17, wherein the receiver further includes a second protocol stack to handle data according to a second protocol, wherein the second protocol stack includes a second interface logic to alter timing of at least one second stack logic of the second protocol stack based at least in part on the timing delay. 19. The system of claim 18, wherein the tunneling physical layer includes a controller to select the first protocol stack or the second protocol stack to receive packets received from the transmitter. 20. The system of claim 17, wherein the first interface logic is to determine whether a communication type of a packet received from the transmitter via the tunneling physical layer is subject to altered timing based at least in part on accessing of a table to obtain timing delay information for the tunneling physical layer associated with the communication type. 21. A method substantially as herein described with reference to the drawings. 22. An apparatus substantially as herein described with reference to the drawings. 23. A system substantially as herein described with reference to the drawings.
MANAGING TIMING OF A PROTOCOL STACK

Background

Computer platforms typically include a number of semiconductor components that are coupled by way of various interconnects. These interconnects or links are often of different protocols such that communication on the different links occurs at different speeds and according to different protocols. In some systems, communications of an input/output (IO) protocol can be tunneled over another interconnect. Tunneling generally involves taking communications according to a first protocol and providing them through an interconnect that operates according to a second protocol such that the packets of the first protocol are tunneled, e.g., by way of applying a header of the second protocol to packets of the first protocol and sending them along the interconnect. Typically, such protocol tunneling occurs at a very high level such that while the two protocols may have the same software abstraction, there is no shared hardware between the protocols. Thus there is minimal advantage to such tunneling in terms of software compatibility, performance and time to market.

Summary

There is provided an apparatus comprising a first protocol stack to handle data according to a first protocol, the first protocol stack including an interface logic to interface the first protocol stack to a tunneling interconnect, and the tunneling interconnect to couple the first protocol stack to a link and having a timing delay associated therewith, wherein the interface logic is to control at least one timer of the first protocol stack based at least in part on the timing delay. Optionally, the interface logic is to map timing delay information associated with the timing delay to a timing requirement of at least one stack logic of the first protocol stack. Optionally, the interface logic is to determine whether to alter a timing view of the first protocol stack based at least in part on the mapping. Optionally, the interface logic is to dynamically map the timing delay information to the timing requirement, wherein the first protocol stack can be dynamically bound to the tunneling interconnect or to a second physical layer. Optionally, the interface logic is to disable a first clock of the first protocol stack for a predetermined time, the first clock to provide a first clock signal to a first stack logic, such that the first stack logic meets a link timing requirement of the first protocol. Optionally, the tunneling interconnect is to tunnel packets of the first protocol to the link via a protocol of the tunneling interconnect. Optionally, the link is a converged interconnect to be shared by the first protocol stack and a second protocol stack, wherein the first protocol stack is a Peripheral Component Interconnect ExpressTM (PCIe) stack. Optionally, the tunneling interconnect is to allocate first and second slots to the first protocol stack and a third slot to the second protocol stack. Optionally, the apparatus further comprises a receiver coupled to the link to receive the tunneled packets, wherein the receiver is to account for the allocated first and second slots via an interface logic coupled to the link.
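As a rough illustration of the apparatus summarized above, the interface logic's timer control can be pictured as folding the tunnel's delay into a stack timer so the added delay is not counted against the protocol's own requirement. The struct, function names, and numeric values in the following C sketch are invented placeholders, not the patented implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical "gasket"-side view of one protocol stack timer. */
struct stack_timer {
    uint32_t timeout_ns;  /* the protocol's native link timing requirement */
};

/* Map the tunneling interconnect's timing-delay information onto the
 * stack's timing requirement by widening the timer accordingly. */
static void adjust_for_tunnel(struct stack_timer *t, uint32_t tunnel_delay_ns)
{
    t->timeout_ns += tunnel_delay_ns;
}

int main(void)
{
    struct stack_timer replay = { .timeout_ns = 3000 };  /* assumed value */
    adjust_for_tunnel(&replay, 800);                     /* assumed delay */
    printf("adjusted timeout: %u ns\n", (unsigned)replay.timeout_ns);
    return 0;
}
```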
There is also provided a method comprising receiving a communication in an interface logic of a first protocol stack coupled to a tunneling interconnect, determining whether a communication type of the communication is subject to altered timing to accommodate a delay associated with the tunneling interconnect, adjusting a timing of at least one stack logic of the first protocol stack to accommodate the delay, and handling the communication in the first protocol stack using the adjusted timing. Optionally, the method further comprises accessing a table based on the communication type to obtain timing delay information for the tunneling interconnect associated with the communication type. Optionally, the table is stored in non-volatile memory, and includes a first portion including mappings between the tunneling interconnect and the first protocol stack and a second portion including mappings between the tunneling interconnect and a second protocol stack coupled to the tunneling interconnect. Optionally, the method further comprises sharing the tunneling interconnect between the first protocol stack and the second protocol stack, and providing a slot of the first protocol stack to the second protocol stack if the first protocol stack does not have information to communicate during the slot. Optionally, the timing of the at least one stack logic is adjusted by turning off a clock coupled to the at least one stack logic. Optionally, the method further comprises delaying a clock for a second stack logic based at least in part on the delay. Optionally, the method further comprises delaying the clock to prevent an error signal to indicate non-receipt of an acknowledgement from a receiver until after a predetermined time.

There is also provided a system comprising a transmitter including a physical layer coupled to a link and a protocol stack coupled to the physical layer, a receiver coupled to the transmitter via the link and including a first protocol stack to handle data according to a first protocol, the first protocol stack including a first interface logic to interface the first protocol stack to the link via a tunneling physical layer having a timing delay associated therewith, wherein the first interface logic is to alter timing of at least one first stack logic of the first protocol stack based at least in part on the timing delay, and a dynamic random access memory (DRAM) coupled to the receiver. Optionally, the receiver further includes a second protocol stack to handle data according to a second protocol, wherein the second protocol stack includes a second interface logic to alter timing of at least one second stack logic of the second protocol stack based at least in part on the timing delay. Optionally, the tunneling physical layer includes a controller to select the first protocol stack or the second protocol stack to receive packets received from the transmitter. Optionally, the first interface logic is to determine whether a communication type of a packet received from the transmitter via the tunneling physical layer is subject to altered timing based at least in part on accessing of a table to obtain timing delay information for the tunneling physical layer associated with the communication type.

Brief Description of the Drawings

FIG. 1 is a block diagram of a connection of a protocol stack to a link via a shared physical layer in accordance with one embodiment of the present invention. FIG.
2 is a block diagram of a system having multiple communication stacks coupled to a shared physical layer in accordance with another embodiment of the present invention. FIG. 3 is a flow diagram of a method in accordance with one embodiment of the present invention. FIG. 4 is a flow diagram of a method of operating an interface of a protocol stack in accordance with another embodiment of the present invention. FIG. 5 is a block diagram of a system in accordance with one embodiment of the present invention.

Detailed Description

In various embodiments, one or more existing IO protocols can be tunneled at a relatively low level over another interconnect, referred to herein as the tunneling interconnect. In one embodiment, a converged IO (CIO) may be an example of such an interconnect, which can be used to tunnel communications of a Peripheral Component Interconnect Express (PCIe) protocol in accordance with the PCI ExpressTM Specification Base Specification version 2.0 (published January 17, 2007) (hereafter the PCIeTM Specification), or another such protocol, as well as other protocols. For CIO, much of the PCIe hardware stack is directly implemented, which provides advantages in software compatibility, performance and time to market. That is, in low level tunneling, most of the tunneled protocol stack is implemented. In contrast, for high level tunneling, the software architecture is preserved, but without necessarily using the packet, encoding or wire protocol mechanisms from the tunneled protocol. Via this low level tunneling, packets of the PCIe protocol stack can be tunneled through the CIO interconnect, e.g., by adaptation of a CIO header to the tunneling packets. When such a transmitted tunneled packet is received in a receiver, the CIO protocol stack of the receiver can decode the header and pass along the PCIe packets to a corresponding PCIe protocol stack of the receiver. However, such an approach to a converged interconnect introduces a problem via this low level tunneling, in contrast to tunneling protocols that occur at higher levels of abstraction. Namely, there are often protocol timing constraints, some implicit, which are trivially satisfied in a non-tunneled, native instantiation of the interconnect protocol, but which can be more difficult to manage when tunneling the interconnect protocol due to the introduction of delays by the interconnect used for tunneling. These delays may be caused by the tunneling interconnect itself or by traffic from other tunneled protocols. Embodiments provide a mechanism for managing explicit and implicit timers of a tunneled protocol when tunneling over a tunneling interconnect. While an embodiment described herein uses an example of a tunneled PCIe protocol over a CIO, it is to be understood the scope of the present invention is not limited in this regard and the same principles can be applied to other tunneled interconnects, and other interconnects used for tunneling, including both wired and wireless interconnects. Timing requirements of an interconnect, both explicit and implicit, can be divided into two broad categories, referred to herein as link and wall clock time requirements. Link timing requirements are associated with lower levels such as link protocols, and generally exist to ensure smooth link operation and minimize validation corner cases. Wall clock timing requirements are associated with events that are observable at higher levels, e.g., to operating system (OS) and application software.
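The split between the two categories can be expressed as a simple classifier, using the approximately 10 microsecond boundary quantified in the paragraph that follows. Purely illustrative:

```c
#include <stdio.h>

/* The two requirement categories, split at roughly 10 microseconds. */
enum req_class { REQ_LINK, REQ_WALL_CLOCK };

static enum req_class classify(unsigned long long duration_ns)
{
    return duration_ns < 10000ULL ? REQ_LINK : REQ_WALL_CLOCK;  /* 10 us */
}

int main(void)
{
    /* An Ack/Nak replay window of a few microseconds is a link
     * requirement; a 100 ms PME service timeout is wall clock. */
    printf("%d %d\n", classify(3000ULL), classify(100000000ULL));
    return 0;
}
```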
Link timing requirements can be directly impacted by delays caused by protocol tunneling, and are the requirements addressed by embodiments. Typically, link timing requirements may be on the order of less than approximately 10 microseconds (μs), while wall clock timing requirements are greater than approximately 10 microseconds (μs). Wall clock requirements are not fundamentally affected by protocol tunneling because they are generally associated with time values long enough (e.g., milliseconds (ms)) to be unaffected by the relatively short delays caused by protocol tunneling (e.g., microseconds), and furthermore these requirements are associated with properties such as preventing user visible stalls to application software that are equally desirable regardless of the hardware mechanisms (native vs. tunneled) used to convey a particular interconnect protocol. Table 1 below lists a number of timing requirements associated with PCIe, and shows how each is relevant to this discussion. Note that the quotes in the Description portion are taken from the PCI Express™ Specification.

Table 1

Description: Acknowledgment/Non-acknowledgement (Ack/Nak) Transmission and Replay Timer. Type: Link. Notes: Firm requirement - a (spurious) error will be triggered if time requirements are not satisfied by the tunneling interconnect.

Description: Link state zero standby (L0s) Invocation Policy: "Ports... must transition their Transmit Lanes to the L0s state if the defined idle conditions (below) are met for a period of time not to exceed 7 μs." Type: Link. Notes: Triggers link power management when link not in use; this is an example of a case where time is counted according to the tunneling interconnect allocation rather than what was actually used.

Description: Link State 1 (L1) Entry Negotiation - "Upstream component sends this DLLP repeatedly with no more than four Symbol times of idle." Type: Link. Notes: An implicit timing requirement. In this case, the appearance of the traffic to the PCIe stack is managed, masking inserted delays.

Description: Flow Control Updates. Type: Link. Notes: Guideline, not requirement.

Description: PCI power management (PM) & active state power management (ASPM): "Upon exit from L1, it is recommended that the Downstream component send flow control update data link layer packets (DLLPs) for all enabled virtual channel (VC) and flow control (FC) types starting within 1 μs of L1 exit." Type: Link. Notes: Guideline, not requirement.

Description: L0s/L1 Exit Latencies. Type: Wall Clock. Notes: These timing parameters exist to allow determination of impact to traffic over operations.

Description: Power management event (PME) - "If after 100 ms (+50%/-5%), the PME Status bit of a requesting agent has not yet been cleared, the PME Service Timeout mechanism expires triggering the PME requesting agent to re-send the temporarily lost PM_PME Message." Type: Wall Clock. Notes: Requirement exists to prevent PMEs from being completely lost; the specific time was chosen to minimize spurious triggering while at the same time being short enough that the PME would still be processed in a relatively timely manner.

Description: Posted Request Acceptance Limit of 10 μs. Type: Wall Clock. Notes: Intended to limit platform observable delays caused by fabric congestion.

Description: Flow Control minimum update frequency of 30 μs & Update FCP timer of 200 μs. Type: Wall Clock. Notes: Intended to limit stalls caused by loss of Flow Control packets.

Note that Table 1 is intended to illustrate some examples of interest, but is not intended to be a complete list of all timing-related requirements in PCIe.
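To make the link versus wall clock split concrete, the classification in Table 1 can be modeled as a small lookup structure. The following C sketch is illustrative only: the field names, entry subset, and limit values are assumptions of this sketch, not text taken from the PCIe™ Specification.

```c
#include <stdbool.h>
#include <stdint.h>

enum timing_type { TIMING_LINK, TIMING_WALL_CLOCK };

struct timing_req {
    const char      *name;       /* requirement, as in Table 1 */
    enum timing_type type;       /* Link vs. Wall Clock */
    uint64_t         limit_ns;   /* nominal limit in ns; 0 = unspecified */
    bool             firm;       /* firm requirement vs. guideline */
};

static const struct timing_req table1[] = {
    { "Ack/Nak transmission and replay timer", TIMING_LINK,                 0, true  },
    { "L0s invocation policy (7 us)",          TIMING_LINK,              7000, true  },
    { "L1 entry negotiation",                  TIMING_LINK,                 0, true  },
    { "Flow control updates",                  TIMING_LINK,                 0, false },
    { "PM/ASPM FC update within 1 us of L1",   TIMING_LINK,              1000, false },
    { "PME service timeout (100 ms)",          TIMING_WALL_CLOCK, 100000000ULL, true },
};

/* Per the discussion above, only Link-type requirements need to be
 * managed when the protocol is tunneled; wall clock requirements are
 * long enough to be unaffected. */
static bool needs_tunnel_management(const struct timing_req *r)
{
    return r->type == TIMING_LINK;
}
```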
The link timing requirements are "measured" by the PCIe stack itself, thus if the PCIe stack's view of time is changed, the way these times will be perceived can be altered. To achieve this, a mechanism by which time altering is achieved may be provided, along with hardware, software or firmware to determine when and how to alter the stack timing. The mechanism for altering the PCIe stack's view of time can be implemented in a number of ways. This can be done by, for example, gating or turning off the clocks to various elements in the PCIe stack logic, which effectively makes time stand still for that logic. Note that this approach has the additional benefit of reducing power consumed by the PCIe stack logic when it is not in use. In other embodiments, an explicit control signal can be added to the PCIe stack logic indicating when time should be counted. Note that it will not generally be sufficient to control the entire stack as one unit; rather, sub-elements of the stack can be semi-independently controlled, because different protocol mechanisms can impact different logic blocks differently. Similarly, not all communications will need to have timing altered in order to comply with the link timing requirements. In one embodiment, control logic can be used to determine when and how to adjust the PCIe stack's view of time, and this logic may be part of a PCIe stack.
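One way to picture the time-altering mechanism just described is a stack timer that advances only while a count-enable signal is asserted; deasserting the signal makes time "stand still" for that logic block, just as gating its clock would. This is a behavioral C sketch with illustrative names, not a hardware design from the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-sub-element timer, since sub-elements of the stack are
 * semi-independently controlled. */
struct stack_timer {
    uint64_t elapsed_ns;     /* this logic block's view of elapsed time */
    bool     count_enable;   /* explicit "count time now" control signal */
};

/* Called once per tick of the tunneling interconnect's clock. */
static void stack_timer_tick(struct stack_timer *t, uint64_t tick_ns)
{
    if (t->count_enable)
        t->elapsed_ns += tick_ns;
    /* With count_enable deasserted, no time passes for this block --
     * mirroring the power-saving clock-gating approach. */
}
```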
Referring now to FIG. 1, shown is a block diagram of how the PCIe stack (and other tunneled protocols) are interfaced to a shared tunneling link, which in one embodiment can be a CIO link. As shown in FIG. 1, a system 10 includes a first stack 20a and a second stack 20b (generically protocol stack 20). In one embodiment, first protocol stack 20a may be a PCIe stack, while second protocol stack 20b may be a universal serial bus (USB), a display interconnect, or other such protocol stack. For ease of illustration, only details of the PCIe protocol stack are shown. Specifically, protocol stack 20a includes a transaction layer 22, a data link layer 24, and an interface or gasket layer 26, which acts as an interface between the PCIe protocol and the tunneling protocol. Details of the operation of such interface logic will be discussed further below. As further shown in FIG. 1, a converged IO layer may be coupled between first and second protocol stacks 20 and a link 70 which, in one embodiment, may be an optical link, an electrical link or other such link. As shown in FIG. 1, the CIO protocol stack may include a CIO protocol transport layer 30, a logical block 40 of a physical layer, an electrical block 50 of the physical layer, and an optical block 60 of the physical layer. In this way, blocks 40-60 act as a shared physical layer that can be shared by multiple protocols in communication with the physical layer to thus tunnel information of these multiple protocols along link 70. Referring now to FIG. 2, shown is a representation of a system having multiple communication stacks coupled to a shared physical layer. Specifically, in FIG. 2, in addition to PCIe transmit (TX) and receive (RX) stacks 20a, multiple other transmit and receive stacks 20b-20d may be present. As shown, a pair of multiplexers 35a and 35b (generically multiplexers 35) may be coupled between these stacks and a shared physical layer 40-60. Multiplexers 35 may be operated under control of protocol transport layer control 30. As shown in FIG. 2, CIO protocol transport (PT) layer 30 implements the multiplexing (via multiplexers 35a and 35b) and control mechanisms to tunnel PCIe and other protocols. The PT layer control 30 implements arbitration for the transmitter and steering for the receiver, which is independent of the transmitter. While this type of structure is used for the remainder of this discussion, it is noted that embodiments can be applied to other types of interconnects that control the transmitter and receiver differently, for example, by arbitrating for both at the same time, or by having a single bi-directional connection. Different manners of implementing timing control of an interconnect can be realized in different embodiments. For example, in some implementations a dynamic late binding may occur such that the interface logic can dynamically determine a tunneling interconnect to which it is to be coupled and dynamically control any timing requirements of the protocol to accommodate the tunneling interconnect. In other embodiments, a designer may determine during system development a tunneling interconnect to be used by one or more protocol stacks, such that the link timing requirements that may be affected by the tunneling interconnect can be determined during system design. Thus logic can be incorporated, e.g., in interface logic, between the protocol stack and the tunneling interconnect to control the timing of the protocol stack, such as by altering the protocol stack's timing view, to accommodate any additional delays incurred via the tunneling interconnect. Referring now to FIG. 3, shown is an implementation of the former manner of handling link timing requirements, namely a dynamic late binding that may be implemented via the interface logic itself such that the protocol stack can be dynamically coupled to a shared physical layer or another physical layer. Specifically, FIG. 3 shows a flow diagram of a method 100 that can be implemented in, e.g., interface logic of a protocol stack for communication between the protocol stack (which may be a standard stack of a given protocol) and a common physical layer such as a converged interconnect that can tunnel packets of various protocols. As shown in FIG. 3, method 100 may begin by obtaining timing delay information for the tunneling interconnect (block 110). Various manners of obtaining this information may be implemented. For example, in one embodiment a shared physical layer may provide a predetermined listing of delay information to the interface logic. Alternately, the interface logic may analyze packet communications occurring with the shared physical layer to determine the timing delay information. More generally, some embodiments may obtain the timing information in a predetermined manner, while other implementations may dynamically compute such information. There can be several variations on each, e.g., a human vs. machine pre-determination, or, for the computed case, one can perform the check one time or repeat it periodically. Note that various instances of such information may exist, where different delays occur for different types of communications, depending upon the nature of the communication and the logic entities involved. Referring still to FIG. 3, control passes to block 120, where the timing delay information may be mapped to timing requirements of the first protocol stack. As one example, a protocol stack may have varying timing requirements with regard to link layer communications, such as set forth above in Table 1.
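A minimal sketch of the mapping decision just described follows, assuming the tunnel delay has already been obtained by one of the means above; the structure and function names are hypothetical, not from the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* One link timing requirement of the protocol stack, paired with the
 * delay the native (non-tunneled) pipeline was designed around. */
struct link_timer {
    uint64_t limit_ns;          /* requirement from the specification */
    uint64_t native_delay_ns;   /* delay assumed by the native stack */
};

/* True when the tunneling interconnect's added delay could violate the
 * requirement, i.e., the stack's time view must be altered; false means
 * standard protocol stack timing can be used. */
static bool must_alter_timing(const struct link_timer *lt,
                              uint64_t tunnel_delay_ns)
{
    return lt->native_delay_ns + tunnel_delay_ns > lt->limit_ns;
}
```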
Control then passes to diamond 130 where it may be determined whether a timing view or views of the first protocol stack need to be altered based on the mapping. That is, because of latencies that may be present in the common physical layer, one or more timers associated with given logic of the protocol stack can be controlled, e.g., via speeding up, slowing down, disabling and so forth. If no such altering of the timing view is needed, control passes to block 135 where data may be transmitted and/or received using the standard protocol stack timing. Referring still to FIG. 3, if instead it is determined that the timing view should be altered, control passes to block 140 where the timing of at least one stack logic may be controlled to alter the first protocol stack timing. As mentioned, this altering of timing may be done via control of timers, controlling of logic to count a given interval (or not), or so forth. After such timing control has been performed, desired data may be transmitted/received using this altered protocol stack timing (block 150). As shown further in FIG. 3, it may then be determined whether a communication, i.e., a given transaction, has been completed (diamond 160). If so, the method may conclude. Alternately, control passes back for a repeated iteration of blocks 140 and 150. While shown with this particular implementation in the embodiment of FIG. 3, the scope of the present invention is not limited in this regard. For example, in other implementations a system design may be fixed such that a given protocol stack is to be tunneled over a known tunneling interconnect having known delays. Accordingly, during system design, logic can be implemented to handle control of timing of various protocol transactions as needed to accommodate for any delays inherent in the tunneling interconnect. Table 1 above provides examples of such link layer timing requirements. Referring now to FIG. 4, shown is a flow diagram of a method of operating an interface of a protocol stack in accordance with another embodiment of the present invention. As shown in FIG. 4, method 200 may be implemented by interface logic that can alter a protocol stack's timing view as needed, based on static design parameters. As shown in FIG. 4, method 200 may begin by receiving a communication to/from a tunneling interconnect (block 205). This communication is thus received in interface logic of the protocol stack in an outgoing or incoming direction. Various communication types may be handled in the interface logic, including transmission and receipt of data packets, as well as various protocol packets such as acknowledgements, control packets such as for power management, flow control and so forth. Based on the type of packet, it may be determined in the interface logic whether a given communication type is subject to altered timing (diamond 210). For example, the interface logic may include or may be associated with a table (such as present in a non-volatile memory) that identifies transaction types and whether a timing view of the protocol stack should be altered for that type of communication, along with an indication of the delay that is applicable, and an instruction or other identifier of the type of control measure to be applied by the interface logic to alter the timing accordingly. Note that multiple portions may be present in the table, with each portion associated with a given stack, such that each portion provides mappings for a dedicated stack-tunneling interconnect relationship.
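The per-communication-type table consulted at diamond 210 might look like the following C sketch; the communication types, delay values, and control actions shown are illustrative placeholders, not values from any specification.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum comm_type   { COMM_DATA, COMM_ACK_NAK, COMM_FLOW_CTRL, COMM_PM };
enum ctrl_action { CTRL_NONE, CTRL_GATE_CLOCK, CTRL_DELAY_TIMER };

struct timing_entry {
    enum comm_type   type;       /* transaction/communication type */
    bool             alter;      /* should the timing view be altered? */
    uint64_t         delay_ns;   /* applicable tunneling delay */
    enum ctrl_action action;     /* control measure the interface applies */
};

/* One portion of the table; another portion would hold mappings for a
 * second protocol stack sharing the same tunneling interconnect. */
static const struct timing_entry pcie_portion[] = {
    { COMM_DATA,      false, 0,   CTRL_NONE        },
    { COMM_ACK_NAK,   true,  20,  CTRL_DELAY_TIMER },
    { COMM_FLOW_CTRL, false, 0,   CTRL_NONE        },
    { COMM_PM,        true,  100, CTRL_GATE_CLOCK  },
};

static const struct timing_entry *lookup_timing(enum comm_type t)
{
    for (size_t i = 0; i < sizeof pcie_portion / sizeof pcie_portion[0]; i++)
        if (pcie_portion[i].type == t)
            return &pcie_portion[i];
    return NULL;   /* unknown type: use standard stack timing */
}
```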
Referring still to FIG. 4, if no altering is needed, the standard protocol stack timing may be used to handle the communication, and thus the data may be transmitted/received using the standard protocol stack timing (block 220). If instead it is determined that the timing view should be altered, control passes to block 230 where the timing of at least one stack logic may be controlled to alter its timing. Then desired data may be transmitted/received using this altered protocol stack timing (block 240). As shown further in FIG. 4, it may then be determined whether a communication, i.e., a given transaction, has been completed (diamond 260). If so, the method may conclude. Alternately, control passes back for a repeated iteration of blocks 230 and 240. Thus static control of handling link timing requirements can be realized. As shown in the above FIGS. 3 and 4, timing control can be altered for certain communication types, while other communication types can proceed according to their normal protocol stack timing without alteration. The following discussion provides some examples of situations in which a protocol stack's timing may be altered to accommodate link timing requirements. In one embodiment, PT layer control 30 can provide transmitter "slots" which are allocated to PCIe, but which can be used for other types of traffic if no PCIe traffic is present to be transmitted. Thus a slot allocated for a first protocol stack can be used by another protocol stack if the first protocol stack has nothing to transmit. Likewise, at a receiver, there may be times where PCIe traffic would be received, but because the other component either did not have PCIe traffic to transmit or because it had higher priority traffic of a different type, the receiver does not receive any PCIe traffic during that time. To correctly convey the notion of "PCIe time" to the PCIe stack, receive and transmit times can be considered somewhat independently. In some cases described in Table 1, such as the L0s Invocation Policy and the "Upon exit from L1..." requirements, the time is measured from only one point of view (in these cases, the transmitter's). However, for the Ack/Nak protocol, both receiver and transmitter points of view need to be considered. The PCIe transmitter's view of the time a transaction layer packet (TLP) was transmitted may be incorrect if it assumes a particular latency to travel through the transmission pipeline that is based on a physical PCIe port, if the CIO transmit pipeline has a different delay. The other (i.e., receiver) component is only able to respond when its PCIe stack has been allocated PCIe time on the shared link, which is perceived by the receiver as needing an adjustment to its (receiver's) view of time. Suppose the PCIe stack expects a transmission pipeline delay of 50 nanoseconds (ns), but the CIO link provides a transmission pipeline delay of 70 ns. In this case, it would be necessary to stall or otherwise adjust the transmitter's time view (for protocol aspects that depend on knowing this delay) by 20 ns to account for the difference. Thus, a transmitter will wait a proper time interval for an ACK signal from a receiver (which may be delayed by the shared physical layer) so that an error signal is not improperly raised. For a receiver, it must account for the allocated time (not used time) that the other component's transmitter provided for PCIe.
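The arithmetic in the preceding example, together with the allocated-slot accounting elaborated just below, can be sketched as two small functions; the names and signatures are illustrative assumptions, not part of the disclosure.

```c
#include <stdint.h>

/* Transmitter side: the stack expects a 50 ns transmit pipeline but the
 * tunneling link imposes 70 ns, so the time view must be adjusted by
 * the 20 ns difference before, e.g., an Ack/Nak timer may fire. */
static uint64_t tx_time_adjust_ns(uint64_t expected_pipe_ns,
                                  uint64_t actual_pipe_ns)
{
    return actual_pipe_ns - expected_pipe_ns;   /* 70 - 50 = 20 ns */
}

/* Receiver side: the time view advances by slots *allocated* to PCIe,
 * not slots actually used; two allocated 100 ns slots count as 200 ns
 * even if only one carried PCIe traffic. */
static uint64_t rx_time_advance_ns(uint64_t slots_allocated,
                                   uint64_t slot_len_ns)
{
    return slots_allocated * slot_len_ns;       /* 2 * 100 = 200 ns */
}
```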
In some cases, this will be directly known to the receiver; however, in other cases a tunneling protocol mechanism, such as a message, may be provided to indicate how much the other component's receiver should advance the time view for each tunneled protocol. For example, if two 100 ns slots are allocated to a PCIe transmitter, but only one is used by the transmitter due to a lack of PCIe traffic to transmit, then the receiver must account for 200 ns of time. In this way, if the other component violates a timing rule by not taking advantage of a slot that was available for transmission, the violation is visible to the receiver. This would not be the case if only transmission slots used (vs. allocated) were accounted for. Note that a variety of optimizations may be possible for certain protocols. For example, known bandwidth traffic may be accounted for using a counter mechanism, without regard to the link arbitration actually granted. Where a protocol has receive and transmit allocations guaranteed to be equal, it is possible to consider only one (e.g., the transmitter's) with the comprehension that the other's (receiver's) time view must match. As noted earlier, embodiments are not in any way dependent on the particulars of CIO or PCIe, and may apply to other protocols being tunneled such as display, USB, network, etc. Embodiments also apply to other tunneling protocols/environments, for example, tunneling PCIe over a wired or wireless USB interconnect. By performing tunneling in accordance with an embodiment of the present invention, a larger number of distinct IO applications can be satisfied by a common set of more generic hardware. For example, a platform may include 12 USB ports, 8 PCIe ports, and a variety of special purpose ports (e.g., display). Through tunneling, these ports can be converged, for example, into a set of 16 converged ports, each one of which can be used as any one (or multiple) of the older ports. Embodiments can be implemented in many different system types. Referring to FIG. 5, a block diagram of a system in accordance with one embodiment of the present invention includes devices coupled to a controller hub via a tunneling interconnect that is a serial link. System 300 includes a processor 305 and a system memory 310 coupled to a controller hub 315. Processor 305 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor 305 is coupled to controller hub 315 through a front-side bus (FSB) 306. In one embodiment, FSB 306 is a serial point-to-point (PtP) interconnect. System memory 310 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 300. System memory 310 is coupled to controller hub 315 through a memory interface 316. In one embodiment, controller hub 315 is a root hub or root controller in a PCIe interconnection hierarchy. Examples of controller hub 315 include a chipset, a memory controller hub (MCH), a northbridge, an input/output controller hub (ICH), a southbridge, and a root controller/hub. Here, controller hub 315 is coupled to a switch/bridge 320 through a serial link 319. Input/output modules 317 and 321, which may also be referred to as interfaces/ports 317 and 321, include/implement a layered protocol stack to provide communication between controller hub 315 and switch 320. In one embodiment, multiple devices are capable of being coupled to switch 320.
Switch 320 routes packets/messages from a device 325 upstream, i.e., up a hierarchy towards controller hub 315, and downstream, i.e., down a hierarchy away from controller hub 315 to device 325. IO modules 322 and 326 implement a layered protocol stack to communicate between switch 320 and device 325. In one embodiment, IO module 326 may be a tunneling physical layer to tunnel packets of multiple protocol stacks, namely stacks 327 and 328. Device 325 includes any internal or external device or component to be coupled to an electronic system, such as an IO device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard-drive, a storage device, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. A graphics accelerator 330 is also coupled to controller hub 315 through serial link 332. In one embodiment, graphics accelerator 330 is coupled to an MCH, which is coupled to an ICH. Switch 320, and accordingly IO device 325, is then coupled to the ICH. IO modules 331 and 318 are also to implement a layered protocol stack to communicate between graphics accelerator 330 and controller hub 315. Embodiments may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
The present disclosure includes apparatuses and methods related to a command bus in memory. A memory module may be equipped with multiple memory media types that are responsive to a common command, performing various operations in response to it. The operations may be carried out during the same clock cycle in response to the command. An example apparatus can include a first number of memory devices coupled to a host via a first number of ports and a second number of memory devices each coupled to the first number of memory devices via a second number of ports, wherein the second number of memory devices each include a controller, and wherein the first number of memory devices and the second number of memory devices can receive a command from the host to perform the various (e.g., the same or different) operations, sometimes concurrently.
What is claimed is:

1. An apparatus, comprising: a first number of memory devices coupled to a host via a first number of ports; and a second number of memory devices each coupled to the first number of memory devices via a second number of ports, wherein the second number of memory devices include a controller, and wherein the first number of memory devices and the second number of memory devices are configured to receive a command from the host and the first number of memory devices are configured to perform a first operation and the second number of memory devices are configured to perform a second operation in response to the command.

2. The apparatus of claim 1, wherein the first operation includes reading data from the first number of memory devices.

3. The apparatus of claim 1, wherein the first operation includes sending data to the second number of memory devices.

4. The apparatus of claim 1, wherein the second operation includes writing data to the second number of memory devices.

5. The apparatus of any one of claims 1-4, further comprising a register clock driver (RCD) configured to receive the command from the host and transmit the command to the first number of memory devices.

6. The apparatus of any one of claims 1-4, wherein the second number of memory devices are configured to receive the command from the host.

7. The apparatus of any one of claims 1-4, wherein the second number of memory devices are configured to perform the second operation in response to the first number of memory devices performing the first operation.

8. The apparatus of any one of claims 1-4, wherein the first number of memory devices are configured to receive the command from the host prior to the second number of memory devices receiving the command.

9. The apparatus of any one of claims 1-4, wherein the first number of memory devices comprise dynamic random access memory (DRAM) memory.

10. The apparatus of any one of claims 1-4, wherein the second number of memory devices comprise non-volatile memory.

11. An apparatus, comprising: a first memory device coupled to a host interface, wherein the first memory device includes a first controller configured to: receive a command from the host to transfer data from the first memory device to a second memory device; and generate instructions for performing a read operation for the first memory device to transfer data from the first memory device to the second memory device; and the second memory device coupled to the host interface and the first memory device, wherein the second memory device includes a second controller configured to: receive the command from the host to transfer data from the first memory device to the second memory device; and generate instructions for performing a write operation for the second memory device to write the data received from the first memory device to the second memory device.

12. The apparatus of claim 11, comprising a memory module configured to transfer data between the first memory device and the host based at least in part on the command.

13. The apparatus of claim 11, wherein the second memory device is configured to perform the write operation during a clock cycle that immediately follows completion of the read operation.

14. The apparatus of any one of claims 11-13, configured to transfer data on a bus from the first memory device to the second memory device without interruption by another device or component.

15.
The apparatus of any one of claims 11-13, wherein the second controller is configured to receive a second command, wherein the second memory device is configured to transfer the data from the second memory device to the host in response to receiving the second command.

16. A method, comprising: receiving a first command from a host at a first memory device on a dual in-line memory module (DIMM), the first memory device comprising a first memory medium; receiving the first command from the host at a second memory device on the DIMM, the second memory device comprising a second memory medium that is different from the first memory medium; performing a first operation by the first memory device in response to receiving the first command at the first memory device; and performing a second operation by the second memory device in response to receiving the first command at the second memory device.

17. The method of claim 16, further comprising: receiving the first command at a first controller associated with the first memory device; and generating instructions for the first operation at the first controller in response to receiving the first command from the host, wherein the first operation is different from the second operation.

18. The method of claim 17, further comprising: receiving the first command at a second controller associated with the second memory device; and generating instructions for the second operation at the second controller in response to receiving the first command from the host.

19. The method of any one of claims 16-18, further comprising: receiving a second command from the host at the first memory device on the DIMM; receiving the second command from the host at the second memory device on the DIMM; performing a third operation by the first memory device in response to receiving the second command; and performing a fourth operation by the second memory device in response to receiving the second command, wherein the third operation and the fourth operation comprise a same type of operation.

20. A method, comprising: receiving a command at a first memory device from a host, the first memory device comprising a first type of memory media; reading data from the first memory device based at least in part on receiving the command, the second memory device comprising a second type of memory media; receiving the command at the second memory device; and writing the data, read from the first memory device, to the second memory device based at least in part on receiving the command.
COMMAND BUS IN MEMORY

Technical Field

[0001] The present disclosure relates generally to memory devices, and more particularly, to apparatuses and methods using a command bus in memory.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and includes random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others.

[0003] Memory is also utilized as volatile and non-volatile data storage for a wide range of electronic applications. Non-volatile memory may be used in, for example, personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices. Memory cells can be arranged into arrays, with the arrays being used in memory devices.

[0004] Memory can be part of a memory module (e.g., a dual in-line memory module (DIMM)) used in computing devices. Memory modules can include volatile memory, such as DRAM, for example, and/or non-volatile memory, such as Flash memory or RRAM, for example. The DIMMs can be used as main memory in computing systems.
Brief Description of the Drawings

[0005] Figure 1A is a block diagram of an apparatus in the form of a computing system including a memory system in accordance with a number of embodiments of the present disclosure.

[0006] Figure 1B is a block diagram of an apparatus in the form of a dual in-line memory module (DIMM) in accordance with a number of embodiments of the present disclosure.

[0007] Figure 2 is a block diagram of a computing system including a host and a memory system comprising a dual in-line memory module (DIMM) with ports in accordance with a number of embodiments of the present disclosure.

[0008] Figure 3 is a flow diagram illustrating an example memory process including a command bus in memory in accordance with a number of embodiments of the present disclosure.

[0009] Figure 4 is a flow diagram illustrating an example memory process including a command bus in memory in accordance with a number of embodiments of the present disclosure.

Detailed Description

[0010] The present disclosure includes apparatuses and methods related to a command bus in memory. A memory module may be equipped with multiple memory media types that are responsive to a common command, performing various operations in response to it. The operations may be carried out during the same clock cycle in response to the command. An example apparatus can include a first number of memory devices coupled to a host via a first number of ports and a second number of memory devices coupled to the first number of memory devices via a second number of ports, wherein the second number of memory devices each include a controller, and wherein the first number of memory devices and the second number of memory devices receive a command from the host and the first number of memory devices perform a first operation and the second number of memory devices perform a second operation.

[0011] In a number of embodiments, a first number of memory devices can each include a controller and a second number of memory devices can each
include a controller. The controllers of the second number of memory devices can be configured to receive a command from the host and execute operations on the second number of memory devices based on the command from the host. For example, the controllers of the second number of memory devices can receive a command from the host including instructions to transfer data from the first number of memory devices to the second number of memory devices. Based on the command from the host, the controllers of the second number of memory devices can generate instructions for execution by the second number of memory devices to write data received from the first number of memory devices to the second number of memory devices. In some examples, the write can include receiving data from the first number of memory devices and writing the data received from the first number of memory devices.

[0012] In some examples, the first number of memory devices can each include a controller. The controllers of the first number of memory devices can be configured to receive commands from the host and execute operations on the first number of memory devices based on the command from the host. For example, the controllers of the first number of memory devices can receive the same command as the controllers of the second number of memory devices, including the instructions to transfer data from the first number of memory devices to the second number of memory devices; however, the controllers of the first number of memory devices can generate different instructions based on that same command than the controllers of the second number of memory devices generate. For example, the controllers of the first number of memory devices can generate instructions for execution by the first number of memory devices to read data from the first number of memory devices. In some examples, the read can include reading data from the first number of memory devices and sending the data to the second number of memory devices.
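As a way to picture this one-command, two-interpretations behavior, the C sketch below shows two controller decode paths that expand the same host command into different device operations. The command name, enums, and functions are illustrative assumptions of this sketch, not identifiers from the disclosure.

```c
#include <assert.h>

/* A single host command seen by both groups of devices. */
enum host_cmd { CMD_XFER_FIRST_TO_SECOND };

enum dev_op { OP_READ_AND_SEND, OP_RECV_AND_WRITE };

/* Controller on the first number of memory devices (e.g., DRAM):
 * the shared command becomes read-and-send instructions. */
static enum dev_op first_ctrl_decode(enum host_cmd c)
{
    assert(c == CMD_XFER_FIRST_TO_SECOND);
    return OP_READ_AND_SEND;    /* read data, drive it onto the B side bus */
}

/* Controller on the second number of memory devices (e.g., non-volatile):
 * the very same command becomes receive-and-write instructions. */
static enum dev_op second_ctrl_decode(enum host_cmd c)
{
    assert(c == CMD_XFER_FIRST_TO_SECOND);
    return OP_RECV_AND_WRITE;   /* take data from the bus and write it */
}
```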
[0013] A memory system can include a dual in-line memory module (DIMM) having a number of memory devices. For example, a DIMM can be a non-volatile DIMM (NVDIMM) that includes a number of memory media types, including a number of volatile memory devices and a number of non-volatile memory devices. A DIMM can execute commands to transfer data between the host and the volatile memory device, between the host and the non-volatile memory device, between the volatile and non-volatile memory devices, between non-volatile memory devices, and between volatile memory devices. The commands can be received by the DIMM from another device, such as a host, and/or can be generated by a controller on the DIMM.

[0014] For example, the number of volatile memory devices can be coupled to another device, such as a host, via a first port (e.g., an A Side Port) and be coupled to a number of non-volatile memory devices on the DIMM via a second port (e.g., a B Side Port). The DIMM can execute commands to transfer data between another device, such as a host, and the volatile memory devices via an A Side Port, and the DIMM can execute commands to transfer data between the volatile memory devices and the non-volatile memory devices via a B Side Port. The DIMM can execute the commands to transfer data between another device and the volatile memory devices while executing the commands to transfer data between the volatile memory devices and the non-volatile memory devices.

[0015] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how a number of embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designator "N" indicates that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure.

[0016] As used herein, "a number of" something can refer to one or more of such things. For example, a number of memory devices can refer to one or more memory devices. Additionally, designators such as "N", as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure.

[0017] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar
digits. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present disclosure and are not to be used in a limiting sense.

[0018] Figure 1A is a functional block diagram of a computing system 100 including an apparatus in the form of a number of memory systems 104-1, ..., 104-N, in accordance with one or more embodiments of the present disclosure. As used herein, an "apparatus" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 1A, memory systems 104-1, ..., 104-N can include one or more dual in-line memory modules (DIMMs) 110-1, ..., 110-X, 110-Y. The DIMMs 110-1, ..., 110-X, 110-Y can include volatile memory and/or non-volatile memory. In a number of embodiments, memory systems 104-1, ..., 104-N can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. For example, a memory system can include non-volatile or volatile memory on any type of module. The examples described below in association with Figures 1A-5 use a DIMM as the memory module, but the embodiments of the present disclosure can be used on any memory system that includes volatile and/or non-volatile memory. In this example, each DIMM 110-1, ..., 110-X, 110-Y includes memory devices 121 and 124. In some examples, memory device 121 can be a DRAM device and memory device 124 can be a non-volatile memory device. The memory device 121 can include controller 116 and memory device 124 can include controller 114. Controllers 114 and 116 can receive commands from host 102 and control execution of the commands on the memory devices 121 and 124. The host 102 can send commands to the DIMMs 110-1, ..., 110-X, 110-Y using the protocol of the present disclosure and/or a prior protocol, depending on the type of memory in the DIMM. For example, the host can use the protocol of the present disclosure to communicate on the same channel (e.g., channel 103-1) with an NVDIMM and
a prior protocol to communicate with a DRAM DIMM when both are on the same memory system 104.

[0019] As illustrated in Figure 1A, a host 102 can be coupled to the memory systems 104-1, ..., 104-N. In a number of embodiments, each memory system 104-1, ..., 104-N can be coupled to host 102 via a channel (e.g., channels 103-1, ..., 103-N). In Figure 1A, memory system 104-1 is coupled to host 102 via channel 103-1 and memory system 104-N is coupled to host 102 via channel 103-N. Host 102 can be a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, interface hub, among other host systems, and can include a memory access device, e.g., a processor. One of ordinary skill in the art will appreciate that "a processor" can refer to one or more processors, such as a parallel processing system, a number of coprocessors, etc.

[0020] Host 102 includes a host controller 108 to communicate with memory systems 104-1, ..., 104-N. The host controller 108 can send commands to the DIMMs 110-1, ..., 110-X, 110-Y via channels 103-1, ..., 103-N. The host controller 108 can communicate with the DIMMs 110-1, ..., 110-X, 110-Y and/or the memory devices 121 and 124 on each of the DIMMs 110-1, ..., 110-X, 110-Y to read, write, and erase data, among other operations. A physical host interface of host 102 can provide an interface for passing control, address, data, and other signals between the memory systems 104-1, ..., 104-N and host 102 having compatible receptors for the physical host interface. The signals can be communicated between host 102 and DIMMs 110-1, ..., 110-X, 110-Y on a number of buses, such as a data bus and/or an address bus, for example, via channels 103-1, ..., 103-N.

[0021] The host controller 108 and/or controllers 114 and 116 on a DIMM can include control circuitry, e.g., hardware, firmware, and/or software. In one or more embodiments, the host controller 108 and/or controllers 114 and 116 can be an application specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA) coupled to a printed circuit board including a physical interface. Also, each DIMM 110-1, ..., 110-X, 110-Y can include buffers of volatile and/or non-volatile memory and registers. A buffer can be used to buffer data that is used during execution of commands.
[0022] The DIMMs 110-1, ..., 110-X, 110-Y can provide main memory for the memory system or could be used as additional memory or storage throughout the memory system. Each DIMM 110-1, ..., 110-X, 110-Y can include one or more arrays of memory cells on memory dies, e.g., volatile and/or non-volatile memory cells. The arrays can be flash arrays with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and flash memory, among others.

[0023] The embodiment of Figure 1A can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the memory systems 104-1, ..., 104-N can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the DIMMs 110-1, ..., 110-X, 110-Y. It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the DIMMs 110-1, ..., 110-X, 110-Y.

[0024] Figure 1B is a block diagram of an apparatus in the form of a dual in-line memory module (DIMM) 110 in accordance with a number of embodiments of the present disclosure. In Figure 1B, DIMM 110 can include memory devices 121-1, 121-2, 124-1, 124-2. Memory devices 121-1, 121-2 can include controllers 116-1, 116-2 and memory devices 124-1, 124-2 can include controllers 114-1, 114-2. Memory devices 121-1, 121-2 can be DRAM devices and memory devices 124-1, 124-2 can be non-volatile memory devices, for example. Memory devices 121-1, 121-2, 124-1, 124-2 can include control circuitry (e.g., hardware, firmware, and/or software) which can be used to execute commands on the memory devices 121-1, 121-2, 124-1, 124-2. The control circuitry can receive instructions from controllers 114-1, 114-2, 116-1, 116-2. The control circuitry can be configured to execute commands to read and/or write data in the memory devices 121-1, 121-2, 124-1, 124-2.

[0025] Figure 2 is a block diagram of a computing system 200 including a host 202 and a memory system comprising a dual in-line memory module (DIMM) 210 with ports in accordance with a number of embodiments of the present disclosure. In Figure 2, host 202 is coupled to DIMM 210 via data buses 212-1, ..., 212-8 and command/address bus 218. Host 202 can be coupled to
DIMM 210 via a number of channels (e.g., channels 103-1, 103-N in Figure 1A). For example, host 202 is coupled to DIMM 210 via a first channel that includes data buses 212-1, ..., 212-4 and command/address bus 218, and host 202 is coupled to DIMM 210 via a second channel that includes data buses 212-5, ..., 212-8 and command/address bus 218. Host 202 can send commands on the first channel for execution on memory devices 221-1, ..., 221-8 and memory devices 224-1, ..., 224-4 and can send commands on the second channel for execution on memory devices 221-9, ..., 221-16 and memory devices 224-5, ..., 224-8. The memory devices 224-1, ..., 224-8 can include controllers 214-1, ..., 214-8. Controllers 214-1, ..., 214-8 can receive commands directly from host 202 via command buses 218, 219. The commands from host 202 can be to read and/or write data to DIMM 210, for example. Controllers 214-1, ..., 214-8 can interpret the command from host 202 by generating instructions to read data from and/or write data to memory devices 224-1, ..., 224-8 to read, write, and transfer data on DIMM 210. The commands from host 202 can be sent to register clock driver (RCD) 217 via bus 218 and the commands can be sent from RCD 217 to controllers 214-1, ..., 214-8 via bus 219. The controllers 214-1, ..., 214-8 can receive the commands from RCD 217 and store data associated with the commands (e.g., command instructions and/or data read from and/or to be written to memory devices 224-1, ..., 224-8 during execution of the commands) in a buffer.

[0026] The memory devices 221-1, ..., 221-16 can include controllers 216-1, ..., 216-16. Host 202 can send commands to memory devices 221-1, ..., 221-8 on command buses 225-1, 219, and/or RCD 217. Host 202 can send commands to memory devices 221-9, ..., 221-16 on command buses 225-2, 219, and/or RCD 217. The instructions from controllers 216-1, ..., 216-16 can include performing read operations to read data on memory devices 221-1, ..., 221-16 and send the data to memory devices 224-1, ..., 224-8 on buses 215-1, ..., 215-8 and/or send the data to host 202 on buses 212-1, ..., 212-8. The instructions from controllers 216-1, ..., 216-16 can include performing write operations to write data to memory devices 221-1, ..., 221-16 that is received from memory devices 224-1, ..., 224-8 on buses 215-1, ..., 215-8 and/or write data to memory devices 221-1, ..., 221-16 that is received from host 202 on buses 212-1, ..., 212-
8. The instructions can be generated and/or executed in response to receiving a command from host 202.

[0027] Host 202 can send a signal to RCD 217 indicating which memory device of a pair of memory devices (e.g., memory device 221-1 or 221-2, for example) will execute the command. The signal can be sent from RCD 217 to multiplexors 226-1, ..., 226-8 and cause a multiplexor 226-1, ..., 226-8 to select a memory device from a pair of memory devices and couple the selected memory device to RCD 217 via bus 225-1 and/or 225-2. For example, if the command is transferring data via an A side port and the A side port is coupling memory device 221-1 to host 202, while the B side port is coupling memory device 221-2 to memory device 224-1, the signal can indicate to multiplexor 226-1 to couple bus 225-1 to memory device 221-1. The host controller 208 can then send the command to the controller 216-1 of memory device 221-1. The controller 216-1 can generate and execute instructions to transfer data between memory device 221-1 and host 202. Memory devices 221-1, ..., 221-16 can send signals (e.g., command completion signals) on buses 225-1 and 225-2 to RCD 217 and controller 214 that indicate memory devices 221-1, ..., 221-16 have completed execution of commands and are ready for additional commands. Once a command has been executed, controllers 216-1, ..., 216-16 can send instructions to RCD 217 for execution and/or a status signal to the host 202 indicating that the command received from host 202 has been executed. Controllers 216-1, ..., 216-16 can include non-volatile and/or volatile memory, such as SRAM memory, that can be a buffer and/or a register used during execution of commands.
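A compact way to model the pair selection just described: the host's signal steers each multiplexor so that one device of the pair faces the host (A side) while its partner faces a non-volatile device (B side). The types and function below are assumptions of this sketch, not structures from the disclosure.

```c
#include <stdbool.h>

/* A pair of DRAM devices, e.g., 221-1 and 221-2, identified by index. */
struct dev_pair { int dev_a; int dev_b; };

struct pair_sel {
    int host_facing;   /* device coupled through the A Side Port */
    int nvm_facing;    /* device coupled through the B Side Port */
};

/* select_first mirrors the host's signal sent via the RCD to the
 * multiplexor: it picks which device of the pair is coupled to the
 * command bus and thereby to the host. */
static struct pair_sel mux_select(struct dev_pair p, bool select_first)
{
    struct pair_sel s;
    s.host_facing = select_first ? p.dev_a : p.dev_b;
    s.nvm_facing  = select_first ? p.dev_b : p.dev_a;
    return s;
}
```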
[0028] Host controller 208 can send commands to memory devices 224-1, ..., 224-8 on buses 218 and 219. The controllers 214-1, ..., 214-8 can receive the commands from host controller 208. The controllers 214-1, ..., 214-8 can generate and execute instructions based on the command from the host controller 208. The instructions can include performing read operations to read data from memory devices 224-1, ..., 224-8 and send the data directly to memory devices 221-1, ..., 221-16 on buses 215-1, ..., 215-8. The instructions from controllers 214-1, ..., 214-8 can include performing write operations to write data to memory devices 224-1, ..., 224-8 received from memory devices 221-1, ..., 221-16 directly via buses 215-1, ..., 215-8. Memory devices 224-1, ..., 224-8 can include buffers to temporarily store data received from memory devices 221-1, ..., 221-16 when writing the data to memory devices 224-1, ..., 224-8.

[0029] Controllers 214-1, ..., 214-8 and controllers 216-1, ..., 216-16 can generate instructions for performing read and/or write operations on memory devices 224-1, ..., 224-8 and 221-1, ..., 221-16 with timing such that the memory devices 224-1, ..., 224-8 and 221-1, ..., 221-16 can execute a write operation without latency after completion of a read operation. For example, controller 214-1 can generate instructions for performing a read operation on memory device 224-1. Memory device 224-1 can execute the read operation and send the data associated with the read operation to memory device 221-1 on bus 215-1. Controller 216-1 can generate instructions for performing a write operation on memory device 221-1 at a time such that the memory device 221-1 can execute the write operation without latency and as memory device 221-1 is receiving the data from memory device 224-1 on bus 215-1. Memory device 221-1 can execute the write operation from controller 216-1 with timing such that memory device 221-1 can begin execution of the write operation in a clock cycle that occurs immediately following completion of the read operation and receipt of the data from memory device 224-1.
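The zero-latency handoff in paragraph [0029] can be pictured as simple cycle scheduling: the DRAM-side write is issued for the clock cycle immediately following completion of the non-volatile read. The cycle arithmetic below is an illustrative sketch under assumed latencies, not a timing model of any specific device.

```c
#include <stdint.h>

struct xfer_plan {
    uint64_t read_issue_cycle;    /* non-volatile device starts its read */
    uint64_t write_issue_cycle;   /* DRAM device starts its write */
};

/* read_latency is the number of cycles until the read data is on the
 * B side bus; the write begins on the very next cycle, leaving no idle
 * cycles between the two operations. */
static struct xfer_plan plan_back_to_back(uint64_t now, uint64_t read_latency)
{
    struct xfer_plan p;
    p.read_issue_cycle  = now;
    p.write_issue_cycle = now + read_latency + 1;
    return p;
}
```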
[0030] DIMM 210 can include a first number of memory devices 221-1, ..., 221-16. For example, memory devices 221-1, ..., 221-16 can be DRAM memory devices, among other types of volatile and/or non-volatile memory. The memory devices 221-1, ..., 221-16 can be paired together. For example, memory devices 221-1 and 221-2 are paired together, coupled to the host via port 222-1 (A Side Port) and buses 212-1 and 212-2, and coupled to memory device 224-1 via port 222-2 (B Side Port) and bus 215-1. Memory devices 221-3 and 221-4 are paired together, coupled to the host via port 222-3 (A Side Port) and bus 212-2, and coupled to memory device 224-2 via port 222-4 (B Side Port) and bus 215-2. Memory devices 221-5 and 221-6 are paired together, coupled to the host via port 222-5 (A Side Port) and bus 212-3, and coupled to memory device 224-3 via port 222-6 (B Side Port) and bus 215-3. Memory devices 221-7 and 221-8 are paired together, coupled to the host via port 222-7 (A Side Port) and bus 212-4, and coupled to memory device 224-4 via port 222-8 (B Side Port) and bus 215-4. Memory devices 221-9 and 221-10 are paired together, coupled to the host via port 222-9 (A Side Port) and bus 212-5, and coupled to memory device 224-5 via port 222-10 (B Side Port) and bus 215-5. Memory devices 221-11 and 221-12 are paired together, coupled to the host via port 222-11 (A Side Port) and bus 212-6, and coupled to memory device 224-6 via port 222-12 (B Side Port) and bus 215-6. Memory devices 221-13 and 221-14 are paired together, coupled to the host via port 222-13 (A Side Port) and bus 212-7, and coupled to memory device 224-7 via port 222-14 (B Side Port) and bus 215-7. Memory devices 221-15 and 221-16 are paired together, coupled to the host via port 222-15 (A Side Port) and bus 212-8, and coupled to memory device 224-8 via port 222-16 (B Side Port) and bus 215-8.

[0031] DIMM 210 can include a second number of memory devices 224-1, ..., 224-8. For example, memory devices 224-1, ..., 224-8 can be 3D XPoint memory devices, among other types of volatile and/or non-volatile memory.

[0032] Memory system 200 can be configured to execute commands sent from host 202 to DIMM 210 by sending command/address information from the host controller 208 on command/address buses 218 and 219 via the register clock driver (RCD) 217 and data on data buses 212-1, ..., 212-8. The commands from the host can include address information for memory devices 221-1, ..., 221-16 where the host is requesting an operation on data at a particular location in memory devices 221-1, ..., 221-16. The commands from the host can include address information for memory devices 224-1, ..., 224-8 where the host is requesting an operation on data at a particular location in memory devices 224-1, ..., 224-8, while memory devices 221-1, ..., 221-16 can act as a buffer during execution of the commands.

[0033] In a number of embodiments, memory devices 221-1, ..., 221-16 can be configured as cache. For example, memory devices can be configured as cache for the data stored in memory devices 224-1, ..., 224-8 and/or other memory devices coupled to the computing system. The DIMM 210 can be configured to have a portion of memory devices 221-1, ..., 221-16 addressable by host 202 and a portion of the memory devices 221-1, ..., 221-16 configured as cache.

[0034] DIMM 210 includes memory devices that are paired together; one of the paired memory devices can be selected for coupling to host 202 via an A Side Port and the other of the paired memory devices can be selected for coupling to another memory device via a B Side Port. For example, memory
device 221-1, which is paired with memory device 221-2, can be selected for coupling to host 202 via port 222-1, while memory device 221-2 can be selected for coupling to memory device 224-1 via port 222-2. Port 222-1 can include a multiplexor to select and couple memory device 221-1 to host 202 while isolating memory device 221-2 from host 202. Port 222-2 can include a multiplexor to select and couple memory device 221-2 to memory device 224-1 while isolating memory device 221-1 from memory device 224-1. Host 202 can send commands to DIMM 210 for execution on the selected A Side Port memory device (e.g., memory device 221-1). The commands can be executed by transferring data between host 202 and memory device 221-1 via port 222-1 on buses 212-1 and/or 212-2. DIMM 210 can also execute commands for execution on the selected B Side Port memory device (e.g., memory device 221-2). The commands can be executed by transferring data between memory device 221-2 and memory device 224-1 on bus 215-1. Commands executed using the B Side Port can transfer data between memory devices 221-1, ..., 221-16 and memory devices 224-1, ..., 224-8. Ports 222-1, ..., 222-16 can be external to memory devices 221-1, ..., 221-16, as illustrated in Figure 2, and/or internal to memory devices 221-1, ..., 221-16.

[0035] In a number of embodiments, commands that transfer data via the A Side Ports can be executed while commands that transfer data via the B Side Ports are executed. The data that is stored in paired memory devices can be arbitrated and reconciled by the controller. Memory devices that have executed commands where data was transferred to and/or from one of the memory devices on the A Side Port and to and/or from the other paired memory device on the B Side Port can have the data on the pair of memory devices reconciled by transferring data between the pair of memory devices and/or between the pair of memory devices and memory devices 224-1, ..., 224-8. For example, after A Side Port and B Side Port transfers have occurred on a pair of memory devices and DIMM 210 is idle, controllers 214-1, ..., 214-8 and/or controllers 216-1, ..., 216-16 can send instructions to reconcile the data stored on the pair of memory devices so that the same data is stored on each of the memory devices by transferring data between the pair of memory devices and/or between the pair of memory devices and memory devices 224-1, ..., 224-8.
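A minimal sketch of the idle-time reconciliation described in paragraph [0035] follows: once A Side and B Side transfers have both touched a pair and the DIMM is idle, the controller copies whichever half is newer so the pair holds the same data again. The structure, the newness flag, and the fixed buffer size are all assumptions of this sketch rather than details given in the disclosure.

```c
#include <stdbool.h>
#include <string.h>

/* Shadow copies held by a pair of DRAM devices (e.g., 221-1 / 221-2). */
struct paired_copy {
    unsigned char data_a[64];
    unsigned char data_b[64];
    bool a_is_newer;   /* which side received the most recent transfer */
};

/* Invoked by the controller when the DIMM goes idle, so that the same
 * data ends up stored on each device of the pair. */
static void reconcile_pair(struct paired_copy *p)
{
    if (p->a_is_newer)
        memcpy(p->data_b, p->data_a, sizeof p->data_b);
    else
        memcpy(p->data_a, p->data_b, sizeof p->data_a);
}
```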
[0036] In a number of embodiments, commands can be received from host 202 and instructions can be generated by controllers 214-1,... , 214-8, based on the commands from the host, to transfer data between memory devices 224-1,... , 224-8. Data can be transferred between memory devices 224-1, ... , 224-8 via controllers 214-1,... , 214-8 using buffers and/or registers on or coupled to the controllers 214-1,... , 214-8.[0037] In a number of embodiments, memory devices 221-1,... , 221-16 can be a first number of memory devices and memory devices 224-1,... , 224-8 can be a second number of memory devices. As described above, the first number of memory devices can be coupled to host 202 via a first number of side ports 222-1, 222-3, 222-5, 222-7, 222-9, 222-11, 222-13, and 222-15 (e.g., A side ports). The second number of memory devices 224-1,... , 224-8 can be coupled to the first number of memory devices 221-1,... , 221-16 via a second number of ports 222-2, 222-4, 222-6, 222-8, 222-10, 222-12, 222-14, and 222-16 (e.g., B side ports).[0038] The second number of memory devices 224-1,... , 224-8 can each include a controller 214-1,... , 214-8. The first number of memory devices 221- 1 , ... , 221 - 16 and the second number of memory devices 224- 1 , ... , 224-8 can receive a command from the host 202. In response to receiving the command from the host, the first number of memory devices 221-1,... , 221-16 can perform a first operation and the second number of memory devices 224-1,... , 224-8 can perform a second operation.[0039] In some examples, the first operation can include reading data from the first number of memory devices 221-1,... , 221-16 and/or sending data to the second number of memory devices 224-1,... , 224-8. The second operation can include writing data to the second number of memory devices 224- 1,... , 224-8, for example.[0040] The command can be sent to the first number of memory devices221-1,... , 221-16 and/or the second number of memory devices 224-1,... , 224-8 via the RCD 217 by the host controller 208 and/or directly from the host controller 208, as described above. The host controller 208 can be configured to send the command to the first number of memory devices 221-1,... , 221-16 prior to sending the command to the second number of memory devices 224-1,... , 224-8. In some examples, the controllers 216-1,... , 216-16 of the first number
of memory devices 221-1,... , 221-16 and the controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8 can receive the command from the host controller 208 at the same time, but can execute the command at different times.[0041] In a number of embodiments, the controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8 can be configured to receive the command from the host 202 and generate instructions for the second number of memory devices 224-1,... , 224-8 based on the command from the host 202. For example, the controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8 can receive a command from the host 202 to transfer data from the first number of memory devices 221-1, 221-16 to the second number of memory devices 224-1,... , 224-8. Based on the command from the host 202, the controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8 can generate instructions for the second number of memory devices 224-1,... , 224-8 to write the data received from the first number of memory devices 221-1,... , 221-16 to the second number of memory devices 224-1,... , 224-8.[0042] In some examples, the first number of memory devices 221-1,... ,221-16 can each include a controller 216-1,... , 216-16. The controllers 216- 1,... , 216-16 of the first number of memory devices 221-1,... , 221-16 can be configured to receive the command from the host 202 and generate instructions for the first number of memory devices 221-1,... , 221-16 based on the command from the host 202. For example, the controllers 216-1,... , 216-16 of the first number of memory devices 221-1,... , 221-16 can receive the same command as the controllers 214-1,... , 214-8 of the second number of memory devices 224- 1,... , 224-8 to transfer data from the first number of memory devices 221-1,... , 221-16 to the second number of memory devices 224-1,... , 224-8; however, the controllers 216-1,... , 216-16 of the first number of memory devices 221-1,... , 221-16 can generate a instructions based on the same command from the host 202 than the controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8. For example, the controllers 216-1,... , 216-16 of the first number of memory devices 221-1,... , 221-16 can generate instructions for the first number of memory devices 221-1,... , 221-16 to perform a read operation. The read operation can include reading the data from the first number
of memory devices 221-1,... , 221-16 and sending the data from the first number of memory devices 221-1,... , 221-16 to the second number of memory devices 224-1,... , 224-8.[0043] In a number of embodiments, controllers 214-1,... , 214-8 of the second number of memory devices 224-1,... , 224-8 can receive a second command from the host 202 to transfer the data from the second number of memory devices 224-1,... , 224-8 to the host 202. The controllers 214-1,... , 214- 8 of the second number of memory devices 224-1,... , 224-8 can generate instructions for the second number of memory devices 224-1,... , 224-8 in response to receiving the second command from the host 202. The instructions can include performing a read operation to read the data from the second number of memory devices 224-1,... , 224-8 and send the data to the host 202, for example.[0044] Figure 3 is a flow diagram illustrating an example memory process including a command bus in memory in accordance with a number of embodiments of the present disclosure.[0045] At block 352, the method 350 can include receiving a first command from a host at a first memory device on a dual in-line memory module (DIMM), the first memory device comprising a first memory medium.[0046] At block 354, the method 350 can include receiving the first command from the host at a second memory device on the DIMM, the second memory device comprising a second memory medium that is different from the first memory medium.[0047] At block 356, the method 350 can include performing a first operation by the first memory device in response to receiving the first command at the first memory device.[0048] At block 358, the method 350 can include performing a second operation by the second memory device in response to receiving the first command at the second memory device.[0049] Figure 4 is a flow diagram illustrating an example memory process including a command bus in memory in accordance with a number of embodiments of the present disclosure.
[0050] At block 462, the method 460 can include receiving a command at a first memory device from a host, the first memory device comprising a first type of memory media.[0051] At block 464, the method 460 can include reading data from the first memory device based at least in part on receiving the command, the second memory device comprising a second type of memory media.[0052] At block 466, the method 460 can include receiving the command at the second memory device.[0053] At block 468, the method 460 can include writing the data ,read from the first memory device, to the second memory device based at least in part on receiving the command.[0054] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0055] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at
various positions, including being distributed such that portions of functions are implemented at different physical locations.[0056] Also, as used herein, including in the claims,“or” as used in a list of items (for example, a list of items prefaced by a phrase such as“at least one of’ or“one or more of’) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). For the avoidance of doubt, a list of at least one of A, B, or C, or any combination thereof is likewise an inclusive list. Also, as used herein, the phrase“based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as“based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase“based on” shall be construed in the same manner as the phrase“based at least in part on.”[0057] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclose66d embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
An apparatus and system may include a peripheral device, such as an interrupt controller or Peripheral Component Interconnect (PCI) bridge device, having a memory-mapped legacy register and a PCI dummy register. The legacy register may be accessed by a Basic Input/Output System (BIOS) as part of a power-on initialization sequence for the peripheral device, and the dummy register may be accessed during a hot-plug operation using code executed by an Operating System (OS). An article, including a machine-accessible medium, may contain data capable of causing a machine to carry out a method of representing a peripheral device which includes identifying the peripheral device as a legacy device in a name space, such as an Advanced Configuration and Power Interface (ACPI) name space, and identifying the peripheral device as a dummy PCI device capable of being accessed during a hot-plug operation. |
What is claimed is: 1. An apparatus, comprising: a first register associated with a device, the first register to be accessed by start-up code as part of an initialization operation that treats the device as a legacy device; and a second register associated with the device, the second register to be accessed during a hot-plug operation that treats the device as a peripheral component interconnect (PCI) device using code executed by an operating system. 2. The apparatus of claim 1, wherein the start-up code comprises a basic input/output system. 3. The apparatus of claim 1, wherein the device is included in a hot-pluggable input/output node. 4. The apparatus of claim 1, wherein the device is included in a hot-pluggable PCI bridge device. 5. The apparatus of claim 1, wherein the device is a device in a PCI bus hierarchy. 6. The apparatus of claim 5, wherein the device comprises an interrupt controller. 7. A system, comprising: a device including a first register to be accessed by start-up code as part of an initialization operation that treats the device as a legacy device, and a second register to be accessed using code executed by an operating system that treats the device as a peripheral component interconnect (PCI) device during a hot-plug operation; and an input/output hub capable of being communicatively coupled to the device. 8. The system of claim 7, further comprising: a hot-pluggable device capable of being communicatively coupled to the device using a PCI bus. <Desc/Clms Page number 12> 9. The system of claim 7, wherein the first register is located at a base address for the device. 10. The system of claim 7, further comprising: a node controller capable of being communicatively coupled to the input/output hub. 11. The system of claim 10, wherein the node controller is capable of hot-plug operation. 12. A method, comprising: identifying a device as a legacy device in a name space; and identifying the device as a peripheral component interconnect (PCI) device capable of being accessed during a hot-plug operation. 13. The method of claim 12, wherein identifying a device as a legacy device in a name space comprises: associating the legacy device with a device identifier ; and identifying an address space associated with the legacy device. 14. The method of claim 12, wherein identifying a device as a legacy device in a name space comprises: associating the legacy device with a device identifier; and identifying resources required by the legacy device. 15. The method of claim 12, wherein the device is operatively coupled to a platform prior to a time when power is applied to the platform, further comprising: applying power to the platform and the device; and initializing the device as the legacy device. 16. The method of claim 12, further comprising: hot-adding the device to a platform; and initializing the device as the PCI device. <Desc/Clms Page number 13> 17. An article comprising a machine-accessible medium having associated data, wherein the data, when accessed, results in a machine performing: identifying a device as a legacy device in a name space; and identifying the device as a peripheral component interconnect (PCI) device capable of being accessed during a hot-plug operation. 18. The article of claim 17, wherein the machine-accessible medium further includes data, which when accessed by the machine, results in the machine performing: accessing the device as a legacy device using a basic input/output system during an initialization sequence for a platform. 19. 
The article of claim 18, wherein the machine-accessible medium further includes data, which when accessed by the machine, results in the machine performing: hot adding the device included in an input/output node to the platform; and initializing the device as the PCI device using code executed by an operating system. 20. The article of claim 17, wherein the machine-accessible medium further includes data, which when accessed by the machine, results in the machine performing: initializing the device using operating system executable code derived from a configuration and power interface language. 21. The article of claim 17, wherein identifying the device as a PCI device capable of being accessed during a hot-plug operation comprises: creating an operational region for accessing the PCI device during the hot-plug operation. |
<Desc/Clms Page number 1> DEVICE REPRESENTATION APPARATUS AND METHODS Background Information As computers come to play an ever more prominent part in our daily lives, Reliability, Availability, and Serviceability (RAS) have become important factors to consider with respect to system performance. For this reason, support for hot-plug operations (i. e. , wherein some part of an actively operating computer system platform can be removed and replaced with little or no degradation in overall operating performance) is being added to selected (typically high-end) computer systems. Brief Description of the Drawings FIG. 1 is a pseudo-coded method of representing a peripheral device according to an embodiment of the invention; FIG. 2 is a block diagram of an apparatus, a system, and an article according to various embodiments of the invention; and FIG. 3 is a flow diagram of a method of representing a peripheral device according to an embodiment of the invention. Detailed Description of Embodiments of the Invention In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration, and not of limitation, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to understand and implement them. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments of the invention is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. <Desc/Clms Page number 2> After power is first applied to a computing platform (i. e., after power-on), some form of initialization operation or sequence typically occurs. During this time, most of the platform components, along with their associated devices, are addressed and initialized by the platform start-up code, such as Basic I/O System (BIOS) software. When a hot-plug"hot-add"operation occurs, any component added to the operating platform also typically requires some attention with regard to initialization. However, since the platform BIOS is not in control of the platform during the hot-plug operation, device-specific code, provided by the BIOS, is typically executed by the OS to effect hot-plug initialization. For example, such an operational mechanism may be implemented using the Advanced Configuration and Power Interface (ACPI) Source Language (ASL), as defined in the ACPI Specification. Further information regarding ACPI and ASL may be obtained by referring to the ACPI Specification, Revision 2. 0a, March 31,2002. A particular example of hot-plug capability involves the use of an IntelW 82870 based server, one or more hot-pluggable Scalable Node Controllers (SNCs) and one or more I/O Hubs, such as Server I/O Hubs (SIOHs). When the components are involved in a hot-plug operation with respect to the server, all individual devices associated with the components are also involved in the operation. Thus, when a single SIOH is hot-replaced (e. g. 
, a first SIOH is hot-removed, and then a second SIOH is hot-added), it typically means that two Intel P64H2 devices (i. e. , PCI bridges) and one ICH2 (i. e. , an I/O Controller Hub) are also hot-replaced. Each P64H2 device may include two Intel@ 82093AA I/O Advanced Programmable Interrupt Controllers (IOAPICs), which are typically exposed to the OS by the BIOS as a legacy device. As such, each IOAPIC is identified in the ACPI name space as a Microsoft Windows@ compatible device with a Plug and Play" identifier of "PNP0003", and not as a PCI device in the PCI bus hierarchy with an identifier of "interrupt controller". For more information regarding the use of Plug and Play" identifiers, refer to the ACPI Specification, Table 5-42. Legacy devices initialized by the BIOS typically require ASL initialization during a hot-plug operation. Unfortunately, ASL-based hot-plug initialization can only be performed within the PCI configuration space, and this will not occur unless the device is represented as a PCI device. Currently available OSs are not able to view and support IOAPIC devices as PCI devices. In fact, currently available OSs ignore a PCI device <Desc/Clms Page number 3> having an"interrupt controller"identifier. The inability of the OS to treat the IOAPIC as both a legacy device and a PCI device prevents the use of hot-plug operations with components that include one or more IOAPICs, such as the SIOH. Herein is described a new mechanism for identifying and representing a peripheral device, such as an interrupt controller, so that operational software is able to treat the peripheral device as a legacy device during power-on initialization, and as a PCI device for initialization operations immediately following a hot-plug operation. In one embodiment, this may be accomplished by identifying the peripheral device as both a legacy device and as a dummy PCI device. FIG. 1 is a pseudo-coded method of identifying a peripheral device according to an embodiment of the invention. In this example, assume that the peripheral device is an interrupt controller, similar to or identical to an IOAPIC (e. g. , one of two IOAPICs forming part of a P64H2 device) which comprises part of a hot-pluggable I/O node that has an IOH, two P64H2 devices, and one ICH2 device. For reference purposes, the hot pluggable VO node may be similar to or identical to the VO node (i. e. , element 280 shown in FIG. 2) described hereinafter. Reference may also be made to the ACPI Specification, Version 2. 0a, March 31,2002 with regard to implementation details for some of the methods and objects described in FIG. 1. The pseudo code of FIG. 1, which sets forth one example of a method 110 implementing an embodiment of the invention, includes an initialization portion 118. In line 120, the IOH that forms part of the hot-pluggable I/O node is associated with a module device, i. e. , a container object that acts as a bus node in a namespace. Thus, a device named"IOH1"is created and, via the HID object, the created device is associated with the Plug and Play identifier"ACPI0004". Then, via the UID object, the node identification is associated with the node's unique, persistent identification"NID IOH1" in line 122. The STA method is then evaluated to ensure the IOH is connected in line 124. 
The method 110 also has a legacy identification portion 130 wherein the IOAPIC device is identified as a legacy device, and a PCI identification portion 132 wherein the IOAPIC device is identified as a PCI device for access during hot-plug operations. In lines 134 and 138 a device"IA09"is created in the ACPI name space and associated with a Plug and Play identifier of"PNP0003" (which tells the OS that that this device is an interrupt controller). In line 142 the status of the device is checked, and then in line 146 <Desc/Clms Page number 4> the CRS method is used to identify to the OS which resources (I/O, memory mapped address space, etc. ) the device IA09 will be using. In line 150 the MAT method is used to identify to the OS which base address will be used to operate the device, as well as to provide information about where in the platform (system) the interrupt controller (or other device) base vector is located. This is accomplished when the MAT is evaluated to a buffer returning data in the format of a series of Multiple APIC Description Table (MADT) APIC Structure entries. The OS may need the latter information when there are multiple IOAPICs in the system. Thus, at the end of the legacy identification portion 130 of the pseudo code, the device IA09 exposes an IOAPIC, along with all the information needed to program and use the IOAPIC, to a legacy OS (one that does not address IOAPICs as PCI devices). In the PCI identification portion 132 of the pseudo code, line 154, the start-up code (e. g. , a BIOS) has created a device"IP09"in the ACPI name space. The IP09 device is a dummy PCI device used in the hot add process to program the IOAPIC for legacy operation. In line 158, the ADR method provides information necessary for programming the device via the PCI programming mechanism. More specifically, the device number and function number of the ACPI component are provided so that the OS can use them for initializing/programming the device during hot-add operations. Thus, the ASL method executed during the hot-add operation (for programming the device as a legacy IOAPIC) is then able to access the device for initialization and programming via the PCI configuration space. Since the ASL method is provided by the start-up code and interpreted/executed by the OS, the elements of the device which should be programmed, and the mechanisms for programming them, should be identified to the OS. The operation region, specified in line 162, provides this information. Thus, in this case, the OS receives information associating the IP09 with a specified region (e. g. , a base address in the configuration space of 0x40, and a length of 0x41), and the IP09 device is identified as being of type"PCICONFIG". The ASL method that executes during a hot-add operation will now be able to refer to the specified operation region. For example, if a field named"RegA"is defined in the operation region (this would be done after the operation region definition for the device IP09 has been defined), and if this field needs to be set to a value of"1"during the hot-add operation in order to have the IOAPIC programmed to operate in legacy mode, then the <Desc/Clms Page number 5> ASL method that executes at hot-add time might use the following instruction expressed as an ASL method: store (One, ~SB. IOH1. IP09. REGA) Using the pseudo code of FIG. 1, the OS may interpret this statement to mean that the IP09 is of the type PCICONFIG. Using the device information provided in the ADR method (i. e. 
, device Oxle, function 0, and the offset for REGA from the base address in the IP09 configuration space), the correct register in the PCI configuration space of the IOAPIC can be programmed. While a particular mixture of pseudo code and actual code have been used to illustrate the operation of the embodiment of the invention shown in FIG. 1, it is emphasized that other pseudo code and actual code implementations of the method illustrated in FIG. 1 may also be used, and they are included within the scope of various embodiments of the invention. FIG. 2 is a block diagram of an apparatus, a system, and an article according to various embodiments of the invention. Interconnected switches 276 may be coupled to one or more I/O nodes 280, as well as Scalable Node Controllers (SNCs) 282, coupled in turn to memories 283 having data 284, as well as one or more processors 285. The I/O nodes 280 and the SNCs 282 may be hot-pluggable components. The I/O node 280 may include an I/O Hub (IOH) 287, such as a Server I/O Hub (SIOH) 287 coupled to and/or including one or more hot-pluggable devices 288, including PCI bridge devices 288, similar to or identical to P64H2 devices, which in turn may include one or more interrupt controllers 290 (e. g. , similar to or identical to an IOAPIC), each associated with or having a legacy register 291 and a PCI dummy register 292. The SIOH 287 may also be coupled to, and/or include a PCI device 293, perhaps by way of a PCI bus 294, as well as an ICH2 device 295. In one embodiment, an apparatus 296 may include a memory-mapped legacy register 291, and a PCI dummy register 292, such as those included in the peripheral device 290. The legacy register 291, which may be located at the base address of an IOAPIC, for example, may be accessed by start-up code (e. g. , a BIOS) as part of a power- on initialization operation or sequence for the peripheral device 290. The PCI dummy register 292 may be accessed during a hot-plug operation in association with a device in <Desc/Clms Page number 6> the PCI bus hierarchy, using code executable by an OS, such as code derived from the ASL. In another embodiment, a system 297 may include an apparatus 296 having a peripheral device 290 (e. g. , a device associated with or including a memory-mapped legacy register 291 and a PCI dummy register 292) and an IOH 287 capable of being communicatively coupled to the peripheral device 288. A hot-pluggable PCI device 293 may be communicatively coupled to the system 297, perhaps using the PCI bus 294. As noted above, the peripheral device 290 may be similar to or identical to an IOAPIC, or even a PCI bridge device 288, such as a P64H2 device. The system may also include one or more SNCs 282 capable of being communicatively coupled to the IOH 287, perhaps using the switch 276. In addition, the SNCs 282 may be capable of hot-plug operation It should be noted that the switches 276, the memories 278, the nodes 280, the SNCs 282, the IOHs 287, the devices 288, the devices 290, registers 291,292, the hot- pluggable devices 293, the ICH2 devices 295; the apparatus 296, and the systems 297 may all be characterized as"modules"herein. Such modules may include hardware circuitry, such as a microprocessor and/or memory circuits, software program modules, and/or firmware, and combinations thereof, as directed by the architect of the apparatus 296 and system 297, and appropriate for particular implementations of various embodiments of the invention. 
The apparatus and systems of various embodiments of the present invention can be used in applications other than those involving interconnected servers and hot-pluggable I/O nodes, and thus, the invention is not to be so limited. The illustrations of an apparatus 296 and a system 297 are intended to provide a general understanding of the structure of various embodiments of the present invention, and are not intended to serve as a complete description of all the elements and features of apparatus and systems which might make use of the structures described herein. Applications which may include the novel apparatus and systems of various embodiments of the present invention include electronic circuitry used in high-speed computers, communications and signal processing circuitry, processor modules, embedded processors, and application-specific modules, including multilayer, multi-chip modules. Such apparatus and systems may further be included as sub-components within a variety <Desc/Clms Page number 7> of electronic systems, such as televisions, video cameras, cellular telephones, personal computers, radios, vehicles, medical monitoring equipment, and others. FIG. 3 is a flow diagram of a method of representing a peripheral device according to an embodiment of the invention. Generalizing from the pseudo code example shown in FIG. 1, the method 311 may begin with applying power to a computing platform, such as an I/O node, and a peripheral device, such as an interrupt controller (e. g. , an IOAPIC) at block 321. The method may continue with identifying the peripheral device as a legacy device in a name space, such as an ACPI name space, at block 325. The method may include identifying the peripheral device as a peripheral component interconnect (PCI) device capable of being accessed during a hot-plug operation at block 331, which may in turn include creating an operational region for accessing the peripheral device as a PCI device during a hot-plug operation. Identifying the peripheral device as a legacy device at block 325 may include associating the legacy device with a device identifier, such as a Plug and PlayTM identifier, at block 335 (e. g. associating the identifier using the HID object of the ACPI Specification), identifying resources required by the legacy device at block 341 (e. g., using the CRS object of the ACPI Specification), and identifying an address space associated with the legacy device at block 345 (e. g. , using the MAT object of the ACPI specification). Depending on the OS in use, for example, considering an OS which ignores PCI device descriptions, the peripheral device may be initialized as a legacy device at block 351. Alternatively, if the OS is compatible with PCI devices in general, the peripheral device may be initialized as a PCI device at block 355. If the device is hot-added to the platform at block 361, the device may again be initialized as a PCI device at block 355. Steps 361 and 355 may be repeated indefinitely. It should be noted that while ACPI and ASL compatible program instructions have been used in some examples of representing peripheral devices herein, other mechanisms may also be used according to various embodiments of the invention, and therefore, the invention is not to be so limited. Therefore, it should be clear that some embodiments of the present invention may also be described in the context of computer-executable instructions, such as program modules, being executed by a computer. 
Generally, <Desc/Clms Page number 8> program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Thus, referring back to FIG. 2, an article 298 according to an embodiment of the invention can be seen. One of ordinary skill in the art will understand, upon reading and comprehending this disclosure, the manner in which a software program can be launched from a computer-readable medium in a computer based system to execute the functions defined in such a software program. One of ordinary skill in the art will further understand the various programming languages which may be employed to create a software program designed to implement and perform the methods of the present invention. Such programs can be structured in an object-orientated format using an object-oriented language such as Java, Smalltalk, or C++. Alternatively, the programs can be structured in a procedure-orientated format using a procedural language, such as COBOL or C. The software components may communicate using any of a number of mechanisms that are well-known to those skilled in the art, such as Application Program Interfaces (APIs) or interprocess communication techniques. However, as will be appreciated by one of ordinary skill in the art upon reading this disclosure, the teachings of various embodiments of the present invention are not limited to any particular programming language or environment. As is evident from the preceding description, a processor 285 typically accesses at least some form of computer-readable media, such as the memory 283. However, computer-readable and/or accessible media may be any available media that can be accessed by the processor 285, the apparatus 296, and/or the system 297. By way of example and not limitation, computer-readable media may comprise computer storage media and communications media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented using any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Communication media specifically embodies computer-readable instructions, data structures, program modules or other data present in a modulated data signal such as a carrier wave, coded information signal, and/or other transport mechanism, which includes any information delivery media. The term "modulated data signal"means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example and not limitation, communications media also includes wired media such as a wired network <Desc/Clms Page number 9> or direct-wired connections, and wireless media such as acoustic, optical, radio frequency, infrared and other wireless media. Combinations of any of the above are also included within the scope of computer-readable and/or accessible media. Thus, referring to FIG. 2, it is now easily understood that another embodiment of the invention may include an article 298 comprising a machine-accessible medium 283 having associated data 284, wherein the data 284, when accessed, results in the machine 285 performing activities such as identifying a peripheral device as a legacy device in a name space, and identifying the peripheral device as a PCI device capable of being accessed during a hot-plug operation. 
Other activities may include accessing the peripheral device as a legacy device using start-up code (e. g. , a BIOS) during an initialization operation or sequence for an associated platform, or, after hot-adding the peripheral device included in an 1/0 node to the platform, for example, initializing the peripheral device as the PCI device using a code (e. g., ASL-derived code) executable by an OS. Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of the invention. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments of the invention includes any other applications in which the above structures and methods are used. The scope of embodiments of the invention should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C. F. R. 1. 72 (b) requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In the foregoing Detailed Description of Embodiments of the Invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as <Desc/Clms Page number 10> the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description of Embodiments of the Invention, with each claim standing on its own as a separate preferred embodiment. |
Semiconductor devices comprising interconnect with improved adhesion of barrier layers to dielectric layers are formed by laser thermal annealing exposed surfaces of a dielectric layer in an atmosphere of NH3 and N2, and subsequently depositing Ta to form a composite barrier layer. Embodiments include forming a dual damascene opening in an interlayer dielectric comprising F-containing silicon oxide, such as F-containing silicon oxide derived from F-TEOS, laser thermal annealing the exposed silicon oxide surface in NH3 and N2, depositing Ta and then filling the opening with Cu. Laser thermal annealing in NH3 and N2 depletes the exposed silicon oxide surface of F while forming an N2-rich surface region. Deposited Ta reacts with the N2 in the N2-rich surface region to form a composite barrier layer comprising a graded layer of tantalum nitride and a layer of alpha-Ta thereon. |
What is claimed is: 1. A method of manufacturing a semiconductor device, the method comprising:forming an opening in a dielectric layer laser thermal annealing exposed surfaces of the dielectric layer in ammonia (NH3) and nitrogen (N2); and forming a composite barrier layer comprising tantalum (Ta) lining the opening. 2. The method according to claim 1, wherein the dielectric layer comprises fluorine (F) containing silicon oxide derived from F-doped tetraethyl orthosilicate (F-TEOS).3. The method according to claim 2, comprising laser thermal annealing the exposed surfaces to form a surface region depleted in F and enriched in N2.4. The method according to claim 3, comprising forming the composite barrier layer by depositing Ta, the composite barrier layer comprising:a graded layer of tantalum nitride on the N2-enriched surface region, the graded tantalum nitride layer containing N2 in an amount decreasing in a direction away from the N2-enriched surface region; and a layer of [alpha]-Ta on the graded tantalum nitride layer. 5. The method according to claim 4, further comprising filling the opening with copper (Cu) or a Cu alloy.6. The method according to claim 5, wherein the opening comprises a dual damascene opening having a lower via hole in communication with an upper trench, the method comprising filling the opening with Cu or Cu alloy to form a lower via in communication with an upper line.7. The method according to claim 6, comprising laser thermal annealing by impinging a laser light beam on the exposed surfaces at a radiant fluence of about 0.09 to about 0.11 joules/cm<2>.8. The method according to claim 7, comprising laser thermal annealing to elevate the temperature of about 370[deg.] C. to about 430[deg.] C.9. The method according to claim 2, comprising laser thermal annealing employing an N2 flow rate of about 200 to about 2,000 sccm and an NH3 flow rate of about 200 to about 2,000 sccm to form a region on the exposed surfaces depleted in F and enriched in N2.10. The method according to claim 9, comprising forming the composite barrier layer by depositing Ta, the composite barrier layer comprising:a graded layer of tantalum nitride on the N2-enriched surface region, the graded tantalum nitride layer containing nitrogen in an amount decreasing in a direction away from the N2-enriched surface region; and a layer of [alpha]-Ta on the graded tantalum nitride layer. 11. The method according to claim 1, comprising laser thermal annealing the exposed surfaces of the dielectric layer to form a surface region enriched in N2.12. The method according to claim 11, comprising forming the composite barrier layer by depositing Ta, the composite barrier layer comprising:a graded layer of tantalum nitride on the N2-enriched surface region the graded tantalum nitride layer containing N2 in an amount decreasing in a direction away from the N2-enriched surface region; and a layer of [alpha]-Ta on the graded tantalum nitride layer. 13. The method according to claim 11, further comprising filling the opening with copper (Cu) or a Cu alloy. |
TECHNICAL FIELDThe present invention relates to copper (Cu) and/or Cu alloy metallization in semiconductor devices, and to a method for manufacturing semiconductor devices with reliable, low resistance Cu or Cu alloy interconnects. The present invention is particularly applicable to manufacturing high speed integrated circuits having submicron design features and high conductivity interconnect structures.BACKGROUND ARTThe escalating demand for high density and performance impose severe requirements on semiconductor fabrication technology, particularly interconnection technology in terms of providing reliable low R*C (resistance*capacitance) interconnect patterns with higher electromigration resistance, wherein submicron vias, contacts and trenches have high aspect ratios. Conventional semiconductor devices comprise a semiconductor substrate, typically doped monocrystalline silicon, and a plurality of sequentially formed interlayer dielectrics and conductive patterns. An integrated circuit is formed containing a plurality of conductive patterns comprising conductive lines separated by interwiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines and logic interconnect lines. Typically, the conductive patterns on different layers, i.e., upper and lower layers, are electrically connected by a conductive plug filling a via hole, while a conductive plug filling a contact hole establishes electrical contact with an active region on a semiconductor substrate, such as a source/drain region. Conductive lines are formed in trenches which typically extend substantially horizontal with respect to the semiconductor substrate. Semiconductor "chips" comprising five or more levels of metallization are becoming more prevalent as device geometry's shrink to submicron levels.A conductive plug filling a via hole is typically formed by depositing an interlayer dielectric on a conductive layer comprising at least one conductive pattern, forming an opening through the interlayer dielectric by conventional photolithographic and etching techniques, and filling the opening with a conductive material, such as tungsten (W). Excess conductive material on the surface of the interlayer dielectric is typically removed by chemical mechanical polishing (CMP). One such method is known as damascene and basically involves forming an opening in the interlayer dielectric and filling the opening with a metal. Dual damascene techniques involve forming an opening comprising a lower contact or via hole section in communication with an upper trench section, which opening is filled with a conductive material, typically a metal, to simultaneously form a conductive plug in electrical contact with a conductive line.High performance microprocessor applications require rapid speed of semiconductor circuitry. The control speed of semiconductor circuitry varies inversely with the resistance and capacitance of the interconnection pattern. As integrated circuits become more complex and feature sizes and spacings become smaller, the integrated circuit speed becomes less dependent upon the transistor itself and more dependent upon the interconnection pattern. Miniaturization demands long interconnects having small contacts and small cross-sections. As the length of metal interconnects increases and cross-sectional areas and distances between interconnects decrease, the R*C delay caused by the interconnect wiring increases. 
If the interconnection node is routed over a considerable distance, e.g., hundreds of microns or more as in submicron technologies, the interconnection capacitance limits the circuit node capacitance loading and, hence, the circuit speed. As design rules are reduced to about 0.12 micron and below, the rejection rate due to integrated circuit speed delays significantly reduces production throughput and increases manufacturing costs. Moreover, as line widths decrease electrical conductivity and electromigration resistance become increasingly important.Cu and Cu alloys have received considerable attention as a candidate for replacing Al in interconnect metallizations. Cu is relatively inexpensive, easy to process, and has a lower resistively than Al. In addition, Cu has improved electrical properties vis-à-vis W, making Cu a desirable metal for use as a conductive plug as well as conductive wiring.An approach to forming Cu plugs and wiring comprises the use of damascene structures employing CMP. However, due to Cu diffusion through interdielectric layer materials, such as silicon dioxide, Cu interconnect structures must be encapsulated by a diffusion barrier layer. Typical diffusion barrier metals include tantalum (Ta), tantalum nitride (TaN), titanium nitride (TiN), titanium (Ti), titanium-tungsten (TiW), tungsten (W), tungsten nitride (WN), Ti-TiN, titanium silicon nitride (TiSiN), tungsten silicon nitride (WSiN), tantalum silicon nitride (TaSiN) and silicon nitride for encapsulating Cu. The use of such barrier materials to encapsulate Cu is not limited to the interface between Cu and the dielectric interlayer, but includes interfaces with other metals as well.In implementing Cu metallization, particularly in damascene techniques wherein an opening is formed in a dielectric layer, particularly a dielectric layer having a low dielectric constant, e.g., a dielectric constant less than about 3.9, various reliability, electromigration and resistance issues are generated. Reliability issues stem, in part, from the use of Ta or TaN, the barrier layers of choice in Cu metallization. Ta has been found to lack adequate adhesion to various interlayer dielectric materials, particularly, interlayer dielectric materials having a low dielectric constant, such as, a dielectric constant (k) less than about 3.9 such as, fluorine (F)-containing oxides, e.g., F-containing silicon oxide derived from F-doped orthosilicate (F-TEOS). Lack of sufficient barrier layer adhesion to dielectric layers results in delamination with attendant reliability issues. TaN has been found to lack adequate adhesion to Cu and Cu alloys filling a damascene opening. Moreover, Ta and TaN are typically deposited by physical vapor deposition (PVD) techniques, such as ionized (I) PVD. The resulting layer of Ta is typically [beta]-phase Ta ([beta]-Ta) which exhibits a relatively high resistivity, e.g., about 200 to about 250 [mu]ohm-cm. TaN is typically deposited with a nitrogen (N2) content of about 30 to about 55 at. %, and exhibits a resistivity in excess of 200 [mu]ohm-cm.The barrier layer adhesion problems adversely impact electromigration resistance and device reliability, while the high resistivity of TaN and [beta]-Ta manifestly adversely impact circuit speed. 
Accordingly, there exists a need for reliable, low resistance interconnects, particularly Cu and Cu alloy interconnects formed in low dielectric constant materials, and for enabling methodology.DISCLOSURE OF THE INVENTIONAn advantage of the present invention is a semiconductor device having reliable, low resistance interconnects, such as Cu or Cu alloy interconnects, exhibiting improved electromigration resistance.Another advantage of the present invention is a method of manufacturing a semiconductor device having reliable, low resistance interconnects, such as Cu or Cu alloy interconnects, with improved electromigration resistance.Additional advantages and other features of the present invention will be set forth in the description which follows and, in part, will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the present invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims.According to the present invention, the foregoing and other advantages are achieved in part by a method of manufacturing a semiconductor device, the method comprising: forming an opening in a dielectric layer; laser thermal annealing exposed surfaces of the dielectric layer in ammonia (NH3) and nitrogen (N2); and forming a composite barrier layer comprising tantalum (Ta) lining the opening.Another advantage of the present invention is a semiconductor device comprising: an opening in a dielectric layer; and a composite barrier layer formed on a surface of the dielectric layer lining the opening; wherein: the surface of the dielectric layer comprises a nitrogen (N2)-enriched surface region; and the composite barrier layer comprises: an initial graded layer of tantalum nitride, containing N2 in an amount decreasing in a direction away from the N2-enriched surface region; and a layer of [alpha]-Ta on the graded tantalum nitride layer.Embodiments include forming a dual damascene opening in a dielectric layer having a low dielectric constant (k) less than about 3.9, such as F-containing silicon oxide derived from F-TEOS, and impinging a pulsed laser light beam on exposed surfaces of the F-containing silicon oxide layer employing an NH3 flow rate of about 200 to about 2,000 sccm and a N2 flow rate of about 200 to about 2,000 sccm, for a brief period of time, e.g., about 10 to about 100 nanoseconds, thereby elevating the temperature of the exposed surfaces to about 370[deg.] C. to about 430[deg.] C., such that the laser thermal annealed exposed surfaces become depleted in F and enriched in N2. Ta is then deposited, as by IPVD, such that the deposited Ta reacts with N2 in the N2-enriched surfaced region to form a graded layer of tantalum nitride thereon. 
Upon continuing deposition, a layer of [alpha]-Ta is formed on the graded titanium nitride layer.Embodiments of the present invention further include single and dual damascene techniques comprising forming an opening in a dielectric layer or layers on a wafer, laser thermal annealing exposed surfaces of the dielectric layer or layers in NH3 and N2, depositing Ta to form a composite diffusion barrier layer of graded tantalum nitride/[alpha]-Ta, lining the opening and on the dielectric layer(s), depositing a seedlayer, depositing the Cu or a Cu alloy layer on the seedlayer filling the opening and over the dielectric layer(s), removing any portion of the Cu or Cu alloy layer beyond the opening by CMP leaving an exposed surface and depositing a silicon nitride or silicon carbide capping or barrier layer on the treated surface.Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein embodiments of the present invention are described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.BRIEF DESCRIPTION OF DRAWINGSFIGS. 1 and 2 schematically illustrate sequential phases of a method in accordance with an embodiment of the present invention.DESCRIPTION OF THE INVENTIONThe present invention addresses and solves various problems attendant upon forming metallized interconnects, such as Cu or Cu alloy interconnects, particularly, damascene structures in dielectric layer(s) having a dielectric constant less than about 3.9, such as F-containing dielectric material, e.g., F-containing silicon oxide derived from F-TEOS. As employed throughout this application, the symbol Cu is intended to encompass high purity elemental copper as well as Cu-based alloys, such as Cu alloys containing minor amounts of tantalum, indium, tin, zinc, manganese, titanium, magnesium, chromium, titanium, germanium, strontium, platinum, magnesium, aluminum or zirconium.As design rules are scaled down into the deep submicron range, such as about 0.12 micron and under, electromigration and contact resistance issues associated with interconnects, particularly Cu interconnects, become increasingly significant. Reliability and electromigration issues stem, in part, from the poor adhesion of [beta]-Ta to various low-k dielectric materials and poor adhesion of TaN to Cu and Cu alloys. TaN and [beta]-Ta exhibit high resistivities, thereby adversely impacting circuit speed.The present invention addresses and solves such problems by performing laser thermal annealing, as by impinging a pulsed laser light beam, in NH3 and H2, on exposed surfaces of the dielectric layer prior to barrier layer deposition. Laser thermal annealing in NH3 and N2 modifies the surface of the dielectric layer such that a N2-enriched surface region is formed. Subsequently, during Ta deposition, a titanium nitride layer is initially formed having a graded N2 concentration such that the amount of N2 decreases in a direction away from the N2-enriched surface region. Continued Ta deposition results in the formation of a thin [alpha]-Ta layer on the graded tantalum nitride layer. 
The resulting composite barrier layer, comprising the graded tantalum nitride layer in contact with dielectric material and a layer of [alpha]-Ta in contact with the Cu metallization, solves adhesion issues generated by the poor adhesion of [beta]-Ta to dielectric material and the poor adhesion of tantalum nitride to Cu metallization. Deposition of Ta on a layer of tantalum nitride advantageously results in [alpha]-Ta, since the graded tantalum nitride layer serves as a template for the growth of [alpha]-Ta, a low resistivity form of Ta, typically exhibiting a resistivity of about 40 to about 50 [mu]ohm-cm vis-à-vis about 200 to about 250 [mu]ohm-cm for [beta]-Ta. It was found particularly advantageous to deposit Ta by IPVD, e.g., ionized sputter deposition (ISD).The initial layer of graded tantalum typically has a thickness of about 20 Å to about 50 Å, while the layer of [alpha]-Ta is typically deposited at a thickness of about 200 Å to about 300 Å. The layer of graded tantalum nitride typically contains nitrogen in an amount from a value of about 10 to about 40 at. %, proximate the N2-enriched surface region of the dielectric layer to zero proximate the [alpha]-Ta layer.It should be understood that suitable Ta deposition conditions are dependent upon the particular situation and can be optimized accordingly. It was found suitable, for example, to employ an argon (Ar) flow rate of about 40 to about 60 sccm, e.g., about 45 to about 60 sccm, a D.C. power of about 1,000 to about 40,000 watts, an RF power of about 1,000 to about 3,000 watts, and a pressure of about 1 to about 45 mTorr, depending upon the particular deposition system and technique.Embodiments of the present invention comprise utilizing halogen-doped dielectric layers, such as F-doped dielectric layers, i.e., F-doped silicon oxide derived from F-TEOS. In implementing such embodiments, laser thermal annealing of the exposed surfaces of the dielectric layer results not only in N2 enrichment of a surface region but also F depletion. The resulting surface region typically has a thickness of about 10 Å to about 20 Å and contains a lower amount F than the remainder of the dielectric layer. It is believed that during laser thermal annealing, NH3 releases hydrogen which reacts with F in the surface portion of the dielectric layer forming hydrofluoric acid (HF) which is carried out of the chamber, thereby depleting the surface region of F. The surface region then becomes enriched with N2 which is present during laser thermal annealing.The use of laser thermal annealing advantageously enables pinpoint targeting of the exposed surfaces of the dielectric layer to form the N2-enriched surfaced region in a relatively short period of time without unnecessarily heating different areas of the wafer, thereby avoiding various adverse consequences, such as problematic dopant diffusion issues. In implementing embodiments of the present invention, any of various conventional laser systems can be employed, such as an excimer or Nd-YAG pulse laser. Commercially available laser tools for laser annealing, either with or without a mask, are available, such the Verdant Technologies laser anneal tool operating at an exposure wavelength of 308 nm. Available laser sources are capable of operating at energies of from about 10 to about 2,000 mj/cm<2>/pulse. Suitable operating conditions can be determined in a particular situation. 
For example, it was found suitable to subject the exposed surfaces of the dielectric layer to laser thermal annealing by impinging a pulsed laser light beam at a radiant fluence of about 0.09 to about 0.11 joules/cm², thereby heating the exposed surfaces of the dielectric layer to a temperature of about 370° C. to about 430° C., employing a N2 flow rate of about 200 to about 2000 sccm and an NH3 flow rate of about 200 to about 2000 sccm. Embodiments of the present invention include single damascene structures as well as dual damascene structures. An embodiment of the present invention involving a dual damascene structure is schematically illustrated in FIGS. 1 and 2, wherein similar features or elements are denoted by similar reference characters. Adverting to FIG. 1, lower metal feature 11, e.g., Cu, is formed in an underlying interlayer dielectric 10, e.g., F-containing silicon oxide derived from F-TEOS. A capping layer 12, such as silicon nitride or silicon carbide, is formed on an upper surface of interlayer dielectric layer 10, and a dielectric layer 13, such as a low-k dielectric material, e.g., F-containing silicon oxide derived from F-TEOS, is formed thereon. A middle etch stop layer 14, such as silicon nitride or silicon carbide, is then formed on dielectric layer 13. Another dielectric layer 15, such as a dielectric layer containing a low-k dielectric material, e.g., F-doped silicon oxide derived from F-TEOS, is then deposited. A dual damascene opening 16 is then formed leaving exposed surfaces 17 of dielectric layers 13 and 15. It should be understood that the dual damascene opening can be formed by either a via first-trench last technique or a trench first-via last technique. The exposed surfaces 17 of dielectric layers 13 and 15 are then subjected to laser thermal annealing, by impinging a pulsed laser light beam thereon, as indicated by arrows 18, thereby forming a surface region 19 depleted in F and enriched in nitrogen. Adverting to FIG. 2, Ta deposition is then implemented, as by ISD, to sequentially form a graded tantalum nitride layer 20 on surface region 19 and a layer of α-Ta 21 on graded tantalum nitride layer 20. A seedlayer 22 can then be deposited, followed by electrodeposition or electroless deposition of Cu forming an overburden. CMP is then conducted and a capping layer 24, such as silicon nitride or silicon carbide, is deposited to complete the interconnect structure depicted in FIG. 2, comprising Cu line 23A in communication with Cu via 23B, which is in electrical contact with underlying metal feature 11. In implementing various damascene techniques in accordance with embodiments of the present invention, Cu can be deposited by electroless deposition or electroplating using a seedlayer. Typical seedlayers include Cu alloys containing magnesium, aluminum, zinc, zirconium, tin, nickel, palladium, silver or gold in a suitable amount, e.g., about 0.3 to about 12 at. %. CMP is then performed such that the upper surface of the inlaid Cu is substantially coplanar with the upper surface of the interlayer dielectric. In accordance with embodiments of the present invention, the damascene opening can also be filled with Cu by PVD at a temperature of about 50° C. to about 150° C. or by CVD at a temperature under about 200° C. In various embodiments of the present invention, conventional substrates and interlayer dielectrics can be employed. For example, the substrate can be doped monocrystalline silicon or gallium-arsenide.
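As an aside on the numbers above: the resistivity advantage of α-Ta over β-Ta (about 40 to about 50 μohm-cm versus about 200 to about 250 μohm-cm) translates directly into barrier sheet resistance via R_s = ρ/t. The following minimal Python sketch works this out for a representative barrier; the midpoint resistivities and the 250 Å thickness are illustrative assumptions drawn from the ranges quoted above, not process specifications.

# Back-of-the-envelope comparison of barrier sheet resistance for the
# alpha-Ta and beta-Ta resistivity ranges quoted in the text. Midpoint
# values are assumed for illustration only.

RHO_ALPHA_UOHM_CM = 45.0    # alpha-Ta: ~40-50 micro-ohm-cm (midpoint assumed)
RHO_BETA_UOHM_CM = 225.0    # beta-Ta: ~200-250 micro-ohm-cm (midpoint assumed)
THICKNESS_ANGSTROM = 250.0  # barrier thickness: ~200-300 Angstrom (midpoint assumed)

def sheet_resistance(rho_uohm_cm, thickness_angstrom):
    """Sheet resistance R_s = rho / t, in ohms per square."""
    rho_ohm_cm = rho_uohm_cm * 1e-6           # micro-ohm-cm -> ohm-cm
    thickness_cm = thickness_angstrom * 1e-8  # Angstrom -> cm
    return rho_ohm_cm / thickness_cm

rs_alpha = sheet_resistance(RHO_ALPHA_UOHM_CM, THICKNESS_ANGSTROM)
rs_beta = sheet_resistance(RHO_BETA_UOHM_CM, THICKNESS_ANGSTROM)
print(f"alpha-Ta: {rs_alpha:.0f} ohm/sq")  # ~18 ohm/sq
print(f"beta-Ta:  {rs_beta:.0f} ohm/sq")   # ~90 ohm/sq
print(f"ratio:    {rs_beta / rs_alpha:.1f}x higher for beta-Ta")  # ~5.0x

On these assumed numbers, an α-Ta barrier carries roughly one-fifth the sheet resistance of an equally thick β-Ta barrier, which is the circuit-speed benefit the composite graded-nitride/α-Ta barrier is intended to capture.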
The interlayer dielectric employed in the present invention can comprise any dielectric material conventionally employed in the manufacture of semiconductor devices. For example, dielectric materials such as silicon dioxide, phosphorus-doped silicate glass (PSG), boron- and phosphorus-doped silicate glass (BPSG), and silicon dioxide derived from tetraethylorthosilicate (TEOS) or silane by PECVD can be employed. The openings formed in dielectric layers are effected by conventional photolithographic and etching techniques. Advantageously, dielectric materials for use as interlayer dielectrics in accordance with embodiments of the present invention can comprise dielectric materials with lower values of permittivity than those mentioned above, in order to reduce interconnect capacitance. The expression "low-k" material has evolved to characterize materials with a dielectric constant less than about 3.9, e.g., about 3.5 or less. The value of a dielectric constant expressed herein is based upon a value of 1 for a vacuum. A wide variety of low-k materials can be employed in accordance with embodiments of the present invention, both organic and inorganic. Suitable organic materials include various polyimides and BCB (benzocyclobutene). Other suitable low-k dielectrics include poly(arylene)ethers, poly(arylene)ether azoles, parylene-N, polyimides, polynaphthalene-N, polyphenylquinoxalines (PPQ), polyphenyleneoxide, polyethylene and polypropylene. Other low-k materials suitable for use in embodiments of the present invention include FOx(TM) (HSQ-based), XLK(TM) (HSQ-based), and porous SILK(TM), an aromatic hydrocarbon polymer (each available from Dow Chemical Co., Midland, Mich.); Coral(TM), a carbon-doped silicon oxide (available from Novellus Systems, San Jose, Calif.); silicon-carbon-oxygen-hydrogen (SiCOH) organic dielectrics; Black-Diamond(TM) dielectrics; Flare(TM), an organic polymer; HOSP(TM), a hybrid siloxane-organic polymer; and Nanoglass(TM), a nanoporous silica (each available from Honeywell Electronic Materials); and halogen-doped (e.g., fluorine-doped) silicon dioxide derived from tetraethyl orthosilicate (TEOS) and fluorine-doped silicate glass (FSG). The present invention enables the manufacture of semiconductor devices having interconnects, particularly Cu interconnects, with significantly improved barrier layer adhesion, improved electromigration resistance, enhanced reliability and reduced contact resistance. The use of laser thermal annealing by impinging a pulsed laser light beam on exposed surfaces of the dielectric layer, particularly a F-doped dielectric layer, enables formation of a surface region depleted in F and enriched in nitrogen. Subsequent Ta deposition results in the formation of a composite barrier layer comprising a graded tantalum nitride layer on the surface region of the dielectric layer and a layer of α-Ta deposited thereon. The formation of a composite barrier layer avoids the adhesion problems attendant upon conventional practices, thereby increasing device reliability and improving electromigration resistance. The present invention enjoys industrial applicability in the formation of various types of interconnects, particularly inlaid Cu metallization interconnection patterns.
The present invention is particularly applicable to manufacturing semiconductor devices having submicron features and high aspect ratio openings. In the previous description, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth. In other instances, well-known processing and materials have not been described in detail in order not to unnecessarily obscure the present invention. Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described herein. It is to be understood that the present invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein. |
Disclosed herein are electronic components having three-dimensional capacitors disposed in a metallization stack, as well as related methods and devices. In some embodiments, for example, an electronic component may include: a metallization stack and a capacitor disposed in the metallization stack, wherein the capacitor includes a first conductive plate having a plurality of recesses, and a second conductive plate having a plurality of projections, wherein individual projections of the plurality of projections extend into corresponding individual recesses of the plurality of recesses without contacting the first conductive plate. |
Claims: 1. An electronic component, comprising: a metallization stack; and a capacitor disposed in the metallization stack, wherein the capacitor includes: a first conductive plate having a plurality of recesses, and a second conductive plate having a plurality of projections, wherein individual projections of the plurality of projections of the second conductive plate extend into corresponding individual recesses of the plurality of recesses without contacting the first conductive plate. 2. The electronic component of claim 1, wherein the first conductive plate has a plurality of projections, and the plurality of recesses and the plurality of projections of the first conductive plate alternate in the first conductive plate in a parallel ridge pattern. 3. The electronic component of claim 1, wherein the first conductive plate has a plurality of projections, and the plurality of recesses and the plurality of projections of the first conductive plate alternate in the first conductive plate in a checkerboard pattern. 4. The electronic component of claim 1, further comprising: a dielectric material extending between the first conductive plate and the second conductive plate. 5. The electronic component of claim 1, wherein individual ones of the plurality of recesses are tapered. 6. The electronic component of claim 5, wherein individual ones of the plurality of projections are tapered. 7. The electronic component of claim 1, wherein the first and second conductive plates are spaced apart by a maximum distance between 5 and 10 microns. 8. The electronic component of any of claims 1-7, wherein the metallization stack is a package redistribution layer. 9. The electronic component of claim 8, wherein the electronic component is an embedded wafer level ball grid array (eWLB) package. 10. The electronic component of claim 9, wherein: the electronic component includes a mold compound having a fanout area; and the capacitor is disposed in the package redistribution layer below the fanout area. 11. The electronic component of claim 8, wherein the electronic component is a flip chip (FC) package. 12. The electronic component of claim 8, wherein the electronic component has a height less than 1 millimeter. 13. The electronic component of any of claims 1-7, wherein the metallization stack includes back-end metal in a die. 14. The electronic component of claim 13, further comprising: a bond pad electrically coupled to the capacitor. 15. A computing device, comprising: a circuit board; and an integrated circuit (IC) package coupled to the circuit board, wherein the IC package includes: a redistribution layer, a die, including a memory device or a processing device, coupled to the redistribution layer, and a capacitor disposed in the redistribution layer, wherein the capacitor includes: a first conductive plate having a recess, and a second conductive plate having a projection, wherein the projection extends into the recess without contacting the first conductive plate. 16. The computing device of claim 15, wherein the die includes a power management IC (PMIC). 17. The computing device of claim 15, wherein the circuit board is a motherboard. 18. The computing device of any of claims 15-17, wherein the computing device is a smartphone. 19. The computing device of any of claims 15-17, wherein the computing device is a tablet computing device. 20.
A method of manufacturing an electronic component having a three-dimensional capacitor in a metallization stack, comprising: forming a first conductive plate in the metallization stack, wherein the first conductive plate has a recess; providing a dielectric material on the first conductive plate; and forming a second conductive plate in the metallization stack on the dielectric material, wherein the second plate has a projection extending into the recess and spaced apart from the first conductive plate by the dielectric material. 21. The method of claim 20, wherein providing the dielectric material includes spray-coating the dielectric material onto the first conductive plate. 22. The method of claim 20, wherein providing the dielectric material includes providing a conformal layer of the dielectric material onto the first conductive plate. 23. The method of any of claims 20-22, further comprising: providing first and second dielectric layers in the metallization stack such that the first and second conductive plates are both disposed between the first and second dielectric layers. 24. The method of any of claims 20-22, wherein the metallization stack is a package redistribution layer, and the method further comprises: forming conductive pathways between the first and second conductive plates and first and second conductive contacts of a die coupled to the package redistribution layer. 25. The method of claim 24, further comprising: providing a mold compound in contact with the die and the package redistribution layer. |
ELECTRONIC COMPONENTS HAVING THREE-DIMENSIONAL CAPACITORS IN A METALLIZATION STACK Cross-Reference to Related Application[0001] This application claims priority to U.S. Patent Application No. 15/062,143, filed March 6, 2016 and titled "ELECTRONIC COMPONENTS HAVING THREE-DIMENSIONAL CAPACITORS IN A METALLIZATION STACK." This priority application is incorporated by reference herein. Technical Field[0002] The present disclosure relates generally to the field of electronic components, and more particularly, to electronic components having three-dimensional capacitors in a metallization stack. Background[0003] Capacitors are used in many different electronic device designs. These capacitors are typically separately fabricated and surface mounted to a substrate. Brief Description of the Drawings[0004] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.[0005] FIG.1 is a cross-sectional side view of a metallization stack having a three-dimensional capacitor disposed therein, in accordance with various embodiments.[0006] FIG.2 is a cross-sectional side view of an example three-dimensional capacitor that may be included in the metallization stack of FIG.1, in accordance with various embodiments.[0007] FIGS.3A and 3B are top views of example plates of a three-dimensional capacitor that may be included in the metallization stack of FIG.1, in accordance with various embodiments.[0008] FIG.4 is a cross-sectional side view of a flip chip integrated circuit (IC) package including a package metallization stack having a three-dimensional capacitor disposed therein, in accordance with various embodiments.
[0009] FIG.5 is a cross-sectional side view of an embedded wafer level ball grid array (eWLB) IC package including a package metallization stack having a three-dimensional capacitor disposed therein, in accordance with various embodiments.[0010] FIGS.6-9 illustrate various operations in an example process for manufacturing the three-dimensional capacitor of FIG.2 in a metallization stack, in accordance with various embodiments.[0011] FIGS.10 and 11 are cross-sectional side views of various examples of three-dimensional capacitors that may be included in the metallization stack of FIG.1, in accordance with various embodiments.[0012] FIG.12 is a flow diagram of a method of manufacturing an IC device having a three-dimensional capacitor in a metallization stack, in accordance with various embodiments.[0013] FIGS.13A and 13B are top views of a wafer and dies that may include a three-dimensional capacitor in a metallization stack or may be included in an IC package having a three-dimensional capacitor in the package metallization stack, in accordance with any of the embodiments disclosed herein.[0014] FIG.14 is a cross-sectional side view of an IC device that may include a three-dimensional capacitor in a metallization layer or may be included in an IC package having a three-dimensional capacitor in the package metallization stack, in accordance with any of the embodiments disclosed herein.[0015] FIG.15 is a cross-sectional side view of an IC device assembly that may include an electronic component having a three-dimensional capacitor in a metallization stack, in accordance with any of the embodiments disclosed herein.[0016] FIG.16 is a block diagram of an example computing device that may include a three-dimensional capacitor in a metallization stack of an electronic component, in accordance with any of the embodiments disclosed herein. Detailed Description[0017] Disclosed herein are electronic components having three-dimensional capacitors disposed in a metallization stack, as well as related methods and devices. In some embodiments, for example, an electronic component may include: a metallization stack and a capacitor disposed in the metallization stack, wherein the capacitor includes a first conductive plate having a plurality of recesses, and a second conductive plate having a plurality of projections, wherein individual projections of the plurality of projections of the second conductive plate extend into corresponding individual recesses of the plurality of recesses without contacting the first conductive plate.[0018] As noted above, capacitors are commonly included in electronics packages as surface-mount devices electrically coupled through a substrate to a die. Surface-mount capacitors, however, may have a footprint (and sometimes a height) that limits how small of a form factor can be achieved for the overall device (e.g., a mobile or wearable device), and thus the use of surface-mount capacitors may not be able to satisfy small form factor requirements. Additionally, the distance between the surface-mount capacitors and the in-die electronics that utilize the capacitors may result in substantial parasitics that introduce noise and lower signal quality, resulting in reduced performance.[0019] Various ones of the embodiments disclosed herein may provide improved capacitors for inclusion in any of a number of electronic components.
For example, various ones of the embodiments disclosed herein may reduce the parasitics, cost, and size of IC packages relative to conventional capacitor techniques, while providing greater capacitance than "planar" capacitors. In particular, the three-dimensional capacitors disclosed herein may effectively increase the overall area between two plates of a capacitor relative to a planar capacitor, thereby providing a greater capacitance for the same footprint. Additionally, in some applications, positioning these capacitors in the redistribution layer of a package improves on the parasitics incurred by surface-mount capacitors without incurring the expense and complexity of in-die capacitors.[0020] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.[0021] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.[0022] For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The term "between," when used with reference to measurement ranges, is inclusive of the ends of the measurement ranges. The drawings are not necessarily to scale.[0023] The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, a "package" and an "IC package" are synonymous. As used herein, a "redistribution layer" may refer to a portion of an electronic component that provides electrical pathways to make conductive contacts in the electronic component available in other locations in or on the package.[0024] FIG.1 is a cross-sectional side view of a metallization stack 100 having a three-dimensional capacitor 102 disposed therein, in accordance with various embodiments. The terms "three-dimensional capacitor 102" and "capacitor 102" may be used interchangeably herein. The metallization stack 100 may be formed of a dielectric material 101, and may have conductive pathways 122 extending from a first face 124 of the metallization stack 100 through the dielectric material 101. In some embodiments, when the metallization stack 100 is a package redistribution layer, one or more dies may be coupled to the first face 124 (e.g., as discussed below with reference to FIGS.4 and 5).
In some embodiments, when the metallization stack 100 includes interconnect layers or other back-end metal, transistors and/or other front-end devices in a device layer may be coupled to the first face 124, as discussed below with reference to FIG.14. Some of these conductive pathways 122 may couple to a conductive contact 121 disposed at the second face 126 of the metallization stack 100, while others of these conductive pathways 122 may couple to the capacitor 102. When the metallization stack 100 includes interconnect layers or other back-end metal, the conductive contacts 121 may include bond pads, as discussed below with reference to FIG. 14.[0025] The capacitor 102 may include a first plate 104 and a second plate 114, with one conductive pathway 122 coupled to the first plate 104 and another conductive pathway 122 coupled to the second plate 114. Each of the plates 104 and 114 may be formed of a conductive material (e.g., a metal, such as copper), and the plates 104 and 114 may be spaced apart to enable the storage of energy in the capacitor 102 via the differential charge stored at the plates 104 and 114. In some embodiments, the plates 104 and 114 may be formed as part of different metallization layers in the metallization stack 100. The conductive pathways 122 may be formed of a metal (e.g., copper) and may include conductive traces, vias, or any other suitable structure for routing electrical signals. In some embodiments, the capacitor 102 may be disposed strictly between the first face 124 and the second face 126 of the metallization stack 100 (e.g., such that the capacitor 102 is disposed between portions of the dielectric material 101), while in other embodiments, the capacitor 102 may be disposed at the first face 124 or the second face 126. For example, the capacitor 102 may be disposed at the first face 124 so that the second plate 114 of the capacitor 102 may be in physical contact with a conductive contact of a die disposed on the first face 124 (not shown in FIG.1). Although a single capacitor 102 is illustrated in various ones of the accompanying figures, a metallization stack 100 may include one or more capacitors 102.[0026] One or more of the plates 104 and 114 may have a three-dimensional structure. For example, FIG.2 is a cross-sectional side view of an example three-dimensional capacitor 102 that may be included in the metallization stack 100 of FIG.1, in accordance with various embodiments. The first plate 104 of the capacitor 102 of FIG.2 may have one or more recesses 108 and one or more projections 106. The second plate 114 of the capacitor 102 of FIG.2 may have one or more recesses 118 and one or more projections 116, and the plates 104 and 114 may be arranged so that individual ones of the projections 106 extend into individual ones of the recesses 118, and individual ones of the projections 116 extend into individual ones of the recesses 108. A dielectric material 110 (which may be the same dielectric material as the dielectric material 101) may be disposed between the plates 104 and 114. [0027] The recesses 108 and 118, and the projections 106 and 116, of the capacitor 102 may have any suitable dimensions. In some embodiments, the widths and/or depths of different ones of the recesses 108 (or the recesses 118) may be different, while in other embodiments, the widths and/or depths of different ones of the recesses 108 (or the recesses 118) may be substantially the same (e.g., within manufacturing tolerances, which may be 20-25% in some applications). 
For example, FIG.2 illustrates an embodiment in which all of the recesses 108 have a same width 128, and all of the recesses 118 have a same width 132; however, the width 128 is not the same as the width 132. In other embodiments, the width 128 may be the same as the width 132. In some embodiments, the width 128 and/or the width 132 may be between 3 and 25 microns (e.g., between 5 and 20 microns), for example.[0028] In some embodiments, the widths and/or heights of different ones of the projections 106 (or the projections 116) may be different, while in other embodiments, the widths and/or heights of different ones of the projections 106 (or the projections 116) may be substantially the same. For example, FIG.2 illustrates an embodiment in which all of the projections 106 have a height 130, and all of the projections 116 have a height 134. In some embodiments, the height 130 may be approximately the same as the height 134, while in other embodiments, the height 130 may be different from the height 134. In some embodiments, the height 130 and/or the height 134 may be between 3 and 25 microns (e.g., between 5 and 20 microns), for example.[0029] The dielectric material 110 disposed between the first plate 104 and the second plate 114 may have a uniform thickness, or a non-uniform thickness. In the embodiment illustrated in FIG.2, the dielectric material 110 has a substantially uniform thickness 112 that defines the separation between proximate regions of the first plate 104 and the second plate 114. This thickness 112 may be, for example, between 5 and 10 microns. In some embodiments, the dielectric material 110 may extend between the first plate 104 and the second plate 114, contacting both plates. In such embodiments, the distance between the first plate 104 and the second plate 114 may be defined by the thickness of the dielectric material 110. In some embodiments in which the distance between the first plate 104 and the second plate 114 is non-uniform (e.g., as discussed below with reference to FIGS.10 and 11), the minimum distance between the first plate 104 and the second plate 114 may be between 5 and 10 microns, and/or the maximum distance between the first plate 104 and the second plate 114 may be between 5 and 10 microns.[0030] The number of projections 106, recesses 108, projections 116, and recesses 118 in a capacitor 102 may take any suitable values. Additionally, the projections 106, recesses 108, projections 116, and recesses 118 may be arranged in any desired way two-dimensionally between the first face 124 and the second face 126 of the metallization stack 100. For example, FIG.3A is a top view of an embodiment of the first plate 104 of the capacitor 102 of FIG.1, in accordance with various embodiments. The top view illustrated in FIG.3A depicts projections 106 and recesses 108 arranged one-dimensionally in a parallel ridge pattern. The particular number of projections 106 and recesses 108 in the first plate 104 may take any suitable values.[0031] In another example, FIG.3B is a top view of an embodiment of the first plate 104 of the capacitor 102 of FIG.1, in accordance with various embodiments. The top view illustrated in FIG.3B depicts projections 106 and recesses 108 arranged two-dimensionally in a checkerboard pattern. The particular number of projections 106 and recesses 108 in the first plate 104 may take any suitable values. The number of facing sidewall pairs produced by each of these patterns can be counted directly, as in the sketch below.
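The counting can be modeled in a few lines of Python by encoding a plate as a boolean grid (True for a projection, False for a recess) and counting adjacent cells that differ; each such boundary contributes one pair of facing sidewalls between the complementary plates. This is an illustrative sketch only; the 5x9 grid matches the example discussed below, and the helper name and encoding are assumptions, not part of the disclosure.

# Count projection/recess boundaries ("facing sidewall pairs") for the two
# plate layouts described above, modeled as boolean grids.

def facing_sidewall_pairs(grid):
    """Count horizontally or vertically adjacent cells that differ; each
    such edge contributes a pair of facing sidewalls to the capacitor."""
    rows, cols = len(grid), len(grid[0])
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] != grid[r][c + 1]:
                pairs += 1
            if r + 1 < rows and grid[r][c] != grid[r + 1][c]:
                pairs += 1
    return pairs

ROWS, COLS = 5, 9

# Checkerboard pattern (FIG.3B): projections and recesses alternate in both directions.
checkerboard = [[(r + c) % 2 == 0 for c in range(COLS)] for r in range(ROWS)]
# Parallel ridge pattern (FIG.3A): whole columns alternate.
ridges = [[c % 2 == 0 for c in range(COLS)] for r in range(ROWS)]

print(facing_sidewall_pairs(checkerboard))  # 76 for a 5x9 checkerboard
print(facing_sidewall_pairs(ridges))        # 40 for 5x9 parallel ridges

For the same 5x9 footprint, the checkerboard layout yields 76 facing sidewall pairs against 40 for parallel ridges, which is the trade-off noted in the comparison that follows.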
Comparing the embodiments of FIGS.3A and 3B, the embodiment of FIG.3A may have fewer pairs of facing sidewalls to contribute to the effective area of the capacitor 102 than the embodiment of FIG.3B, but may be less complex to manufacture. The footprint of the first plate 104 need not be a rectangle (as illustrated in FIGS.3A and 3B), but may take any desired form (with or without the projections 106 and recesses 108 arranged in a parallel ridge or checkerboard pattern). For the embodiments illustrated in FIGS.3A and 3B, the pattern of projections 116 and recesses 118 of the second plate 114 (not shown) may be complementary to the pattern of projections 106 and recesses 108 of the first plate 104 (e.g., as illustrated in FIG.2).[0032] In a conventional, planar capacitor (in which two flat, parallel plates each having an area A are spaced apart by a distance d by a dielectric material having a permittivity ε), the capacitance C is given by C = εA/d.[0033] In the embodiments disclosed herein, the top surfaces of the projections 106 and the bottom surfaces of the recesses 118 provide a set of approximate parallel plate capacitors, as do the bottom surfaces of the recesses 108 and the top surfaces of the projections 116. In particular, if the capacitor 102 has a footprint of A (e.g., as viewed from the top, as in FIGS.3A and 3B), the dielectric material 110 has a permittivity ε, and the thickness 112 is d, the capacitance contributed by these portions of the capacitor 102 is approximately equal to εA/d (the standard parallel plate capacitor). However, the three-dimensional structure of the capacitor 102 includes additional parallel surfaces that contribute additional capacitance to the capacitor 102, namely, the sidewalls of the recesses 108 that face corresponding sidewalls of the recesses 118. Each of these pairs of facing sidewalls increases the capacitance of the capacitor 102 above εA/d, with the amount of increase dependent upon the number of facing sidewalls, the area of the facing sidewalls, and their separation (also calculated in accordance with the general εA/d expression). For example, if a capacitor 102 has a 5x9 checkerboard pattern of squares, as illustrated in FIG.3B, the heights 130 and 134 are each equal to 1 length unit (LU), the widths 128 and 132 are each equal to 1 LU, and the thickness 112 is assumed to be small relative to 1 LU, the "effective" surface area of this capacitor 102 is approximately 5x9 LU² plus 76x1x1 LU² (where 76 is the number of pairs of facing sidewalls), which equals 121 LU², approximately 269% of the area of a standard parallel plate capacitor having the same footprint (45 LU²). In this manner, substantial increases in capacitance may be achieved without increasing the footprint of the capacitor; a worked version of this estimate appears in the sketch below.[0034] A metallization stack 100 (including one or more capacitors 102) may be included in any suitable electronic component. For example, the electronic component may be a die, and the metallization stack 100 may take the form of metal layers in the die, as discussed below with reference to FIG.14. In another example, the metallization stack 100 may be included in a wafer-level chip scale package (WLCSP) or a panel fanout (FO) package as a package redistribution layer. FIGS.4 and 5 illustrate different types of IC packages 150 that may include the metallization stack 100 in the form of a package redistribution layer.
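The effective-area estimate of paragraph [0033] can likewise be worked numerically. In the sketch below, the length unit (20 microns) and plate separation (5 microns) are assumptions drawn from the dimension ranges quoted in paragraphs [0027]-[0029], and the relative permittivity of 3.5 is an assumed illustrative value; only the geometry (a 45 LU² footprint plus 76 unit sidewall pairs) comes from the example above.

# Illustrative estimate of the capacitance gain of the 5x9 checkerboard
# capacitor 102 over a planar capacitor with the same footprint, applying
# C = epsilon * A / d to the footprint area plus the sidewall area.
# All numeric choices below are assumptions for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m
EPS_R = 3.5       # assumed relative permittivity of dielectric material 110
LU = 20e-6        # assumed length unit: 20 microns (within the quoted 3-25 micron range)
D = 5e-6          # assumed plate separation (thickness 112): 5 microns

footprint_area = 5 * 9 * LU**2      # top/bottom parallel-plate area, m^2
sidewall_area = 76 * 1 * 1 * LU**2  # 76 facing sidewall pairs, each 1x1 LU

c_planar = EPS0 * EPS_R * footprint_area / D
c_3d = EPS0 * EPS_R * (footprint_area + sidewall_area) / D

print(f"planar: {c_planar * 1e12:.2f} pF")  # ~0.11 pF
print(f"3-D:    {c_3d * 1e12:.2f} pF")      # ~0.30 pF
print(f"gain:   {c_3d / c_planar:.2f}x")    # 121/45, ~2.69x

The absolute capacitance scales with the assumed permittivity and separation, but the roughly 2.7x gain over a planar capacitor of the same footprint follows from the geometry alone.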
FIG.4 is a cross-sectional side view of a flip chip IC package 150 including a metallization stack 100 having a three-dimensional capacitor 102 disposed therein, in accordance with various embodiments. The capacitor 102 of the embodiment of FIG.4 may take any of the forms disclosed herein. For example, the capacitor 102 may include a first plate 104 having a recess 108, and a second plate 114 having a projection 116 that extends into the recess 108 without contacting the first plate 104. The IC package 150 of FIG.4 may include a die 158 coupled to the metallization stack 100 via conductive contacts 156 of the die 158, first-level interconnects 154, and conductive contacts 152 of the metallization stack 100. The conductive contacts 152 may be coupled to the conductive pathways 122, allowing circuitry within the die 158 to electrically couple to various ones of the conductive contacts 121 or to the capacitor 102. The first-level interconnects 154 illustrated in FIG.4 are solder bumps, but any suitable first-level interconnects 154 may be used. As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an electrical interface between different components; conductive contacts may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket).[0035] In some embodiments, an underfill material 160 may be disposed between the die 158 and the metallization stack 100 around the first-level interconnects 154, and a mold compound 162 may be disposed around the die 158 and in contact with the metallization stack 100. In some embodiments, the underfill material 160 may be the same as the mold compound 162. Example materials that may be used for the underfill material 160 and the mold compound 162 include epoxy mold materials, as suitable. Second-level interconnects 120 may be coupled to the conductive contacts 121. The second-level interconnects 120 illustrated in FIG.4 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 120 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). The second-level interconnects 120 may be used to couple the IC package 150 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art. The IC package 150 may have a height 166; in some embodiments, the height 166 may be less than 1 millimeter (e.g., between 0.5 and 1 millimeter).[0036] FIG.5 is a cross-sectional side view of an embedded wafer level ball grid array (eWLB) IC package 150 including a metallization stack 100 having a three-dimensional capacitor 102 disposed therein, in accordance with various embodiments. The capacitor 102 of the embodiment of FIG.5 may take any of the forms disclosed herein. For example, the capacitor 102 may include a first plate 104 having a recess 108, and a second plate 114 having a projection 116 that extends into the recess 108 without contacting the first plate 104. The IC package 150 of FIG.5 may include a die 158 coupled to the metallization stack 100 via conductive contacts 156 of the die 158; as known in the art of eWLB, the metallization stack 100 may be "built up" on the die 158.
The conductive contacts 156 may be coupled to the conductive pathways 122, allowing circuitry within the die 158 to electrically couple to various ones of the conductive contacts 121 or to the capacitor 102.[0037] The mold compound 162, the second-level interconnects 120, and the height 166 may take the form of any of the embodiments discussed above with reference to FIG.4. The mold compound 162 of the embodiment of FIG.5 may include fanout areas 164 disposed around the die 158. In some embodiments, one or more of the capacitors 102 included in the metallization stack 100 may be disposed proximate to the fanout areas 164 (e.g., between the fanout areas 164 and the second face 126 of the metallization stack 100, outside of the "shadow" of the die 158).[0038] Although a single die 158 is illustrated in the IC packages 150 of FIGS.4 and 5, these IC packages 150 may include multiple dies 158, with one or more of the multiple dies 158 coupled to capacitors 102 included in the metallization stack 100. In some embodiments, the dies 158 may themselves include capacitors 102 in their back-end metal stacks, as discussed below with reference to FIG.14. The IC packages 150 may include additional passive components, such as "surface mount" resistors, capacitors, and inductors disposed on the first face 124 of the metallization stack. More generally, the IC packages 150 may include any other active or passive components known in the art.[0039] Any suitable manufacturing techniques may be used to form the capacitors 102 in the metallization stacks 100, as disclosed herein. FIGS.6-9 illustrate various operations in an example process for manufacturing the three-dimensional capacitor 102 of FIG.2, in accordance with various embodiments. Although the example process discussed below with reference to FIGS.6-9 has the first plate 104 being formed before the second plate 114, the operations of this process may be reversed in accordance with the teachings herein. For example, when the capacitor 102 is included in the metallization stack 100 of an eWLB IC package 150 (e.g., in the form of a package redistribution layer, as discussed above with reference to FIG.5), the second plate 114 may be formed before the first plate 104 (e.g., as the metallization stack 100 is "built up" on the die 158). Additionally, only the operations related to the construction of the capacitor 102 are illustrated below with reference to FIGS. 6-9; these operations will generally be performed during the fabrication of the metallization stack 100, as known in the art.[0040] FIG.6 illustrates a first portion 600 of conductive material. This first portion 600 may be substantially planar, and may be formed using any suitable metal deposition and patterning process known in the art (for example, known lithography and etch techniques). In some embodiments, the first portion 600 may be formed of copper.
In some embodiments, the first plate 104 may be part of a metallization layer of the metallization stack 100, and may be formed along with other metal structures in that metallization layer.[0042] FIG.8 illustrates an assembly 800 subsequent to providing a layer of dielectric material 110 on the first plate 104 (FIG.7). In some embodiments, the layer of dielectric material 110 may be a conformal layer that distributes itself over the first plate 104. For example, the layer of dielectric material 110 may be spray-coated on the first plate 104 to a desired thickness. In other embodiments, the layer of dielectric material 110 may be patterned onto the first plate 104 using known lithography and etch techniques to achieve a profile that follows the projections 106 and recesses 108 of the first plate 104, as shown. The layer of dielectric material 110 of FIG.8 may take the form of any of the embodiments of the dielectric material 110 disclosed herein.[0043] FIG.9 illustrates a capacitor 102 subsequent to providing the second plate 114 on the assembly 800 (FIG.8). The second plate 114 may be formed by any suitable metal deposition technique, for example. The capacitor 102 illustrated in FIG.9 may take the form of the capacitor 102 discussed above with reference to FIG.2. In some embodiments, the second plate 114 may be part of a metallization layer of the metallization stack 100, and may be formed along with other metal structures in that metallization layer. This metallization layer may be different from the metallization layer of the first plate 104.[0044] The capacitor 102 illustrated in FIG.2 has recesses 108 and 118 with sidewalls that are at right angles to the bottoms of the recesses 108 and 118, and also has projections 106 and 116 whose top surfaces are at right angles to the sidewalls. This is simply for ease of illustration, and in many embodiments, the bottoms of the recesses 108 and 118, the sidewalls of the recesses 108 and 118, and the top surfaces of the projections 106 and 116 may be curved and/or angled. For example, FIGS.10 and 11 are cross-sectional side views of various embodiments of a three-dimensional capacitor 102 that may be included in a metallization stack 100, in accordance with various embodiments. In the embodiment of FIG. 10, the bottoms of the recesses 108 and 118, the sidewalls of the recesses 108 and 118, and the top surfaces of the projections 106 and 116 are flat, but the sidewalls of the recesses 108 and 118 are angled with respect to the bottoms of the recesses 108 and 118 and the top surfaces of the projections 106 and 116. The result is recesses 108 and 118 (and corresponding projections 106 and 116) that are tapered; in the particular embodiment illustrated in FIG.10, the recesses 108 and 118 narrow towards the bottom of the recesses 108 and 118, and the projections 106 and 116 narrow towards the top surfaces of the projections 106 and 116. The projections 116 of the second plate 114 extend into the recesses 108 of the first plate 104 without contacting the first plate 104, and a dielectric material 110 is disposed between the first plate 104 and the second plate 114.[0045] In the embodiment of FIG.11, the bottoms of the recesses 108 and 118, the sidewalls of the recesses 108 and 118, and the top surfaces of the projections 106 and 116 have some curvature, with the sidewalls of the recesses 108 and 118 angled with respect to the bottoms of the recesses 108 and 118 and the top surfaces of the projections 106 and 116.
The result is recesses 108 and 118 (and corresponding projections 106 and 116) that are tapered; in the particular embodiment illustrated in FIG.11, the recesses 108 and 118 narrow towards the bottom of the recesses 108 and 118, and the projections 106 and 116 narrow towards the top surfaces of the projections 106 and 116. No corners may delineate the top surfaces of the projections 106 and 116 from the sidewalls of the recesses 108 and 118, nor the sidewalls of the recesses 108 and 118 from the bottoms of the recesses 108 and 118. The projections 116 of the second plate 114 extend into the recesses 108 of the first plate 104 without contacting the first plate 104, and a dielectric material 110 is disposed between the first plate 104 and the second plate 114. Embodiments in which the bottoms of the recesses 108 and 118, and the top surfaces of the projections 106 and 116, are curved (instead of terminating at right angles) may be easier to seal from current leakage, and thus may exhibit improved performance. [0046] FIG.12 is a flow diagram of a method 1200 of manufacturing an electronic component having a three-dimensional capacitor in a metallization stack, in accordance with various embodiments. Although the operations of the method 1200 may be illustrated with reference to the capacitor 102 in the metallization stack 100, the method 1200 may be used to form any suitable capacitor in a metallization stack. Operations are illustrated once each and in a particular order in FIG.12, but the operations may be reordered and/or repeated as desired (e.g., with different operations performed in parallel when manufacturing multiple electronic components simultaneously).[0047] At 1202, a first plate in a metallization stack may be formed. The first plate may have a recess. For example, the first plate 104 may be formed in the metallization stack 100, and may have a recess 108 (e.g., as discussed above with reference to FIGS.6 and 7).[0048] At 1204, a dielectric material may be provided on the first plate. For example, the dielectric material 110 may be provided on the first plate 104 (e.g., as discussed above with reference to FIG.8).[0049] At 1206, a second plate may be formed in the metallization stack on the dielectric material of 1204. The second plate may have a projection extending into the recess and spaced apart from the first plate (of 1202) by the dielectric material (of 1204). For example, the second plate 114 may be formed on the dielectric material 110, and may have a projection 116 extending into a recess 108 without contacting the first plate 104 (e.g., as discussed above with reference to FIG.9).[0050] The capacitors 102 disclosed herein may be included in the metallization stack of any suitable electronic component. FIGS.13-16 illustrate various examples of apparatuses that may be included in an electronic component having a metallization stack 100 with one or more of any of the capacitors 102 disclosed herein, or that may include a metallization stack including one or more of any of the capacitors 102 disclosed herein.[0051] FIGS.13A-B are top views of a wafer 1300 and dies 1302 that may include a metallization stack 100 with one or more capacitors 102, or may be included in an IC package 150 (e.g., in a die 158) in accordance with any of the embodiments disclosed herein. The wafer 1300 may be composed of semiconductor material and may include one or more dies 1302 having IC structures formed on a surface of the wafer 1300. 
Each of the dies 1302 (which may be used as a die 158 in an IC package 150) may be a repeating unit of a semiconductor product that includes any suitable IC. After the fabrication of the semiconductor product is complete, the wafer 1300 may undergo a singulation process in which each of the dies 1302 is separated from one another to provide discrete "chips" of the semiconductor product. The die 1302 may include one or more transistors (e.g., some of the transistors 1440 of FIG.14, discussed below) and/or supporting circuitry to route electrical signals to the transistors, as well as any other IC components. In some embodiments, the wafer 1300 or the die 1302 may include a memory device (e.g., a static random access memory (SRAM) device), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple ones of these devices may be combined on a single die 1302. For example, a memory array formed by multiple memory devices may be formed on a same die 1302 as a processing device (e.g., the processing device 1602 of FIG.16) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array.[0052] FIG.14 is a cross-sectional side view of an IC device 1400 that may include a metallization stack 100 with one or more capacitors 102, or may be included in an IC package 150 having a capacitor 102 in a metallization stack 100 (e.g., in a package redistribution layer), in accordance with any of the embodiments disclosed herein. For example, one or more of the IC devices 1400 may be included in one or more dies 158. The IC device 1400 may be formed on a substrate 1402 (e.g., the wafer 1300 of FIG.13A) and may be included in a die (e.g., the die 1302 of FIG.13B). The substrate 1402 may be a semiconductor substrate composed of semiconductor material systems including, for example, N-type or P-type materials systems. The substrate 1402 may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In some embodiments, the semiconductor substrate 1402 may be formed using alternative materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Further materials classified as group II-VI, III-V, or IV may also be used to form the substrate 1402. Although a few examples of materials from which the substrate 1402 may be formed are described here, any material that may serve as a foundation for an IC device 1400 may be used. The substrate 1402 may be part of a singulated die (e.g., the dies 1302 of FIG.13B) or a wafer (e.g., the wafer 1300 of FIG.13A). [0053] The IC device 1400 may include one or more device layers 1404 disposed on the substrate 1402. The device layer 1404 may include features of one or more transistors 1440 (e.g., metal oxide semiconductor field-effect transistors (MOSFETs)) formed on the substrate 1402. The device layer 1404 may include, for example, one or more source and/or drain (S/D) regions 1420, a gate 1422 to control current flow in the transistors 1440 between the S/D regions 1420, and one or more S/D contacts 1424 to route electrical signals to/from the S/D regions 1420. The transistors 1440 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. 
The transistors 1440 are not limited to the type and configuration depicted in FIG.14 and may include a wide variety of other types and configurations such as, for example, planar transistors, non-planar transistors, or a combination of both. Non-planar transistors may include FinFET transistors, such as double-gate transistors or tri-gate transistors, and wrap-around or all-around gate transistors, such as nanoribbon and nanowire transistors.[0054] Each transistor 1440 may include a gate 1422 formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide, and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer to improve its quality when a high-k material is used.[0055] The gate electrode layer may be formed on the gate dielectric layer and may include at least one P-type work function metal or N-type work function metal, depending on whether the transistor 1440 is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are work-function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as a barrier layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides (e.g., ruthenium oxide). For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide).[0056] In some embodiments, when viewed as a cross-section of the transistor 1440 along the source-channel-drain direction, the gate electrode may consist of a U-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures.
For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.[0057] In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In some embodiments, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.[0058] The S/D regions 1420 may be formed within the substrate 1402 adjacent to the gate 1422 of each transistor 1440. The S/D regions 1420 may be formed using either an implantation/diffusion process or an etching/deposition process, for example. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the substrate 1402 to form the S/D regions 1420. An annealing process that activates the dopants and causes them to diffuse farther into the substrate 1402 may follow the ion-implantation process. In the latter process, the substrate 1402 may first be etched to form recesses at the locations of the S/D regions 1420. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the S/D regions 1420. In some implementations, the S/D regions 1420 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In some embodiments, the S/D regions 1420 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. In further embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D regions 1420.[0059] Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the transistors 1440 of the device layer 1404 through one or more interconnect layers disposed on the device layer 1404 (illustrated in FIG.14 as interconnect layers 1406-1410). For example, electrically conductive features of the device layer 1404 (e.g., the gate 1422 and the S/D contacts 1424) may be electrically coupled with the interconnect structures 1428 of the interconnect layers 1406-1410. The one or more interconnect layers 1406-1410 may form an interlayer dielectric (ILD) stack 1419 of the IC device 1400. In some embodiments, the interconnect layers 1406-1410 may provide the metallization stack 100, and one or more capacitors 102 (not shown) may be disposed in the interconnect layers 1406-1410, in accordance with any of the techniques disclosed herein. For example, the first plate 104 may be included in one of the interconnect layers 1406-1410, and the second plate 114 may be included in another one of the interconnect layers 1406-1410.
One or more capacitors 102 in the ILD stack 1419 may be coupled to any suitable ones of the devices in the device layer 1404, and/or to one or more of the bond pads 1436 (discussed below).[0060] The interconnect structures 1428 may be arranged within the interconnect layers 1406-1410 to route electrical signals according to a wide variety of designs (in particular, the arrangement is not limited to the particular configuration of interconnect structures 1428 depicted in FIG.14). Although a particular number of interconnect layers 1406-1410 is depicted in FIG.14, embodiments of the present disclosure include IC devices having more or fewer interconnect layers than depicted.[0061] In some embodiments, the interconnect structures 1428 may include trench structures 1428a (sometimes referred to as "lines") and/or via structures 1428b (sometimes referred to as "holes") filled with an electrically conductive material such as a metal. The trench structures 1428a may be arranged to route electrical signals in a direction of a plane that is substantially parallel with a surface of the substrate 1402 upon which the device layer 1404 is formed. For example, the trench structures 1428a may route electrical signals in a direction in and out of the page from the perspective of FIG.14. The via structures 1428b may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the substrate 1402 upon which the device layer 1404 is formed. In some embodiments, the via structures 1428b may electrically couple trench structures 1428a of different interconnect layers 1406-1410 together.[0062] The interconnect layers 1406-1410 may include a dielectric material 1426 disposed between the interconnect structures 1428, as shown in FIG.14. In some embodiments, the dielectric material 1426 disposed between the interconnect structures 1428 in different ones of the interconnect layers 1406-1410 may have different compositions; in other embodiments, the composition of the dielectric material 1426 between different interconnect layers 1406-1410 may be the same.[0063] A first interconnect layer 1406 (referred to as Metal 1 or "M1") may be formed directly on the device layer 1404. In some embodiments, the first interconnect layer 1406 may include trench structures 1428a and/or via structures 1428b, as shown. The trench structures 1428a of the first interconnect layer 1406 may be coupled with contacts (e.g., the S/D contacts 1424) of the device layer 1404.[0064] A second interconnect layer 1408 (referred to as Metal 2 or "M2") may be formed directly on the first interconnect layer 1406. In some embodiments, the second interconnect layer 1408 may include via structures 1428b to couple the trench structures 1428a of the second interconnect layer 1408 with the trench structures 1428a of the first interconnect layer 1406.
Although the trench structures 1428a and the via structures 1428b are structurally delineated with a line within each interconnect layer (e.g., within the second interconnect layer 1408) for the sake of clarity, the trench structures 1428a and the via structures 1428b may be structurally and/or materially contiguous (e.g., simultaneously filled during a dual-damascene process) in some embodiments.[0065] A third interconnect layer 1410 (referred to as Metal 3 or "M3") (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 1408 according to similar techniques and configurations described in connection with the second interconnect layer 1408 or the first interconnect layer 1406. In some embodiments, the interconnect layers that are "higher up" in the IC device 1400 may be thicker, and may be particularly advantageous for forming the plates 104 and 114 of the capacitor 102.[0066] The IC device 1400 may include a solder resist material 1434 (e.g., polyimide or similar material) and one or more bond pads 1436 formed on the interconnect layers 1406-1410. The bond pads 1436 may be electrically coupled with the interconnect structures 1428 and configured to route the electrical signals of the transistor(s) 1440 to other external devices. For example, solder bonds may be formed on the one or more bond pads 1436 to mechanically and/or electrically couple a chip including the IC device 1400 with another component (e.g., a circuit board). The IC device 1400 may have other alternative configurations to route the electrical signals from the interconnect layers 1406-1410 than depicted in other embodiments. For example, the bond pads 1436 may be replaced by or may further include other analogous features (e.g., posts) that route the electrical signals to external components.[0067] FIG.15 is a cross-sectional side view of an IC device assembly 1500 that may include an IC package 150 including a capacitor 102 in a metallization stack 100 (e.g., in a package redistribution layer), and/or another electronic component (e.g., a die) having a capacitor 102 in a metallization stack 100, in accordance with any of the embodiments disclosed herein. The IC device assembly 1500 includes a number of components disposed on a circuit board 1502 (which may be, e.g., a motherboard). The IC device assembly 1500 includes components disposed on a first face 1540 of the circuit board 1502 and an opposing second face 1542 of the circuit board 1502; generally, components may be disposed on one or both faces 1540 and 1542. Any of the IC packages discussed below with reference to the IC device assembly 1500 may include one or more capacitors 102 disposed in a metallization stack 100 (e.g., of a die or of the IC package).[0068] In some embodiments, the circuit board 1502 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 1502. In other embodiments, the circuit board 1502 may be a non-PCB substrate.[0069] The IC device assembly 1500 illustrated in FIG.15 includes a package-on-interposer structure 1536 coupled to the first face 1540 of the circuit board 1502 by coupling components 1516. 
The coupling components 1516 may electrically and mechanically couple the package-on-interposer structure 1536 to the circuit board 1502, and may include solder balls (as shown in FIG.15), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.[0070] The package-on-interposer structure 1536 may include an IC package 1520 coupled to an interposer 1504 by coupling components 1518. The coupling components 1518 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 1516. Although a single IC package 1520 is shown in FIG.15, multiple IC packages may be coupled to the interposer 1504; indeed, additional interposers may be coupled to the interposer 1504. The interposer 1504 may provide an intervening substrate used to bridge the circuit board 1502 and the IC package 1520. The IC package 1520 may be or include, for example, a die (the die 1302 of FIG.13B), an IC device (e.g., the IC device 1400 of FIG.14), or any other suitable component. Generally, the interposer 1504 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 1504 may couple the IC package 1520 (e.g., a die) to a ball grid array (BGA) of the coupling components 1516 for coupling to the circuit board 1502. In the embodiment illustrated in FIG.15, the IC package 1520 and the circuit board 1502 are attached to opposing sides of the interposer 1504; in other embodiments, the IC package 1520 and the circuit board 1502 may be attached to a same side of the interposer 1504. In some embodiments, three or more components may be interconnected by way of the interposer 1504. In some embodiments, the IC package 1520 may take the form of any of the IC packages 150 disclosed herein.[0071] The interposer 1504 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some embodiments, the interposer 1504 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 1504 may include metal interconnects 1508 and vias 1510, including but not limited to through-silicon vias (TSVs) 1506. The interposer 1504 may further include embedded devices 1514, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 1504. The package-on-interposer structure 1536 may take the form of any of the package-on-interposer structures known in the art.[0072] The IC device assembly 1500 may include an IC package 1524 coupled to the first face 1540 of the circuit board 1502 by coupling components 1522. The coupling components 1522 may take the form of any of the embodiments discussed above with reference to the coupling components 1516, and the IC package 1524 may take the form of any of the embodiments discussed above with reference to the IC package 1520. 
In some embodiments, the IC package 1524 may take the form of any of the IC packages 150 disclosed herein. The IC device assembly 1500 may also include an IC package 150, in accordance with any of the embodiments disclosed herein.[0073] The IC device assembly 1500 illustrated in FIG.15 includes a package-on-package structure 1534 coupled to the second face 1542 of the circuit board 1502 by coupling components 1528. The package-on-package structure 1534 may include an IC package 1526 and an IC package 1532 coupled together by coupling components 1530 such that the IC package 1526 is disposed between the circuit board 1502 and the IC package 1532. The coupling components 1528 and 1530 may take the form of any of the embodiments of the coupling components 1516 discussed above, and the IC packages 1526 and 1532 may take the form of any of the embodiments of the IC package 1520 discussed above. The package-on- package structure 1534 may be configured in accordance with any of the package-on-package structures known in the art. In some embodiments, the IC package 1526 and/or the IC package 1532 may take the form of any of the IC packages 150 disclosed herein.[0074] FIG.16 is a block diagram of an example computing device 1600 that may include one or more IC packages 150 having one or more capacitors 102 disposed in a metallization stack 100 (e.g., in the package redistribution layer), or one or more other electronic components having one or more capacitors 102 disposed in a metallization stack, in accordance with any of the embodiments disclosed herein. For example, any suitable ones of the components of the computing device 1600 may include one or more of the IC packages 150 disclosed herein. A number of components are illustrated in FIG.16 as included in the computing device 1600, but any one or more of these components may be omitted or duplicated, as suitable for the application. In some embodiments, some or all of the components included in the computing device 1600 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-a-chip (SoC) die.[0075] Additionally, in various embodiments, the computing device 1600 may not include one or more of the components illustrated in FIG.16, but the computing device 1600 may include interface circuitry for coupling to the one or more components. For example, the computing device 1600 may not include a display device 1606, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1606 may be coupled. In another set of examples, the computing device 1600 may not include an audio input device 1624 or an audio output device 1608, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1624 or audio output device 1608 may be coupled.[0076] The computing device 1600 may include a processing device 1602 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. 
The processing device 1602 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. The computing device 1600 may include a memory 1604, which may itself include one or more memory devices such as volatile memory (e.g., dynamic random access memory (DRAM)), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1604 may include memory that shares a die with the processing device 1602. This memory may be used as cache memory and may include embedded dynamic random access memory (eDRAM) or spin transfer torque magnetic random-access memory (STT-MRAM).[0077] In some embodiments, the computing device 1600 may include a communication chip 1612 (e.g., one or more communication chips). For example, the communication chip 1612 may be configured for managing wireless communications for the transfer of data to and from the computing device 1600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.[0078] The communication chip 1612 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultra mobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 1612 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1612 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1612 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1612 may operate in accordance with other wireless protocols in other embodiments. 
The computing device 1600 may include an antenna 1622 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).[0079] In some embodiments, the communication chip 1612 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet). As noted above, the communication chip 1612 may include multiple communication chips. For instance, a first communication chip 1612 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1612 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1612 may be dedicated to wireless communications, and a second communication chip 1612 may be dedicated to wired communications.[0080] The computing device 1600 may include battery/power circuitry 1614. The battery/power circuitry 1614 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 1600 to an energy source separate from the computing device 1600 (e.g., AC line power).[0081] The computing device 1600 may include a display device 1606 (or corresponding interface circuitry, as discussed above). The display device 1606 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display, for example.[0082] The computing device 1600 may include an audio output device 1608 (or corresponding interface circuitry, as discussed above). The audio output device 1608 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds, for example.[0083] The computing device 1600 may include an audio input device 1624 (or corresponding interface circuitry, as discussed above). The audio input device 1624 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).[0084] The computing device 1600 may include a global positioning system (GPS) device 1618 (or corresponding interface circuitry, as discussed above). The GPS device 1618 may be in communication with a satellite-based system and may receive a location of the computing device 1600, as known in the art.[0085] The computing device 1600 may include an other output device 1610 (or corresponding interface circuitry, as discussed above). Examples of the other output device 1610 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.[0086] The computing device 1600 may include an other input device 1620 (or corresponding interface circuitry, as discussed above). 
Examples of the other input device 1620 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.[0087] The computing device 1600 may have any desired form factor, such as a hand-held or mobile computing device (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra-mobile personal computer, etc.), a desktop computing device, a server or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, the computing device 1600 may be any other electronic device that processes data.[0088] The following paragraphs provide various examples of the embodiments disclosed herein.[0089] Example 1 is an electronic component, including: a metallization stack; and a capacitor disposed in the metallization stack, wherein the capacitor includes a first conductive plate having a plurality of recesses, and a second conductive plate having a plurality of projections, wherein individual projections of the plurality of projections of the second conductive plate extend into corresponding individual recesses of the plurality of recesses without contacting the first conductive plate.[0090] Example 2 may include the subject matter of Example 1, and may further specify that the first conductive plate has a plurality of projections, and the plurality of recesses and the plurality of projections of the first conductive plate alternate in the first conductive plate in a parallel ridge pattern. 
[0091] Example 3 may include the subject matter of Example 1, and may further specify that the first conductive plate has a plurality of projections, and the plurality of recesses and the plurality of projections of the first conductive plate alternate in the first conductive plate in a checkerboard pattern.[0092] Example 4 may include the subject matter of any of Examples 1-3, and may further include a dielectric material extending between the first conductive plate and the second conductive plate.[0093] Example 5 may include the subject matter of any of Examples 1-4, and may further specify that individual ones of the plurality of recesses are tapered.[0094] Example 6 may include the subject matter of Example 5, and may further specify that individual ones of the plurality of projections are tapered.[0095] Example 7 may include the subject matter of any of Examples 1-6, and may further specify that the first and second conductive plates are spaced apart by a maximum distance between 5 and 10 microns.[0096] Example 8 may include the subject matter of any of Examples 1-7, and may further specify that the metallization stack is a package redistribution layer.[0097] Example 9 may include the subject matter of Example 8, and may further specify that the electronic component is an embedded wafer level ball grid array (eWLB) package.[0098] Example 10 may include the subject matter of Example 9, and may further specify that: the electronic component includes a mold compound having a fanout area; and the capacitor is disposed in the package redistribution layer below the fanout area.[0099] Example 11 may include the subject matter of Example 8, and may further specify that the electronic component is a flip chip (FC) package.[0100] Example 12 may include the subject matter of any of Examples 8-11, and may further specify that the electronic component has a height less than 1 millimeter.[0101] Example 13 may include the subject matter of any of Examples 1-7, and may further specify that the metallization layer includes back-end metal in a die.[0102] Example 14 may include the subject matter of Example 13, and may further include a bond pad electrically coupled to the capacitor.[0103] Example 15 is a computing device, including: a circuit board; and an integrated circuit (IC) package coupled to the circuit board, wherein the IC package includes a redistribution layer, a die, including a memory device or a processing device, coupled to the redistribution layer, and a capacitor disposed in the redistribution layer, wherein the capacitor includes a first conductive plate having a recess, and a second conductive plate having a projection, wherein the projection extends into the recess without contacting the recess.[0104] Example 16 may include the subject matter of Example 15, and may further specify that the die includes a power management IC (PMIC).[0105] Example 17 may include the subject matter of any of Examples 15-16, and may further specify that the circuit board is a motherboard.[0106] Example 18 may include the subject matter of any of Examples 15-17, and may further specify that the computing device is a smartphone.[0107] Example 19 may include the subject matter of any of Examples 15-18, and may further specify that the computing device is a tablet computing device.[0108] Example 20 is a method of manufacturing an electronic component having a three-dimensional capacitor in a metallization stack, including: forming a first conductive plate in the metallization stack, wherein the first 
conductive plate has a recess; providing a dielectric material on the first conductive plate; and forming a second conductive plate in the metallization stack on the dielectric material, wherein the second plate has a projection extending into the recess and spaced apart from the first conductive plate by the dielectric material.[0109] Example 21 may include the subject matter of Example 20, and may further specify that providing the dielectric material includes spray-coating the dielectric material onto the first conductive plate.[0110] Example 22 may include the subject matter of Example 20, and may further specify that providing the dielectric material includes providing a conformal layer of the dielectric material onto the first conductive plate.[0111] Example 23 may include the subject matter of any of Examples 20-22, and may further include providing first and second dielectric layers in the metallization stack such that the first and second conductive plates are both disposed between the first and second dielectric layers.[0112] Example 24 may include the subject matter of any of Examples 20-23, and may further specify that the metallization stack is a package redistribution layer, and the method further includes forming conductive pathways between the first and second conductive plates and first and second conductive contacts of a die coupled to the package redistribution layer.[0113] Example 25 may include the subject matter of Example 24, and may further include providing a mold compound in contact with the die and the package redistribution layer. |
Certain aspects of the present disclosure provide techniques for generating execution schedules, comprising receiving a data flow graph for a process, where the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in the memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, where the second modified topological ordering enables increased parallel utilization of a plurality of hardware components. |
WHAT IS CLAIMED IS:1. A method, comprising: receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.2. The method of Claim 1, wherein rearranging one or more nodes in the first modified topological ordering comprises moving one or more nodes corresponding to loading data from a host processing system memory into the memory to an earlier position in the topological ordering.3. The method of Claim 1, wherein: the plurality of nodes in the data flow graph correspond to operations performed during the process, the plurality of edges in the data flow graph correspond to data passing among the operations, each respective edge of the plurality of edges is associated with a respective weight based on a size of the data associated with the respective edge, and generating the topological ordering comprises finding a set of minimum cuts in the data flow graph based on the weights.4. The method of Claim 3, wherein finding the set of minimum cuts comprises modifying the data flow graph to enforce data dependencies by:
for each respective edge of the plurality of edges, adding a respective backwards edge of infinite weight.5. The method of Claim 4, wherein finding the set of minimum cuts further comprises modifying the data flow graph to enforce data dependencies by: ensuring that at least one valid path exists in the data flow graph from a source to each of the plurality of nodes and from each of the plurality of nodes to a sink.6. The method of Claim 3, wherein finding the set of minimum cuts comprises assigning the weights to the plurality of edges by: identifying a producer node of the plurality of nodes that outputs data to at least one consumer node of the plurality of nodes; determining a size of the data output by the producer node; and inserting a deallocation node into the data flow graph by: creating a first edge with a weight corresponding to the size of the data output by the producer node, wherein the first edge is inserted from the producer node to the deallocation node; assigning a weight of zero to an edge from the producer node to the at least one consumer node; and creating an edge from the at least one consumer node to the deallocation node, assigned a weight of zero.7. The method of Claim 3, wherein finding the set of minimum cuts comprises, for a first index node of the plurality of nodes, constraining a first minimum cut to occur after the first index node by: creating a first edge with an infinite weight from a source to the first index node; identifying a set of consumer nodes, from the plurality of nodes, that receive data from the first index node; creating edges with an infinite weight from each consumer node in the set of consumer nodes to a sink; and computing the first minimum cut, wherein the first minimum cut places the first index node in a first portion of the data flow graph and all successors of the first index node in a second portion of the data flow graph.8. The method of Claim 7, wherein finding the set of minimum cuts further comprises iteratively computing minimum cuts for index nodes in the first and second portions of the data flow graph and separating the first and second portions of the data flow graph based on the minimum cuts until a predefined stopping condition is satisfied.9. The method of Claim 7, further comprising selecting the first index node based on determining that the first index node is centered in the data flow graph.10. The method of Claim 9, further comprising: determining that the first index node is one of a set of sibling nodes in the data flow graph; and computing the first minimum cut by constraining a first portion of the set of sibling nodes to the first portion of the data flow graph and a second portion of the set of sibling nodes to the second portion of the data flow graph.11. 
A processing system, comprising: a memory comprising computer-executable instructions; and one or more processors configured to execute the computer-executable instructions and cause the processing system to perform an operation comprising: receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.12. The processing system of Claim 11, wherein rearranging one or more nodes in the first modified topological ordering comprises moving one or more nodes
corresponding to loading data from a host processing system memory into the memory to an earlier position in the topological ordering.13. The processing system of Claim 11, wherein: the plurality of nodes in the data flow graph correspond to operations performed during the process, the plurality of edges in the data flow graph correspond to data passing among the operations, each respective edge of the plurality of edges is associated with a respective weight based on a size of the data associated with the respective edge, and generating the topological ordering comprises finding a set of minimum cuts in the data flow graph based on the weights.14. The processing system of Claim 13, wherein finding the set of minimum cuts comprises modifying the data flow graph to enforce data dependencies by: for each respective edge of the plurality of edges, adding a respective backwards edge of infinite weight.15. The processing system of Claim 14, wherein finding the set of minimum cuts further comprises modifying the data flow graph to enforce data dependencies by: ensuring that at least one valid path exists in the data flow graph from a source to each of the plurality of nodes and from each of the plurality of nodes to a sink.16. The processing system of Claim 13, wherein finding the set of minimum cuts comprises assigning the weights to the plurality of edges by: identifying a producer node of the plurality of nodes that outputs data to at least one consumer node of the plurality of nodes; determining a size of the data output by the producer node; and inserting a deallocation node into the data flow graph by: creating a first edge with a weight corresponding to the size of the data output by the producer node, wherein the first edge is inserted from the producer node to the deallocation node;
assigning a weight of zero to an edge from the producer node to the at least one consumer node; and creating an edge from the at least one consumer node to the deallocation node, assigned a weight of zero.17. The processing system of Claim 13, wherein finding the set of minimum cuts comprises, for a first index node of the plurality of nodes, constraining a first minimum cut to occur after the first index node by: creating a first edge with an infinite weight from a source to the first index node; identifying a set of consumer nodes, from the plurality of nodes, that receive data from the first index node; creating edges with an infinite weight from each consumer node in the set of consumer nodes to a sink; and computing the first minimum cut, wherein the first minimum cut places the first index node in a first portion of the data flow graph and all successors of the first index node in a second portion of the data flow graph.18. The processing system of Claim 17, wherein finding the set of minimum cuts further comprises iteratively computing minimum cuts for index nodes in the first and second portions of the data flow graph and separating the first and second portions of the data flow graph based on the minimum cuts until a predefined stopping condition is satisfied.19. The processing system of Claim 17, further comprising selecting the first index node based on determining that the first index node is centered in the data flow graph.20. The processing system of Claim 19, further comprising: determining that the first index node is one of a set of sibling nodes in the data flow graph; and computing the first minimum cut by constraining a first portion of the set of sibling nodes to the first portion of the data flow graph and a second portion of the set of sibling nodes to the second portion of the data flow graph.21. The processing system of Claim 11, further comprising:
an ordering component configured to generate the topological ordering; a memory component configured to generate the first modified topological ordering by inserting the one or more new nodes corresponding to memory access; an allocation component configured to allocate the units of memory; and a reordering component configured to generate the second modified topological ordering by rearranging one or more nodes in the first modified topological ordering.22. A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by one or more processors of a processing system, cause the processing system to perform an operation comprising: receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.23. The non-transitory computer-readable medium of Claim 22, wherein rearranging one or more nodes in the first modified topological ordering comprises moving one or more nodes corresponding to loading data from a host processing system memory into the memory to an earlier position in the topological ordering.24. The non-transitory computer-readable medium of Claim 22, wherein: the plurality of nodes in the data flow graph correspond to operations performed during the process, the plurality of edges in the data flow graph correspond to data passing among the operations,
each respective edge of the plurality of edges is associated with a respective weight based on a size of the data associated with the respective edge, and generating the topological ordering comprises finding a set of minimum cuts in the data flow graph based on the weights.25. The non-transitory computer-readable medium of Claim 24, wherein finding the set of minimum cuts comprises modifying the data flow graph to enforce data dependencies by: for each respective edge of the plurality of edges, adding a respective backwards edge of infinite weight; and ensuring that at least one valid path exists in the data flow graph from a source to each of the plurality of nodes and from each of the plurality of nodes to a sink.26. The non-transitory computer-readable medium of Claim 24, wherein finding the set of minimum cuts comprises assigning the weights to the plurality of edges by: identifying a producer node of the plurality of nodes that outputs data to at least one consumer node of the plurality of nodes; determining a size of the data output by the producer node; and inserting a deallocation node into the data flow graph by: creating a first edge with a weight corresponding to the size of the data output by the producer node, wherein the first edge is inserted from the producer node to the deallocation node; assigning a weight of zero to an edge from the producer node to the at least one consumer node; and creating an edge from the at least one consumer node to the deallocation node, assigned a weight of zero.27. The non-transitory computer-readable medium of Claim 24, wherein finding the set of minimum cuts comprises, for a first index node of the plurality of nodes, constraining a first minimum cut to occur after the first index node by: creating a first edge with an infinite weight from a source to the first index node;
identifying a set of consumer nodes, from the plurality of nodes, that receive data from the first index node; creating edges with an infinite weight from each consumer node in the set of consumer nodes to a sink; and computing the first minimum cut, wherein the first minimum cut places the first index node in a first portion of the data flow graph and all successors of the first index node in a second portion of the data flow graph.28. The non-transitory computer-readable medium of Claim 27, wherein finding the set of minimum cuts further comprises iteratively computing minimum cuts for index nodes in the first and second portions of the data flow graph and separating the first and second portions of the data flow graph based on the minimum cuts until a predefined stopping condition is satisfied.29. The non-transitory computer-readable medium of Claim 28, further comprising: selecting the first index node based on determining that the first index node is centered in the data flow graph; determining that the first index node is one of a set of sibling nodes in the data flow graph; and computing the first minimum cut by constraining a first portion of the set of sibling nodes to the first portion of the data flow graph and a second portion of the set of sibling nodes to the second portion of the data flow graph.30. A processing system, comprising: means for receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; means for generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; means for generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; means for allocating units of memory in memory based on the first modified topological ordering; and
means for generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components. |
MEMORY-BOUND SCHEDULINGCROSS-REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to United States Patent Application serial number 17/463,393, filed August 31, 2021, which claims the benefit of and priority to United States Provisional Patent Application serial number 63/073,269, filed September 1, 2020, the entire contents of each of which are incorporated herein by reference in their entirety.INTRODUCTION[0001] Aspects of the present disclosure relate to computer processor operation scheduling, and in particular to improved operation scheduling for memory-bound systems.[0002] A large variety of computing processes today involve execution of a number of discrete operations sequentially or in parallel. Scheduling these operations should account for data dependencies (e.g., if particular operations must be completed before certain subsequent operations). Computing systems often utilize memory with fast access, such as caches, tightly-coupled memory (TCM), static random-access memory (SRAM) and the like, to store the associated data needed for execution by each operation. In memory-bound systems, however, there may be insufficient space in these fast-access memories to store the entire sequence of operations and the accompanying data.[0003] Executing such processes on a memory-bound system can reduce performance in a variety of ways. Though some data can typically be stored in fast-access memory such as caches, memory-bound systems often need to rely on larger and slower memories to store the remaining data. Because the larger host memory typically incurs significantly more computational cost than fast-access memories such as SRAM, it is useful to reduce the number of such memory accesses in order to improve the execution of the process. Generally, accesses to the host memory increase power consumption and latency and reduce the overall bandwidth of the computer. An important aspect of scheduling such operations is therefore reduction of memory accesses to the slower memory (e.g., host processing system dynamic random access memory (DRAM)).
[0004] Some existing schedulers utilize greedy heuristics and local optimizations toward the goal of developing an optimal schedule that reduces power consumption, latency, and memory accesses. A variety of approaches exist for balancing the competing goals, but such approaches are inherently local and sub-optimal solutions to a problem which is driven by a global structure of the computing process and operations.[0005] Accordingly, what are needed are systems and methods to improve process scheduling in order to perform computer processing more efficiently.BRIEF SUMMARY[0006] Certain embodiments provide a method for generating execution schedules, comprising receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in the memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.[0007] Other aspects provide processing systems configured to perform the aforementioned methods as well as those described herein; non-transitory, computer-readable media comprising instructions that, when executed by one or more processors of a processing system, cause the processing system to perform the aforementioned methods as well as those described herein; a computer program product embodied on a computer readable storage medium comprising code for performing the aforementioned methods as well as those further described herein; and a processing system comprising means for performing the aforementioned methods as well as those further described herein.[0008] The following description and the related drawings set forth in detail certain illustrative features of one or more embodiments.
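For illustration of the capacity check that underlies the memory-operation insertion summarized in the method above, the following Python sketch walks a topological ordering, tracks the bytes "in flight" at each step, and reports the points at which a predefined local-memory capacity would be exceeded (i.e., the points where spill/fill nodes would have to be inserted). This is a minimal sketch under assumed representations; the function name, the produces/consumers maps, and the toy sizes are hypothetical and do not reflect the disclosed implementation.

```python
def find_spill_points(ordering, produces, consumers, capacity):
    """Report (step, node, overflow) tuples where the data 'in flight'
    would exceed the fast local memory for the given ordering."""
    pos = {node: i for i, node in enumerate(ordering)}
    # An output can be freed at the step of its last consumer; a node
    # with no consumers is freed at its own step (assumed convention).
    free_at = {}
    for producer, size in produces.items():
        last = max((pos[c] for c in consumers.get(producer, ())),
                   default=pos[producer])
        free_at.setdefault(last, []).append(size)

    live, spills = 0, []
    for step, node in enumerate(ordering):
        live += produces.get(node, 0)        # allocate this node's output
        if live > capacity:
            spills.append((step, node, live - capacity))
        for size in free_at.get(step, ()):   # last consumer just finished
            live -= size
    return spills

# Toy graph: 'a' feeds 'b' and 'c'; 'b' feeds 'c'. Sizes in bytes.
produces = {"a": 4096, "b": 2048}
consumers = {"a": {"b", "c"}, "b": {"c"}}
print(find_spill_points(["a", "b", "c"], produces, consumers, capacity=4096))
# -> [(1, 'b', 2048), (2, 'c', 2048)]: spills would be needed at steps 1 and 2
```

Each reported overflow marks a point where, under the method summarized above, nodes would be inserted to spill data to the larger host memory and fill it back before its next use.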
BRIEF DESCRIPTION OF THE DRAWINGS[0009] The appended figures depict certain aspects of the one or more embodiments and are therefore not to be considered limiting of the scope of this disclosure.[0010] FIG. 1 depicts a workflow for improved process scheduling to ensure efficient execution of the process.[0011] FIG. 2A depicts a graph illustrating a set of operations and corresponding data flow involved in executing a process.[0012] FIG. 2B depicts a reverse edge modification to create a modified graph.[0013] FIGS. 2C-2D depict cuts on a graph to partition the nodes into disjoint subsets.[0014] FIG. 2E depicts a full connection modification to create a modified graph.[0015] FIG. 2F depicts a deallocation modification to create a modified graph.[0016] FIGS. 3A-3D depict a sequence of evaluations and operations performed to efficiently generate a valid topological ordering of a data flow graph to improve scheduling of the corresponding process.[0017] FIG. 4 depicts a flow diagram illustrating a method for improved process scheduling.[0018] FIG. 5 depicts a visualization of memory allocations, according to some embodiments disclosed herein.[0019] FIG. 6 depicts a flow diagram illustrating a method for generating topological orderings to improve process scheduling.[0020] FIG. 7 depicts a flow diagram illustrating a method for enforcing topological validity while generating efficient process schedules.[0021] FIG. 8 depicts a flow diagram illustrating a method for handling parallel data flows to accurately generate efficient process schedules.[0022] FIG. 9 depicts a flow diagram illustrating a method for dividing data flow graphs to generate topological orderings to yield efficient process schedules.[0023] FIG. 10 depicts a flow diagram illustrating a method for generating and modifying topological orderings to improve process scheduling.
[0024] FIG. 11 depicts an example processing system, which may be configured to perform at least some of the methods described herein.[0025] FIG. 12 depicts an example processing system, which may be configured to perform at least some of the methods described herein.[0026] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.DETAILED DESCRIPTION[0027] Aspects of the present disclosure provide apparatuses, methods, processing systems, and computer readable mediums for generating more efficient computer processing operation schedules using graph analysis to minimize memory utilization and improve the computational efficiency of executing the schedules.[0028] Execution of many computing processes can be modeled using graphs where each node in the graph corresponds to a particular operation to be performed, and each edge corresponds to a flow of data among the operations.[0029] For example, to execute a neural network, data can flow among any number of nodes for processing in order to generate a final output. In embodiments of the present disclosure, computing processes can be constructed as directed acyclic graphs (DAGs). A DAG is a directed graph that has no directed cycles (e.g., a graph with nodes and directed edges, where following the directed edges from node-to-node will never result in a closed loop). The “source” of a directed graph may generally refer to some upstream entity or component (just before the first node(s) of the graph) that provides any needed input data for the process, while the “target” is a downstream entity or component (just after the last node(s)) that receives any output from the process. To schedule execution of the process, embodiments of the present disclosure can generate a topological ordering (e.g., a linear ordering of nodes) based on the graph. Generally, a valid topological ordering must ensure that producer nodes (e.g., nodes that generate or otherwise output some data for downstream consumption) are scheduled and executed before any corresponding consumer nodes (e.g., nodes that receive, operate on, or are otherwise dependent on data generated by some upstream entity) begin execution. That is, if there is a directed edge from a first node (a producer) to a second node (a consumer), the first
node must appear before the second node in the topological ordering. Notably, a given node may act as both a producer (for one or more downstream nodes) and a consumer (for one or more upstream nodes).[0030] There are often a large number of valid topological orderings for any given graph. Finding an ordering to maximize any particular criterion or property is typically NP-hard. In some embodiments of the present disclosure, topological orderings are created to attempt to minimize storage accesses. By minimizing such storage accesses, the schedule can be executed with reduced latency and power consumption. In an embodiment, each node in the graph may output data that is consumed by zero or more subsequent nodes. Memory or storage is generally allocated (or used) when data is produced by a node, and freed (e.g., the memory space is made available) only when the last consumer completes its processing on the generated data. Thus, in some embodiments, a weighted directed graph is generated to reflect the node dependencies, where each edge corresponds to the units of memory required by the data.[0031] For example, if a given (producer) node outputs 4 kilobytes of data to a consuming node, the edge between them may be assigned a weight of 4 kilobytes. This allows the system to quantify the total memory that will be needed at any given stage of execution.[0032] Embodiments of the present disclosure provide techniques to generate and modify graphs using appropriate edge weighting to yield a valid topological ordering that seeks to minimize memory utilization in order to reduce the number of accesses to slower storage components. Additionally, embodiments of the present disclosure provide techniques to analyze and modify topological orderings to improve efficiency of the schedule.[0033] In various embodiments, the efficiency gains can include, without limitation, reduced power consumption and latency, increased throughput of the system, and the like. In various embodiments, these scheduling improvements can be applied to improve the operations of a wide variety of processors and processes, including execution of machine learning models.Minimum Cuts in Graphs[0034] For a given graph, a cut is a set of edges that, if removed, disconnect the graph (e.g., partition the nodes into two disjoint subsets). As used herein, a minimum cut is a
cut that minimizes the cost of the cut. In embodiments, the cost of a cut is defined as the sum of the weights of each edge that is severed or removed by the cut. Thus, a minimum cut is one that completely separates the graph into two disjoint subgraphs, while minimizing the total weight of the removed edges. For example, if a graph is partitioned by removing two edges, each with a weight of ten, the cost of the cut is twenty.[0035] In the case of a directed graph, the cost of a given cut can be determined based in part on the directionality of each removed edge and the directionality of the cut. Generally, edges that cross the cut in one direction (e.g., from left to right, in the case of a two-dimensional graph) are included when computing the cost of the cut, while edges that cross the cut in the other direction are ignored.[0036] A corollary to the concept of minimum cuts is maximum flow. The max-flow min-cut theorem states that the maximum amount of flow passing through a directed graph from the source to the target is equal to the total cost of the minimum cut. In some embodiments disclosed herein, edges in directed graphs are assigned weights based on the amount of data that flows across the edge. Any data that has been produced by a producer node but not yet consumed by a consumer node may be referred to as "in flight," and must be allocated space in memory. Thus, the weights of the edges in the graph indicate the amount of data that is "in flight" and therefore must have space allocated in memory for the producer/consumer set. Therefore, under the max-flow/min-cut theorem, by finding the minimum cut, the maximum amount of "in flight" data can be determined. That is, the cost of the minimum cut is the maximum amount of memory that will be needed at any one time to execute the operations in the graph.Example Workflow for Improving Computer Process Scheduling[0037] FIG. 1 depicts an example workflow 100 for improved process scheduling to ensure efficient execution of the process. In the illustrated workflow 100, a variety of components are depicted for conceptual clarity. In various embodiments, however, the functionality of each component may be combined or distributed across any number and variety of components. Additionally, the various components and operations may be performed in any order, including iteratively (e.g., a given component may be utilized multiple times in the workflow 100). The illustrated workflow 100 includes an Ordering Component 110, a Memory Component 120, a Reordering Component 125, and an Allocation Component 130. Each of these components may generally be implemented as
a software process on a general-purpose processor, using hardware, or as a combination of hardware and software.[0038] As illustrated, the workflow 100 begins when a Data Graph 105 is received by an Ordering Component 110. In an embodiment, the Data Graph 105 is a directed graph reflecting the flow of data to accomplish a given process. In such an embodiment, each node in the Data Graph 105 may correspond to an operation (e.g., a transformation of data), while each edge may correspond to data being passed between operations.[0039] For example, in one embodiment, the process corresponds to executing a machine learning model task, such as training or inferencing with an artificial neural network model. Although neural networks are used in some examples discussed herein, embodiments of the present disclosure are readily applicable to any data processing operation.[0040] In an embodiment, each node in the Data Graph 105 may correspond to a neuron in a neural network, and the edges may correspond to the connections between such neurons. In order to process model input data (e.g., image data, sound data, sensor data, textual data, or other types of data) using the neural network, the data is parsed and processed by the neurons sequentially, in parallel, or both. Thus, in an embodiment, the Data Graph 105 reflects this sequence of operations using a set of nodes with corresponding edges for the flow of data between neurons.[0041] In some embodiments, the edges of the Data Graph 105 are weighted based on the data each edge corresponds to. In one embodiment, the weight of a given edge indicates the amount of data that is transmitted along the corresponding connection in the neural network. For example, if a first node passes ten kilobytes of data to a second node, the corresponding edge in the Data Graph 105 will have a weight of ten kilobytes. In this way, the Data Graph 105 quantifies the amount of data that is "in flight" at any given point during execution. In embodiments, when data is created or produced, space is required in memory to store it. This space is not freed until the last consumer of the data has finished its operations. Thus, the Data Graph 105 can be used to quantify the amount of memory needed at any given point in execution, by identifying producer-consumer sets that have begun (e.g., the producer has output data) but not yet terminated (e.g., the consumer(s) have not yet finished execution).
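The claims above recite a deallocation-node rewrite that encodes exactly this lifetime rule in the edge weights: the producer's output size is carried once, on an edge to an inserted deallocation node, while the original producer-to-consumer edges are reweighted to zero. A minimal Python sketch of one way such a rewrite could look follows; the dict-of-dicts graph representation and the dealloc_* node names are assumptions for illustration, not the disclosed implementation.

```python
def insert_deallocation_nodes(edges, output_size):
    """Return a weighted graph (dict of dicts) in which each producer's
    output size appears exactly once, on an edge to a new deallocation
    node, with producer-to-consumer edges reweighted to zero."""
    graph, consumers = {}, {}
    for producer, consumer in edges:
        consumers.setdefault(producer, []).append(consumer)
    for producer, outs in consumers.items():
        dealloc = f"dealloc_{producer}"          # hypothetical node name
        row = graph.setdefault(producer, {})
        row[dealloc] = output_size[producer]     # size charged exactly once
        for consumer in outs:
            row[consumer] = 0                    # producer -> consumer: zero
            graph.setdefault(consumer, {})[dealloc] = 0  # consumer -> dealloc: zero
    return graph

g = insert_deallocation_nodes([("a", "b"), ("a", "c"), ("b", "c")],
                              {"a": 4, "b": 2})
# g["a"] == {"dealloc_a": 4, "b": 0, "c": 0}
```

Because every consumer also gains a zero-weight edge to the deallocation node, a cut falling anywhere within the output's lifetime accounts for the output's size exactly once, however many consumers share the data.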
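With sizes encoded this way, the max-flow/min-cut theorem discussed above yields the peak "in flight" memory directly: it equals the maximum flow from the source to the sink. The following is a generic Edmonds-Karp sketch over the same dict-of-dicts representation, offered only to make the computation concrete; it is a plain max-flow routine and omits the disclosure's additional constructions (such as the reverse edges of infinite weight recited in the claims). The toy graph and its sizes are hypothetical.

```python
from collections import deque, defaultdict

def min_cut_cost(graph, source, sink):
    """Edmonds-Karp maximum flow; by the max-flow/min-cut theorem the
    result equals the minimum-cut cost, i.e., the peak amount of data
    'in flight' for a weighted data flow graph."""
    residual = defaultdict(lambda: defaultdict(int))
    for u, row in graph.items():
        for v, w in row.items():
            residual[u][v] += w
    flow = 0
    while True:
        parent = {source: None}           # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                   # no augmenting path remains
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:      # smallest residual capacity on the path
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:      # push flow along the path
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Toy graph: weights are data sizes in kilobytes (hypothetical values).
graph = {"src": {"a": 4, "b": 8}, "a": {"c": 4},
         "b": {"c": 2, "snk": 6}, "c": {"snk": 4}}
print(min_cut_cost(graph, "src", "snk"))  # -> 10, the peak kilobytes in flight
```

The same routine could serve as a building block for the iterative minimum-cut computation that the Ordering Component performs in the paragraphs that follow.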
[0042] In the illustrated embodiment of FIG. 1, the Ordering Component 110 generates a Topological Ordering 115A based on the Data Graph 105. The Topological Ordering 115A is generally a linear sequence of operations (e.g., a sequence of nodes from the Data Graph 105) that respects the dependencies reflected in the graph. Thus, if a given node in the Data Graph 105 is completed before a second node, the given node will precede the second node in the Topological Ordering 115A. By stepping along the Topological Ordering 115A (executing each node in the indicated sequence), one can perform the original process (e.g., processing data using a machine learning model, such as a neural network model).[0043] As discussed above, in some embodiments, there is a limited amount of local and/or relatively faster memory available in the system (e.g., faster to access than a host system memory that requires moving data over a common data bus). For example, a tightly-coupled memory (TCM) may act as a fast, local memory that is tightly-coupled to a processor. In various embodiments, this fast memory may include cache space, SRAM, and the like. Although such memory can be accessed quickly (due to its tight coupling with the processor(s)), the size of such fast memory is often limited owing to physical constraints and other design considerations for a processing system. In contrast, a relatively large amount of storage or memory may be available elsewhere in the host system (e.g., in typical RAM, hard drive or solid state drive devices, and the like). Execution of typical computing operations can be memory intensive, often exceeding the space available in local faster memory. Thus, while it is desirable to store the needed data in the local fast memory, it is often not possible to store all of this data simultaneously (due to its small size), which requires reliance on remote memories such as host system memory (e.g., DRAM).[0044] In an embodiment, the Topological Ordering 115A is configured to reduce the amount of data that is stored at any given point in the process. Beneficially, this reduces the overall memory utilization, latency, and power during the process. In some embodiments, the Ordering Component 110 is configured to generate a set of minimum cuts, in order to generate the Topological Ordering 115A.[0045] Generally, computing a minimum cut includes finding the smallest aggregate edge weights that disconnect the source of the graph from the target. As discussed above, the source of a graph (or subgraph) is a node or component that provides the input to the graph, while a target is a node or component that acts as the final sink or destination for
data traversing the graph. By iteratively finding such minimum cuts, the Ordering Component 110 can generate the Topological Ordering 115A. In embodiments, the Ordering Component 110 may utilize any number of techniques for finding minimum cuts, including the Ford-Fulkerson algorithm, the Edmonds-Karp algorithm, and the like.[0046] In the illustrated workflow 100, the Topological Ordering 115A is received by a Memory Component 120 that inserts memory operations into the ordering, as needed. In an embodiment, each memory operation corresponds to moving one or more units of data from the local memory (e.g., a TCM) into the remote memory or storage (e.g., DRAM), moving one or more units of data from the remote memory or storage into the local memory, or both. In some embodiments, such operations may be referred to as “spill/fill” operations. In other words, data may be spilled from a local memory to a remote memory, and later filled back into a local memory from the remote memory.[0047] In an embodiment, the Memory Component 120 analyzes the Topological Ordering 115A to determine the amount of memory needed at each point in the ordering (e.g., at each node, or between each sequential node). In some embodiments, to do so, the Memory Component 120 determines the aggregate weight of all edges that are still “in flight” at each point (e.g., have left a producer node but not yet terminated at a consumer node).[0048] For any points that exceed an available space in the local memory, the Memory Component 120 inserts one or more memory operations to move some data out of the faster memory into the remote memory. In this way, the Memory Component 120 generates a Modified Topological Ordering 115B, which is the original Topological Ordering 115A with zero or more memory operations inserted therein. Zero operations may suffice because some Data Graphs 105 (e.g., those that require memory space that is smaller than or equal to the local memory) will not require any memory operations to the remote memory.[0049] In the illustrated embodiment, this Modified Topological Ordering 115B is then provided to a Reordering Component 125. In an embodiment, the Reordering Component 125 can move nodes in the Modified Topological Ordering 115B (while respecting the data dependencies) in an effort to improve the potential for parallel processing in execution. In an embodiment, respecting data dependencies includes ensuring that no consumer node is placed before any of its producers in the Modified
Topological Ordering 115B. For example, if a consumer is placed immediately after its producer, the consumer cannot be moved earlier in the ordering. If one or more nodes are located between a consumer and its producer in the Topological Ordering 115A, the consumer may be moved earlier in the ordering to create the Modified Topological Ordering 115B.[0050] For example, in one embodiment, the Reordering Component 125 can move direct memory access (DMA) operations to earlier positions in the ordering to allow them to occur in the background, performed by one or more processing units while execution of the remaining operations continues on one or more other processing units. That is, rather than initiate DMA just before the data is needed (e.g., by the next node), if sufficient space is available in the local memory, the Reordering Component 125 may move the DMA operation earlier to allow it to begin loading the data into the memory before it is needed. This increases parallel utilization of hardware components (e.g., while one processing unit loads the data, others may continue to operate on other data).[0051] As another example, the Reordering Component 125 can modify the ordering of the nodes to improve parallel execution on discrete hardware processing units. In some embodiments, the system can include a number of processing units (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more machine learning accelerators, such as neural processing units (NPUs), and the like). In such an embodiment, the Reordering Component 125 may further modify the Modified Topological Ordering 115B to allow some operations to be performed in parallel on the separate processing units. This results in a second Modified Topological Ordering 115C.[0052] In the illustrated workflow 100, an Allocation Component 130 then processes the Modified Topological Ordering 115C to generate a Processing Schedule 135. Although depicted as being used as the last processing step in the workflow 100 (resulting in the generation of the Processing Schedule 135), in some embodiments, the Allocation Component 130 may be utilized at different steps in other workflows. Additionally, in some embodiments, one or more additional processes may be applied after the Allocation Component 130.[0053] For example, in some embodiments, the Memory Component 120 can insert memory operations before the Allocation Component 130 operates. In some embodiments, however, it is only during allocation that one or more memory operations
need to be inserted. In such an embodiment, therefore, the Memory Component 120 may analyze (or re-analyze) the ordering after allocation is performed.[0054] In one embodiment, the Allocation Component 130 allocates units of memory (e.g., in a TCM) based on the Topological Ordering 115C. In embodiments, as discussed above, the Data Graph 105 and Topological Orderings 115 each include a collection of operations that use input data to produce output data. This data requires working memory while in use. Because the local memory space may be limited, in some embodiments, some memory locations must be reused. This is possible because some units of data are needed only for some subset of the process. For example, if data is produced by a producer node and consumed by a consumer node, the data need only be stored from the time of production to the time of consumption. Thus, each producer-consumer set spans a portion of the Topological Ordering 115C, beginning at the producer node and ending at the (last) consumer node. The Allocation Component 130 may allocate physical locations in the memory at each point in time for the data that is “live” at that point in the ordering.[0055] In embodiments, the Processing Schedule 135 can then be executed by one or more processing units of a processing system, such as described with respect to FIGS. 11 and 12, in order to perform the original process.Example Graph Modifications for Improving Computer Process Scheduling[0056] FIG. 2A depicts an example graph 200A including a set of operations and corresponding data flows involved in executing a process. For example, the process may include a machine learning task, such as training or inferencing based on a machine learning model.[0057] In the illustrated embodiment, data flows from a Source 205 to a Target 215 via a set of Nodes 210. In embodiments, each Node 210 performs some operation on incoming data, such as a mathematical operation, transformation, and the like. Further, each Node 210 may or may not output some data. In the illustrated embodiment, each edge is directed (indicated by the directionality of the arrow). Thus, each edge corresponds to data flowing from a producer node to a consumer node. In the depicted graph 200A, Node 210A receives two units of data from the Source 205, performs some operation, and outputs four units of data to Node 210B and to Node 210C. Node 210B does not output data and thus may be referred to as a data sink or leaf node.
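For concreteness, the graph 200A of FIG. 2A (whose remaining edges are described in the following two paragraphs) can be written down as a directed, weighted graph. The sketch below uses the networkx library purely as one convenient, illustrative encoding, with each weight holding the units of data carried by the edge:

    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("Source205", "Node210A", weight=2)   # Source 205 provides two units
    g.add_edge("Node210A", "Node210B", weight=4)    # 210A outputs four units to 210B
    g.add_edge("Node210A", "Node210C", weight=4)    # ...and four units to 210C
    g.add_edge("Node210C", "Node210E", weight=1)    # 210C compresses its input to one unit
    g.add_edge("Node210D", "Node210E", weight=5)    # 210D provides five units to 210E
    g.add_edge("Node210E", "Target215", weight=4)   # 210E outputs four units to the Target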
[0058] As illustrated, Node 210C receives four units of data and outputs one unit of data to Node 210E. For example, Node 210C may perform a layer operation in a machine learning model in which data is compressed, such as a convolution layer or pooling layer.[0059] Node 210E additionally receives five units of data from Node 210D. Based on this input, Node 210E outputs four units of data to the Target 215. Thus, in the illustrated embodiment, Node 210A is a “producer” node for two consumers: Nodes 210B and 210C. Node 210C is a producer for consumer Node 210E. Node 210D is also a producer for consumer Node 210E. In turn, Node 210E is a producer for the Target 215.[0060] Although the graph 200A depicts a simple example process for conceptual clarity, in various embodiments the graphs may be significantly more complex.[0061] Notably, not all sets of minimum cuts for a given graph yield a valid ordering. For example, a set of minimum cuts may create an ordering that does not respect the data dependencies (indicated by the directionality of the edges). Similarly, when a given producer has multiple consumers, only the node which completes last can free the allocated data. To address these issues, a set of pre-processing operations can be performed to transform the graph 200A of FIG. 2A into a modified representation prior to generating the topological ordering.[0062] FIG. 2B depicts a reverse edge modification to create a modified graph 200B. In the illustrated embodiment, this modification is performed by transforming the graph 200A. In one embodiment, the modification is performed by the Ordering Component 110 of FIG. 1. This graph 200B has one or more new edges inserted to ensure that the cuts yield a valid topological ordering. As discussed above, the concept of a minimum cut does not respect data dependencies, and will not necessarily result in a valid topological ordering (e.g., because the cut can curve back to cut an edge such that, from the perspective of the cut, the edge crosses backwards (e.g., from right to left) and therefore the weight is not included in the cost of the cut). In the illustrated embodiment, for each edge in the graph, a corresponding backwards or reverse edge (labeled 216) has been inserted. This is illustrated in FIG. 2B using dashed arrows. In an embodiment, these reverse edges are assigned a weight that ensures they will not be cut during the subsequent operations, as discussed in more detail below with respect to FIGS. 2C and 2D. That is, the backwards edges force the cut to respect data dependencies by being, in effect, uncuttable. For example, each backwards edge may be assigned an infinite weight.
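Continuing the illustrative networkx encoding above, the reverse-edge modification of FIG. 2B reduces to a few lines; the constant BIG stands in for an “infinite” weight, and the next two paragraphs discuss how such a value may be chosen in practice:

    BIG = 10**12  # arbitrarily high stand-in for an "infinite" weight

    def add_reverse_edges(g):
        # For every forward edge (u, v), insert a backwards edge (v, u) whose
        # weight is effectively uncuttable, so that no minimum cut can cross
        # an edge in the direction that would violate a data dependency.
        for u, v in list(g.edges()):
            g.add_edge(v, u, weight=BIG)
        return g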
[0063] Note that reverse edges are described in some examples herein as having “infinite” weights in order to ensure certain cuts will be prevented algorithmically. Generally, such edges may have arbitrarily high values. For example, because a truly infinite weight may be unworkable due to limitations of the physical hardware and software, the system may use a predefined value that significantly exceeds any realistic weight used in the system and is thus arbitrarily high. Generally, this predefined value may be selected to be larger than the costliest minimum cut, which ensures that it will never be included in a cut because cutting such a high weight, even alone, would incur more penalty than any alternative valid cut.[0064] In some embodiments, this arbitrarily high predefined value is selected to be high enough that it exceeds reasonable or realistic weights, but low enough such that adding the weight of the infinite edge to weights of other edges would not cause integer overflow that could bring the weight back into the realm of reasonable weights. In other embodiments, the arbitrarily high value for reverse edges may be set based on one or more values of forward edges in the graph, such as based on the largest forward edge weight, an average of forward edge weights, and the like, and such reference values may be modified by a coefficient or function to become arbitrarily high.[0065] FIGS. 2C-2D depict cuts on a graph to partition the nodes into disjoint subsets.[0066] FIG. 2C depicts the graph 200B with a Cut 217A. In embodiments, a minimum cut can be conceptualized as a directed line that crosses a set of one or more edges in the graph to separate the source from the target. In the illustrated embodiment, the Cut 217A slices from bottom to top across the page. For each edge that passes through the Cut 217A in one direction (e.g., from the left of the cut through to the right in the illustrated embodiment), the weight of the edge is added to the cost of the cut. For each edge that passes in the other direction (from the right of the cut through to the left in the illustrated embodiment), the weight of the edge is ignored.[0067] In the illustrated embodiment, the edge from Node 210D to Node 210E and the edge from Node 210C to 210E both cross the Cut 217A in the same direction, from the left to the right. Their weights are therefore included in the cost of the Cut 217A. In contrast, the two reverse edges from Node 210E to 210C and Node 210E to 210D cross
the Cut 217A from the right to the left, and their weights are therefore ignored. Thus, the cost of the Cut 217A is six.[0068] FIG. 2D depicts another Cut 217B. The edge from Node 210C to Node 210E crosses the Cut 217B from left to right, and is therefore counted when computing the cost of the Cut 217B. The infinite reverse edge from Node 210E to Node 210C crosses from right to left, and is therefore ignored. As illustrated, the edge from Node 210A to Node 210C crosses the Cut 217B from the right of the Cut 217B to the left of the Cut 217B (determined based on the directionality of the Cut 217B, indicated by the arrow head). Thus, this edge is ignored. In contrast, the infinite reverse edge from Node 210C to Node 210A crosses the Cut 217B from the left of the Cut 217B to the right of the Cut 217B. Thus, the (infinite) weight of this edge is included in the cost of the Cut 217B. This gives the Cut 217B (which may violate the topology of the original graph) an infinite cost, ensuring it will not be used to partition the graph 200B.[0069] Thus, if no infinite (or high weight) edges are inserted, the minimum cut techniques may result in cuts that cross “backwards” over an edge and violate the dependencies. Inserting reverse edges ensures that the subsequently-generated cuts do not violate the topology: such a cut would incur infinite cost.[0070] FIG. 2E depicts a full connection modification to create a modified graph 200C. Specifically, the graph 200C reflects a modified version of the graph 200B, with additional edges inserted to enforce topological validity of the final ordering. A valid topological ordering (also referred to as an ordering with topological validity) is one that respects the original dependencies of the graph, in that no consumer nodes are located before any of their producers. In the illustrated embodiment, this modification is performed to ensure that there is a path in the graph 200C from every Node 210 to the Target 215, and from the Source 205 to every Node 210. To do so, in the illustrated embodiment, zero-weight edges are inserted. Specifically, in graph 200B of FIG. 2B, no path existed to arrive at the Node 210D from the Source 205. By contrast, in graph 200C of FIG. 2E, a zero-weight edge has been inserted directly from the Source 205 to the Node 210D (e.g., by the Ordering Component 110 of FIG. 1) and a corresponding infinite-weight reverse edge has been added.[0071] Similarly, no path existed from the Node 210B to the Target 215 in graph 200B of FIG. 2B. To ensure topological validity, therefore, the Ordering Component 110
inserted a zero-weight edge connecting the Node 210B to the Target 215 (along with a corresponding infinite weight edge in reverse).[0072] FIG. 2F depicts a deallocation modification to create a modified graph 200D. Graph 200D is based on the graph 200C of FIG. 2E and is configured to account for multiple consumer nodes. As discussed above, when a producer outputs data to one or more consumers, that data must be stored until all of the consumer(s) finish processing it. If a single consumer exists for the producer, the data can be deallocated as soon as the consumer completes. If multiple consumers exist, however, the data cannot be deallocated until all have completed their operations. In many embodiments, it may be difficult (or impossible) to know which consumer will complete last. Thus, in the illustrated embodiment, the Ordering Component 110 generates and inserts Deallocation Nodes 220 as needed. In embodiments, Deallocation Nodes 220 are placeholder nodes (e.g., nodes that do not perform any data operations or transformations, and exist to indicate when memory space can be deallocated or freed).[0073] In one embodiment, the Ordering Component 110 inserts a Deallocation Node 220 for any producer node that has more than one consumer. Deallocation Nodes 220 are inserted to ensure that space in the local memory is not deallocated until all consumers of the data in that space have finished. If a producer node has a single consumer, the dependency reflected in the graph (e.g., the edge from the producer to the consumer) is sufficient to ensure that the consumer is scheduled after the producer. The space needed for the producer/consumer set is deallocated once the consumer finishes. However, if multiple consumers are present for a given producer, it is impossible to know when the space can be deallocated, as there is no dependency between the consumers that forces one to finish last. To ensure that the space is not deallocated prematurely, therefore, Deallocation Nodes 220 are used to signal when the space should be deallocated (only once all consumers have finished operating on the data).[0074] In another embodiment, the Ordering Component 110 can simply insert a deallocation node for every producer, regardless of the number of consumers. As illustrated, inserting a Deallocation Node 220 for a given producer includes inserting an edge from each consumer of the producer to the Deallocation Node 220. In the illustrated embodiment, therefore, for the producer Node 210A, zero-weight edges are inserted from each of the consumers Node 210B and 210C to the Deallocation Node 220. As above, infinite-weight backwards edges are also added for each. This ensures that the
Deallocation Node 220 is placed after both Nodes 210B and 210C in the topological ordering, and that the space is not deallocated until both have completed their operations.[0075] In embodiments, the Ordering Component 110 also sets the weights of each original edge from the producer node to the consumers to zero. In the illustrated embodiment, this includes changing the weights of the edges from Node 210A to 210B and from Node 210A to 210C to zero. This is performed because if both edges retained their original weight, the data flowing on those edges would be “double counted” while computing minimum cuts, resulting in inaccuracies and inefficiencies in the final schedule.[0076] To quantify the weight of these edges and ensure they are counted in the final ordering, the Ordering Component 110 can additionally insert an edge from the producer Node 210A to the Deallocation Node 220. This edge is assigned a weight that corresponds to the original edges from the producer to the consumer(s). Additionally, as illustrated, this new edge is similarly accompanied by an infinite backwards edge to enforce topological validity.[0077] In embodiments, the Ordering Component 110 can then process the graph 200D using maximum flow/minimum cut techniques in order to generate a valid topological ordering for the original graph 200A. Additionally, by utilizing the edge weights (corresponding to the amount of data needed for each producer/consumer set), the minimum cuts are computed based on memory utilization and attempt to use the minimum amount of memory at each stage of the execution.Example Minimum Cut Procedures for Improving Computer Process Scheduling[0078] FIGS. 3A-3D depict a sequence of evaluations and operations performed to generate a valid topological ordering of a data flow graph to improve scheduling of the corresponding process. Specifically, FIGS. 3A-3D depict a sequence of minimum cuts computed by an ordering component (e.g., Ordering Component 110 of FIG. 1) in order to generate a topological ordering.[0079] FIG. 3A depicts an initial graph 300A before cuts are generated. In the illustrated graph, data flows from a Source 305A to a Target 315A via a set of Nodes 310. Specifically, Node 310A receives data from Source 305A and transmits data to Nodes 310B and 310C. Node 310B in turn provides data to Node 310D, and Node 310C provides
data to Node 310E. Node 310F receives data from both Node 310D and Node 310E. Node 310F then provides data to the Target 315A.[0080] FIG. 3B depicts a first Cut 317 in the graph 300A. In embodiments, the minimum cut technique yields a single cut for a graph. To generate a topological ordering, the ordering component can select a node to serve as an “index node” for a given cut, and constrain the cut to pass just after the index node. This allows the ordering component to exert some control over where the cut is placed, to help improve the efficiency and latency of the cutting process. For example, if the cuts are not constrained, the minimum cut for a given graph is likely to be a single edge near the beginning or end of the graph. Cutting here, however, is not useful in generating a topological ordering, because this portion of the graph is already linear. That is, if a single edge connects node A to node B, there is no possible ordering that places node B before node A. Generating a cut at this point, therefore, is useless.[0081] In some embodiments, to constrain the cut to an index node, the Ordering Component 110 selects an index node, computes a cut to divide the graph into two subgraphs, and then processes each subgraph to compute another cut for each. By iteratively processing each subgraph, a topological ordering is generated. In embodiments, a topological ordering is a linear sequence of nodes. There may be multiple valid topological orderings for a given graph. By iteratively processing each subgraph to subdivide each into additional subgraphs, the ordering component iteratively makes the overall graph more linear. That is, each cut effectively enforces some linearity by placing some nodes before the cut and some nodes after. By iteratively computing cuts, the graph becomes more linear.[0082] In one embodiment, this process repeats until all subgraphs are linear (or all subgraphs include a single node). In another embodiment, rather than computing cuts until the subgraphs are linear (or contain a single node), the processing system can proceed until each subgraph has reached some predefined criteria relating to size or complexity (e.g., a number of nodes in the subgraph). These subgraphs may then be transformed using one or more techniques to generate topological orderings for the subgraph.[0083] In an embodiment, the final topological ordering is created by reconnecting each subgraph, in the proper order, where the cuts were made (e.g., by adding or
reconnecting the edges that were severed, while maintaining the linear sequence of nodes).[0084] In the illustrated embodiment, the Ordering Component 110 has selected the Node 310B to serve as the index node for the first Cut 317. In some embodiments, the Ordering Component 110 selects the index node randomly. In at least one embodiment, the Ordering Component 110 attempts to select an index node that is near the center of the graph 300A based on the depth of each node.[0085] In one embodiment, the Ordering Component 110 determines the depth of each respective Node 310 based on its distance from both the Source 305A and the Target 315A. For example, for each respective Node 310, the Ordering Component 110 may count the number of nodes or edges that precede it (e.g., that must be traversed to get from the source to the node) and the number of nodes or edges that are subsequent to it (e.g., that must be traversed to get from the node to the target). By aggregating these counts (e.g., through addition or multiplication), the Ordering Component 110 can identify the node (or set of nodes) located nearest to the center of the graph 300A. In an embodiment, the node with the highest aggregate depth from source and target is referred to as a “centroid” node. Although generally located near the center, in embodiments, the centroid node may of course not be precisely centered in the graph.[0086] In one embodiment, if multiple nodes have the same depth score, the Ordering Component 110 can select the index node randomly from among them. In at least one embodiment, if two nodes are at the same depth and have the same operation type, the Ordering Component 110 treats them as sibling nodes. Sibling nodes generally correspond to a single node which was split into a set of siblings, each performing the same operation. Sibling nodes may be generated to enhance processing parallelism. In an embodiment, to select among such siblings, the Ordering Component 110 constrains the cut after the middle node of the siblings, such that half of the siblings complete before the index node runs, and half complete after. In one embodiment, to identify the middle sibling, the ordering component traverses one edge upstream in the graph, then one edge back down to a sibling. This repeats until all siblings are found.[0087] Typically, a minimum cut technique will place the cut to bisect the graph 300A in a location that minimizes the weight of the edges it cuts through. In order to constrain the cut to occur just after the selected index node (e.g., Node 310B), the Ordering
Component 110 may add additional edges to the graph. In the illustrated embodiment, the Ordering Component 110 adds an edge with infinite weight from the Source 305A to the index Node 310B. The Ordering Component 110 additionally adds an edge with infinite weight from each consumer of the index node (Node 310D in the illustrated embodiment) to the Target 315A. This ensures that the computed cut will pass just after the index node and before any consumers of the index node.[0088] In the illustrated embodiment, the resulting Cut 317 separates the index Node 310B and its consumer Node 310D. As illustrated, to bisect the graph 300A and disconnect the Source 305A from the Target 315A, the Cut 317 also cuts through the edge between Node 310A and Node 310C. Of course, in embodiments, the Cut 317 could instead have cut between Nodes 310C and 310E, between Nodes 310E and 310F, or between Node 310D and 310F (continuing through the edge between Node 310F and the Target 315A).[0089] In embodiments, the particular path of the cut (e.g., the edges it severs) is chosen to minimize the aggregate/accumulated weight of the severed edges. The Cut 317 must bifurcate the edge between Nodes 310B and 310D (because of the edges added by the Ordering Component 110). The Ordering Component 110 will then route the Cut 317 through other edge(s) as needed to completely separate the graph 300A, while minimizing the total cost. Each time the Cut 317 passes through an edge, the weight of the edge is added to the total cost (also referred to as a penalty) of the cut. Notably, the total cost of the Cut 317 reflects the total amount of data that is maintained in memory at the corresponding point of the cut.[0090] The Cut 317 bifurcates the graph 300A, such that some portion of the graph 300A is performed prior to the Cut 317, and some portion occurs after the Cut 317. In the illustrated embodiment, Nodes 310A and 310B precede the Cut 317 (along with the Source 305A), while Nodes 310C-F are subsequent to it (along with the Target 315A).[0091] FIG. 3C depicts the result of the Cut 317. Specifically, as illustrated, all elements on one side of the Cut 317 have been placed in a first subgraph 300B, while all elements on the other side belong to a second subgraph 300C. Additionally, as illustrated, a new Target 315B has been inserted into the subgraph 300B to provide a new target for the subgraph. In some embodiments, for each edge that was cut by the Cut 317, a new
edge is added to the new Target 315B, in order to preserve and enforce the dependencies in the original graph 300A.[0092] Similarly, in the subgraph 300C, a new Source 305B has been added. Additionally, as illustrated, for each edge that the Cut 317 severed, a new edge has been added from the new Source 305B to enforce the dependencies of the original graph 300A.[0093] FIG. 3D depicts additional Cuts 320 and 325 generated by the Ordering Component 110 for the subgraphs 300B and 300C, respectively. In the subgraph 300B, the Ordering Component 110 has selected the Node 310A as the index node. To constrain the Cut 320 to occur just after this Node 310A, as illustrated, the Ordering Component 110 inserts infinite-weight edges from the Source 305A to the index Node 310A, and from all consumers of the index node (e.g., from consumer Node 310B) to the new Target 315B.[0094] Similarly, in subgraph 300C, the Ordering Component 110 has selected Node 310E to serve as the index node. To constrain the Cut 325, the Ordering Component 110 has inserted an infinite edge from the new Source 305B to the index Node 310E, and an infinite edge from each consumer of the index node (here, Node 310F) to the Target 315A.[0095] As discussed above, Cuts 320 and 325 bifurcate each subgraph 300B and 300C into two new subgraphs (yielding four subgraphs in total). In some embodiments, the Ordering Component 110 can perform similar processing for each subgraph iteratively until some terminating criteria (e.g., a maximum number of iterations, or a maximum time spent finding the cuts) has been satisfied. That is, the Ordering Component 110 may compute a cut to divide a graph into two subgraphs. For each subgraph, the Ordering Component 110 can then compute another cut to divide each subgraph into two more subgraphs (yielding four subgraphs total). For each of these four subgraphs, the Ordering Component 110 may similarly compute a cut to yield eight total subgraphs.[0096] In one embodiment, the Ordering Component 110 selects the next subgraph to be divided based on the size of each subgraph. Although selecting a centroid node can help balance the subgraphs, the resulting cut can be significantly uneven (with more nodes on one side than the other). In some embodiments, the Ordering Component 110 selects the largest of the available subgraphs to compute the next cut. This iterative process repeats until the predefined criteria are met. By iteratively selecting and cutting the largest subgraph, the ordering component can compute minimum cuts at the denser or more
complex regions of the graph first, followed by the less dense or complex regions. This results in a more efficient topological ordering, and reduces the time needed to find the final set of cuts.[0097] In one embodiment, the terminating criteria relate to a full topological ordering. For example, the Ordering Component 110 may continue to iteratively cut each subgraph until the nodes in a given subgraph are linear. When a subgraph is linear, no additional cuts are required to yield a topological ordering. Once all subgraphs are linear, they may then be combined (e.g., by linking the subgraphs together at the places where they were cut) to form the full topological ordering. In some embodiments, the terminating criteria may include a number of iterations or cuts to be computed. Once the number of iterations has been reached, the process stops. In some embodiments, the terminating criteria include a time bound. When the predefined amount of time has been spent, the cutting process stops.Example Method for Process Scheduling[0098] FIG. 4 depicts a flow diagram illustrating a method 400 for improved scheduling of computer processing operations. In some embodiments, the method 400 is performed by a processing system, such as described with respect to FIG. 11, including one or more components, such as an Ordering Component 110, a Memory Component 120, a Reordering Component 125, an Allocation Component 130, and the like.[0099] The method 400 begins at block 405, where a data flow graph is received for processing. In an embodiment, as discussed above, this data flow graph generally corresponds to some computing process, where each node corresponds to an operation performed during the process and each (directed) edge corresponds to data flow in the process. In some embodiments, the weight of each edge in the data flow graph corresponds to the amount of data that is passed along the dependency and is therefore required to be allocated space in memory.[0100] At block 410, the processing system generates a topological ordering for the received data flow graph. In one embodiment, this process includes some or all of the steps illustrated and discussed above with reference to FIGS. 2A-2F and 3A-3D.[0101] The method 400 then proceeds to block 415, where the processing system determines whether an available space in memory (e.g., in local memory) will be exceeded by the topological ordering at any point. That is, the processing system can
determine whether the memory needed (indicated by the aggregate weight of the edges at any point in the ordering) exceeds the available space in the memory (e.g., in the TCM). If so, the method 400 continues to block 420.[0102] At block 420, the processing system inserts one or more memory operations (e.g., spill/fill operations) into the topological ordering. In an embodiment, for each point where the needed space (indicated by the weight of the edges at each point) exceeds the available space in local memory, the processing system inserts memory operation(s) to move some data out of the local memory and into more remote memory to ensure the local memory capacity is not exceeded. The processing system may similarly insert operations to move the data back into the local memory when needed. The method 400 then continues to block 425.[0103] Additionally, at block 415, if the processing system determines that no point in the ordering will require more space than is available in local memory, the method 400 continues to block 425.[0104] At block 425, the processing system allocates units of memory based on the topological ordering. As discussed above, in some embodiments, this includes assigning addresses in the memory to each piece of data at each point in time.[0105] FIG. 5 depicts a visualization of memory allocations, according to some embodiments disclosed herein. In the illustrated embodiment, allocations are plotted on a graph 500 where the horizontal axis is time and the vertical axis is units of memory in the local memory. In an embodiment, each Allocation 510 (e.g., to a producer-consumer or producer-consumers set) is depicted as a rectangle that spans horizontally from a time when the producer node produces the data to a time when the final consumer node consumes it. The height of each Allocation 510 corresponds to the amount of data needed for the producer-consumer set. In an embodiment, the processing system allocates memory by assigning units of memory in an effort to pack such rectangles as tightly as possible, without exceeding some predefined value on the vertical axis (corresponding to the available space of the local memory, indicated by the dashed line 505). If any Allocations 510 pass this line 505, some data must be moved to the remote (host) memory, and the corresponding space in the local memory is deallocated (allowing it to be reallocated to other producer/consumer sets).
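One simple way to realize this rectangle packing is a first-fit scan over base offsets. The sketch below is an illustrative heuristic under the assumptions of FIG. 5, not necessarily the allocator used by the Allocation Component 130; allocations that cannot be placed under the capacity line correspond to the spill/fill situations discussed above:

    def first_fit_offsets(allocations, capacity):
        # allocations: (start_time, end_time, size) tuples, sorted by start time.
        # Returns {index: base offset}; allocations that overlap in time never
        # share addresses, and no placement may exceed the local memory capacity.
        placed = []   # (start, end, offset, size) for allocations already placed
        offsets = {}
        for i, (start, end, size) in enumerate(allocations):
            # Address ranges busy at any time during [start, end].
            busy = sorted((off, off + sz) for (s, e, off, sz) in placed
                          if not (e < start or s > end))
            offset = 0
            for lo, hi in busy:
                if offset + size <= lo:
                    break                 # fits in the gap below this busy range
                offset = max(offset, hi)  # otherwise skip past it
            if offset + size > capacity:
                raise ValueError("exceeds local memory; a spill operation is required")
            placed.append((start, end, offset, size))
            offsets[i] = offset
        return offsets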
[0106] In the illustrated embodiment, the allocations indicate the time when each producer and consumer can operate. For example, the producer associated with the Allocation 510F has space allocated after the consumer associated with the Allocation 510B has completed. Thus, the producer of Allocation 510F cannot begin until the consumer of Allocation 510B completes.[0107] Returning to FIG. 4, once the memory has been allocated, in the illustrated embodiment, the method 400 continues to block 430, where the processing system modifies the ordering of the nodes in the topological ordering in order to increase parallel utilization of resources. In various embodiments, this can include, for example, moving load operations (e.g., moving data from storage to memory) to earlier positions in the ordering, rearranging nodes to allow for parallel execution on separate processing units, and the like.Example Method for Generating Topological Orderings[0108] FIG. 6 depicts a flow diagram illustrating a method 600 for generating topological orderings to improve process scheduling.[0109] In one embodiment, the method 600 provides additional detail for block 410 in FIG. 4 (generating a topological ordering).[0110] The method 600 begins at block 605, where the processing system performs one or more operations to ensure that the cuts will result in a valid topological ordering (e.g., an ordering that respects the original dependencies). As discussed above, this may include, for example, adding reverse edges with high or infinite weights (to prevent the cut from crossing an edge backwards and violating a dependency).[0111] The method 600 then continues to block 610, where the processing system inserts zero or more deallocation nodes into the graph, as needed. In one embodiment, as discussed above, the deallocation nodes can be utilized to ensure that producers with multiple consumer nodes are processed correctly by the minimum cut algorithm(s).[0112] At block 615, a loop is initiated to generate a set of minimum cuts. At block 615, the processing system determines whether predefined termination criteria are satisfied.[0113] In one embodiment, the terminating criteria include determining whether the graph (or each subgraph) is linear. If so, the method 600 continues to block 620, where
the processing system returns this linear topological ordering. In various embodiments, the termination criteria can include, for example, a maximum time, a maximum number of cuts or iterations, and the like.[0114] If the terminating criteria are not satisfied, the method 600 continues to block 625, where the processing system selects a subgraph. In an embodiment, during the first iteration of the loop, the processing system operates on the entire graph. In each subsequent iteration, block 625 can include selecting the next subgraph to be operated on. In one embodiment, the processing system selects the largest of the remaining (nonlinear) subgraphs.[0115] The method 600 then continues to block 630, where the processing system computes a minimum cut for the selected subgraph (or the original graph, for the first iteration). In some embodiments, to compute the cut, the processing system first selects an index node to constrain the cut. That is, the processing system ensures that the cut separates the index node from all of its consumers. Selecting index nodes near the center of the graph can allow the processing system to process the complex portions of the graph first, rather than cutting off single nodes, which ensures that the iterative process is rapid and efficient, and results in a final ordering that accounts for the global graph structure (as opposed to individual local portions thereof).[0116] In some embodiments, computing the cut comprises selecting a set of edges to remove in order to completely separate the source and the target nodes while minimizing the total cost of the cut. The cost of any given cut is determined based on aggregating (e.g., adding) the individual weights of each edge that the cut crosses.[0117] The method 600 then returns to block 615. In this way, the processing system continues to iteratively evaluate and cut each subgraph until the terminating criteria are satisfied.Example Method for Modifying Graphs to Enforce Topological Validity[0118] FIG. 7 depicts a flow diagram illustrating a method 700 for enforcing topological validity while generating efficient process schedules.[0119] In one embodiment, the method 700 provides additional detail for block 605 in FIG. 6 (ensuring the cuts will yield a valid topological ordering).
[0120] In the illustrated embodiment, the method 700 begins at block 705, where the processing system selects an edge in the graph. This initial selection may be accomplished in any number of ways, including randomly, as the processing system will iterate through all edges in the graph.[0121] At block 710, the processing system generates a corresponding reverse edge for the selected edge. That is, if the selected edge traverses from a first node to a second node, the reverse edge is from the second node to the first node. In embodiments, this reverse edge is assigned a predefined weight or other flag indicating that it cannot be cut by a minimum cut. For example, in one embodiment, the processing system assigns an infinite (or an arbitrarily high) weight that would cause an infinite (or arbitrarily high) penalty to be applied for any cut that crosses in the wrong direction (e.g., in a direction that would violate the data dependencies in the graph).[0122] At block 715, the processing system determines whether any additional edges remain to be evaluated. If so, the method 700 returns to block 705. In this way, the processing system inserts reverse edges to enforce the data dependencies in the graph and ensure any minimum cut results in a valid topological ordering. The method 700 then continues to block 720.[0123] At block 720, the processing system attempts to visit all nodes in the graph by traversing directed edges from the source. This may include utilizing a breadth-first search or a depth-first search, depending on the particular implementation. In an embodiment, the processing system notes which nodes have been visited during this search. The method 700 then proceeds to block 725, where the processing system determines whether any nodes in the graph were not visited during this search. If so, the unvisited nodes are disconnected from the source, in that no valid path exists using directed edges to reach the node from the source. If all nodes were visited, the method 700 proceeds to block 735. If at least one node was not traversed during the search, however, the method 700 continues to block 730.[0124] At block 730, the processing system inserts edge(s) with zero weight from the source to any nodes that were not visited during the search. This ensures that the non-visited nodes are fully connected to the source, and enforces topological validity of any computed minimum cuts. In one embodiment, the processing system additionally inserts
an infinite weight reverse edge from the non-visited node(s) to the source. The method 700 then continues to block 735.[0125] At block 735, the processing system performs another search to attempt to visit all nodes by traversing the reverse edges from the target. The system may similarly note which nodes are found/traversed during this search. In embodiments, this search may be performed depth-first or breadth-first.[0126] The method 700 then continues to block 740, where the processing system determines whether all nodes were visited during this search from the target. If any nodes were not found, they are disconnected from the target and no valid path exists in the graph using the (forward) directed edges from the non-visited node to the target. If all nodes were found, the method 700 proceeds to block 750. If at least one node was not visited, however, the method 700 continues to block 745.[0127] At block 745, the processing system inserts edge(s) with zero weight from the non-visited node(s) to the target. This ensures that the node is fully connected to the target, and enforces topological validity of any computed minimum cuts. In one embodiment, the processing system additionally inserts an infinite weight reverse edge from the target to the non-visited node(s). The method 700 then continues to block 750.[0128] At block 750, the processing system returns the modified graph. In this way, the processing system ensures that each node is connected via a valid path to both the source and the target, in order to enforce the data dependencies in the graph and ensure any minimum cut results in a valid topological ordering.Example Method for Modifying Graphs using Deallocation Nodes[0129] FIG. 8 depicts a flow diagram illustrating a method 800 for handling parallel data flows to accurately generate efficient process schedules.[0130] In one embodiment, the method 800 provides additional detail for block 610 in FIG. 6 (inserting deallocation nodes as needed). The method 800 begins at block 805, where the processing system selects a producer node in the graph. In embodiments, producer nodes are any nodes that output data to one or more subsequent nodes (or to the target node).[0131] In some embodiments, at block 805, the processing system selects a producer node from a subset of the producers in the graph. For example, in one embodiment, the
method 800 is only applied to producer nodes that have more than one consumer. That is, because adding deallocation nodes for any producer with a single consumer is not needed and is potentially wasteful, the processing system may first identify all nodes that output to multiple consumers, and select from this identified subset. In another embodiment, the method 800 is applied to all producer nodes, regardless of the number of consumers each is associated with. In various embodiments, this initial selection may be accomplished in any number of ways, including randomly.[0132] At block 810, the processing system identifies the set of consumer node(s) for the selected producer.[0133] The method 800 then proceeds to block 815, where the processing system determines the amount of data that is output by the selected producer to the identified consumer(s). The amount of data produced will be used to set the weight of the edge to the deallocation node. For example, if the producer produces ten kilobytes of data (regardless of the number of consumers that use this data), the system will subsequently set the weight of the edge to the deallocation node to ten kilobytes. The method 800 then continues to block 820.[0134] At block 820, the processing system generates a deallocation node for the selected producer. At block 825, the processing system then inserts an edge from the selected producer to the deallocation node.[0135] The method 800 then proceeds to block 830, where the processing system assigns a weight to the newly-generated edge. In an embodiment, the weight of the edge is based on the (previously-determined) amount of data that is output by the producer node to the consumer(s). In some embodiments, the processing system also inserts a reverse edge with arbitrarily-high weight from the deallocation node to the selected producer node.[0136] The method 800 then proceeds to block 835, where the processing system creates edges from each identified consumer of the selected producer, connecting them to the newly-created deallocation node.[0137] At block 840, the processing system sets the weights of the edges to and from the identified consumers to zero. That is, the processing system sets the weight(s) of all edge(s) from the selected producer to the identified consumer(s) to zero. This ensures that the data for the producer-consumer(s) set is not counted multiple times in computing the
minimum cuts. The system further sets the weight(s) of the newly-created edge(s) from each consumer to the deallocation node to zero.[0138] At block 845, the processing system determines whether there is at least one additional producer (or one additional producer in the subset of producers with multiple consumers) that has not been evaluated. If so, the method 800 returns to block 805.[0139] Otherwise, the method 800 continues to block 850, where the processing system returns this modified graph with deallocation nodes inserted.Example Method for Finding Minimum Cuts[0140] FIG. 9 depicts a flow diagram illustrating a method 900 for dividing data flow graphs to generate topological orderings to yield efficient process schedules.[0141] In one embodiment, the method 900 provides additional detail for block 630 in FIG. 6 (generating a cut in a graph).[0142] The method 900 begins at block 905, where the processing system selects an index node from the graph. In one embodiment, as discussed above, the processing system selects the index node based on the depth of each of the nodes. For example, the processing system may generate a depth score for each node based on its distance from the start node and to the end node, where higher depth scores correspond to nodes that are closer to the center of the graph (or subgraph). The processing system may then select, as the index node, the node with the largest depth score.[0143] At block 910, the processing system inserts an infinite weight edge from the source node to the selected index node. This constrains the subsequent cut to occur after the index node.[0144] Further, at block 915, the processing system identifies the consumer(s) of the index node.[0145] At block 920, the processing system similarly inserts an infinite weight edge from each identified consumer to the target. This constrains the cut to occur before any of the identified consumers. In this way, the processing system can constrain the minimum cut techniques to cut immediately after the index node, separating it from its consumers.[0146] The method 900 then continues to block 925, where the processing system generates a minimum cut for the index node. In embodiments, the minimum cut is found
by identifying a set of edges to sever that will separate the source from the target while incurring the minimum penalty (based on edge weight), as discussed above.Example Method for Generating Topological Orderings for Efficient Process Scheduling[0147] FIG. 10 depicts a flow diagram illustrating a method 1000 for generating and modifying topological orderings to improve process scheduling.[0148] The method 1000 begins at block 1005, where a processing system receives a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges.[0149] At block 1010, the processing system generates a topological ordering for the data flow graph based at least in part on memory utilization of the process.[0150] In some aspects, the plurality of nodes in the data flow graph correspond to operations performed during the process, the plurality of edges in the data flow graph correspond to data passing among the operations, each respective edge of the plurality of edges is associated with a respective weight based on a size of the data associated with the respective edge, and generating the topological ordering comprises finding a set of minimum cuts in the data flow graph based on the weights.[0151] In some aspects, finding the set of minimum cuts comprises modifying the data flow graph to enforce data dependencies by: for each respective edge of the plurality of edges, adding a respective backwards edge of infinite weight. In some aspects, finding the set of minimum cuts further comprises modifying the data flow graph to enforce data dependencies by ensuring that at least one valid path exists in the data flow graph from a source to each of the plurality of nodes and from each of the plurality of nodes to a sink.[0152] In some aspects, finding the set of minimum cuts comprises assigning the weights to the plurality of edges by: identifying a producer node of the plurality of nodes that outputs data to at least one consumer node of the plurality of nodes; determining a size of the data output by the producer node; and inserting a deallocation node into the data flow graph by: creating a first edge with a weight corresponding to the size of the data output by the producer node, wherein the first edge is inserted from the producer node to the deallocation node; assigning a weight of zero to an edge from the producer node to the at least one consumer node; and creating an edge from the at least one consumer node to the deallocation node, assigned a weight of zero.
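Continuing the illustrative networkx encoding from the earlier sketches, the deallocation-node insertion just summarized might look like the following; the names are hypothetical, BIG again stands in for an infinite reverse-edge weight, and the producer-to-consumer edges are assumed to already exist in g:

    import networkx as nx

    BIG = 10**12  # arbitrarily high stand-in for an "infinite" weight

    def insert_dealloc_node(g, producer, consumers, data_size):
        dealloc = f"dealloc_{producer}"
        g.add_edge(producer, dealloc, weight=data_size)  # carries the true output size
        g.add_edge(dealloc, producer, weight=BIG)        # uncuttable reverse edge
        for c in consumers:
            g[producer][c]["weight"] = 0                 # zero the original edge
            g.add_edge(c, dealloc, weight=0)             # consumer -> deallocation node
            g.add_edge(dealloc, c, weight=BIG)           # uncuttable reverse edge
        return dealloc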
[0153] In some aspects, finding the set of minimum cuts comprises, for a first index node of the plurality of nodes, constraining a first minimum cut to occur after the first index node by: creating a first edge with an infinite weight from a source to the first index node; identifying a set of consumer nodes, from the plurality of nodes, that receive data from the first index node; creating edges with an infinite weight from each consumer node in the set of consumer nodes to a sink; and computing the first minimum cut, wherein the first minimum cut places the first index node in a first portion of the data flow graph and all successors of the first index node in a second portion of the data flow graph.[0154] In some aspects, finding the set of minimum cuts further comprises iteratively computing minimum cuts for index nodes in the first and second portions of the data flow graph and separating the first and second portions of the data flow graph based on the minimum cuts until a predefined stopping condition is satisfied.[0155] In some aspects, the method 1000 further includes selecting the first index node based on determining that the first index node is centered in the data flow graph.[0156] In some aspects, the method 1000 further includes determining that the first index node is one of a set of sibling nodes in the data flow graph; and computing the first minimum cut by constraining a first portion of the set of sibling nodes to the first portion of the data flow graph and a second portion of the set of sibling nodes to the second portion of the data flow graph.[0157] At block 1015, the processing system generates a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity.[0158] At block 1020, the processing system allocates units of memory in the memory based on the first modified topological ordering.[0159] At block 1025, the processing system generates a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.[0160] In some aspects, rearranging one or more nodes in the first modified topological ordering comprises moving one or more nodes corresponding to loading data from a host processing system memory into the memory to an earlier position in the topological ordering.
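The index-node constraint of this aspect can likewise be sketched with networkx, whose max-flow based minimum_cut routine returns the two partitions directly. This is an illustration under the encoding assumed in the earlier sketches (with the reverse edges of FIG. 2B already present in g), not the claimed implementation:

    import networkx as nx

    BIG = 10**12  # arbitrarily high stand-in for an "infinite" weight

    def constrained_min_cut(g, source, target, index_node, consumers):
        # consumers: the forward consumers of index_node in the original graph.
        h = g.copy()
        h.add_edge(source, index_node, weight=BIG)  # cut must fall after the index node
        for c in consumers:
            h.add_edge(c, target, weight=BIG)       # ...and before each of its consumers
        cut_value, (before, after) = nx.minimum_cut(h, source, target,
                                                    capacity="weight")
        return cut_value, before, after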
Example Systems for Generating and Executing Efficient Process Schedules[0161] FIG. 11 depicts an example Processing System 1100, which may be configured to perform aspects of the various methods described herein, including, for example, the methods described with respect to FIGS. 4 and 6-10.[0162] Processing System 1100 includes a central processing unit (CPU) 1102, which in some examples may be a multi-core CPU. Instructions executed at the CPU 1102 may be loaded, for example, from a program memory associated with the CPU 1102 or may be loaded from a memory 1114.[0163] Processing System 1100 also includes additional processing components tailored to specific functions, such as a graphics processing unit (GPU) 1104, a digital signal processor (DSP) 1106, and a neural processing unit (NPU) 1108.[0164] Though not depicted in FIG. 11, NPU 1108 may be implemented as a part of one or more of CPU 1102, GPU 1104, and/or DSP 1106.[0165] Although not included in the illustrated embodiment, the Processing System 1100 may also include one or more input and/or output devices, such as screens, physical buttons, speakers, microphones, and the like.[0166] Processing System 1100 also includes Memory 1114, which is representative of one or more static and/or dynamic memories, such as a dynamic random access memory, a flash-based static memory, and the like. In this example, Memory 1114 includes computer-executable components, which may be executed by one or more of the aforementioned processors of Processing System 1100.[0167] In this example, Memory 1114 includes an Ordering Component 110, Memory Component 120, Reordering Component 125, Allocation Component 130, Data Graph(s) 105, Topological Ordering(s) 115, and Processing Schedule(s) 135. The depicted components, and others not depicted, may be configured to perform various aspects of the methods described herein. For example, the Ordering Component 110, Memory Component 120, Reordering Component 125, and Allocation Component 130 may analyze Data Graphs 105 to generate Topological Orderings 115 and Processing Schedules 135. These Processing Schedules 135 may be executed by the Processing System 1100, or may be used by one or more other devices or systems.
[0168] In the illustrated example, the Processing System 1100 also includes an Ordering Circuit 1120, a Memory Circuit 1122, a Reordering Circuit 1124, and an Allocation Circuit 1126. The depicted circuits, and others not depicted, may be configured to perform various aspects of the techniques described herein.[0169] For example, the Ordering Circuit 1120 may be configured to perform the functionality of the Ordering Component 110, the Memory Circuit 1122 may be configured to perform the functionality of the Memory Component 120, the Reordering Circuit 1124 may be configured to perform the functionality of the Reordering Component 125, and the Allocation Circuit 1126 may be configured to perform the functionality of the Allocation Component 130.[0170] Though depicted as separate components and circuits for clarity in FIG. 11, Ordering Circuit 1120, Memory Circuit 1122, Reordering Circuit 1124, and Allocation Circuit 1126 may collectively or individually be implemented in other processing devices of the Processing System 1100, such as within CPU 1102, GPU 1104, DSP 1106, NPU 1108, and the like.[0171] FIG. 12 depicts an example multi-processor Processing System 1200, which may be configured to perform aspects of the various methods described herein.[0172] Processing System 1200 includes a central processing unit (CPU) 1202, which in some examples may be a multi-core CPU. Instructions executed at the CPU 1202 may be loaded, for example, from a program memory associated with the CPU 1202 or may be loaded from a Memory 1214 or Host Memory 1216.[0173] Processing System 1200 also includes additional processing components tailored to specific functions, such as a graphics processing unit (GPU) 1204, a digital signal processor (DSP) 1206, and a neural processing unit (NPU) 1208. In some examples, one or more of the processors of Processing System 1200 may be based on an ARM or RISC-V instruction set.[0174] Though not depicted in FIG. 12, NPU 1208 may be implemented as a part of one or more of CPU 1202, GPU 1204, and/or DSP 1206.[0175] Although not included in the illustrated embodiment, the Processing System 1200 may also include one or more input and/or output devices, such as screens, physical buttons, speakers, microphones, and the like.
[0176] Processing System 1200 includes a Local Memory 1214, which is representative of memory or storage situated close to the various processing units. For example, the Local Memory 1214 may include tightly-coupled memory (TCM), SRAM, cache space, and the like. As illustrated, the Local Memory 1214 includes some Data 1218A. In an embodiment, this Data 1218A in the Local Memory 1214 may correspond to data that is currently being processed or used by the Processing System 1200 (e.g., while executing a process using a Processing Schedule 135).

[0177] Processing System 1200 also includes Host Memory 1216, which is representative of one or more static and/or dynamic memories, such as a dynamic random access memory, a flash-based static memory, and the like. In this example, Host Memory 1216 includes computer-executable Processing Schedule(s) 135, which may be executed by one or more of the aforementioned processors of Processing System 1200. In the illustrated embodiment, the Host Memory 1216 also includes Data 1218B. In some embodiments, this Data 1218B may be additional data for one or more ongoing operations (e.g., being executed according to a Processing Schedule 135) that does not fit within the limited space available in the Local Memory 1214.

[0178] In some embodiments, the Processing System 1200 may move data back and forth between the Local Memory 1214 and the Host Memory 1216 while executing a Processing Schedule 135 using one or more processing units, as discussed above.

Example Clauses

[0179] Clause 1: A method, comprising: receiving a data flow graph for a process, wherein the data flow graph comprises a plurality of nodes and a plurality of edges; generating a topological ordering for the data flow graph based at least in part on memory utilization of the process; generating a first modified topological ordering by inserting, into the topological ordering, one or more new nodes corresponding to memory access based on a predefined memory capacity; allocating units of memory in the memory based on the first modified topological ordering; and generating a second modified topological ordering by rearranging one or more nodes in the first modified topological ordering, wherein the second modified topological ordering enables increased parallel utilization of a plurality of hardware components.

[0180] Clause 2: The method according to Clause 1, wherein rearranging one or more nodes in the first modified topological ordering comprises moving one or more nodes corresponding to loading data from a host processing system memory into the memory to an earlier position in the topological ordering.
[0181] Clause 3: The method according to any one of Clauses 1-2, wherein: the plurality of nodes in the data flow graph correspond to operations performed during the process, the plurality of edges in the data flow graph correspond to data passing among the operations, each respective edge of the plurality of edges is associated with a respective weight based on a size of the data associated with the respective edge, and generating the topological ordering comprises finding a set of minimum cuts in the data flow graph based on the weights.

[0182] Clause 4: The method according to any one of Clauses 1-3, wherein finding the set of minimum cuts comprises modifying the data flow graph to enforce data dependencies by, for each respective edge of the plurality of edges, adding a respective backwards edge of infinite weight.

[0183] Clause 5: The method according to any one of Clauses 1-4, wherein finding the set of minimum cuts further comprises modifying the data flow graph to enforce data dependencies by ensuring that at least one valid path exists in the data flow graph from a source to each of the plurality of nodes and from each of the plurality of nodes to a sink.

[0184] Clause 6: The method according to any one of Clauses 1-5, wherein finding the set of minimum cuts comprises assigning the weights to the plurality of edges by: identifying a producer node of the plurality of nodes that outputs data to at least one consumer node of the plurality of nodes; determining a size of the data output by the producer node; and inserting a deallocation node into the data flow graph by: creating a first edge with a weight corresponding to the size of the data output by the producer node, wherein the first edge is inserted from the producer node to the deallocation node; assigning a weight of zero to an edge from the producer node to the at least one consumer node; and creating an edge from the at least one consumer node to the deallocation node, assigned a weight of zero.

[0185] Clause 7: The method according to any one of Clauses 1-6, wherein finding the set of minimum cuts comprises, for a first index node of the plurality of nodes, constraining a first minimum cut to occur after the first index node by: creating a first edge with an infinite weight from a source to the first index node; identifying a set of consumer nodes, from the plurality of nodes, that receive data from the first index node; creating edges with an infinite weight from each consumer node in the set of consumer nodes to a sink; and computing the first minimum cut, wherein the first minimum cut places the first index node in a first portion of the data flow graph and all successors of the first index node in a second portion of the data flow graph.
[0186] Clause 8: The method according to any one of Clauses 1-7, wherein finding the set of minimum cuts further comprises iteratively computing minimum cuts for index nodes in the first and second portions of the data flow graph and separating the first and second portions of the data flow graph based on the minimum cuts until a predefined stopping condition is satisfied.

[0187] Clause 9: The method according to any one of Clauses 1-8, further comprising selecting the first index node based on determining that the first index node is centered in the data flow graph.

[0188] Clause 10: The method according to any one of Clauses 1-9, further comprising: determining that the first index node is one of a set of sibling nodes in the data flow graph; and computing the first minimum cut by constraining a first portion of the set of sibling nodes to the first portion of the data flow graph and a second portion of the set of sibling nodes to the second portion of the data flow graph.

[0189] Clause 11: A system, comprising: a memory comprising computer-executable instructions; and one or more processors configured to execute the computer-executable instructions and cause the processing system to perform a method in accordance with any one of Clauses 1-10.

[0190] Clause 12: A system, comprising means for performing a method in accordance with any one of Clauses 1-10.

[0191] Clause 13: A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by one or more processors of a processing system, cause the processing system to perform a method in accordance with any one of Clauses 1-10.

[0192] Clause 14: A computer program product embodied on a computer-readable storage medium comprising code for performing a method in accordance with any one of Clauses 1-10.
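To make the graph constructions of Clauses 3-7 concrete, the following Python sketch (using networkx) builds the weighted cut graph and computes one constrained minimum cut. It is a minimal sketch, not the claimed implementation: the node names and data sizes are illustrative assumptions, and networkx's convention that an edge with no capacity attribute has infinite capacity stands in for the "infinite weight" edges.

```python
# Sketch of the Clause 3-7 construction; node names and sizes are
# illustrative assumptions, not values from the disclosure.
import networkx as nx

def build_cut_graph(data_edges):
    """data_edges: iterable of (producer, consumer, size_of_data)."""
    g = nx.DiGraph()
    for producer, consumer, size in data_edges:
        # Clause 6: a deallocation node carries the data weight once,
        # however many consumers read the producer's output.
        dealloc = f"dealloc_{producer}"
        g.add_edge(producer, dealloc, capacity=size)
        g.add_edge(producer, consumer, capacity=0)
        g.add_edge(consumer, dealloc, capacity=0)
        # Clause 4: a backward edge of infinite weight enforces the data
        # dependency (a missing capacity attribute means infinite capacity).
        g.add_edge(consumer, producer)
    return g

def constrained_min_cut(g, index_node):
    """Clause 7: constrain the first minimum cut to fall after index_node."""
    h = g.copy()
    h.add_edge("source", index_node)          # infinite-weight source edge
    for consumer in g.successors(index_node):
        h.add_edge(consumer, "sink")          # infinite-weight sink edges
    cut_value, (first_part, second_part) = nx.minimum_cut(h, "source", "sink")
    return cut_value, first_part, second_part

# Toy usage: a -> b -> c with 4- and 2-unit intermediate buffers; the cut
# just after "a" costs 4, the size of the data live across that point.
g = build_cut_graph([("a", "b", 4), ("b", "c", 2)])
print(constrained_min_cut(g, "a"))
```

The returned partition places the index node in the first portion and its successors in the second, matching Clause 7; the iterative procedure of Clause 8 would then re-apply such constrained cuts within each portion until a stopping condition is met.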
Additional Considerations

[0193] The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. The examples discussed herein are not limiting of the scope, applicability, or embodiments set forth in the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.

[0194] As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0195] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c, or any other ordering of a, b, and c).

[0196] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like.
[0197] The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.

[0198] The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. §112(f) unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for." All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.
A memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor and a capacitor. One of (a) a channel region of the transistor, or (b) a pair of electrodes of the capacitor, is directly above the other of (a) and (b). Additional embodiments and aspects are disclosed. |
CLAIMS:

1. A memory array comprising vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising a transistor and a capacitor, one of (a) a channel region of the transistor, or (b) a pair of electrodes of the capacitor, being directly above the other of (a) and (b).

2. The array of claim 1 wherein the channel region is directly above the pair of electrodes.

3. The array of claim 1 wherein the pair of electrodes is directly above the channel region.

4. The array of claim 1 wherein the transistor comprises first and second source/drain regions neither of which is directly above the other.

5. The array of claim 1 wherein the transistor comprises first and second source/drain regions one of which is above the other.

6. The array of claim 5 wherein neither of the first and second source/drain regions is directly above the other.

7. The array of claim 1 wherein all of the channel region is horizontally-oriented for horizontal current flow there-through.

8. The array of claim 1 wherein the transistor comprises first and second source/drain regions having the channel region there-between, the first and second source/drain regions and the channel region collectively comprising opposing C-like shapes that face one another in a straight-line vertical cross-section.

9. The array of claim 1 wherein at least one electrode of the pair comprises opposing C-like shapes that face one another in a straight-line vertical cross-section.

10. The array of claim 1 wherein the channel region comprises an annulus in a straight-line horizontal cross-section.

11. The array of claim 1 wherein at least one of the pair of electrodes comprises an annulus in a straight-line horizontal cross-section.

12. The array of claim 1 wherein the transistor comprises a gate, the gate comprising an annulus in a straight-line horizontal cross-section.

13. The array of claim 12 wherein a plurality of the gates in individual of the tiers of memory cells are directly electrically coupled to one another along a conductive line, the annuli of immediately-laterally-adjacent of the gates overlapping one another in the line.

14. A memory array, comprising:
vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising:
a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, at least a portion of the channel region being horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions;
a capacitor comprising first and second electrodes having a capacitor insulator there-between, the first electrode being electrically coupled to the first source/drain region, the second capacitor electrodes of multiple of the capacitors in the array being electrically coupled with one another; and
one of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, being directly above the other of (a) and (b); and
a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of individual of the transistors that are in different memory cell tiers being electrically coupled to the elevationally-extending sense-line structure.
15. The array of claim 14 wherein the sense-line structure is directly electrically coupled to a horizontal longitudinally-elongated sense line that is above or below the vertically-alternating tiers.

16. The array of claim 14 wherein the sense-line structure comprises a pillar.

17. A memory array, comprising:
vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising:
a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, at least a portion of the channel region being horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions;
a capacitor comprising first and second electrodes having a capacitor insulator there-between, the first electrode being electrically coupled to the first source/drain region; and
one of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, being directly above the other of (a) and (b);
a capacitor-electrode structure extending elevationally through the vertically-alternating tiers, individual of the second electrodes of individual of the capacitors that are in different memory cell tiers being electrically coupled to the elevationally-extending capacitor-electrode structure; and
a sense line electrically coupled to multiple of the second source/drain regions of individual of the transistors that are in different memory cell tiers.

18. The array of claim 17 wherein the sense-line structure comprises a pillar.

19. The array of claim 17 comprising at least one more capacitor-electrode structure extending elevationally through the vertically-alternating tiers, the individual second electrodes of the individual capacitors that are in different memory cell tiers being electrically coupled to the at least one more elevationally-extending capacitor-electrode structure.

20. The array of claim 19 comprising more than one more capacitor-electrode structure extending elevationally through the vertically-alternating tiers.

21. The array of claim 20 wherein the capacitor-electrode structures are circumferentially spaced about the first electrode.

22. The array of claim 21 wherein the capacitor-electrode structures total six in number.

23. The array of claim 17 wherein the capacitor-electrode structure is directly electrically coupled to a horizontally-elongated capacitor-electrode construction that is above or below the vertically-alternating tiers.
24. A memory array, comprising:
vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising:
a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, at least a portion of the channel region being horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions;
a capacitor comprising first and second electrodes having a capacitor insulator there-between, the first electrode being electrically coupled to the first source/drain region; and
one of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, being directly above the other of (a) and (b);
a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of individual of the transistors that are in different memory cell tiers being electrically coupled to the elevationally-extending sense-line structure; and
a capacitor-electrode structure extending elevationally through the vertically-alternating tiers, individual of the second electrodes of individual of the capacitors that are in different memory cell tiers being electrically coupled to the elevationally-extending capacitor-electrode structure.

25. The array of claim 24 comprising at least one more capacitor-electrode structure extending elevationally through the vertically-alternating tiers, the individual second electrodes of the individual capacitors being electrically coupled to the at least one more elevationally-extending capacitor-electrode structure.

26. The array of claim 25 comprising more than one more capacitor-electrode structure extending elevationally through the vertically-alternating tiers, the capacitor-electrode structures being circumferentially spaced about the first electrode.

27. The array of claim 24 wherein the gate comprises an annulus in a straight-line horizontal cross-section, a plurality of the gates in individual of the tiers of memory cells being directly electrically coupled to one another along a conductive line, the annuli of immediately-laterally-adjacent of the gates overlapping one another in the line.
DESCRIPTION

MEMORY ARRAYS

TECHNICAL FIELD

Embodiments disclosed herein pertain to memory arrays.

BACKGROUND

Memory is one type of integrated circuitry, and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bit lines, data lines, or sense lines) and access lines (which may also be referred to as word lines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line.

Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates, and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.

A capacitor is one type of electronic component that may be used in a memory cell. A capacitor has two electrical conductors separated by electrically insulating material. Energy as an electric field may be electrostatically stored within such material. Depending on composition of the insulator material, that stored field will be volatile or non-volatile. For example, a capacitor insulator material including only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor which has ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarized states and thereby can comprise programmable material of a capacitor and/or memory cell. The polarization state of the ferroelectric material can be changed by application of suitable programming voltages, and remains after removal of the programming voltage (at least for a time). Each polarization state has a different charge-stored capacitance from the other, and which ideally can be used to write (i.e., store) and read a memory state without reversing the polarization state until such is desired to be reversed. Less desirable, in some memory having ferroelectric capacitors the act of reading the memory state can reverse the polarization. Accordingly, upon determining the polarization state, a re-write of the memory cell is conducted to put the memory cell into the pre-read state immediately after its determination. Regardless, a memory cell incorporating a ferroelectric capacitor ideally is non-volatile due to the bi-stable characteristics of the ferroelectric material that forms a part of the capacitor. Programmable materials other than ferroelectric materials may be used as a capacitor insulator to render capacitors non-volatile.

A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between.
A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator. Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example reversibly programmable charge storage/trap regions as part of the gate construction between the gate insulator and the conductive gate.

One type of transistor is a ferroelectric field effect transistor (FeFET) wherein at least some portion of the gate construction (e.g., the gate insulator) comprises ferroelectric material. The two different polarized states of the ferroelectric material in field effect transistors may be characterized by different threshold voltage (Vt) for the transistor or by different channel conductivity for a selected operating voltage. Again, polarization state of the ferroelectric material can be changed by application of suitable programming voltages, and which results in one of high channel conductance or low channel conductance. The high and low conductance, invoked by the ferroelectric polarization state, remains after removal of the gate programming voltage (at least for a time). The status of the channel can be read by applying a small drain voltage which does not disturb the ferroelectric polarization. Programmable materials other than ferroelectric materials may be used as a gate insulator to render a transistor to be non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a diagrammatic sectional view of a substrate fragment comprising a memory array in accordance with an embodiment of the invention, and is taken through line 1-1 in Figs. 2-6.
Fig. 2 is a sectional view taken through line 2-2 in Fig. 1, and at a smaller scale than Fig. 1.
Fig. 3 is a sectional view taken through line 3-3 in Fig. 1, and is at the same smaller scale as Fig. 2.
Fig. 4 is a sectional view taken through line 4-4 in Fig. 1, and is at the same smaller scale as Fig. 2.
Fig. 5 is a sectional view taken through line 5-5 in Fig. 1, and is at the same smaller scale as Fig. 2.
Fig. 6 is a sectional view taken through line 6-6 in Fig. 1, and is at the same smaller scale as Fig. 2.
Fig. 7 is a sectional view taken through line 7-7 in Figs. 2-6, and is at the same scale as Fig. 1.
Fig. 8 is a diagrammatic partial and expanded perspective view of the Fig. 1 substrate fragment, with some components removed for clarity of other depicted components.
Fig. 9 is a side-by-side exploded perspective view and assembled perspective view of certain components of the Fig. 1 substrate fragment.
Fig. 10 is a diagrammatic sectional view of another substrate fragment comprising a memory array in accordance with an embodiment of the invention.
Fig. 11 is a diagrammatic sectional view of a predecessor substrate to that shown by Figs. 1-9, is taken through line 11-11 in Fig. 12.
Fig. 12 is a sectional view of the Fig. 11 substrate taken through line 12-12 in Fig. 11, and at a larger scale than Fig. 11.
Fig. 13 is a sectional view of the Fig. 11 substrate at a processing step subsequent to that shown by Fig. 11, and is taken through line 13-13 in Fig. 14.
Fig. 14 is a sectional view taken through line 14-14 in Fig. 13, and is at the same larger scale as Fig. 12.
Fig. 15 is a sectional view of the Fig. 14 substrate at a processing step subsequent to that shown by Fig. 14, and is at the same larger scale as Fig. 12.
Fig. 16 is a sectional view of the Fig. 15 substrate at a processing step subsequent to that shown by Fig. 15, and is at the same larger scale as Fig. 12.
Fig. 17 is a sectional view of the Fig. 16 substrate at a processing step subsequent to that shown by Fig. 16, and is at the same larger scale as Fig. 12.
Fig. 18 is a sectional view of the Fig. 17 substrate at a processing step subsequent to that shown by Fig. 17, and is at the same larger scale as Fig. 12.
Fig. 19 is a sectional view of the Fig. 18 substrate at a processing step subsequent to that shown by Fig. 18, and is at the same larger scale as Fig. 12.
Fig. 20 is a sectional view of the Fig. 19 substrate at a processing step subsequent to that shown by Fig. 19, and is at the same larger scale as Fig. 12.
Fig. 21 is a sectional view of the Fig. 20 substrate at a processing step subsequent to that shown by Fig. 20, is taken through line 21-21 in Fig. 23, and is at the same scale as Fig. 11.
Fig. 22 is a sectional view taken through line 22-22 in Fig. 23, and is at the same scale as Fig. 11.
Fig. 23 is a sectional view taken through line 23-23 in Figs. 21 and 22, and is at the same larger scale as Fig. 12.
Fig. 24 is a sectional view of the Fig. 23 substrate at a processing step subsequent to that shown by Fig. 23, and is at the same larger scale as Fig. 12.
Fig. 25 is a sectional view of the Fig. 24 substrate at a processing step subsequent to that shown by Fig. 24, and is at the same larger scale as Fig. 12.
Fig. 26 is a sectional view of the Fig. 25 substrate at a processing step subsequent to that shown by Fig. 25, and is at the same larger scale as Fig. 12.
Fig. 27 is a sectional view of the Fig. 26 substrate at a processing step subsequent to that shown by Fig. 26, and is at the same larger scale as Fig. 12.
Fig. 28 is a sectional view of the Fig. 27 substrate at a processing step subsequent to that shown by Fig. 27, and is at the same larger scale as Fig. 12.
Fig. 29 is a sectional view of the Fig. 28 substrate at a processing step subsequent to that shown by Fig. 28, and is at the same larger scale as Fig. 12.
Fig. 30 is a sectional view of the Fig. 29 substrate at a processing step subsequent to that shown by Fig. 29, is taken through line 30-30 in Fig. 31, and is at the same scale as Fig. 11.
Fig. 31 is a sectional view taken through line 31-31 in Fig. 30, and is at the same larger scale as Fig. 12.
Fig. 32 is a sectional view of the Fig. 30 substrate at a processing step subsequent to that shown by Fig. 30, and is taken through line 32-32 in Fig. 33.
Fig. 33 is a sectional view taken through line 33-33 in Fig. 32, and is at the same larger scale as Fig. 12.
Fig. 34 is a sectional view of the Fig. 33 substrate at a processing step subsequent to that shown by Fig. 33, and is at the same larger scale as Fig. 12.
Fig. 35 is a sectional view of the Fig. 34 substrate at a processing step subsequent to that shown by Fig. 34, and is at the same larger scale as Fig. 12.
Fig. 36 is a sectional view of the Fig. 35 substrate at a processing step subsequent to that shown by Fig. 35, and is at the same larger scale as Fig. 12.
Fig. 37 is a sectional view of the Fig. 36 substrate at a processing step subsequent to that shown by Fig. 36, and is at the same larger scale as Fig. 12.
Fig. 38 is a sectional view of the Fig. 37 substrate at a processing step subsequent to that shown by Fig. 37, and is at the same larger scale as Fig. 12.
Fig. 39 is a sectional view of the Fig. 38 substrate at a processing step subsequent to that shown by Fig. 38, and is at the same larger scale as Fig. 12.
Fig. 40 is a sectional view of the Fig. 39 substrate at a processing step subsequent to that shown by Fig. 39, and is at the same larger scale as Fig. 12.
Fig. 41 is a sectional view of the Fig. 40 substrate at a processing step subsequent to that shown by Fig. 40, and is at the same larger scale as Fig. 12.
Fig. 42 is a sectional view of the Fig. 41 substrate at a processing step subsequent to that shown by Fig. 41, and is at the same larger scale as Fig. 12.
Fig. 43 is a sectional view of the Fig. 42 substrate at a processing step subsequent to that shown by Fig. 42, and is at the same larger scale as Fig. 12.
Fig. 44 is a sectional view of the Fig. 43 substrate at a processing step subsequent to that shown by Fig. 43, and is at the same larger scale as Fig. 12.
Fig. 45 is a sectional view of the Fig. 44 substrate at a processing step subsequent to that shown by Fig. 44, is taken through line 45-45 in Fig. 46, and is at the same scale as Fig. 11.
Fig. 46 is a sectional view taken through line 46-46 in Fig. 45, and is at the same larger scale as Fig. 12.
Fig. 47 is a sectional view of the Fig. 46 substrate at a processing step subsequent to that shown by Fig. 46, and is at the same larger scale as Fig. 12.
Fig. 48 is a sectional view of the Fig. 47 substrate at a processing step subsequent to that shown by Fig. 47, is taken through line 48-48 in Fig. 49, and is at the same scale as Fig. 11.
Fig. 49 is a sectional view taken through line 49-49 in Fig. 48.
Fig. 50 is a sectional view of the Fig. 49 substrate at a processing step subsequent to that shown by Fig. 49, and is at the same larger scale as Fig. 12.
Fig. 51 is a sectional view of the Fig. 50 substrate at a processing step subsequent to that shown by Fig. 50, and is at the same larger scale as Fig. 12.
Fig. 52 is a sectional view of the Fig. 51 substrate at a processing step subsequent to that shown by Fig. 51.
Fig. 53 is a sectional view of the Fig. 52 substrate at a processing step subsequent to that shown by Fig. 52, is taken through line 53-53 in Fig. 54, and is at the same scale as Fig. 11.
Fig. 54 is a sectional view taken through line 54-54 in Fig. 53, and is at the same larger scale as Fig. 12.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Embodiments of the invention encompass memory arrays. A first example embodiment is shown in and described with reference to Figs. 1-9. Such includes a substrate structure or construction 8 comprising a memory array 10 fabricated relative to a base substrate 11. Substrate 11 may comprise any one or more of conductive/conductor/conducting (i.e., electrically herein), semiconductive/semiconductor/semiconducting, and insulative/insulator/insulating (i.e., electrically herein) materials. Various materials have been formed elevationally over base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Figs. 1-9-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within a memory array may also be fabricated, and may or may not be wholly or partially within a memory array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another.
As used in this document, a "sub-array" may also be considered as an array.

Construction 8 includes vertically-alternating tiers 12 and 14 of insulative material 16 (e.g., comprising, consisting essentially of, or consisting of carbon-doped silicon nitride [2 to 10 atomic percent carbon], silicon nitride, and/or doped or undoped silicon dioxide deposited to a thickness of 200 Angstroms to 500 Angstroms) and memory cells 19, respectively. Memory cell tiers 14 may be of the same or different thickness as that of insulative material tiers 12, with different and greater thickness being shown (e.g., 500 Angstroms to 1,500 Angstroms). Construction 8 is shown as having five vertically-alternating tiers 12 and 14, although likely many more (e.g., dozens, hundreds, etc.) may be formed. Accordingly, more tiers 12 and 14 may be below the depicted tiers and above base substrate 11 and/or more tiers 12 and 14 may be above the depicted tiers.

Memory cells 19 individually comprise a transistor 25 and a capacitor 34. Transistor 25 comprises a first source/drain region 20 and a second source/drain region 22 (e.g., conductively-doped semiconductor material such as polysilicon for each) having a channel region 24 there-between (e.g., doped semiconductor material, such as polysilicon, but not to be intrinsically conductive). In some embodiments and as shown, electrically semiconductive regions 21 (e.g., LDD and/or halo regions) and/or conductively-doped semiconductive material regions 21 are between channel region 24 and one or both of source/drain regions 20 and 22.

A gate 26 (e.g., one or more of elemental metal, a mixture or alloy of two or more elemental metals, conductive metal compounds, and conductively-doped semiconductive materials) is operatively proximate channel region 24. Specifically, in the depicted example, a gate insulator material 28 (e.g., silicon dioxide, silicon nitride, hafnium oxide, other high k insulator material, and/or ferroelectric material) is between gate 26 and channel region 24. At least a portion of channel region 24 is horizontally-oriented for horizontal current flow in the portion between first source/drain region 20 and second source/drain region 22. In the depicted example embodiment, all of channel region 24 is horizontally-oriented for horizontal current flow there-through. Regardless, when suitable voltage is applied to gate 26, a conductive channel can form within channel region 24 proximate gate insulator material 28 such that current is capable of flowing between source/drain regions 20 and 22 (and through regions 21 when present).

In one embodiment and as shown, one (e.g., 22) of first source/drain region 20 and second source/drain region 22 is above the other. Regardless, in one embodiment and as shown, neither of first source/drain region 20 nor second source/drain region 22 is directly above the other. In one embodiment and as shown, first source/drain region 20, second source/drain region 22, and channel region 24 collectively comprise opposing C-like shapes 17 that face one another in a straight-line vertical cross-section (e.g., the cross-section shown by Fig. 7, with Fig. 1 not being a straight-line vertical cross-section as evidenced by the angled 1-1 section line segments in Figs. 2-6; Fig. 7 only shows one memory cell tier 14 and two insulative material tiers 12). In one embodiment and as shown, first source/drain region 20 comprises an annulus 41 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 4).
In one embodiment and as shown, second source/drain region 22 comprises an annulus 42 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 3). In one embodiment and as shown, channel region 24 comprises an annulus 40 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 3). In one embodiment and as shown, gate 26 comprises an annulus 44 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 2). In one embodiment and as shown, multiple of gates 26 in individual memory cell tiers 14 are directly electrically coupled to one another along a conductive line 15 (Figs. 2 and 8). Annuli 44 of immediately-laterally-adjacent gates 26 overlap one another in conductive line 15 (e.g., forming an access line 15).

Capacitor 34 comprises a pair of electrodes, for example a first electrode 46 and a second electrode 48 (e.g., conductively-doped semiconductive material and/or metal material for each), having a capacitor insulator 50 there-between (e.g., silicon dioxide, silicon nitride, hafnium oxide, other high k insulator material, and/or ferroelectric material). First electrode 46 is electrically coupled, in one embodiment directly electrically coupled, to first source/drain region 20. Second electrodes 48 of multiple of capacitors 34 in array 10 are electrically coupled, in one embodiment directly electrically coupled, with one another. In one embodiment, all such second electrodes of all capacitors in array 10 are electrically coupled with one another, and in one embodiment directly electrically coupled with one another. In one embodiment and as shown, at least one electrode (e.g., first electrode 46) of pair of electrodes 46, 48 comprises opposing C-like shapes 23 that face one another in a straight-line vertical cross-section (e.g., the cross-section shown by Fig. 7). In one embodiment and as shown, first electrode 46 comprises an annulus 45 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 5). An undoped silicon liner 62 may be received about first electrode 46. A heavily-doped silicon region 69 may also be present as shown, and which may be non-functional and an artifact of manufacture as described below.

In one embodiment, a capacitor-electrode structure 52 (e.g., a solid or hollow pillar, a solid or hollow wall, etc.) extends elevationally through vertically-alternating tiers 12 and 14, with individual of second electrodes 48 of individual capacitors 34 that are in different memory cell tiers 14 being electrically coupled, in one embodiment directly electrically coupled, to elevationally-extending capacitor-electrode structure 52. Example materials for capacitor-electrode structure 52 are metal materials and conductively-doped semiconductor material, and such may be of the same composition as that of second electrodes 48 as shown. In one embodiment and as shown, capacitor-electrode structure 52 extends vertically or within 10° of vertical. In one embodiment and as shown, capacitor-electrode structure 52 comprises a pillar 55, with capacitor-insulator material 50 being circumferentially about structure 52/pillar 55. In one embodiment, such is, by way of example only, one example of how second capacitor electrodes 48 of multiple of capacitors 34 that are in different memory cell tiers 14 in the array may be electrically coupled with one another.
In one embodiment, capacitor-electrode structure 52 is directly electrically coupled to a horizontally-elongated capacitor-electrode construction 29 (e.g., a line or a plate) that is above or below (above being shown) vertically-alternating tiers 12 and 14. Construction(s) 29 may, in one embodiment, directly electrically couple together all second electrodes 48 within the array.

In one embodiment, at least one more capacitor-electrode structure extends elevationally through the vertically-alternating tiers, with the individual second electrodes of the individual capacitors that are in different memory cell tiers being electrically coupled to the at least one more elevationally-extending capacitor-electrode structure. In one such embodiment, more than one more capacitor-electrode structure extends elevationally through the vertically-alternating tiers. In one such latter embodiment, the capacitor-electrode structures are circumferentially spaced about the first electrode. For example, and by way of example only, six capacitor-electrode structures 52 are shown received about individual first capacitor electrodes 46.

A sense line is electrically coupled, in one embodiment directly electrically coupled, to multiple of the second source/drain regions of individual of the transistors that are in different memory cell tiers 14. In one embodiment and as shown, a sense-line structure 56 (e.g., a solid or hollow pillar, a solid or hollow wall, etc.) extends elevationally through vertically-alternating tiers 12 and 14, with individual of second source/drain regions 22 of individual transistors 25 that are in different memory cell tiers 14 being electrically coupled, in one embodiment directly electrically coupled, thereto. In one embodiment and as shown, sense-line structure 56 extends vertically or within 10° of vertical. In one embodiment and as shown, sense-line structure 56 comprises a pillar 59. In one embodiment and as shown, sense-line structure 56 comprises a peripheral conductively-doped semiconductive material 58 (e.g., polysilicon) and a central metal material core 60 (e.g., titanium nitride and/or tungsten). In one embodiment, sense-line structure 56 is directly electrically coupled to a horizontal longitudinally-elongated sense line 57 (Figs. 1 and 8) that is above or below (below as shown) vertically-alternating tiers 12 and 14. Fig. 8 shows contacts/vias 67 extending to individual lines 15 (e.g., access lines/word lines) in an example staircase area/region of array 10. Insulative material 16 is not shown in Fig. 8 for clarity of other components.

Example insulator material 47 (e.g., silicon nitride), insulator material 49 (e.g., silicon dioxide), and non-conductive material 51 (e.g., undoped amorphous silicon or undoped polysilicon) may be provided as shown for suitable isolation in sub-tiers of memory cell tiers 14.

In individual memory cells 19, one of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, is directly above the other of (a) and (b). Figs. 1-9 show an embodiment where (a) is above (b) (i.e., in Figs. 1-9, channel region 24 of transistor 25 is directly above first electrode 46 and second electrode 48 of capacitor 34). An alternate embodiment construction 8a is shown in Fig. 10 (corresponding to the Fig. 2 view). Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "a". In Fig. 10, (b) is above (a) (i.e., in Fig. 10, first electrode 46 and second electrode 48 of capacitor 34 are directly above channel region 24 of transistor 25).
Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

The above example structures may be manufactured by any existing or yet-to-be-developed techniques. One example technique of manufacturing the embodiment shown by Figs. 1-9 is described with reference to Figs. 11-54. Like numerals from the above-described embodiments have been used for predecessor construction(s), regions, and like/predecessor materials thereof.

Figs. 11 and 12 show an example portion of a predecessor to the construction or stack of Figs. 1-9, and which for brevity only shows two insulative-material tiers 12 having what will be a memory cell tier 14 there-between. Sense lines 57 (not shown) would have been formed previously. The person of skill in the art may select any suitable different combinations of materials recognizing, in accordance with the continuing description, that certain materials will be etched selectively relative to other materials in the example method. As examples, and consistent with those described above, example material 16 for insulative material tiers 12 is carbon-doped silicon nitride (2 to 10 atomic percent carbon). An example thickness for insulative material 16 is 200 to 500 Angstroms. Each of materials or layers 47, 49, and 51 may be considered as a sub-tier within what will be memory cell tiers 14. Example thickness for each of materials 47, 49, and 51 is 200 to 400 Angstroms, with example materials being silicon nitride, silicon dioxide, and amorphous silicon, respectively. Openings 33 have been formed in and through the depicted stack of materials in an offset or staggered manner. The centers of example openings 33 are centered relative to what will be the centers of sense-line structures 56 and annuli 40, 41, 42, 44, and 45. Fig. 11 shows three example lines 15 of openings 33 wherein a spacing "A" between immediately-adjacent centers of openings 33 within a line 15 is different from an analogous lateral spacing "B" between lines 15, specifically with B being greater than A.

Referring to Figs. 13 and 14, substrate construction 8 of Figs. 11 and 12 has been subjected to suitable etching whereby material 47 has been etched laterally/radially selectively relative to the other depicted materials effective to widen openings 33 to join within lines 15 but not to join laterally (B being slightly larger than A). With respect to the above example materials, an example etching chemistry is hot phosphoric acid, with such etch being conducted in a timed manner. By way of examples only, 20 nm and 10 nm diagonal and lateral separation distances, respectively, are shown.

Referring to Fig. 15, a silicon nitride liner 35 (e.g., 35 Angstroms, and not designated in Figs. 1-10 as ideally it is the same material as material 47) and gate insulator 28 (e.g., 50 Angstroms) have been formed within the original and widened openings 33 as shown. Gate insulator 28 may be silicon dioxide that is subjected to in situ steam generation for densification (e.g., at 650°C to 1000°C, atmospheric or sub-atmospheric pressure, and in the presence of O2 and H2).

Referring to Fig. 16, gate material 26 (e.g., all titanium nitride, or a titanium nitride liner with the remaining volume filled with elemental tungsten) has been deposited to within openings 33 sufficient to fill the laterally-widened portions thereof, but ideally not sufficient to fill the central portion of the narrower part of such openings.
Referring to Fig. 17, gate material 26 has been subjected to a suitable etch to recess it to set the channel length (e.g., 200 Angstroms). An example chemistry to etch titanium nitride and elemental tungsten selectively relative to the other example materials comprises a combination of sulfuric acid and hydrogen peroxide.

Referring to Fig. 18, example oxide gate insulator 28 has been etched selectively relative to other exposed materials (e.g., using dilute HF) to form the illustrated construction. Referring to Fig. 19, more silicon-nitride-insulator material 47 has been deposited effective to fill the depicted recesses/gaps that were formed by the etching shown in Fig. 18. Fig. 20 shows removal of such material 47 from within the narrower portions of openings 33, for example by using phosphoric acid or any suitable dry anisotropic etching chemistry.

Referring to Figs. 21-23, material 51 (e.g., amorphous silicon) has been subjected to a suitable etch selectively relative to the other depicted materials to widen openings 33 therein for ultimate formation of the capacitors. An example etching chemistry for the stated materials for selectively etching material 51 is tetramethylammonium hydroxide (TMAH) or a fluorocarbon-based dry etching chemistry. Such may be conducted by a timed etch that is sufficiently controlled to preclude the widened openings from joining or bridging with any immediately-adjacent opening within material 51. Fig. 21 shows example word line constructions 15 that were essentially completed as described above with respect to Figs. 16-20.

Referring to Fig. 24, native oxide 61 (e.g., 10 Angstroms) has been formed peripherally on material 51.

Referring to Fig. 25, an undoped silicon liner 62 (e.g., 30 Angstroms) has been deposited, followed by deposition of conductive material 46 (e.g., 40 Angstroms of titanium nitride) for ultimate formation of first capacitor electrodes 46.

Referring to Fig. 26, example silicon-dioxide-insulator material 49 has been deposited sufficient to fill remaining volume of widened openings 33 in material 51, but ideally not sufficient to fill remaining volume of the narrowest portions of openings 33.

Referring to Fig. 27, example silicon-dioxide-insulator material 49 has been subjected to a suitable timed etch to selectively laterally/radially recess such as shown (e.g., using dilute HF), for example to leave a lateral annular thickness of material 49 of about 200 Angstroms.

Referring to Fig. 28, example silicon-nitride-insulator material 47 has been deposited to fill such remaining recesses. Referring to Fig. 29, such silicon nitride 47 has been subjected to a suitable selective etch (e.g., phosphoric acid) to recess such as shown.

Referring to Figs. 30 and 31, example conductive titanium nitride material 46 has been etched from remaining openings 33 (e.g., using sulfuric acid and hydrogen peroxide), followed by subsequent removal of silicon liner 62 from sidewalls of openings 33 (e.g., using dilute HF).

Referring to Figs. 32 and 33, example silicon-dioxide-insulator material 49 has been subjected to a suitable selective etch (e.g., using dilute HF) to widen openings 33 within material 47 as shown.
Such exposes about 30 Angstroms of nitride liner 35 above and approximately 35 Angstroms of silicon liner 62 below.

Referring to Fig. 34, example silicon-nitride-insulator material 47 has been subjected to a suitable etch (e.g., using hot phosphoric acid) to remove the depicted approximate 30 Angstroms of silicon nitride to expose gate insulator 28.

Referring to Fig. 35, suitable channel material 24 has been deposited (e.g., 50 Angstroms of suitably-doped polysilicon).

Referring to Fig. 36, example silicon-dioxide-insulator material 49 has been deposited to fill the depicted remaining volume of widened openings 33 with material 49, and ideally not sufficient to fill remaining volume of the narrowest portions of openings 33. Fig. 37 shows subsequent anisotropic etching of material 49 to remove such from being over sidewalls of openings 33.

Referring to Fig. 38, example titanium nitride material 46 has been subjected to a suitable etch to recess it laterally/radially as shown (e.g., using sulfuric acid and hydrogen peroxide). Note that only a side surface of example silicon material 24 is exposed in an upper portion, whereas side and horizontal surfaces of silicon material 24 and silicon liner 62 are exposed in a lower portion.

Referring to Fig. 39, silicon material 24 and silicon liner 62 have been subjected to a suitable wet or vapor etch (e.g., using TMAH). Such is ideally conducted to remove a greater amount of silicon material 24 and silicon liner 62 back in the lower portion as shown due to vertical and horizontal surface exposure of silicon, as compared to only vertical surface exposure of silicon material 24 in the upper portion.

Referring to Fig. 40, example silicon material 24 and silicon liner 62 have been subjected to a suitable ion implanting to form first source/drain regions 20 and second source/drain regions 22. Another doped region 69 may also be formed, and which may be non-functional and an artifact of manufacture.

Referring to Fig. 41, example silicon-dioxide-insulator material 49 has again been deposited, and then anisotropically etched to remove it from being over sidewalls of opening 33 as shown in Fig. 42. An example minimum diameter of opening 33 in Fig. 42 is 900 Angstroms.

Referring to Figs. 43 and 44, conductively-doped semiconductor material 58 has been formed, followed by forming of conductive core metal material 60, thus essentially completing formation of sense-line structures 56.

Referring to Figs. 45 and 46, capacitor openings 64 have been formed as shown, and which will be used for ultimate formation of capacitor-electrode structures 52 (not shown in Figs. 45 and 46 as not-yet-formed). An example minimum diameter of opening 64 is 900 Angstroms. Silicon-dioxide-insulator material 49 (not shown) exposed to opening 64 in the middle sub-tier of memory cell tier 14 has been removed by selective etching (e.g., using HF). In Fig. 47, such removed silicon dioxide has been replaced with silicon nitride 47 (e.g., by deposition sufficient to fill such recessed volume, followed by anisotropic etch thereof to remove such from within capacitor opening 64).

Referring to Figs. 48 and 49, example amorphous silicon material 51 (not shown) has been etched selectively relative to other exposed material to stop on native oxide layers 61 (e.g., using TMAH).

Referring to Fig. 50, native oxide 61 (not shown) has been etched away (e.g., using HF) to expose example titanium nitride material 46.
Some material of silicon liner 62 is also shown as having been etched.

Referring to Fig. 51, example titanium nitride material 46 has been wet etched (e.g., using sulfuric acid and hydrogen peroxide) sufficiently to expose example silicon-dioxide-insulator material 49 there-between. Fig. 52 shows subsequent removal of such example silicon-dioxide-insulator material 49 from between titanium nitride material 46.

Referring to Figs. 53 and 54, capacitor insulator 50 and second capacitor electrode material 48 have been deposited as shown.

In this document unless otherwise indicated, "elevational", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" are generally with reference to the vertical direction. "Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., no degrees there-from) and may be relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative one another and independent of orientation of the substrate in three-dimensional space. Additionally, "elevationally-extending" and "extending elevationally" refer to a direction that is angled away by at least 45° from exactly horizontal. Further, "extend(ing) elevationally" and "elevationally-extending" with respect to a field effect transistor are with reference to orientation of the transistor's channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, "extend(ing) elevationally" and "elevationally-extending" are with reference to orientation of the base length along which current flows in operation between the emitter and collector.

Further, "directly above" and "directly under" require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of "under" not preceded by "directly" only requires that some portion of the stated region/material/component that is under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Further, unless otherwise stated, each material may be formed using any suitable or yet-to-be-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples. Additionally, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region.
Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as construction where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.Herein, regions-materials-components are "electrically coupled" relative one another if in normal operation electric current is capable of continuously flowing from one to the other, and does so predominately by movement of subatomic positive and/or negative charges when such are sufficientlygenerated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions- materials-components are referred to as being "directly electrically coupled", no intervening electronic component (e.g. , no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials- components. Additionally, "metal material" is any one or combination of an elemental metal, a mixture or an alloy of two or more elemental metals, and anyconductive metal compound.In this document, a selective etch or removal is an etch or removal where one material is removed relative to another stated material or materials at a rate of at least 2.0: 1. Further, selectively growing or selectively forming is growing or forming one material relative to another stated material or materials at a rate of at least 2.0: 1 for at least the first 100 Angstroms of growing or forming.Further, a "self-aligned manner" means a technique whereby at least a lateral surface of a structure is defined by deposition of material against a sidewall of a previously-patterned structure.CONCLUSIONIn some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor and a capacitor. One of (a) a channel region of the transistor, or (b) a pair of electrodes of the capacitor, is directly above the other of (a) and (b) .In some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. 
At least a portion of the channel region is horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions. The memory cells individually comprise a capacitor comprising first and second electrodes having a capacitor insulator there-between. The first electrode is electrically coupled to the first source/drain region. The second capacitor electrodes of multiple of the capacitors in the array are electrically coupled with one another. One of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, is directly above the other of (a) and (b). A sense-line structure extends elevationally through the vertically- alternating tiers. Individual of the second source/drain regions of individual of the transistors that are in different memory cell tiers are electrically coupled to the elevationally-extending sense-line structure. In some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. At least a portion of the channel region is horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions. The memory cells individually comprise a capacitor comprising first and second electrodes having a capacitor insulator there-between. The first electrode is electrically coupled to the first source/drain region. One of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, is directly above the other of (a) and (b) . A capacitor-electrode structure extends elevationally through the vertically-alternating tiers.Individual of the second electrodes of individual of the capacitors that are in different memory cell tiers are electrically coupled to the elevationally- extending capacitor-electrode structure. A sense line is electrically coupled to multiple of the second source/drain regions of individual of the transistors that are in different memory cell tiers.In some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. At least a portion of the channel region is horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions. The individual memory cells comprise a capacitor comprising first and second electrodes having a capacitor insulator there-between. The first electrode is electrically coupled to the first source/drain region. One of (a) the channel region of the transistor, or (b) the first and second electrodes of the capacitor, is directly above the other of (a) and (b) . A sense-line structure extends elevationally through the vertically-alternating tiers. Individual of the second source/drain regions of individual of the transistors that are in different memory cell tiers are electrically coupled to the elevationally-extending sense- line structure. A capacitor-electrode structure extends elevationally through the vertically-alternating tiers. 
Individual of the second electrodes of individual of the capacitors that are in different memory cell tiers are electrically coupled to the elevationally-extending capacitor-electrode structure. |
Embodiments of systems and methods for power-on user authentication are disclosed. A method for power-on user authentication may comprise receiving an authentication input with a security controller of a computing device prior to supplying power to a primary processor of the computing device, comparing the authentication input to an authentication code using the security controller, and supplying power to the primary processor in response to the authentication input matching the authentication code. |
CLAIMS 1. A method comprising: receiving an authentication input with a security controller of a computing device prior to supplying power to a primary processor of the computing device; comparing the authentication input to an authentication code using the security controller; and supplying power to the primary processor in response to the authentication input matching the authentication code. 2. The method of claim 1 , wherein receiving the authentication input comprises receiving one or more keystrokes with the security controller. 3. The method of claim 1 , wherein receiving the authentication input comprises communicating with a near-field communications (NFC) device in proximity to the computing device. 4. The method of claim 1 , wherein comparing the authentication input to the authentication code comprises retrieving the authentication code from a dedicated memory. 5. The method of claim 1 , wherein comparing the authentication input to the authentication code comprises: calculating a hash value for the authentication input; and comparing the hash value for the authentication input with the authentication code. 6. The method of claim 1 , further comprising: receiving a user request to change the authentication code after supplying power to the primary processor; receiving a new authentication code with the security controller; and replacing the authentication code with the new authentication code in the dedicated memory. 7. The method of claim 1 , further comprising: receiving a new authentication code from a remote server having an established trust relationship with the computing device; and replacing the authentication code with the new authentication code in the dedicated memory. 8. One or more non- transitory, machine-readable media comprising a plurality of instructions that, when executed by a security controller of a computing device, cause the security controller to: receive an authentication input prior to supplying power to a primary processor of the computing device; compare the authentication input to an authentication code; and supply power to the primary processor in response to the authentication input matching the authentication code. 9. The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions cause the security controller to receive the authentication input by registering one or more keystrokes. 10. The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions cause the security controller to receive the authentication input from near-field communications (NFC) circuitry configured to communicate with an NFC device in proximity to the computing device. 11. The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions further cause the security controller to retrieve the authentication code from a dedicated memory. 12. The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions cause the security controller to: calculate a hash value for the authentication input; and compare the hash value for the authentication input with the authentication code. 13. 
The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions further cause the security controller to: receive a user request to change the authentication code after supplying power to the primary processor; receive a new authentication code; and replace the authentication code with the new authentication code in the dedicated memory. 14. The one or more non- transitory, machine-readable media of claim 8, wherein the plurality of instructions further cause the security controller to: receive a new authentication code from a remote server having an established trust relationship with the computing device; and replace the authentication code with the new authentication code in the dedicated memory. 15. A computing device comprising: a primary processor; a security controller configured to execute an authentication module; and a dedicated memory, the dedicated memory comprising the authentication module and an authentication code; wherein the authentication module, when executed by the security controller, causes the security controller to (i) receive an authentication input prior to supplying power to the primary processor, (ii) compare the authentication input to the authentication code, and (iii) supply power to the primary processor in response to the authentication input matching the authentication code. 16. The computing device of claim 15, wherein: the dedicated memory further comprises a keyboard module; the security controller is further configured to execute the keyboard module; and the keyboard module, when executed by the ancillary processor, causes the security controller to receive the authentication input by registering one or more keystrokes. 17. The computing device of claim 15, further comprising near- field communications (NFC) circuitry configured to receive the authentication input from an NFC device in proximity to the computing device and to transmit the authentication input to the security controller. 18. The computing device of claim 15, wherein the dedicated memory is accessible only to the security controller. 19. The computing device of claim 15, wherein the security controller is an embedded controller configured for power management of the computing device. 20. The computing device of claim 15, wherein the security controller is a secure execution engine configured to operate irrespective of a power state of the primary processor. |
SYSTEMS AND METHODS FOR POWER-ON USER AUTHENTICATION BACKGROUND Modern computing devices, such as laptops, smartphones, mobile interne devices (MIDs), and tablets, often carry sensitive personal and professional data. Protection of such data from theft by malicious attack is increasingly important. As malicious attacks become more sophisticated, security is increasingly difficult. In response to those threats, user authentication schemes currently exist at different points in the boot process, such as at the BIOS (Basic Input Output System), PBA (Pre Boot Authentication level), or at the operating system (OS) log-on. However, even those types of user authentication may be vulnerable to attack in certain circumstances. BIOS-based passwords are known to be insecure— the passwords are stored in memory at a known location and are thus susceptible to being disabled or hacked. Furthermore, BIOS-based passwords are optional and may not be enabled. Additionally, both PBA and OS passwords can be bypassed by changing the boot drive on the computing device or by selecting a boot source other than the hard disk. Moreover, most components of the computing device will be initialized and executing by the time the computing device is ready to accept a user authentication, which may leave the entire system vulnerable to attack. BRIEF DESCRIPTION OF THE DRAWINGS The invention described herein is illustrated by way of example and not by way of limitation in the accompanying drawings. For simplicity and clarity of illustration, elements illustrated in the drawings are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the drawings to indicate corresponding or analogous elements. FIG. 1 is a simplified block diagram of one embodiment of a system including a computing device having power-on user authentication; FIG. 2 is a simplified block diagram of one embodiment of a security controller of the computing device of FIG 1; and FIG. 3 is a simplified flow diagram of one embodiment of a method for power-on user authentication of the computing device of FIG. 1. DETAILED DESCRIPTION OF THE DRAWINGS While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific exemplary embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource pai itiornng/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present disclosure. It will be appreciated, however, by one skilled in the art that embodiments of the disclosure may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. 
Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etcetera, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention implemented in a computing device may include one or more bus-based interconnects between components and/or one or more point-to-point interconnects between components. Embodiments of the invention may also be implemented as instructions stored on one or more non-transitory, machine-readable media, which may be read and executed by one or more processors and/or controllers. A non-transitory, machine-readable medium may include any tangible mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, non- transitory, machine-readable media may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others. Referring now to FIGS. 1 and 2, a system 100 for power-on user authentication comprises a computing device 102 including a primary processor 104 and a security controller 110. In one illustrative embodiment, discussed in more detail below, the security controller 110 authenticates a user prior to supplying power to the primary processor 104. When the security controller 110 receives a power-on signal 206 (e.g., from a power button 124), the security controller 110 initiates an authentication procedure. As part of the authentication procedure, the security controller 110 may receive an authorization input 208. The authorization input 208 may be embodied as a password entered via a keyboard 122, a communication from a near-field communications (NFC) device 146 in response to the NFC device 146 being brought into the proximity of NFC circuitry 126, or as other authorization data. As a further part of the authentication procedure, the security controller 110 also retrieves an authorization code 210 from a dedicated memory 114. The security controller 110 then compares the authorization input 208 and the authorization code 210. If the authorization input 208 and authorization code 210 match, the security controller 110 supplies power to the primary processor 104, allowing the computing device 102 to proceed with a boot. The computing device 102 may be embodied as any type of computing device capable of performing the functions described herein. By way of example, the computing device 102 may be embodied as a laptop computer, a desktop computer, a server, a netbook, a smartphone, cellular phone, a mobile interne device, a tablet computing device, a handheld computer, a personal digital assistant, or other computing device. As shown in the illustrative embodiment of FIG. 
1, the computing device 102 comprises a primary processor 104, a chipset 106, a memory 108, a keyboard controller 112, a dedicated memory 1 14, a keyboard 122, a power button 124, and NFC circuitry 126. In some embodiments, several of the foregoing components may be incorporated on a motherboard of the computing device 102, while other components may be communicatively coupled to the motherboard via, for example, a peripheral port. Furthermore, it will be appreciated by those of skill in the art that the computing device 102 may include other components, sub-components, and devices commonly found in a computer and/or computing device, which are not illustrated in FIG. 1 for clarity of the description. The primary processor 104 of the computing device 102 may be embodied as any type of processor capable of executing software/firmware, such as a microprocessor, digital signal processor, microcontroller, or the like. The primary processor 104 is illustratively embodied as a single core processor having a processor core 120. However, in other embodiments, the primary processor 104 may be embodied as a multi-core processor having multiple processor cores 120. Additionally, the computing device 102 may include additional processors 104 having one or more processor cores 120. In other embodiments, the primary processor 104 may be part of a system-on-a- chip (SoC) integrated circuit. For example, the primary processor 104 could be the microprocessor of an SoC implementation as part of a smartphone. Additionally, in one particular embodiment, the primary processer 104 is embodied as a central processing unit (CPU) of the computing device 102. The processor 104 is communicatively coupled to the chipset 106 via a number of signal paths. These signal paths (and the other signal paths illustrated in FIG. 1) may be embodied as any type of signal paths capable of facilitating communication between the components of the computing device 102. For example, the signal paths may be embodied as any number of wires, cables, light guides, printed circuit board traces, via, bus, intervening devices, and/or the like. The chipset 106 of the computing device 102 may include a memory controllerhub (MCH or "northbridge"), an input/output controller hub (ICH or "southbridge"), and a firmware device. The firmware device of the chipset 106 may be embodied as a memory device for storing Basic Input/Output System (BIOS) data and/or instructions and/or other information (e.g., a BIOS driver used during booting of the computing device 102). However, in other embodiments, chipsets having other configurations may be used. For example, in some embodiments, the chipset 106 may be embodied as a platform controller hub (PCH). In such embodiments, the MCH may be incorporated in or otherwise associated with the primary processor 104, and the primary processor 104 may communicate directly with the memory 108 (as shown by the dashed line in FIG. 1). The memory 108 may be embodied as one or more memory devices or data storage locations including, for example, dynamic random access memory devices (DRAM), synchronous dynamic random access memory devices (SDRAM), double-data rate synchronous dynamic random access memory device (DDR SDRAM), flash memory devices, and/or other volatile memory devices. In the illustrative embodiment, the memory 108 is communicatively coupled to the chipset 106 via a number of signal paths. Various data and software may be stored in the memory 108. 
For example, one or more operating systems, applications, programs, libraries, and drivers that make up the software stack executed by the primary processor 104 may reside in memory 108 during execution. Furthermore, software and data stored in memory 108 may be swapped between the memory 108 and a data storage device (e.g. a hard disk drive) as part of memory management operations. The computing device 102 further includes a keyboard controller 112 that is configured to monitor user inputs and to perform power management. The keyboard controller 112 may be embodied as an embedded controller that is communicatively coupled to the chipset 106 via a number of signal paths or, alternatively, that is part of the chipset 106. In the illustrative embodiment, the keyboard controller 112 is also communicatively coupled, via a number of signal paths, to the dedicated memory 114, the keyboard 122, the power button 124, and the NFC circuitry 126. As described below, the keyboard controller 112 is operable to receive input signals from the keyboard 122, the power button 124, and the NFC circuitry 126 and to perform various tasks in response to these inputs signals. Among other tasks, the keyboard controller 112 is responsible for managing the power supplied to various components of the computing device 102. As such, the keyboard controller 112 may selectively supply power to at least the primary processor 104. In some embodiments, the keyboard controller 112 may also gate power to other components of the computing device 102 (e.g., the chipset 106), in addition to gating power to the primary processor 104. The keyboard controller 112 may be persistently powered and capable of accepting an input signal from the power button 124 at any time. The dedicated memory 114 may be embodied as any type of device configured for the short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Various software, firmware, and/or data may be stored in the dedicated memory 114. For example, one or more software or firmware modules executable by the keyboard controller 112 may reside in the dedicated memory 114. Although the dedicated memory 114 is illustrated as a distinct component from the keyboard controller 112 in FIG. 1, it is contemplated that the dedicated memory 114 may be integrated into the keyboard controller 112 in other embodiments. In either case, the dedicated memory 114 is directly accessible to the keyboard controller 112 but is not directly accessible to other components of the computing device 102. In other words, the dedicated memory 114 is protected against unauthorized write operations and thus substantially secure from malicious attacks. The reserved nature of the dedicated memory 114 aids in the security of the software, firmware, and/or data accessed by the keyboard controller 112 during run- time. The keyboard 122 of the computing device 102 may be any human-machine interface (HMI) that may be operated by a user of the computing device 102 to provide inputs to the computing device 102. In the illustrative embodiment, a user of the computing device 102 may press one or more keys of the keyboard 122, which the keyboard controller 112 will register as one or more keystrokes. 
It is contemplated that the keyboard 122 may be physical (e.g., as a standard personal computer keyboard) or virtual (e.g., a touchscreen on a smartphone or tablet, which may be independently powered or receive power from the keyboard controller 112). The power button 124 may be embodied as any type of button, switch, or other mechanism configured to send a power-on signal 206 to the keyboard controller 112. In some embodiments, the power button 124 may also supply signals to the keyboard controller 112 that power-off the computing device 102 or place the computing device 102 into a sleep, suspend, or other low power mode of operation. Although the embodiment of FIG. 1 is illustrated with a power button 124, it should be appreciated that the power-on signal 206 may be generated by any number of sources (including sources remote from the computing device 102). In some embodiments, the computing device 102 may also include NFC circuitry 126. The NFC circuitry 126 may comprise relatively short-ranged, high frequency wireless communications circuitry. For example, in some embodiments, the effective communications range of the NFC circuitry 126 may be approximately ten centimeters. This relatively short communications range of the NFC circuitry 126 allows validation of the physical presence of another device (i.e., the NFC device 146) by the NFC circuitry 126. The NFC device 146 may be embodied as any type of device capable of communicating with or being recognized by the NFC circuitry 126. By way of example, the NFC device 146 may be embodied as a smartphone or cellular phone. As noted above, the computing device 102 also includes a security controller 110 configured to perform a power-on user authentication. The security controller 110 may be embodied as any number of hardware, firmware, and/or software modules that perform user authentication, power management, keyboard management, and/or other functions. In the illustrative embodiment of FIG. 1, the security controller 110 is embodied as a number of hardware, firmware, and/or software modules within the keyboard controller 112. In this illustrative embodiment, the security controller 110 is communicatively coupled to the dedicated memory 114, the keyboard 122, the power button 124, and the NFC circuitry 126, in the same manner described above with regard to the keyboard controller 112. One illustrative embodiment of the software/firmware environment 200 of the security controller 110 is shown as a block diagram in FIG. 2. The security controller 110 illustratively includes a power management module 220, an authentication module 222, and a keyboard management module 224. Each of these modules 220-224 may be embodied as hardware, software, firmware or any combination thereof. Additionally, each of these modules 220-224 may be configured to receive one or more inputs (e.g., a power-on signal 206, an authentication input 208, and/or an authentication code 210), process data represented by the one or more inputs, and generate one or more outputs. These one or more outputs may be passed to another of the modules 220-224 or transmitted to another component of the computing device 102. The power management module 220 of the security controller 110 is configured to receive the power-on signal 206 and to selectively supply power to the various components of the computing device 102. As discussed above, the power-on signal 206 may be received from the power button 124. 
In other embodiments, the power-on signal 206 may be received from another component of the computing device 102 or a remote source. Upon receiving the power-on signal 206, the power management module 220 may instruct the authentication module 222 to perform a user authentication. If the authentication module 222 returns a positive result (i.e., the user is authenticated), the power management module 220 will cause power to be supplied to the primary processor 104 of the computing device 102. The authentication module 222 of the security controller 110 is configured to receive an authentication input 208, retrieve an authentication code 210, and compare the authentication input 208 to the authentication code 210. The authentication input 208 may be embodied as any input, data, or signal configured to authenticate a particular user to the computing device 102. In some embodiments, the authentication module 222 will receive the authentication input 208 via the keyboard management module 224. In such embodiments, the keyboard management module 224 monitors the keyboard 122 and registers one or more keystrokes input by a user of the computing device 102. The keyboard management module 224 then communicates the authentication input 208 to the authentication module 222. In other embodiments, the authentication input 208 may be a signal supplied to the authentication module 222 from the NFC circuitry 126 when an NFC device 146 associated with a particular user is brought into proximity with the NFC circuitry 126. In still other embodiments, the authentication input 208 may be provided by spoken word or other means of entering a unique authorization. The authentication module 222 also retrieves the authentication code 210 from the dedicated memory 114. The authentication code 210 may be embodied as any data comprising a unique user identification. Each authorized user (and/or each NFC device 146 representing an authorized user) of the computing device 102 may have a unique authentication code 210 stored in the dedicated memory 114. In some embodiments, the authentication code 210 may be stored as a hash value in a hash table in the dedicated memory 114. After receiving the authentication input 208 and retrieving the authentication code 210, the authentication module 222 compares the two values to determine the authenticity of the authentication input 208. This comparison may be performed using any number of known methods. By way of example, where the authentication code 210 is stored as a hash value, the authentication module 222 may perform a hash of the authentication input 208 and compare the two hash values. The authentication module 222 may return the results to the power management module 220, which may then supply power to the primary processor 104 in response to a successful user authentication. From time to time, the authentication code 210 may need to be changed or a new authorization code 210 added for a new user. After the user has been authenticated and power has been supplied to the primary processor 104, as described above, the user may request to change the password (or the NFC device 146 recognized by the computing device 102). In response, the authentication module 222 may listen for an authentication input 208, either from the keyboard management module 224 or the from the NFC circuitry 126. When an authentication input 208 is received, the authentication module 222 will treat the authentication input 208 as a new authentication code 210. 
The authentication module 222 may replace the old authentication code 210 stored in the dedicated memory 114 with this new authentication code 210. Returning now to FIG. 1, in alternative embodiments, the security controller 110 may be embodied as a number of hardware, firmware, and/or software modules within a security engine 130 of the chipset 106 (as illustrated in phantom). The security engine 130 may include hardware, firmware, and/or software modules that are configured to perform security, encryption, and/or authentication functions. For example, the security engine 130 may be embodied as or otherwise include an out-of-band processor, a trusted platform module (TPM), and/or other security enhancing hardware and associated software modules. The security engine 130 may be persistently powered and capable of operating irrespective of the power state of the primary processor 104. In embodiments where the security controller 110 is located within the security engine 130 (rather than the keyboard controller 112), the dedicated memory 114 may be embodied as a secure portion of memory 108 accessible only to the security engine 130. The security controller 110 in such embodiments may also be in direct communication with the keyboard 122, the power button 124, and the NFC circuitry 126, or may access these components via the keyboard controller 112. In addition to the computing device 102, the system 100 for power-on user authentication may also include a remote server 142 communicatively coupled to the computing device 102 via a network 144. The network 144 may be embodied as any number of various wired and/or wireless telecommunication networks. For example, the network 144 may be embodied as or otherwise include one or more cellular networks, telephone networks, local or wide area networks, publicly available global networks (e.g., the Internet), or any combination thereof. Furthermore, the network 144 may include any number of additional devices to facilitate communication between the computing device 102 and the remote server 142, such as routers, switches, intervening computers, and/or the like. The computing device 102 and the remote server 142 may use any suitable communication protocol to communicate with each other over the network 144. The remote server 142 may be any type of computing device that is embodied outside of and discrete from the computing device 102 and is configured to support the security functions of the security controller 110 and/or the security engine 130. As one illustrative example, the remote server 142 may be embodied as an Intel® AntiTheft server. The remote server 142 and the computing device 102 may have an established trust relationship such that security is maintained during operations between the remote server 142 and the computing device 102. Due to this established trust relationship, the remote server 142 may communicate anew authentication code 210 to the security controller 110 of the computing device 102 periodically or in response to a request from the user. The security controller 110 may then replace the old authentication code 210 stored in the dedicated memory 114 with this new authentication code 210. Using this mechanism, a user may access the computing device 102 despite a forgotten password. Several of the features of the security controller 110, including its persistent power connection and dedicated memory 114, allow the computing device 102 to provide power-on user authentication. To do so, as illustrated in FIG. 
3, the computing device 102 may be configured to execute a method 300. In general, the method 300 involves the comparison of a received authentication input 208 and a retrieved authentication code 210. The method 300 may be executed by, for example, the security controller 110, in conjunction with other components of the computing device 102, which may interact with other components of the system 100. The method 300 begins in block 302 in which the security controller 110 receives a power-on signal 206. As discussed above, the power-on signal 206 may be received from the power button 124. In other embodiments, the power-on signal 206 may be received from another component of the computing device 102 or a remote source. The security controller 110 may receive the power-on signal 206 by any known method. For example, in some embodiments, the power-on signal 206 may create a hardware or software interrupt that causes the security controller 110 to respond. In other embodiments, the security controller 110 may poll for the power-on signal 206. After receiving the power-on signal 206 in block 302, the method 300 proceeds to block 304 in which the security controller 110 receives an authentication input 208. During block 304, the security controller 110 may monitor the keyboard 122, the NFC circuitry 126, and/or other input sources for the authentication input 208. As described above, the authentication input 208 may be supplied to the security controller 110 when a user enters a password on the keyboard 122 or when a particular NFC device 146 is brought into proximity with the NFC circuitry 126. The security controller 110 may receive the authentication input 208 by any known method. For example, in some embodiments, the authentication input 208 may create a hardware or software interrupt that causes the security controller 110 to respond. After block receiving the authentication input, the method 300 proceeds to block 306 in which the security controller 110 retrieves the authentication code 210 from the dedicated memory 114. Of course, it should be appreciated that, block 306 may be performed prior to, or contemporaneously with block 304. The authentication code 210, representing one or more authorized users of the computing device 102, is retrieved from the dedicated memory 114. As discussed above, the retrieved authentication code 210 may be one or more hash values in some embodiments. After block 306, the method 300 proceeds to block 308 in which the security controller 110 determines whether the authentication input 208 received in block 304 matches the authentication code 210 retrieved from the dedicated memory 114 in block 306. This comparison may be performed using any number of known methods. By way of example, where the retrieved authentication code 210 is a hash value, the security controller 110 may perform a hash of the authentication input 208 and compare the two hash values. A match between the authentication input 208 and the authentication code 210 indicates the presence of an authorized user of the computing device 102. If the security controller 110 determines in block 308 that the authentication input 208 matches the authentication code 210, the method 300 proceeds to block 310 in which the security controller 110 supplies power to the primary processor 104, thereby allowing a power-on of the computing device 102. 
As noted above, the security controller 110 may also supply power to other components of the computing device 102 in response to determining that the authentication input 208 matches the authentication code 210. If the security controller 110 determines in block 308 that the authentication input 208 does not match the authentication code 210, the method 300 instead proceeds to block 312 in which the security controller 110 withholds power from the primary processor 104, thereby preventing a power-on of the computing device 102. In some embodiments, the security controller 110 may take other appropriate action in block 312. For example, the security controller 110 may communicate a message to the user informing them that the primary processor 104 and/or other components of the computing device 102 will not power-on. While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such an illustration and description is to be considered as exemplary and not restrictive in character, it being understood that only illustrative embodiments have been shown and described and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. |
The present disclosure is directed to systems and methods for a memory device such as, for example, a Processing-In-Memory Device that is configured to perform multiplication operations in memory using a popcount operation. A multiplication operation may include a summation of multipliers being multiplied with corresponding multiplicands. The inputs may be arranged in particular configurations within a memory array. Sense amplifiers may be used to perform the popcount by counting active bits along bit lines. One or more registers may accumulate results for performing the multiplication operations. |
CLAIMSTherefore, the following is claimed:1. A system comprising: at least one memory device; at least one memory array of a memory device; a plurality of memory cells of the at least one memory array, the plurality of memory cells accessible via a plurality of bit lines and a plurality of word lines, wherein at least one multiplicand is stored in the memory array and at least one multiplier is stored in the memory device; the at least one memory device being configured to: generate a sum of the at least one multiplicand based on a plurality of popcount operations performed on the at least one multiplicand; and generate a multiplication result based on the sum and by sequencing through the bits of the multiplier.2. The system of claim 1, wherein the sum is generated based on accumulation of a current sum value with the popcount result, in response to the first bit of multiplier having a first predefined value at a corresponding position.3. The system of claim 2, wherein the sum is generated based on sequencing through positions without accumulating, in response to the bits of the multiplier having a second predefined value at corresponding positions.4. The system of claim 1 , wherein at least one multiplicand is stored along a corresponding bit line.5. The system of claim 4, wherein the plurality of popcount operations comprises counting the number of bits having the first predefined value for each bit position of the at least one multiplicand.6. The system of claim 5, wherein the bit positions of the at least one multiplicand are stored along the same word line.7. The system of claim 4, further comprising a plurality of sense amplifiers, wherein each sense amplifier is coupled to a corresponding bit line, wherein the plurality of sense amplifiers are used to perform the plurality of popcount operations.8. A system comprising: at least one memory device; at least one memory array of a memory device; a plurality of memory cells of the at least one memory array, the plurality of memory cells accessible via a plurality of bit lines and a plurality of word lines, wherein at least one multiplicand is stored in the memory array and at least one multiplier corresponding to the at least one multiplicand is stored in the memory array; and the memory device further configured to generate the dot product result of the at least one multiplicand and the at least one multiplier based on a plurality of popcount operations performed on the at least one multiplicand, wherein the plurality of popcount operations are selectively performed by sequencing through bits of the at least one multiplier.9. The system of claim 8, wherein a set of popcount operations generates a popcount result for each bit position of bits of the at least one multiplier.10. The system of claim 9, wherein each popcount result is generated by selectively applying a plurality of popcount operations on the at least one multiplicand, and each popcount operation comprises counting the number of bits with a first value for each bit position of the at least one multiplicand.11. The system of claim 10, wherein each popcount result is generated in response to a bit of the at least one multiplier having a first predefined value at a corresponding bit position, and bypassed in response to a bit of the at least one multiplier having a second predefined value at a corresponding bit position.12. 
The system of claim 10, wherein the at least one multiplier is stored along odd-numbered bit lines, wherein the at least one multiplicand is stored along even-numbered bit lines.13. The system of claim 10, wherein each pair of consecutive bit lines is coupled to a respective sense amplifier.14. The system of claim 10, wherein each dot product calculation among a plurality of dot product calculation is generated from a group of multipliers with corresponding multiplicands.15. A method comprising: storing a plurality of multiplicand and corresponding multipliers in a memory device, the memory device comprising a plurality of memory cells accessible via a plurality of bit lines and a plurality of word lines; generating a dot product result by summing multiplications of each multiplicand and corresponding multiplier, wherein the dot product result is generated by: sequencing through positions of bits of the multipliers; selectively applying a plurality of popcount operations on the multiplicands based on the bit value of a current bit position of a corresponding multiplier to generate a popcount result; and selectively accumulating the at least one popcount result; wherein the dot product result is generated upon completing the sequencing.16. The method of claim 15, wherein the dot product result is generated as value in a feature map used in a convolutional neural network.17. The method of claim 15, wherein the multipliers are stored in a first memory array and wherein the multiplicands are stored in a second memory array that is different from the first memory array.18. The method of claim 15, wherein the multipliers are stored along odd- numbered bit lines, wherein the multiplicands are stored along even- numbered bit lines.19. The method of claim 15, wherein each pair of consecutive bit lines are coupled to a respective sense amplifier.20. The system of claim 19, wherein the sense amplifiers are used to perform the plurality of popcount operations.21. A method comprising: storing at least one summand in a memory array of a memory device, the memory array comprising a plurality of memory cells accessible via a plurality of bit lines and a plurality of word lines; generating a fused multiply-accumulation result by multiplying a multiplier by the popcount result of the at least one summand and accumulating it to the sum; generating a popcount result by applying a plurality of popcount operations on the at least one summand; and selectively accumulating a current sum value with the popcount result by sequencing through the at least one bit of the multiplier; wherein the fused multiply-accumulation result is generated upon completing the sequencing through the at least one bit of the multiplier. |
COUNTER-BASED MULTIPLICATION USING PROCESSING IN MEMORYRELATED APPLICATIONS[0001] The present application claims priority to U.S. Pat. App. Ser. No. 16/836,773, filed March 31, 2020 and entitled “COUNTER-BASED MULTIPLICATION USING PROCESSING IN MEMORY,” the entire disclosure of which is hereby incorporated herein by reference.BACKGROUND[0002] Generic processors may interface with memory components and caches to perform repeated calculations on stored data. Data may be loaded into a cache, the processor may then access the data, the processor may calculate a result, and then the result may be stored in memory. Processors may perform repetitive or intensive linear algebra operations by handling matrix elements. For example, processors may perform read/write operations to fetch data, process it, and store it in memory. These generic processors may be used to perform multiplication operations as part of more complex algorithms such as, for example, convolutional operations. In addition to generic processors, special purpose devices such as Processing In-Memory devices may include memory arrays and logic to perform operations on the contents of the memory arrays.BRIEF DESCRIPTION OF THE DRAWINGS[0002] Many aspects of the present disclosure can be better understood with reference to the attached drawings. The components in the drawings are not necessarily drawn to scale, with emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout several views.[0003] FIG. 1 is a drawing of a memory device according to various embodiments.[0004] FIGS. 2A-2B are drawings of a memory device that performs in memory multiplication with a common multiplier using popcount operations according
to various embodiments.[0005] FIG. 3 is a flowchart illustrating the functionality of a memory device that performs in-memory multiplication of a common multiplier using popcount operations according to various embodiments.[0006] FIGS. 4A-4E are drawings of a memory device that perform in memory multiplication of different multipliers using popcount operations according to various embodiments.[0007] FIG. 5 is a flowchart illustrating the functionality of a memory device that performs in-memory multiplication of different multipliers using popcount operations according to various embodiments.[0008] FIG. 6 is a schematic drawing showing a system that includes a memory device according to various embodiments.DETAILED DESCRIPTION[0009] The present disclosure is directed to systems and methods that perform multiplication in a memory device using popcount operations. By performing multiplication in a memory device (e.g., performing operations “in-memory”) at least some of the operands of the multiplication operations may be stored in memory with separately loading them in a system memory, cache, some other fast memory or the like. By performing multiplication operations, the memory device of the present disclosure may include components such as registers, controllers, and sense amplifiers, to perform multiplication operations directly in the memory device.[0010] The present disclosure describes embodiments using a popcount operation. A popcount operation is an operation performed in a memory device where a range of memory cells are provided as an input and the number of high (“hi”) bits (e.g., active bits or “ones”) are counted. For example, if the range of memory cells include the binary values [1, 0 ,1, 1, 0 1], then a popcount operation would yield the number four because there are four bits having the value of “one” (e.g., an active bit or hi bit).[0011] The present disclosure describes various ways of structuring a memory device to perform popcount operations as part of a multiplication operation. A basic multiplication operation involves a multiplier (A) and multiplicand (B) which yield a product expressed as “A x B”. A more complex multiplication operation may include one multiplier (A) and a plurality of multiplicands (B and C), where the
product is expressed as “A x B + A x C” or alternatively “A x (B + C). In this case, the multiplicands are summed together and then multiplied by the multiplicand. The multiplicands may be referred to as “summands” which are operands to be summed together. Thus, in this example, the multiplier (A) is a common multiplier. A more complex multiplication operation may involve different multipliers corresponding to different multiplicands. For example, the multipliers (A and B) may correspond to multiplicands (C and D) such that the resulting product is expressed as “A x C + B x D.” In this case, the operation may be referred to as ‘dot product’ of two vectors or number sequences [A, B] dot [C, D], or more broadly a sum of element-wise products of more than one vectors or sequences.[0012] In some embodiments, the memory device is a special purpose device used to implement multiplication operations as a part of a convolutional operation in general (e.g. in a convolutional neural network). The memory device may be used to implement one or more layers in a convolutional neural network.The convolutional neural network may be designed for detecting: image features in image data, motion features in video streams, text patterns in textual data, statistical features in multi-dimensional complex data, scientific features in scientific processes and simulations, astronomical features in astronomical data coming from the space, weather conditions in world weather data as well as predictions based on them, words in voice audio data. The convolutional neural network may be used to detect features or characteristics in computer generated graphics, virtual reality data, and augmented reality data. Features may be detected for satellite imagery, long exposure generated graphics, time-lapse videos, slow-motion videos. The convolutional neural network may be configured to perform feature detection on graphical or visual representation of data collected from a variety of sources such as data repositories or databases. The data subject to feature detection may be data that is structured, data that is semi-structured, data that is unstructured, data objects generated from machines, data logs, real-time data generated from a remote source, data that is aggregated from a plurality of sources, data received over a network, data that has been pre-processed by external systems, data that has been subject to visual filters, or data that generated at least partially by an external computing system. Features that searched for within the data include visual patterns, horizontal lines, edges, vertical lines, various shapes, curves, angles, particular colors, orientations. In addition, simple features may be combined to formulate more
complex features such as complex objects.[0013] In addition to implementing convolution neural networks, the memory device may be configured as a Linear algebra accelerator in-memory device, a Neuromorphic processor in-memory device, a Memory Dual in-line Memory Module (DIMM) with compute capabilities, an in-memory graphics processor, and an intelligent solid-state drive (SSD) with computation. In addition, the memory device may include NAND Flash memory arrays, X-point arrays, or other memory arrays with compute capabilities. The memory device may be configured to perform Matrix-matrix multiplication and neural network inference and training.[0014] The following discussion refers to the FIGS to illustrate various embodiments of a memory device that uses popcount operations to perform multiplication operations.[0015] FIG. 1 is a drawing of a memory device 100 according to various embodiments. The memory device 100 as shown in FIG. 1 may be embodied as a Processing In Memory (PIM) device. However, the present disclosure is not limited only to PIM devices. A PIM device is a semiconductor device that comprises one or more memory arrays and a PIM processor coupled to these arrays in-memory. The PIM processor is configured to perform operations using data stored in the cells of the memory array without the need to perform time-intensive input/output operations, fetch operations, or load / store operations over a memory bus. In this respect, the PIM processor may access at least some data without a buffer memory or cache or bus to perform data and compute operations. In contrast, a host processor is coupled with one or more memory devices over a memory bus or other link. A host processor may be a central processing unit (CPU), digital signal processor, graphics processing unit (GPU), special purpose processor, or general-purpose processor that is installed in a device or system external to the memory device. The host processor may be installed in a computing device, lap top, mobile device, server, special purpose computer, general purpose computer.[0016] The memory device 100 is an integrated circuit. The memory device 100 may be a semiconductor chip or die or a die stack. The memory device 100 may include one or more memory arrays 103. A memory array 103 comprises a plurality of rows and columns and may be defined in terms of a row-column size.The example of FIG. 1 shows a memory array 103 having rows labeled r1 - rn and columns d - c n. At each row and column intersection is a memory cell configured
to store at least part of an operand in a multiplication operation. The operand may be a multiplier or a multiplicand. In this respect, an operand, such as, for example, the decimal number “9”, may be stored as a series of binary bits “1001” across multiple memory cells.[0017] A memory cell may be a single level cell that stores one binary bit (e.g., a bit that can have two values or states, e.g., “0” or “1” encoded as a low or high value of the memory cell) or a multi-level cell that stores multiple bits or levels (e.g., a cell that can store multiple bits). Examples of multi-level cells include QLC NAND Flash memory, which can have sixteen values or states encoded as the Vt (threshold voltage) of a floating gate transistor, thereby storing four bits or levels of data per memory cell. To illustrate a memory array made of multi-level cells, for example dual-level cells encoding 2-bit binary numbers, the decimal number “9” (expressed as the binary number “1001”) may have the left part of the four-bit binary number (the two most significant bits) stored in one dual-level cell as the binary number “10” while the right part (the two least significant bits) may be stored in another dual-level cell as the binary number “01”. In a memory device having single-level memory cells, the decimal number “9” may occupy at least four separate cells to represent the binary number “1001.”
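As an illustration of the cell packing described in paragraph [0017], the following Python sketch (an assumed software model, not a description of the physical cells; the helper name to_bits is hypothetical) shows the decimal number “9” packed into single-level and dual-level cells:

    def to_bits(value, width):
        # Least significant bit first, one bit per single-level cell.
        return [(value >> i) & 1 for i in range(width)]

    single_level_cells = to_bits(9, 4)               # "1001" -> [1, 0, 0, 1]
    dual_level_cells = [9 & 0b11, (9 >> 2) & 0b11]   # "01" and "10" as two cell levels
    assert single_level_cells == [1, 0, 0, 1]
    assert dual_level_cells == [0b01, 0b10]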
[0018] Thus, the memory array 103 is a hardware component used to store data as a plurality of array elements addressable by rows and columns. The memory device 100 may include several memory arrays 103 organized throughout the memory device 100. The memory array 103 may be implemented using various types of technologies, organizations, or aspects. The memory array may be defined as including both volatile and nonvolatile memory. Volatile components may be those that do not retain data values upon loss of power. Nonvolatile components may be those that retain data upon a loss of power. The memory array 103 may comprise random access memory (RAM), read-only memory (ROM), or solid-state memory arrays. RAM may comprise static random-access memory (SRAM) or dynamic random access memory (DRAM). The memory array 103 may comprise solid-state memory such as Flash memory, NOR Flash (e.g., Flash memory in a NOR configuration) or NAND Flash (e.g., Flash memory in a NAND configuration). The memory array may be resistive RAM (ReRAM), cross-point memory, or cross bar 3D memory. Each type of memory technology used to implement the memory array may be accessed using a row, column, or other memory address. Rows may be referred to as word lines. A word line may comprise terminals of transistor gates of corresponding memory cells. Alternatively, a word line can be connected directly to memory cell matter, e.g., for resistor-like or diode-like memory cells. Columns may be referred to as bit lines. A bit line may comprise sources and/or drains of transistors that constitute memory cells, capacitor terminals of the capacitors that constitute memory cells, resistor terminals of the resistors that constitute memory cells, diode terminals of the diodes that constitute memory cells, or a combination thereof.[0019] The memory array 103 comprises peripheral circuitry, which can be outside of the memory array 103 or a part of it. The peripheral circuitry may include an accumulator 106, a controller 109, a buffer memory 112, a system link 115, and potentially other integrated components such as, for example, sense amplifiers to sense data from the memory array 103 and drivers to store data back to the memory array 103.[0020] The accumulator 106 may be, for example, a Fused Multiply-Accumulate (FMA) unit 106. The accumulator 106 may be configured to perform dot product multiplication operations on arrays of data (or matrices) comprising operands such as, for example, multipliers or multiplicands. The operands for the multiplication operations may be supplied directly from the memory array 103 as well as from a controller 109. In some embodiments, the accumulator 106 may be dedicated to only perform dot product matrix calculations. The accumulator 106 may be configured to perform a multiply-accumulate operation that computes a product of input operands and adds that product to an accumulated value. The accumulator 106 may include one or more registers for storing intermediate values as part of a multiplication operation.
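To illustrate the multiply-accumulate behavior of the accumulator 106 described in paragraph [0020], the following Python sketch (a simplified software model; the function name fused_multiply_accumulate is hypothetical) multiplies operand pairs and adds each product to an accumulated value:

    def fused_multiply_accumulate(operand_pairs):
        accumulator = 0
        for multiplier, multiplicand in operand_pairs:
            # One multiply-accumulate step per pair of operands.
            accumulator += multiplier * multiplicand
        return accumulator

    assert fused_multiply_accumulate([(6, 9), (10, 13)]) == 184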
[0021] The controller 109 is a part of a processor of the memory device 100. The controller 109 may comprise integrated circuitry or logic embodied in hardware that is used to store data into the memory array 103. In addition, the controller 109 may receive data from locations outside the memory array 103. The controller 109 may implement logic to select bit lines and word lines in particular patterns according to microcode or other algorithms.[0022] The memory device 100 may also include buffer memory 112. The buffer memory may be included as part of the controller 109 and/or a part of the accumulator 106, it may be external to the controller 109 and/or the accumulator 106, or it may be connected to these components 109 and 106 via an internal bus (e.g., a system link 115). Alternatively, the buffer memory 112 may be a part of the memory array 103 allocated specifically for the buffer purposes described herein. Specifically, a part of the memory array 103 allocated for buffer memory may be a part of an array with faster access (e.g., having a shorter path to the accumulator 106). The buffer memory 112 may comprise buffers to temporarily store data as the controller 109 and accumulator 106 perform multiplication operations. The controller 109 and/or accumulator 106 may write to or read from the buffer memory 112. For example, the buffer memory 112 may be used to store intermediate results as part of a multiplication operation. The buffer memory 112 may also store part of the operands of the multiplication operation such as, for example, one or more multipliers, while the multiplicands are stored in the memory array 103.[0023] The system link 115 of the memory device 100 may provide data and/or control signals between the memory device 100 and external systems. The system link 115 may couple to various components of the memory device 100 such as, for example, the memory array 103, the accumulator 106, the controller 109, the buffer memory 112, and other components. Thus, the system link 115 may include an internal link among various components of the memory device 100 that allows these components to exchange data and/or control signals with each other. The system link 115 may comprise input/output ports to couple to external systems outside the memory device 100. The system link 115 may be an Input/Output (IO) bus such as, for example, a DDR4 bus or PCIe bus. In this respect, an external system may read or write data to the memory array 103, the accumulator 106, and the buffer memory 112. In addition, external systems may transmit control signals to the controller 109 to program or otherwise control the controller 109.[0024] An external system may include a host processor with a PCB motherboard, wherein the memory device 100 is connected to the host processor over a bus such as DDR4, DDR5, PCIe, or the like. The external system may execute an operating system, applications, libraries, scripts, or programming languages. The external system may include one or more server racks or computers or other arrangements. A server may be a single installation or may be distributed among many different geographical locations. The external system may include a plurality of computing devices that together may comprise a hosted computing resource, a grid computing resource, and/or any other distributed computing arrangement.
In some cases, the external system may correspond to an elastic computing resource where the allotted capacity of processing, network, storage, or other computing-related resources may vary over time. The external system may implement one or more virtual machines that use the resources of a computing system. Various software components may be executed on one or more virtual machines. The external system may also include additional memory devices 100. In this respect, an instance of a memory device 100 may query, control, or access data in any additional memory device 100 installed in a system.[0025] The system link 115 may allow the memory device 100 to couple to external systems that, together or separately, implement a convolutional neural network. For example, the memory device 100 may implement a layer within a neural network or multiple layers within a neural network. For example, the memory device 100 may be used to implement a convolution layer. The system link 115 may extract outputs of a memory device 100 and input them into different layers of the neural network located in other memory devices or other parts of an external system. A pooling layer in a neural network may obtain outputs from the system link 115 of a memory device 100, may perform pooling operations, and may pass the result as inputs into the memory device 100. For example, the output data generated by the accumulator 106 may be accessed by the system link 115 and processed externally by a pooling layer, where those results are supplied to the memory array 103 of the memory device 100 via the system link 115 for additional processing. The following FIGS provide examples of configurations and operations that may occur within a memory device such as, for example, the memory device 100 of FIG. 1.[0026] FIGS. 2A-2B are drawings of a memory device 100 that performs in-memory multiplication with a common multiplier using popcount operations according to various embodiments. In FIG. 2A, a memory device 100 is configured to perform a multiplication operation where a multiplier 203 is applied to the sum of a first multiplicand 206a and a second multiplicand 206b. In the example depicted in FIG. 2A, the multiplier 203 has a value of decimal number “6”, the first multiplicand 206a has a value of decimal number “7”, and the second multiplicand 206b has a value of decimal number “10”. According to the distributive property of multiplication, this can be interpreted as 6 x 7 + 6 x 10 as well as 6 x (7+10). In this respect, A x (B+C) may be expressed as the following dot product of two vectors: [A, A] x [B, C].
FIG. 2A thus depicts an embodiment where a common multiplier 203 is applied to multiplicands 206a, 206b. However, any number of multiplicands may be included as part of the multiplication operation shown in FIG. 2A.[0027] The memory device 100 may store the first multiplicand 206a in a first bit line (e.g., BL 1) of a memory array and may store the second multiplicand 206b in a second bit line (e.g., BL 2) of the memory array. In some embodiments, the first and second bit lines are in the same memory array. In other embodiments, the first bit line and the second bit line are in different arrays. When storing a multiplicand 206a, 206b in a bit line, the value of the multiplicand may occupy a plurality of memory cells such that they span a plurality of word lines. For example, the first multiplicand 206a having the example decimal number “7” is represented in binary as “0111”. The least significant bit “1” may be stored at a first word line (e.g., WL 1). The second bit “1” may be stored at a second word line (e.g., WL 2). After that, the third bit “1” may be stored at a third word line (e.g., WL 3). And the most significant bit “0” may be stored at a fourth word line (e.g., WL 4). The second multiplicand 206b having the example decimal number “10” is represented in binary as “1010”. The least significant bit “0” may be stored at the first word line (e.g., WL 1). The second bit “1” may be stored at the second word line (e.g., WL 2). After that, the third bit “0” may be stored at the third word line (e.g., WL 3). And the most significant bit “1” may be stored at the fourth word line (e.g., WL 4). Thus, the multiplicands 206a, 206b are stored in a bit-serial configuration, as modeled in the sketch below.
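The bit-serial layout of paragraph [0027] can be modeled with the following Python sketch (an assumed software model of the array, not the physical circuit; store_bit_serial is a hypothetical name). Each bit line is represented as a list of cells indexed by word line, with the least significant bit at WL 1:

    def store_bit_serial(value, word_lines=4):
        # Cell at index k holds the bit selected by word line k+1.
        return [(value >> k) & 1 for k in range(word_lines)]

    bit_line_1 = store_bit_serial(7)   # multiplicand 206a, "0111" -> [1, 1, 1, 0]
    bit_line_2 = store_bit_serial(10)  # multiplicand 206b, "1010" -> [0, 1, 0, 1]
    assert bit_line_1 == [1, 1, 1, 0] and bit_line_2 == [0, 1, 0, 1]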
[0028] The multiplier 203 may be stored in a memory array 103 of the memory device 100, a buffer memory 112 of the memory device, or some other source. FIG. 2A shows the controller 109 applying the multiplier 203 having the example decimal number “6” (represented as the binary number “0110”) to the sum of the multiplicands 206a, 206b.[0029] The memory device 100 may include a sense amplifier (SA) array 212. The sense amplifier array 212 may include a plurality of sense amplifiers (e.g., SA 1, SA 2). In some embodiments, each sense amplifier is dedicated to a corresponding bit line. For example, each sense amplifier may access only one bit line. In the example of FIG. 2A, a first sense amplifier (SA 1) accesses the first bit line (BL 1) and a second sense amplifier (SA 2) accesses the second bit line (BL 2). Each sense amplifier receives input when the controller 109 activates the bit line coupled to the sense amplifier.[0030] The sense amplifiers in the sense amplifier array 212 are configured to perform a popcount operation when the bit line for a sense amplifier is activated and when one or more word lines are activated. The sense amplifier array is configured to count the number of active bits (e.g., bits represented as a “1” or having a “hi” value) for all memory cells that are selected by the controller 109 (see the popcount sketch below). For example, the counting can be implemented as a long shift register or chain that matches the size of the sense amp array and is integrated within it. In this case, the controller would shift out all bits. An alternative implementation includes a rolling ripple carry adder. In this embodiment, the right-most sense amp sends its value to the sense amp positioned to its left. The current sense amp accumulates the result from the right with the value sensed on its bit line and sends it to the next sense amp to the left. This continues until a final value is transmitted to the controller. A gradual bus width increase towards the controller would provide more bandwidth for the rolling value in this embodiment. Another embodiment includes using a logarithmic-reducing tree counter/adder. This embodiment performs relatively fast calculations; however, it may require relatively more wiring to connect various components. In yet another embodiment, the sense amplifier array may include a thermometric-to-binary converter with bypassing capabilities. In this embodiment, sense amps with a current value of zero (sensed from the bit line) are bypassed. This may also include using a lookup table. In another embodiment, the sense amp array 212 includes a Flash analog-to-digital converter (ADC).[0031] FIG. 2A depicts an example where the bit positions of the multiplicands 206a, 206b are stored along the same word lines. That is, the least significant bits of each multiplicand 206a, 206b are stored on the same word line, the most significant bits of each multiplicand 206a, 206b are stored on the same word line, and the same applies for the bits in between. In other embodiments, the multiplicands 206a, 206b may be stored along different word line ranges or partially overlapping word line ranges.
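The counting behavior of paragraph [0030] can be modeled as follows (a minimal Python sketch under the bit-serial layout above; popcount_word_line is a hypothetical name, and the model abstracts away the shift register, ripple adder, and other circuit-level options described above):

    bit_lines = [[1, 1, 1, 0],  # BL 1: multiplicand 7 ("0111"), LSB first
                 [0, 1, 0, 1]]  # BL 2: multiplicand 10 ("1010"), LSB first

    def popcount_word_line(bit_lines, word_line):
        # With one word line activated, count the "1"s across the activated bit lines.
        return sum(column[word_line] for column in bit_lines)

    # Sequencing WL 1 through WL 4 yields [1, 2, 1, 1] (LSB first), i.e.,
    # MSB=>[1, 1, 2, 1]<=LSB, matching paragraph [0035] below.
    assert [popcount_word_line(bit_lines, wl) for wl in range(4)] == [1, 2, 1, 1]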
[0032] FIG. 2B represents an example of performing the multiplication operation of FIG. 2A using the multiplicands 206a, 206b that are summed together and multiplied by the multiplier 203. Specifically, FIG. 2B shows four sequences to perform multiplication using popcount operations by sequencing through the bits of the multiplier 203. The memory device 100 may include one or more registers such as, for example, a result register 215 and an operand register 224. The result register 215 may store a sum value 221 that updates at each sequence until the multiplication operation is complete. The sum value 221 after sequencing through all bits of the multiplier represents the multiplication result 223 upon completion. The bit position 218 of the sum value is incremented at each sequence, where the controller selectively performs an accumulation operation depending on the bit position of the multiplier 203. This is explained in more detail below.[0033] In the first sequence 250, the bit position of the multiplier begins at position 1, which is the least significant bit of the multiplier 203. As shown in FIG. 2B, an arrow is used to illustrate the tracking of the bit position of the multiplier 203. The controller 109 is configured to activate word lines and send control signals to an accumulator 106 and the sense amp array 212. The controller 109 reads the value of the multiplier 203 at position 1, which is the binary bit “0” (e.g., the least significant bit of binary number “0110” or decimal number “6”). When the value is a predetermined value such as a binary “0” or “low,” the controller 109 records a “0” in the result register 215 at the current bit position (position 1 for the first sequence 250). Thereafter, the bit position is incremented to position 2.[0034] In the second sequence 251, the bit position of the multiplier 203 is at position 2, which is the binary bit “1” (e.g., the second bit from the least significant bit of binary number “0110” or decimal number “6”). If the value were a predetermined value such as a binary “0” or “low,” the controller 109 would record a “0” in the result register 215 at the current bit position, as in the case of the first sequence 250. However, because the bit value of the multiplier 203 at position 2 is a binary “1”, the controller performs an initialization operation of the operand register 224 to store a popcount result 227, 230. For example, the first time a binary “1” appears, an initialization operation occurs. The initialization operation comprises performing popcount operations on the multiplicands 206a, 206b to generate a popcount result 227, where each number in the popcount result represents the number of “hi” bits (alternatively, any predetermined value of choice) in each memory row, or a part of it, that stores the multiplicands. The popcount result may be stored as a flattened binary number 230 in the operand register 224. The popcount result 227, 230 may remain in the operand register 224 until the multiplication operation is complete. The popcount operations may be performed only once, in response to the first presence of a binary “1” at a bit position of the multiplier 203.
[0035] To perform the popcount operations, the controller 109 sequentially activates the word lines of the memory array(s) 103 that store the multiplicands 206a, 206b to count the number of binary “1s” that are present at each position of the multiplicands 206a, 206b. For example, referring to FIG. 2A, the controller 109 activates BL 1 and BL 2 so as to perform popcount operations on the multiplicands 206a, 206b stored along these bit lines. The controller 109 activates WL 1 to perform a first popcount operation. With BL 1, BL 2, and WL 1 activated, the sense amplifier array 212 counts a total of one binary “1”. Specifically, (BL 1, WL 1) has a binary “1” and (BL 2, WL 1) has no binary “1s.” Thus, the popcount operation for the least significant bit of the multiplicands yields one binary “1”.[0036] Next, WL 2 is activated for the next bit position of the multiplicands 206a, 206b. The popcount operation yields two binary “1s.” Next, WL 3 is activated for the next bit position of the multiplicands 206a, 206b. The popcount operation yields one binary “1”. Lastly, WL 4 is activated for the next bit position of the multiplicands 206a, 206b. The popcount operation yields one binary “1”. After sequencing through all word lines of the multiplicands 206a, 206b, the popcount result 227 is [1, 1, 2, 1], ranging from most significant bit to least significant bit. Thus, the popcount result counts the number of binary “1s” (e.g., a predetermined value) at each bit position across one or more multiplicands 206a, 206b. The popcount result 227 may be flattened into a binary number 230, where MSB=>[1, 1, 2, 1]<=LSB equates to the binary number “10001”. To flatten a result, the flattening operation includes a binary summation and carry propagation from right to left. For example, flattening the array [1, 1, 2, 1] is performed as follows, starting from LSB to MSB (right to left): the value “1” generates a result of “1” at the first bit position and a value of “0” to be carried over to the next bit position. Next, the value “2” with an incoming “0” carry value generates a “0” result at the second bit position and a value of “1” to be carried over to the next bit position. Next, the value of “1” with an incoming “1” carry value generates a “0” result at the third bit position and a value of “1” to be carried over to the next bit position. A value of “1” with an incoming “1” carry value generates a result of “0” at the fourth bit position and a value of “1” to be carried over to the next bit position. A value of “0” with an incoming “1” carry value generates a result of “1” at the fifth bit position and a value of “0” to be carried over. Each step in this computation sequence may be performed immediately upon generating a popcount result from each row. In some embodiments, the flattening operation may be fully performed in the background of the memory accesses performed during the popcount operations of each memory row. Thus, this flattening operation may have no latency overhead.
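The flattening steps above can be captured in a short Python sketch (illustrative only; flatten is a hypothetical name) that performs the binary summation with carry propagation described in paragraph [0036]:

    def flatten(popcounts_lsb_first):
        value, carry = 0, 0
        for position, count in enumerate(popcounts_lsb_first):
            total = count + carry
            value |= (total & 1) << position  # result bit at this position
            carry = total >> 1                # carry into the next position
        return value | (carry << len(popcounts_lsb_first))

    # [1, 2, 1, 1] (LSB first) flattens to binary "10001", i.e., 17 = 7 + 10.
    assert flatten([1, 2, 1, 1]) == 0b10001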
[0037] After generating the popcount result 227, 230, the operand register 224 is initialized. A flag may be set to indicate that the popcount operations do not need to be performed again for the instant multiplication operation. In addition to detecting a binary “1” at position 2 of the multiplier 203, the controller 109 performs an accumulation operation at position 2, where the popcount result 227, 230 is added to the current sum value 221 at the current bit position. This yields a current sum value of “100010.” For example, with a binary “0” recorded in position 1, the accumulation operation adds the popcount result “10001” starting at position 2, which is the current position.[0038] In the third sequence 252, the bit position of the multiplier 203 is at position 3, which is the binary bit “1” (e.g., the third bit from the least significant bit of binary number “0110” or decimal number “6”). In response to detecting a binary “1”, the controller performs an accumulation operation starting at position 3 using the current sum value 221, which is “100010.” Specifically, the result register 215 is updated so that the new sum value is the sum of “100010” and the popcount result 230, which is “10001”, but where the sum occurs at position 3 of the result register 215. In other words, the accumulation operation may be expressed as the binary sum of “100010” and “1000100”. This yields a current sum value 221 of “1100110”. The popcount result 230 does not need to be recalculated after the operand register has been initialized.[0039] In the fourth sequence 253, the bit position of the multiplier 203 is at position 4, which is the binary bit “0” (e.g., the most significant bit of binary number “0110” or decimal number “6”). In response to detecting a binary “0” (e.g., a predetermined binary value), the controller 109 bypasses the accumulation operation. Because all bit positions have been sequenced, the current sum value 233 is the multiplication result of “1100110,” which is expressed as decimal number “102” and which is also the multiplication result of 6 x (7+10).
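The four sequences of FIG. 2B can be summarized in the following end-to-end Python sketch (an assumed software model of the hardware flow; multiply_by_popcount is a hypothetical name). The weighted sum of the per-word-line popcounts equals the flattened popcount result, and the result register accumulates that value only at multiplier bit positions holding a binary “1”:

    def multiply_by_popcount(multiplier, multiplicands, width=4):
        # Popcount per word line (LSB first), as sensed by the SA array.
        popcounts = [sum((m >> wl) & 1 for m in multiplicands) for wl in range(width)]
        # Flattened popcount result: the sum of the multiplicands.
        operand_register = sum(count << wl for wl, count in enumerate(popcounts))
        result_register = 0
        for position in range(multiplier.bit_length()):
            if (multiplier >> position) & 1:  # selective accumulation on a binary "1"
                result_register += operand_register << position
        return result_register

    assert multiply_by_popcount(6, [7, 10]) == 102  # 6 x (7 + 10), as in FIG. 2B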
[0040] The example of a multiplication process described with respect to FIGS. 2A and 2B shows how a common multiplier 203 is applied to the sum of multiplicands 206a, 206b using a plurality of popcount operations. For example, the popcount operations are performed on the multiplicands 206a, 206b to generate a popcount result 230, which represents the sum of the multiplicands 206a, 206b. This sum of the multiplicands 206a, 206b is stored in the operand register 224. As the controller 109 sequences through the bit positions of the common multiplier 203 (from least to most significant bit), the popcount result 230 is selectively accumulated based on the current bit position and based on whether the bit value of the current bit position is a predetermined value (e.g., a binary “1”). In other words, the controller 109 may selectively accumulate the current sum value 221 of the result register 215 when detecting a binary “1” as the multiplier bit positions are sequenced, where the accumulation is performed at the current bit position of the current sum value 221.[0041] To further illustrate the examples of FIGS. 2A and 2B, the examples show storing at least one summand in a memory array 103 of a memory device 100. The summands may be the multiplicands 206a, 206b that are summed together and then multiplied with the multiplier 203. The accumulator 106 may perform fused multiply-accumulate operations to multiply the multiplier 203 with the popcount result 230 determined from using the summand(s) as inputs. The fused multiply-accumulate operations may involve selectively accumulating a current sum value 221 with the popcount result 230 by sequencing through the bits of the multiplier 203. The fused multiply-accumulate result is generated upon completing the sequencing through the bits of the multiplier 203.[0042] Selective accumulation may involve accumulating upon detecting a first predetermined value (e.g., binary “1”) at the current bit position of the multiplier 203 and not accumulating (e.g., bypassing the accumulation operation) upon detecting a second predetermined value (e.g., binary “0”) at the current bit position of the multiplier 203. The selective accumulation latency may be minor compared to the time it takes to perform popcount computations by the sense amplifiers because the former may be performed in local fast registers while the latter may be performed as part of the memory array access, which normally has a longer latency. Thus, to speed up multiplication operations in memory, embodiments may involve performing popcount operations on rows of other memory arrays and storing other multipliers concurrently with performing selective accumulation on the current popcount results. In this respect, an extra operand register used to store and/or flatten the next popcount result as it arrives from the SA array may be included. In another embodiment, a selective accumulator with multiple operand registers may serve multiple arrays such that their operations are fully hidden by the memory accesses of the memory arrays performing the popcount operations.
[0043] FIG. 3 is a flowchart illustrating the functionality of a memory device that performs in-memory multiplication with a common multiplier using popcount operations according to various embodiments. The boxes in the flowchart may represent microcode, machine code, firmware, or other software executable by a controller of a memory device 100. The boxes of the flowchart may alternatively represent steps in a method 300. The method may be performed by a memory device 100.[0044] At item 303, the memory device 100 may store one or more multiplicands 206a, 206b in one or more memory arrays. The multiplicands 206a, 206b may be stored in a bit-serial configuration along separate bit lines. The bit positions of each multiplicand 206a, 206b may be positioned on the same word lines or on different word lines.[0045] At item 306, the memory device 100 identifies the current bit position of the multiplier 203. The memory device may begin with the least significant bit of the multiplier 203 (e.g., position 1). The current bit position may be tracked and stored in a memory such as, for example, the buffer memory 112. When beginning the multiplication operation, the bit position begins with position 1 and then increments through each bit position of the multiplier 203 until the most significant bit of the multiplier is handled.[0046] At item 309, the memory device 100 checks the value of the multiplier 203 at the current bit position to determine whether it is a first predetermined value (e.g., binary “1” or “hi”) or a second predetermined value (e.g., binary “0” or “low”). If the value is the second predetermined value (e.g., binary “0” or “low”), the memory device 100, at item 312, records a binary “0” in the result register 215 at the current bit position. The result register 215 stores a current sum value 221 that gets updated until the multiplication operation completes. Thereafter, at item 315, the memory device 100 sequences to the next bit position of the multiplier 203. This completes the sequence for handling one bit position.[0047] If the current bit of the multiplier 203 is a first predetermined value (e.g., binary “1” or “hi”), then the memory device 100 checks whether the popcount result 230 has been calculated at item 318. If not, then at item 321, the memory device 100 performs popcount operations on the multiplicand(s) 206a, 206b.
For example, the memory device 100 may activate the bit lines and word lines of the memory cells that store the multiplicands 206a, 206b, and then use a sense amplifier array 212 to count the number of binary “1s.”[0048] The popcount result 227 may be flattened into a binary number that represents the popcount result 230. Mathematically, the popcount result 230 represents the sum of the multiplicands (e.g., treated as summands).[0049] At item 324, the popcount result 230 is stored in an operand register 224. A flag may be set indicating that the operand register 224 contains a popcount result 230.[0050] At item 327, the memory device accumulates a current sum value based on the popcount result 230 and the current bit position of the multiplier 203. Because the accumulation operation is performed based on whether the value corresponding to the current bit of the multiplier 203 is a predetermined value, the accumulation operation is selective. For example, the accumulation operation is performed selectively in response to detecting a binary “1” as the bit positions of the multiplier 203 are sequenced. In addition, the popcount result 230 is added to the current sum value at the current bit position. The memory device 100 then proceeds to item 315. If all bit positions of the multiplier have been handled, the multiplication process ends and the current sum value is the multiplication result.[0051] FIGS. 4A-4E are drawings of a memory device that performs in-memory multiplication with different multipliers using popcount operations according to various embodiments. FIG. 4A illustrates an example of performing a multiplication operation using a plurality of multipliers 402a, 402b and corresponding multiplicands 405a, 405b. For example, FIG. 4A illustrates a memory device 100 that multiplies a first multiplier 402a with a first multiplicand 405a and sums that result with the multiplication of a second multiplier 402b and a second multiplicand 405b. In the example depicted in FIG. 4A, the first multiplier 402a has a value of decimal number “6,” the first multiplicand 405a has a value of decimal number “9,” the second multiplier 402b has a value of decimal number “10,” and the second multiplicand 405b has a value of decimal number “13”. FIG. 4A thus depicts an embodiment where different multipliers 402a, 402b are applied to respective multiplicands 405a, 405b. However, any number of multipliers and multiplicands may be included as part of the multiplication operation shown in FIG. 4A. Depending on context, the terms “multiplier” and “multiplicand” are interchangeable.
For example, the multiplicands 405 can be used as multipliers and the multipliers 402 can be used as multiplicands.[0052] FIG. 4A shows different operands (e.g., multipliers 402a, 402b and multiplicands 405a, 405b) of a multiplication operation where the operands are stored in a bit-serial configuration and where the bit positions of each operand share the same word lines. For example, starting from the least significant bits of the operands, the bits of the operands are stored along a first word line (WL 1) and range to the fourth word line (WL 4). The first multiplier 402a is stored along a first bit line (BL 1), the first multiplicand 405a is stored along a second bit line (BL 2), the second multiplier 402b is stored along a third bit line (BL 3), and the second multiplicand 405b is stored along a fourth bit line (BL 4).[0053] FIG. 4A depicts an example where the multipliers 402a, 402b are stored along odd-numbered bit lines and the multiplicands 405a, 405b are stored along even-numbered bit lines. In this respect, bit lines may alternate between storing multipliers 402a, 402b and storing multiplicands 405a, 405b.[0054] In addition, the memory device 100 may be configured so that consecutive pairs of bit lines are coupled to the same sense amplifier. For example, BL 1 and BL 2 couple to a first sense amplifier SA 1, and BL 3 and BL 4 couple to a second sense amplifier SA 2. In this respect, the bit lines that store a multiplier and its corresponding multiplicand couple to the same sense amplifier.[0055] The multiplication operation illustrated in the example of FIG. 4A may represent a dot product calculation that generates a dot product result. For example, the multipliers may originate from a first matrix and the multiplicands may originate from a second matrix. The dot product calculation is applied to multiplying the first matrix with the second matrix to generate a dot product result. The dot product calculation may be used as a value in a feature map used in a convolutional neural network. For example, one matrix may comprise a convolutional filter while another matrix may comprise a portion of data that is subject to feature detection using the convolutional filter.[0056] While FIG. 4A depicts some embodiments of configuring a memory device to perform a multiplication operation using a plurality of multipliers 402a, 402b and multiplicands 405a, 405b, other arrangements are within the scope of the present disclosure. For example, the operands used in the multiplication operation of FIG. 4A may be stored on different bit lines of the same memory array 103 or in bit lines of different memory arrays 103.
For example, the first multiplier 402a and the second multiplier 402b may be stored in a first memory array 103 while the first multiplicand 405a and the second multiplicand 405b may be stored in a second memory array 103. The bit lines from the first and second memory arrays may be coupled with the same sense amplifiers. As another example, the multipliers 402a, 402b may be stored along a first bit line while the multiplicands 405a, 405b may be stored along a second bit line. As another example, the multiplier 402a and multiplicand 405a may be stored along a first bit line while the multiplier 402b and multiplicand 405b may be stored along a second bit line. In addition, in some embodiments, the multipliers 402a, 402b may be received by the controller 109 from a memory other than the memory array 103. For example, the multipliers 402a, 402b may be stored in buffer memory 112 or received from an external source via a system link 115. In another embodiment, each sense amp may have, local to it, a respective register to store its multiplier (or multiplicand), and the corresponding multiplicand (or multiplier) is accessed from the memory array.[0057] FIGS. 4B-4E build on the example of FIG. 4A by showing the multiplication operation as it sequences through each bit position of the multipliers 402a, 402b. FIG. 4B shows the multiplication operation at bit position 1 of the multipliers 402a, 402b. The controller 109 activates the bit lines that store the multipliers 402a, 402b. In this case, the first multiplier 402a is stored along BL 1 and the second multiplier 402b is stored along BL 3. FIG. 4B illustrates the activation of BL 1 and BL 3 by presenting an arrow along these bit lines. The controller 109 also selects the word line corresponding to position 1 of the multipliers 402a, 402b, which, in the example, is WL 1. The activation of WL 1 is shown by presenting an arrow at this word line. The first sense amplifier SA 1 detects a binary “0” and the second sense amplifier SA 2 also detects a binary “0”. When only binary “0s” are detected for all multipliers at their current bit position, the controller 109 records a binary “0” in the current sum value 426 of a result register 423 at the current bit position 429. Based on the presence of only binary “0s” for a particular bit position, the controller bypasses the popcount and accumulation operations. Thereafter, the current bit position increments by one to the next bit position of the multipliers 402a, 402b.[0058] FIG. 4C shows the multiplication operation at bit position 2 of the multipliers 402a, 402b. The controller 109 activates the bit lines that store the multipliers 402a, 402b, which are BL 1 and BL 3. FIG. 4C illustrates the activation of BL 1 and BL 3 by presenting an arrow (with the number 1) along these bit lines.
The controller 109 then selects the word line corresponding to position 2 of the multipliers 402a, 402b, which, in the example, is WL 2. The activation of WL 2 is shown by presenting an arrow at this word line. The first sense amplifier SA 1 detects a binary “1” and the second sense amplifier SA 2 also detects a binary “1”. For each multiplier 402a, 402b that has a binary “1” at the current position, the corresponding multiplicand 405a, 405b is identified and popcount operations are performed on the identified multiplicands 405a, 405b. For example, because the first multiplier 402a yielded a binary “1” at position 2, the bit line for its corresponding multiplicand (e.g., the first multiplicand 405a) is selected. Likewise, because the second multiplier 402b also yielded a binary “1” at position 2, the bit line for its corresponding multiplicand (e.g., the second multiplicand 405b) is selected. FIG. 4C illustrates the activation of BL 2 and BL 4 by presenting an arrow (with the number 2) along these bit lines.[0059] Popcount operations are then performed on the selected multiplicands 405a, 405b by activating the word lines associated with the selected multiplicands. In this case, the popcount operations include activating WL 1, which yields a count of two “1s”, activating WL 2, which yields a count of zero “1s”, activating WL 3, which yields a count of one “1”, and activating WL 4, which yields a count of two “1s”. The popcount result 436 is thus MSB=>[2, 1, 0, 2]<=LSB. Flattening the popcount result 436 into binary produces the number “10110”, which is a binary version of the popcount result that is stored in the operand register 433. The popcount result 439 represents the sum of the selected multiplicands 405a, 405b.[0060] After the popcount result is generated, the controller 109 adds the popcount result 439 to the current sum value 426 stored in the result register 423 to update the current sum value 426 in the result register 423. Moreover, the accumulation operation occurs at bit position 2. This produces a current sum value 426 of “101100.” Thereafter, the current bit position increments by one to the next bit position of the multipliers 402a, 402b.[0061] FIG. 4D shows the multiplication operation at bit position 3 of the multipliers 402a, 402b. The controller 109 again activates the bit lines that store the multipliers 402a, 402b, which are BL 1 and BL 3. FIG. 4D illustrates the activation of BL 1 and BL 3 by presenting an arrow (with the number 1) along these bit lines. The controller 109 then selects the word line corresponding to position 3 of the multipliers 402a, 402b, which, in the example, is WL 3.
The activation of WL 3 is shown by presenting an arrow at this word line. The first sense amplifier SA 1 detects a binary “1” and the second sense amplifier SA 2 detects a binary “0”. For each multiplier that has a binary “1” at the current position (here, the first multiplier 402a), the corresponding multiplicand 405a is identified and popcount operations are performed on the identified multiplicand 405a. For example, because the first multiplier 402a yielded a binary “1” at position 3, the bit line for its corresponding multiplicand (e.g., the first multiplicand 405a) is selected. However, because the second multiplier 402b yielded a binary “0” at position 3, the bit line for its corresponding multiplicand (e.g., the second multiplicand 405b) is deactivated. FIG. 4D illustrates the activation of BL 2 by presenting an arrow (with the number 2) along this bit line.[0062] Popcount operations are then performed on the selected multiplicand 405a by activating the word lines associated with the selected multiplicand. In this case, the popcount operations include activating WL 1, which yields a count of one “1”, activating WL 2, which yields a count of zero “1s,” activating WL 3, which yields a count of zero “1s,” and activating WL 4, which yields a count of one “1”. The popcount result 436 is thus MSB=>[1, 0, 0, 1]<=LSB. Flattening the popcount result 436 into binary produces the number “1001”, which is a binary version of the popcount result that is stored in the operand register 433. The popcount result 439 represents the sum of the selected multiplicands 405a. When only one multiplicand 405a is selected, the sum is equivalent to the value of the multiplicand 405a.[0063] After the popcount result is generated, the controller 109 adds the popcount result 439 to the current sum value 426 stored in the result register 423 to update the current sum value 426 in the result register 423. Moreover, the accumulation operation occurs at bit position 3. This produces a current sum value 426 of “1010000.” Thereafter, the current bit position increments by one to the next bit position of the multipliers 402a, 402b.[0064] FIG. 4E shows the multiplication operation at bit position 4 of the multipliers 402a, 402b. The controller 109 again activates the bit lines that store the multipliers 402a, 402b, which are BL 1 and BL 3. FIG. 4E illustrates the activation of BL 1 and BL 3 by presenting an arrow (with the number 1) along these bit lines. The controller 109 then selects the word line corresponding to position 4 of the multipliers 402a, 402b, which, in the example, is WL 4. The activation of WL 4 is shown by presenting an arrow at this word line. The first sense amplifier SA 1 detects a binary
“0” and the second sense amplifier SA 2 detects a binary “1”. For each multiplier that has a binary “1” at the current position (here, the second multiplier 402b), the corresponding multiplicand 405b is identified and popcount operations are performed on the identified multiplicand 405b. For example, because the first multiplier 402a yielded a binary “0” at position 4, the bit line for its corresponding multiplicand (e.g., the first multiplicand 405a) is deactivated. And, because the second multiplier 402b yielded a binary “1” at position 4, the bit line for its corresponding multiplicand (e.g., the second multiplicand 405b) is selected. FIG. 4E illustrates the activation of BL 4 by presenting an arrow (with the number 2) along this bit line.[0065] Popcount operations are then performed on the selected multiplicand 405b by activating the word lines associated with the selected multiplicand. In this case, the popcount operations include activating WL 1, which yields a count of one “1,” activating WL 2, which yields a count of zero “1s,” activating WL 3, which yields a count of one “1,” and activating WL 4, which yields a count of one “1”. The popcount result 436 is thus MSB=>[1, 1, 0, 1]<=LSB. Flattening the popcount result 436 into binary produces the number “1101”, which is a binary version of the popcount result that is stored in the operand register 433. The popcount result 439 represents the sum of the selected multiplicands 405b.[0066] After the popcount result is generated, the controller 109 adds the popcount result 439 to the current sum value 426 stored in the result register 423 to update the current sum value 426 in the result register 423. Moreover, the accumulation operation occurs at bit position 4. This produces a current sum value 426 of “10111000” (184 in decimal). Because the controller sequenced through all bit positions of the multipliers from least to most significant bit, the multiplication operation is complete: 184 = 6 x 9 + 10 x 13. The value in the result register 423 represents the multiplication result 445, which may also be a dot product result when the operands represent matrix elements. This flow is modeled in the sketch below.
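The flow of FIGS. 4A-4E can be summarized in the following Python sketch (an assumed software model, not the claimed circuitry; dot_product_by_popcount is a hypothetical name). At each multiplier bit position, only the multiplicands whose multiplier holds a binary “1” are selected for the popcount, and the flattened popcount (their sum) is accumulated at that position; the document's positions are 1-indexed while the shifts below are 0-indexed:

    def dot_product_by_popcount(multipliers, multiplicands, width=4):
        result_register = 0
        for position in range(width):
            # Select the multiplicands gated by a "1" in their multiplier.
            selected = [c for m, c in zip(multipliers, multiplicands)
                        if (m >> position) & 1]
            if not selected:  # only "0"s detected: bypass popcount and accumulation
                continue
            popcounts = [sum((c >> wl) & 1 for c in selected) for wl in range(width)]
            operand_register = sum(count << wl for wl, count in enumerate(popcounts))
            result_register += operand_register << position
        return result_register

    # Bit position 2 selects {9, 13}, position 3 selects {9}, and position 4
    # selects {13}, reproducing 184 = 6 x 9 + 10 x 13 from FIG. 4E.
    assert dot_product_by_popcount([6, 10], [9, 13]) == 184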
[0067] FIG. 5 is a flowchart illustrating the functionality of a memory device that performs in-memory multiplication with different multipliers using popcount operations according to various embodiments. The boxes in the flowchart may represent microcode, machine code, firmware, or other software executable by a controller of a memory device 100. The boxes of the flowchart may alternatively represent steps in a method 500. The method may be performed by a memory device 100.[0068] At item 501, the memory device 100 may store one or more multiplicands 405a, 405b in one or more memory arrays. In some embodiments, the memory device 100 may also store the multipliers. The multiplicands 405a, 405b may be stored in a bit-serial configuration along separate bit lines. The bit positions of each multiplicand 405a, 405b may be positioned on the same word lines or on different word lines.[0069] At item 504, the memory device 100 identifies the current bit position of the multipliers 402a, 402b. The memory device 100 may begin with the least significant bit of the multipliers 402a, 402b (e.g., position 1). The current bit position may be tracked and stored in a memory such as, for example, the buffer memory 112. When beginning the multiplication operation, the bit position begins with position 1 and then increments through each bit position of the multipliers 402a, 402b until the most significant bit of the multipliers is handled.[0070] At item 507, the memory device 100 checks the values of the multipliers 402a, 402b at the current bit position to determine whether each is a first predetermined value (e.g., binary “1” or “hi”) or a second predetermined value (e.g., binary “0” or “low”). If the value is the second predetermined value (e.g., binary “0” or “low”) for all multipliers 402a, 402b, the memory device 100, at item 510, records a binary “0” in the result register 423 at the current bit position. The result register 423 stores a current sum value 426 that gets updated until the multiplication operation completes. Thereafter, at item 513, the memory device 100 sequences to the next bit position of the multipliers 402a, 402b. This completes the sequence for handling one bit position.[0071] If any current bit of the multipliers 402a, 402b is equal to a first predetermined value (e.g., binary “1” or “hi”), then the memory device 100, at item 516, selects the multiplicands 405a, 405b that correspond to multipliers having a binary “1” or “hi” value at the current bit position. The controller 109 may select the bit lines associated with the cells that store the selected multiplicands 405a, 405b.[0072] At item 519, the memory device 100 performs popcount operations on the selected multiplicand(s) 405a, 405b. For example, the memory device 100 may activate the bit lines and word lines of the memory cells that store the multiplicands 405a, 405b, and then use a sense amplifier array 212 to count the number of binary “1s.”
[0073] The popcount result 436 may be flattened into a binary number that represents the popcount result 439. Mathematically, the popcount result 439 represents the sum of the selected multiplicands (e.g., treated as summands).[0074] At item 522, the popcount result 439 is stored in an operand register 433.[0075] At item 525, the memory device 100 accumulates a current sum value based on the popcount result 439 and the current bit position of the multipliers 402a, 402b. Because the accumulation operation is performed based on whether any of the values corresponding to the current bits of the multipliers 402a, 402b is a predetermined value, the accumulation operation is selective. For example, the accumulation operation is performed selectively in response to detecting a binary “1” in any of the multipliers 402a, 402b as the bit positions are sequenced. In addition, the popcount result 439 is added to the current sum value at the current bit position. The memory device 100 then proceeds to item 513. If all bit positions of the multipliers have been handled, the multiplication process ends and the current sum value is the multiplication result. An accumulation operation on the current bit position may be performed concurrently while accessing the next multiplier and/or performing a popcount operation.[0076] FIG. 6 illustrates an example networked system 600 that includes a memory device 100, in accordance with some embodiments of the present disclosure. FIG. 6 illustrates example parts of an example of a computing device 602 which is part of the networked system 600. FIG. 6 shows how such computing devices can be integrated into various machines, apparatuses, and systems, such as IoT (Internet of Things) devices, mobile devices, communication network devices and apparatuses (e.g., see base station 630), appliances (e.g., see appliance 640), and vehicles (e.g., see vehicle 650).[0077] The computing device 602 and other computing devices of the networked system 600 (e.g., see computing devices 622a, 622b, 622c, and 622d) can be communicatively coupled to one or more communication networks 620. The computing device 602 includes, for example, a bus 606, a controller 608 (e.g., a CPU), other memory 610, a network interface 612, a storage system 614, other components 616 (e.g., any type of components found in mobile or computing devices, GPS components, Input/Output (I/O) components, various types of user interface components, sensors, a camera, etc.), and a memory device 100.
The other components 616 may also include one or more user interfaces (e.g., GUIs, auditory user interfaces, tactile user interfaces, etc.), displays, different types of sensors, tactile, audio and/or visual input/output devices, additional application-specific memory, one or more additional controllers (e.g., a GPU), or any combination thereof. The bus 606 communicatively couples the controller 608, the other memory 610, the network interface 612, the data storage system 614, and the other components 616, and can couple such components to the memory device 100 in some embodiments. For example, a system link 115 of the memory device 100 may couple to the bus 606.[0078] The computing device 602 includes a computer system that includes at least a controller 608, other memory 610 (e.g., random access memory (RAM), read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random-access memory (SRAM), cross-point or crossbar memory, etc.), the memory device 100, and a data storage system 614, which may communicate with each other via the bus 606 (which can include multiple buses). In some embodiments, the memory device 100 may not communicate over the bus 606.[0079] To put it another way, FIG. 6 includes a block diagram of a computing device 602 that has a computer system in which embodiments of the present disclosure can operate. In some embodiments, the computer system can include a set of instructions for causing a machine to perform the methodologies discussed herein, when executed. In such embodiments, the machine can be connected (e.g., networked via network interface 612) to other machines in a Local Area Network (LAN), an intranet, an extranet, and/or the Internet (e.g., see network(s) 620). The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0080] Controller 608 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a single instruction multiple data (SIMD) processor, a multiple instructions multiple data (MIMD) processor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets.
Controller 608 can also be one or more special-purpose processing devices such as an ASIC, programmable logic such as an FPGA, a digital signal processor (DSP), a network processor, or the like. Controller 608 is configured to execute instructions for performing the operations and steps discussed herein. Controller 608 can further include a network interface device such as network interface 612 to communicate over one or more communication networks (such as network(s) 620).[0081] The data storage system 614 can include a machine-readable storage medium (also known as a computer-readable medium) on which is stored one or more sets of instructions or software embodying any one or more of the methodologies or functions described herein. The data storage system 614 can have execution capabilities such that it can at least partly execute instructions residing in the data storage system. The instructions can also reside, completely or at least partially, within at least one of the other memory 610 and the memory device 100 and/or within the controller 608 during execution thereof by the computer system, with at least one of the other memory 610 and the memory device 100 as well as the controller 608 also constituting machine-readable storage media. The other memory 610 can be or include main memory or system memory of the computing device 602. The other memory 610 and the memory device 100 can have execution capabilities such that they can at least partly execute instructions residing in any memory of the computing device 602.[0082] As mentioned, the networked system 600 includes computing devices, and each of the computing devices can include one or more buses, a controller, a memory, a network interface, a storage system, and other components. Also, each of the computing devices shown in FIG. 6 and described herein can include or be a part of a mobile device or the like, e.g., a smartphone, tablet computer, IoT device, smart television, smart watch, glasses or other smart household appliance, in-vehicle information system, wearable smart device, game console, PC, digital camera, or any combination thereof. As shown, the computing devices can be connected to network(s) 620 that may include a local device network such as Bluetooth or the like, a wide area network (WAN), a local area network (LAN), an intranet, a mobile wireless network such as 4G or 5G, an extranet, the Internet, and/or any combination thereof. In some embodiments, as shown with the connection 619, the memory device 100 can include at least one network interface so that it can communicate separately with other devices via communication
network(s) 620. For example, the system link 115 may couple to the communication network 620. In this respect, a memory module or a memory module system of the memory device 100 may have its own network interface so that such a component can communicate separately with other devices via communication network(s) 620.[0083] Each of the computing devices described herein can be or be replaced by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.[0084] Also, while a single machine is illustrated for the computing device 602 shown in FIG. 6, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform one or more of the methodologies or operations discussed herein. And, each of the illustrated computing devices as well as computing systems can each include at least a bus and/or motherboard, one or more controllers (such as one or more CPUs), a main memory that can include temporary data storage, at least one type of network interface, a storage system that can include permanent data storage, and/or any combination thereof. In some multi-device embodiments, one device can complete some parts of the methods described herein and then send the result of completion over a network to another device such that the other device can continue with other steps of the methods described herein.[0085] While the memory, controller, and data storage parts are shown as single parts, each part should be taken to include one or more parts that can store the instructions and perform their respective operations. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.[0086] Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not
generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.[0087] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.[0088] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing devices, that manipulate and transform data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.[0089] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.[0090] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can
prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.[0091] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.[0092] It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims. |
A semiconductor magnetic memory device has a magnetic tunneling junction formed over a memory cell. The memory cell has a control gate surrounded by a floating gate. The floating gate is coupled to the magnetic tunneling junction through a pinning layer that maintains the magnetic orientation of the lower magnetic layer of the junction. A current through a selected word line, coupled to the control gate, generates a first magnetic field. A current through a cell select line generates a second magnetic field that is orthogonal to the first magnetic field. This changes the magnetic orientation of the upper magnetic layer of the junction to lower its resistance, thus allowing a write/erase voltage on a program/erase line to program/erase the floating gate. |
What is claimed is: 1. A semiconductor magnetic memory cell comprising: a memory cell having a floating gate formed around a control gate for generating a first magnetic field; and a magnetic tunneling junction coupled to the memory cell for allowing programming and erasing of the floating gate in response to the first magnetic field and a second magnetic field. 2. The memory cell of claim 1 wherein the magnetic tunneling junction is coupled to a program/erase line for programming the floating gate in response to a write voltage and the first magnetic field. 3. The memory cell of claim 2 wherein the magnetic tunneling junction is coupled to the program/erase line through an electrode. 4. The memory cell of claim 2 and further including a cell select line formed over the program/erase line for generating the second magnetic field orthogonally to the first magnetic field such that the first and second magnetic fields reduce a resistance of the magnetic tunneling junction to a greater extent than just the first magnetic field. 5. The memory cell of claim 1 wherein the magnetic tunneling junction is comprised of a tunneling layer coupled between first and second magnetic layers. 6. The memory cell of claim 5 and further including a pinning layer, coupled between the first magnetic layer and the floating gate, for controlling direction of magnetization of the first magnetic layer. 7. The memory cell of claim 1 wherein the control gate is comprised of first and second layers. 8. The memory cell of claim 7 wherein the first layer is comprised of polysilicon and the second layer is comprised of polycide. 9. The memory cell of claim 1 and further including a magnetic memory array organized in rows and columns, the memory array comprising: a plurality of word lines coupled to rows of memory cells; a plurality of cell select lines formed in an orthogonal direction to the plurality of word lines, each selected cell select line generating a first magnetic field in response to a first current; and a plurality of magnetic memory cells, each memory cell comprising: a control gate coupled to a first word line for generating a second magnetic field in response to a second current on the first word line; a floating gate, formed around the control gate, for storing a charge; and a magnetic tunneling junction coupled to the floating gate for allowing programming and erasing of the floating gate in response to the first and second magnetic fields. 10. The array of claim 9 wherein the memory array is comprised of one of a NAND or a NOR architecture. 11. The array of claim 9 and further including a program/erase line formed between the magnetic tunneling junction and the cell select line, the program/erase line adapted to be biased with one of a program or an erase voltage in response to a respective program operation or erase operation. 12. 
A method for fabricating a magnetic memory cell, the method comprising: forming a first polysilicon layer; forming an oxide-nitride-oxide layer over the first polysilicon layer; forming a control gate over the oxide-nitride-oxide layer; etching down to the first polysilicon layer to form individual control gate stacks; forming an insulator layer over the control gate stacks and exposed first polysilicon layer between the stacks; performing an etch process to remove predetermined portions of the insulator layer; forming a polysilicon blanket over the etched insulator layer; etching the polysilicon blanket to form individual floating gates; forming a pinning layer over each of the floating gates; and forming a magnetic tunnel junction over each of the pinning layers. 13. The method of claim 12 and further including forming an electrode over the magnetic tunnel junction. 14. The method of claim 12 and further comprising forming a tunnel dielectric layer prior to forming the first polysilicon layer. 15. The method of claim 12 wherein forming the control gate comprises: forming a second polysilicon layer over the oxide-nitride-oxide layer; and forming a polycide layer over the second polysilicon layer. 16. The method of claim 12 wherein the insulator layer is comprised of nitride. 17. The method of claim 12 wherein forming the magnetic tunnel junction comprises: forming a first magnetic layer over the pinning layer; forming a tunneling barrier over the first magnetic layer; and forming a second magnetic layer over the tunneling barrier. 18. A method for programming a memory cell comprising a magnetic tunneling junction, the method comprising: generating a first magnetic field in response to a first current through a selected word line coupled to the memory cell; generating a second magnetic field in response to a second current through a cell select line located over the memory cell and orthogonal to the selected word line; and biasing a program/erase line with a write voltage. 19. A method for erasing a memory cell comprising a magnetic tunneling junction, the method comprising: generating a first magnetic field in response to a first current through a selected word line coupled to the memory cell; generating a second magnetic field in response to a second current through a cell select line located over the memory cell and orthogonal to the selected word line; and biasing a program/erase line with an erase voltage. 20. The method of claim 19 wherein the erase voltage is a positive voltage. |
SEMICONDUCTOR MAGNETIC MEMORY TECHNICAL FIELD OF THE INVENTION The present invention relates generally to memory devices and in particular the present invention relates to semiconductor magnetic memory architecture. BACKGROUND OF THE INVENTION Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and flash memory. Flash memory devices have developed into a popular source of non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. Common uses for flash memory include personal computers, personal digital assistants (PDAs), digital cameras, and cellular telephones. Program code and system data such as a basic input/output system (BIOS) are typically stored in flash memory devices for use in personal computer systems. As the performance and complexity of electronic systems increase, the speed of the system memory needs to increase as well. However, one of the disadvantages of flash memory is the slow programming and erase speeds. Typical prior art programming uses either Fowler-Nordheim tunneling or hot electron injection to move a charge from a channel in the substrate onto the floating gate. The mechanism by which they tunnel through the oxide/insulator layer damages the layer. This limits the number of times that a flash memory device can be programmed reliably before the dielectric wears out and loses its insulating properties. The flash road map requires a memory cell structure change due to the scaling limitations of the floating gate technology. The floating gate stack has a problem with capacitive coupling to neighboring cells causing disturb problems. By lowering the stack height, the capacitance can be reduced. One approach is to eliminate the floating gate and use a SONOS approach to storing charge in the dielectric layer itself. A second approach that enhances the SONOS structure is to add nano-crystals under the word line poly. For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for a memory device that operates faster and has a longer life. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 shows a cross-sectional view of one embodiment of a semiconductor magnetic memory device of the present invention. Figure 2 shows a top plan view of one embodiment of the layout of the word lines/select lines of the present invention. Figure 3 shows a cross-sectional view of one embodiment of a fabrication step in accordance with the magnetic memory device of Figure 1. Figure 4 shows a cross-sectional view of one embodiment of another fabrication step in accordance with the magnetic memory device of Figure 1. Figure 5 shows a cross-sectional view of one embodiment of another fabrication step in accordance with the magnetic memory device of Figure 1. Figure 6 shows a cross-sectional view of one embodiment of another fabrication step in accordance with the magnetic memory device of Figure 1. 
Figure 7 shows a cross-sectional view of one embodiment of another fabrication step in accordance with the magnetic memory device of Figure 1. Figure 8 shows a cross-sectional view of one embodiment of another fabrication step in accordance with the magnetic memory device of Figure 1. Figure 9 shows a cross-sectional view of one embodiment of a programming operation of the present invention. Figure 10 shows a cross-sectional view of one embodiment of the programming operation of the present invention. Figure 11 shows a cross-sectional view of one embodiment of an erase method of the present invention. Figure 12 shows a cross-sectional view of one embodiment of the erase method of the present invention. Figure 13 shows a cross-sectional view of one embodiment of the magnetic memory cell in a NOR configuration. Figure 14 shows a top plan view of one embodiment of a NOR layout of control lines in accordance with the embodiment of Figure 13. Figure 15 shows a block diagram of an electronic memory system of the present invention. Figure 16 shows a block diagram of one embodiment of a memory module of the present invention. Figure 17 shows a plot of time versus write voltage Vwrite including the refresh cycle for a DRAM embodiment of the present invention. DETAILED DESCRIPTION In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof. The terms wafer or substrate used in the following description include any base semiconductor structure. Both are to be understood as including silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, thin film transistor (TFT) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor structure, as well as other semiconductor structures well known to one skilled in the art. Furthermore, when reference is made to a wafer or substrate in the following description, previous process steps may have been utilized to form regions/junctions in the base semiconductor structure, and the terms wafer or substrate include the underlying layers containing such regions/junctions. The semiconductor magnetic memory device of the present invention is comprised of a memory array that incorporates floating gate memory cells with a magnetic tunnel junction (MTJ) layer. The MTJ layer uses the giant magnetoresistance effect to turn on and off each cell, thus allowing access to the floating gate for storage of a charge. Figure 1 illustrates a cross-sectional view of one embodiment of the structure of the magnetic memory device of the present invention. The embodiment of Figure 1 is for a NAND architecture memory array. In the interest of clarity, only one cell structure will be described. 
However, each cell of the magnetic memory array of the present invention is constructed in substantially the same way. Each cell 100 is comprised of a floating gate 120 that wraps around the control gate 101, 102. In one embodiment, the floating gate 120 is comprised of polysilicon. Alternate embodiments can use other materials. A tunnel insulator layer 123, in one embodiment, is formed of an oxide material under the floating gate. Alternate embodiments can use other insulator materials. The control gate/word line 101, 102 is located within the floating gate 120. The control gate is comprised of two layers 101, 102. The upper layer, in one embodiment, is a tungsten silicide (WSix) layer 101. Tungsten silicide may also be referred to in the art as polycide. The lower layer 102, in one embodiment, is comprised of polysilicon. Alternate embodiments can use other materials for either of these layers. The control gates of each cell are coupled together by the word lines of the array. Figure 1 shows arrows coming out of the plane of the paper to indicate that the axis of the word lines extends into and out of the plane of the figure. An insulating layer of nitride 122 is formed around the control gate 101, 102. This layer insulates the control gate 101, 102 from the floating gate 120. An oxide-nitride-oxide layer 121 is formed at the bottom of the control gate 101, 102 to further separate the control gate 101, 102 and the floating gate 120. The MTJ can be comprised of an antiferromagnetic layer separated from a ferromagnetic layer by a relatively thin dielectric material. The dielectric should be thin enough to allow spin dependent electron tunneling while still forming a robust barrier to electrons that are not spin polarized. Also, it is desirable that the materials used for the barrier do not translate crystalline structure or contribute magnetic properties to the MTJ. The MTJ layer is formed over the floating gate 120 of each cell 100. Immediately adjacent to the floating gate is the pinning layer 103. This layer 103 can be comprised of a synthetic antiferromagnet such as combinations of manganese and a metal. Combinations can include iridium manganese, platinum manganese, iron manganese, and chromium platinum manganese. The pinning layer 103 is responsible for fixing the magnetic orientation of the lower magnetic layer 106. Alternate embodiments can use other materials and/or material combinations for this layer 103. The lower magnetic layer 106 is therefore the fixed magnetic layer 106 while the upper magnetic layer 107 changes magnetic orientation in response to the current flow as described subsequently with reference to the programming and erasing methods of Figures 10 - 13. Both the fixed magnetic layer 106 and the free magnetic layer 107 can be comprised of high susceptibility magnetic material such as cobalt, iron, or nickel. Additionally, combinations of these materials can be used to enhance their magnetic properties such as nickel iron and cobalt iron. Niobium, hafnium, and boron may be used in varying combinations to prevent migration or dilution of the magnetic materials at the interface boundaries. Alternate embodiments can use other materials and/or material combinations for these layers. A tunneling barrier 105 is formed between the fixed magnetic layer 106 and the free magnetic layer 107. The tunneling barrier 105 is a relatively thin dielectric film that separates the two magnetic layers 106, 107. 
In one embodiment, the tunneling barrier 105 is comprised of an oxide material such as aluminum oxide, titanium oxide, or manganese oxide. Additionally, materials such as silicon dioxide or hafnium oxide can be used. Alternate embodiments can use other materials for this layer 105 that do not introduce undesirable magnetic properties. An optional layer of relatively thin ruthenium can be inserted between the antiferromagnetic layer and the ferromagnetic layer in order to enhance the magnetic coupling of the synthetic antiferromagnet. Other oscillatory exchange materials can include chromium, rhodium, iridium, copper, and their alloys. The free magnetic layer 107 makes contact with a top electrode 105. The electrode 105 provides contact between the MTJ stack 103, 104, 106, 107 and the program/erase line 109. In one embodiment, the electrode is comprised of a metal material. Alternate embodiments can use other materials. The program/erase line 109 is a single line that is orthogonal to the word lines of the array and ties together the series string columns of the array. The cell select line 110 is formed parallel to and above the program/erase line 109. These lines 109, 110 are separated by an insulating material 111 such as an oxide material. The cell select line 110 is responsible for switching the magnetization of the free magnetic layer 107 of the MTJ layer. The orthogonality of the cell select lines 110 with the word lines 101, 102 provides the necessary selectivity required to program and erase each individual cell. The operation of the program/erase line 109 and the cell select line 110 is discussed further subsequently with reference to the program and erase embodiments of Figures 10 - 13. Cross-talk between cells 100, 160 may be substantially reduced or eliminated by the addition of a thin layer of magnetic material that acts as a magnetic spacer 150. The spacers enhance the ability to reduce both cell spacing and cell size. This strategy uses high permeability materials such as nickel, iron, cobalt, or combinations of these materials to confine the magnetic flux produced by the MTJ devices. High permeability "flux keepers" are often applied to both increase the switching efficiency of the MTJ as well as reduce unwanted stray magnetic fields that may affect the performance or readability of adjacent bits. Figure 2 illustrates a top plan view of one embodiment of the layout for the word lines and cell select/program erase lines of the present invention. The word lines 101 are shown running in the "y" direction. These lines include both the upper polycide layer 101 of the word line and the lower polysilicon layer 102. The cell select line 110 is shown running perpendicular to the word lines 101 in the "x" direction. The cell select line 110 is over the program/erase line 109. The intersection of each of the word lines and each of the select/program lines 101, 110 is over a memory cell 100 as described previously and shown in Figure 1. The following fabrication steps and materials for the magnetic memory device of the present invention are for purposes of illustration only. Alternate embodiments can use other materials than those disclosed and different fabrication steps in forming the structure of the present invention. Figure 3 illustrates a cross-sectional view of one embodiment of fabrication steps for the magnetic memory device of the present invention. A polysilicon layer 301 is formed over a tunneling layer 310. In one embodiment, the tunneling layer 310 is an oxide. 
Alternate embodiments can use other insulating materials. The floating gate cell structure is comprised of an oxide 320 layer formed over the polysilicon layer 301. A nitride layer 321 is formed over the oxide layer. A second oxide layer 322 is formed over the nitride layer 321. These three layers 320 - 322 together form the ONO 320 insulator structure of the floating gate cell. The ONO can be replaced by a high-K or a combination of high-K and ONO layers to increase coupling capacitance between the word line and the substrate, thereby reducing the operational voltages of these memory cells. A polysilicon layer 303 is formed over the ONO layer 302 and a polycide (WSix) layer 305 is formed over the poly layer 303. Together these layers 303, 305 form the control gate/word line structure of the floating gate cell. Figure 4 illustrates a cross-sectional view of one embodiment of additional fabrication steps for the magnetic memory device of the present invention. In this figure, a nitride spacer 400 is formed over the structure of Figure 3. An in situ nitride and polysilicon etch is then performed in Figure 5. This step removes portions of the nitride layer 400 of Figure 4 and portions of the polysilicon layer 301 between each floating gate structure 300. This exposes the oxide layer 310 between each floating gate structure 300. A blanket polysilicon 600 deposition is then performed as shown in Figure 6. Figure 7 shows the results of an in situ polysilicon etch and tunnel oxide etch of the tunnel oxide layer 310. This step forms the individual floating gates 700 that surround each cell structure. The tunnel oxide etch isolates each tunnel oxide layer for each individual cell. In Figure 8, a structure insulator 800 is then formed between each of the floating gate structures. This may be a low-k insulator. Also in this step, a nitride barrier 801 may be formed over the structure to act as a chemical mechanical planarization (CMP) stop layer for future steps. A magnetic tunnel junction deposition and patterning is then performed to form the MTJ junction over each floating gate cell structure. Electrode deposition and program/erase line deposition with patterning forms the individual electrodes and program/erase line over each cell. The cell select line is then deposited over the oxide insulating layer and patterned, thus resulting in the structure illustrated in Figure 1. Figure 9 illustrates a schematic cross-sectional view of one embodiment of a programming operation of the magnetic memory device of the present invention. The programming is performed by isolating a desired MTJ stack of the cell to be programmed. This is accomplished by passing currents through the specific cell select line 901 and word line 903 that are coupled to the cell to be programmed. The current through the cell select line 901 is denoted as I1 and the current through the word line is denoted as I2. Currents I1 and I2 each create magnetic fields in the directions as shown in Figure 9. The magnetic field created by I1 causes the magnetization in the free magnetic layer 910 to orient in the direction of magnetization of the lower pinned layer 911. This is referred to in the art as the "easy axis". This axis is illustrated in Figure 10. Current I2 causes the magnetization of the free magnetic layer 910 to orient orthogonal to the direction of the pinned layer 911. 
This axis is referred to as the "hard axis" as illustrated in Figure 10. This combination of easy and hard axis orientation of the magnetic domains in the free layer 910 causes the resistance of the MTJ stack to drop at lower magnetic fields as compared to typical prior art stacks where the domains are oriented parallel/anti-parallel to each other. Additional information regarding orthogonal programming of easy and hard axes can be found in Tehrani et al., Proc. IEEE 91(5), (2003), pp. 703. As shown in Figure 10, a voltage Vwrite is then applied on the program/erase line 905. This causes electrons 1003 to pass to the floating gate 1005 as a stored charge. The charge, Q, can be determined by Q = (Vwrite/RMTJ)·Δt, where RMTJ is the resistance of the stack and Δt is the time of the programming pulse. In one embodiment, the programming time can be measured in nanoseconds, depending on Vwrite and the resistance of the MTJ. In one embodiment, Vwrite is a programming pulse with an amplitude in the range of 3V to 10V. The Vwrite voltages are substantially lower than what current flash memory parts use. This is a direct result of using an MTJ to program the cells instead of the power intensive Fowler-Nordheim approach or channel hot electron injection. Vwrite can start at a lower write voltage and increase incrementally by a step voltage for any subsequent programming pulses required to properly program the non-volatile cell. By modulating the time Δt, a variety of charges can be stored into the magnetic memory device of the present invention. Thus a plurality of threshold voltages can be programmed into the cell simply by opening the MTJ for a predetermined time period. This allows the memory cells of the present invention to operate as either single level cells or multiple level cells. Since neither Fowler-Nordheim nor hot electron injection is used for programming the cells of the present invention, the tunnel oxide is not damaged during programming. Thus, the reliability of the tunnel oxide is increased over a typical flash memory device and the number of program cycles is infinite. Figures 11 and 12 illustrate schematic cross-sectional views of one embodiment of a method for erasing the magnetic memory devices of the present invention. The erase operation is substantially similar to the program operation as previously described. One difference is that the program/erase line 1101 is grounded or biased at a voltage level that is only slightly positive. In the same manner as programming, the resistance of the MTJ 1102 is reduced by the magnetic fields generated by currents I1 through the cell select line 1100 and I2 through the word line 1105. The ground or slightly positive biasing of the program/erase line 1101 attracts the electrons 1203 from the floating gate 1201. Reading the memory cells of the present invention is accomplished by a current flowing in the opposite direction than the program/erase operations. This "raises" the MTJ to its highest resistance state and prevents any leakage of charge. A stored charge on the floating gate of the memory cell causes a Vt shift that, with the proper combination of voltage on the substrate, drain region, and source region, will cause it to be read as a logical "0" or a logical "1". Similar techniques can be used to read the states of multiple level cells. In one embodiment, to read a single level cell, the word line is biased at 4.5V and the cell select line is biased at ground potential. The substrate is also at ground potential. 
The digit line, which is also referred to as a bit line, is coupled to the drain regions and is biased at 0.1 V. The program/erase line is left floating. The above embodiments illustrated a NAND architecture memory array. The magnetic memory cells of the present invention can also be incorporated into a NOR architecture memory array as illustrated in Figures 13 and 14. Figure 13 illustrates a cross-sectional view of a NOR embodiment of the magnetic non-volatile memory cells of the present invention. This view is across the active areas. The cell structure 1300 is substantially similar to that described previously. In this embodiment, the cell select line 1310 acts as a digit line. It is coupled to digit line contacts 1307 that couple the cell's drain regions 1305 to the select line 1310. In this view, the digit line contact 1307 is formed behind the program/erase line 1309. A common source line 1303 is formed in the substrate between rows of memory cells. In this embodiment, a low-k inter-level dielectric 1304 is formed between cells and over the common source lines 1303. With the help of refresh circuitry that is used in DRAMs and is well known in the art, the NOR embodiment of the semiconductor magnetic memory can act as a DRAM memory array as well. This can be the case where the MTJs are sufficiently thin and allow charge to bleed through them via direct tunneling mechanisms. As a result, the memory cell loses charge from its floating gate. This then needs to be replenished with the help of a refresh cycle. Thus, the memory cell behaves as a volatile or non-volatile memory, depending on the MTJ "OFF" state leakage behavior. The idea of refresh is illustrated in Figure 17. Figure 17 illustrates a plot of time versus the write voltage, Vwrite. Vt is the threshold voltage dividing the logical "0" state from the logical "1" state. Figure 14 illustrates a top plan view of one embodiment of a layout of the NOR array of the present invention. The layout is comprised of the drain regions 1305 and common source lines 1303 as illustrated in the embodiment of Figure 13. Also, the program/erase lines 1309 and underlying digit lines 1310 are shown. As in the NAND embodiment, a memory cell 1400 is formed at the intersection of each word line 1401 with the program/erase lines 1309 and digit lines 1310. Figure 15 illustrates a functional block diagram of a memory system 1520 comprising a memory device 1500 coupled to a processor 1510. The processor 1510 may be a microprocessor or some other type of controlling circuitry. The memory system 1520 can be made up of separate integrated circuits, or both the processor 1510 and memory device 1500 can be on the same integrated circuit. The memory device 1500 has been simplified to focus on features of the memory that are helpful in understanding the present invention. The memory device includes an array of memory cells 1530 incorporating the magnetic memory cells of the present invention. The memory array 1530 can be a random access memory array (RAM) such as a dynamic random access memory array (DRAM), a flash memory array, or some other memory technology. The memory cells can be nonvolatile flash memory cells, volatile memory cells, or a combination of volatile and nonvolatile cells. The memory array 1530 is arranged in banks of rows and columns. The control gates of each row of memory cells are coupled with a word line while the drain regions of the memory cells are coupled to bit lines. The source regions of the memory cells are coupled to source lines. 
As is well known in the art, the connection of the cells to the bit lines and source lines depends on whether the array is a NAND architecture, a NOR architecture, an AND architecture or some other memory array architecture. A CEL_SEL driver circuit 1556 is coupled to the address circuitry to generate the currents required for the CEL_SEL lines of the memory array 1530. The output of the CEL_SEL circuit 1556 is coupled to the CEL_SEL lines that have been previously described. In a NAND array, an address buffer circuit 1540 is provided to latch address signals provided over I/O connections 1562 through the I/O circuitry 1560. Address signals are received and decoded by a row decoder 1544 and a column decoder 1546 to access the memory array 1530. The word line/row decoder 1544 can be a current source as well as a voltage source. It will be appreciated by those skilled in the art that, with the benefit of the present description, the number of address input connections depends on the density and architecture of the memory array 1530. That is, the number of addresses increases with both increased memory cell counts and increased bank and block counts. The memory integrated circuit 1500 reads data in the memory array 1530 by sensing voltage or current changes in the memory array columns using sense/buffer circuitry 1550. The sense/buffer circuitry, in one embodiment, is coupled to read and latch a row of data from the memory array 1530. Data input and output buffer circuitry 1560 is included for bi-directional data communication over the I/O connections 1562 with the processor 1510. Write circuitry 1555 is provided to write data to the memory array. Control circuitry 1570 decodes signals provided on control connections 1572 from the processor 1510. These signals include chip enable signals, write enable signals, and address latch signals that are used to control the operations on the memory array 1530, including data read, data write, and erase operations. In one embodiment, the control circuitry 1570 is responsible for executing the programming, erase, and read operations of the present invention. The control circuitry 1570 may be a state machine, a sequencer, or some other type of controller. The memory device illustrated in Figure 15 has been simplified to facilitate a basic understanding of the features of the memory. A more detailed understanding of the internal circuitry and functions of flash memories is known to those skilled in the art. Figure 16 is an illustration of an exemplary memory module 1600. Memory module 1600 is illustrated as a memory card, although the concepts discussed with reference to memory module 1600 are applicable to other types of removable or portable memory, e.g., USB flash drives, and are intended to be within the scope of "memory module" as used herein. In addition, although one example form factor is depicted in Figure 16, these concepts are applicable to other form factors as well. In some embodiments, the memory module 1600 includes a housing 1605 (as depicted) to enclose one or more memory devices 1610, though such a housing is not essential to all devices or device applications. At least one memory device 1610 is a nonvolatile memory including or adapted to perform elements of the invention. Where present, the housing 1605 includes one or more contacts 1615 for communication with a host device. Examples of host devices include digital cameras, digital recording and playback devices, PDAs, personal computers, memory card readers, interface hubs and the like. 
For some embodiments, the contacts 1615 are in the form of a standardized interface. For example, with a USB flash drive, the contacts 1615 might be in the form of a USB Type-A male connector. For some embodiments, the contacts 1615 are in the form of a semi-proprietary interface, such as might be found on COMPACTFLASH memory cards licensed by SANDISK Corporation, MEMORYSTICK memory cards licensed by SONY Corporation, SD SECURE DIGITAL memory cards licensed by TOSHIBA Corporation and the like. In general, however, contacts 1615 provide an interface for passing control, address and/or data signals between the memory module 1600 and a host having compatible receptors for the contacts 1615. The memory module 1600 may optionally include additional circuitry 1620 that may be one or more integrated circuits and/or discrete components. For some embodiments, the additional circuitry 1620 may include a memory controller for controlling access across multiple memory devices 1610 and/or for providing a translation layer between an external host and a memory device 1610. For example, there may not be a one-to-one correspondence between the number of contacts 1615 and a number of I/O connections to the one or more memory devices 1610. Thus, a memory controller could selectively couple an I/O connection (not shown in Figure 16) of a memory device 1610 to receive the appropriate signal at the appropriate I/O connection at the appropriate time or to provide the appropriate signal at the appropriate contact 1615 at the appropriate time. Similarly, the communication protocol between a host and the memory module 1600 may be different than what is required for access of a memory device 1610. A memory controller could then translate the command sequences received from a host into the appropriate command sequences to achieve the desired access to the memory device 1610. Such translation may further include changes in signal voltage levels in addition to command sequences. The additional circuitry 1620 may further include functionality unrelated to control of a memory device 1610, such as logic functions as might be performed by an ASIC (application specific integrated circuit). Also, the additional circuitry 1620 may include circuitry to restrict read or write access to the memory module 1600, such as password protection, biometrics or the like. The additional circuitry 1620 may include circuitry to indicate a status of the memory module 1600. For example, the additional circuitry 1620 may include functionality to determine whether power is being supplied to the memory module 1600 and whether the memory module 1600 is currently being accessed, and to display an indication of its status, such as a solid light while powered and a flashing light while being accessed. The additional circuitry 1620 may further include passive devices, such as decoupling capacitors to help regulate power requirements within the memory module 1600. CONCLUSION In summary, the embodiments of the present invention provide a semiconductor magnetic memory device incorporating a magnetic tunneling junction to control access to a memory cell structure. 
The memory cell uses a floating gate that wraps around the control gate/word line to enable the floating gate to directly contact the lower layer of the MTJ. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations of the invention will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations of the invention. It is manifestly intended that this invention be limited only by the following claims and equivalents thereof. |
Apparatuses, systems, and techniques to determine one or more memory address values corresponding to one or more computing functions provided by one or more application programming interfaces to facilitate parallel computing. In at least one embodiment, one or more application programming interfaces to facilitate parallel computing determine one or more memory address values based, at least in part, on one or more function calls to one or more functions provided by said one or more application programming interfaces to facilitate parallel computing using one or more parallel processing units, such as a graphics processing unit. |
CLAIMSWHAT IS CLAIMED IS:1. A processor comprising: one or more circuits to perform an application programming interface (API) to identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.2. The processor of claim 1, wherein the API is to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating a location in memory of one or more instructions of a function based, at least in part, on a version of the function indicated to the API.3. The processor of claim 1, wherein the API is to receive one or more data values to indicate the one or more versions.4. The processor of claim 1, wherein the API is to receive one or more first data values to indicate a base name and one or more second data values to indicate the one or more versions.5. The processor of claim 1, wherein the one or more libraries are runtime libraries to be performed by the one or more circuits.6. The processor of claim 1, wherein the one or more libraries are drivers to be performed by the one or more circuits.7. A system comprising: one or more processors to perform an application programming interface (API) to identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.8. The system of claim 7, wherein the API is to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating one or more memory locations
of one or more instructions to perform the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the API.9. The system of claim 7, further comprising one or more data values indicating a base name and version number to be used by the API to identify the one or more versions.10. The system of claim 7, wherein the API is to receive one or more parameters comprising data to indicate at least a name value and a numerical value, the name value and the numerical value to be used by the API to identify the one or more versions of the one or more portions of the one or more libraries.11. The system of claim 7, wherein the one or more libraries are drivers to be performed by the one or more processors.12. The system of claim 7, wherein the one or more libraries are runtime libraries to be performed by the one or more processors.13. A machine-readable medium having stored thereon one or more application programming interfaces (APIs), which if performed at least in part by one or more processors, cause the one or more processors to at least: identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the one or more APIs.14. The machine-readable medium of claim 13, further comprising one or more instructions that, if performed by the one or more processors, cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the one or more APIs, the data values comprising information to indicate a name usable to identify the one or more versions.15. The machine-readable medium of claim 13, further comprising one or more instructions that, if performed by the one or more processors, cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the one or more APIs, the data values comprising information to indicate a numerical value usable to identify the one or more versions.16. The machine-readable medium of claim 13, wherein the one or more APIs are to identify the one or more versions based, at least in part, on one or more parameters indicated to the one or more APIs.17. The machine-readable medium of claim 13, wherein the one or more APIs are to cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating a location in memory of one or more instructions.18. The machine-readable medium of claim 13, wherein the one or more libraries are drivers to be performed by the one or more processors.19. A method comprising: identifying, in response to an application programming interface (API), one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.20. The method of claim 19, wherein the one or more versions are to be identified based, at least in part, on one or more parameters to the API, the one or more parameters comprising data to indicate at least a string usable to identify the one or more versions.21. The method of claim 19, wherein the one or more versions are to be identified based, at least in part, on one or more parameters to the API, the one or more parameters comprising data to indicate at least a numerical value usable to identify the one or more versions.22. 
The method of claim 19, further comprising identifying the one or more versions by indicating a location in memory of one or more instructions of the one or more versions of one or more portions of one or more libraries based, at least in part, on one or more data values indicated to the API.23. The method of claim 19, wherein the one or more portions comprise one or more sets of instructions to be performed by one or more software programs in conjunction with the API.24. The method of claim 19, wherein the one or more libraries are runtime libraries comprising instructions that, if executed, perform the API.25. The method of claim 19, wherein the one or more libraries are a driver and the driver comprises one or more instructions to perform the API. |
APPLICATION PROGRAMMING INTERFACE TO IDENTIFY FUNCTION VERSIONS CLAIM OF PRIORITY[0001] This application claims the benefit of U.S. Provisional Application No. 63/175,013 entitled "ENHANCEMENTS TO API FUNCTION ADDRESS QUERIES," filed April 14, 2021, the entire contents of which is incorporated herein by reference. FIELD[0002] At least one embodiment pertains to processing resources used to execute one or more computing functions provided by one or more application programming interfaces to facilitate parallel computing. For example, one or more application programming interfaces to facilitate parallel computing determine one or more memory address values based, at least in part, on one or more function calls to one or more functions provided by said one or more application programming interfaces to facilitate parallel computing according to various novel techniques described herein. BACKGROUND[0003] Programming code is often reused in different computer programs. However, over time, the code may be updated for various reasons, such as performance, hardware compatibility, and/or to take advantage of new hardware features. As a result, reusing code for a particular application can be complex and potentially error prone due to the complexity of various versions of code being available. BRIEF DESCRIPTION OF THE DRAWINGS[0004] FIG. 1 is a block diagram illustrating one or more application programming interfaces (APIs) or API functions provided by a driver and/or runtime to be performed as a result of invocation by a software program, in accordance with at least one embodiment;[0005] FIG. 2A is a block diagram illustrating a system loader that exposes one or more APIs, in accordance with at least one embodiment;
[0006] FIG. 2B is a block diagram illustrating a system loader that does not expose APIs, in accordance with at least one embodiment;[0007] FIG. 3 illustrates a process to query one or more libraries for one or more memory locations of one or more APIs or API functions, in accordance with at least one embodiment;[0008] FIG. 4 illustrates an exemplary data center, in accordance with at least one embodiment;[0009] FIG. 5 illustrates a processing system, in accordance with at least one embodiment;[0010] FIG. 6 illustrates a computer system, in accordance with at least one embodiment;[0011] FIG. 7 illustrates a system, in accordance with at least one embodiment;[0012] FIG. 8 illustrates an exemplary integrated circuit, in accordance with at least one embodiment;[0013] FIG. 9 illustrates a computing system, according to at least one embodiment;[0014] FIG. 10 illustrates an APU, in accordance with at least one embodiment;[0015] FIG. 11 illustrates a CPU, in accordance with at least one embodiment;[0016] FIG. 12 illustrates an exemplary accelerator integration slice, in accordance with at least one embodiment;[0017] FIGS. 13A-13B illustrate exemplary graphics processors, in accordance with at least one embodiment;[0018] FIG. 14A illustrates a graphics core, in accordance with at least one embodiment;[0019] FIG. 14B illustrates a GPGPU, in accordance with at least one embodiment;[0020] FIG. 15A illustrates a parallel processor, in accordance with at least one embodiment;[0021] FIG. 15B illustrates a processing cluster, in accordance with at least one embodiment;[0022] FIG. 15C illustrates a graphics multiprocessor, in accordance with at least one embodiment;[0023] FIG. 16 illustrates a graphics processor, in accordance with at least one embodiment;
[0024] FIG. 17 illustrates a processor, in accordance with at least one embodiment;[0025] FIG. 18 illustrates a processor, in accordance with at least one embodiment;[0026] FIG. 19 illustrates a graphics processor core, in accordance with at least one embodiment;[0027] FIG. 20 illustrates a PPU, in accordance with at least one embodiment;[0028] FIG. 21 illustrates a GPC, in accordance with at least one embodiment;[0029] FIG. 22 illustrates a streaming multiprocessor, in accordance with at least one embodiment;[0030] FIG. 23 illustrates a software stack of a programming platform, in accordance with at least one embodiment;[0031] FIG. 24 illustrates a CUDA implementation of a software stack of FIG. 23, in accordance with at least one embodiment;[0032] FIG. 25 illustrates a ROCm implementation of a software stack of FIG. 23, in accordance with at least one embodiment;[0033] FIG. 26 illustrates an OpenCL implementation of a software stack of FIG. 23, in accordance with at least one embodiment;[0034] FIG. 27 illustrates software that is supported by a programming platform, in accordance with at least one embodiment;[0035] FIG. 28 illustrates compiling code to execute on programming platforms of FIGS. 23 - 26, in accordance with at least one embodiment;[0036] FIG. 29 illustrates in greater detail compiling code to execute on programming platforms of FIGS. 23 - 26, in accordance with at least one embodiment;[0037] FIG. 30 illustrates translating source code prior to compiling source code, in accordance with at least one embodiment;[0038] FIG. 31A illustrates a system configured to compile and execute CUDA source code using different types of processing units, in accordance with at least one embodiment;[0039] FIG. 31B illustrates a system configured to compile and execute CUDA source code of FIG. 31A using a CPU and a CUDA-enabled GPU, in accordance with at least one embodiment;
[0040] FIG. 31C illustrates a system configured to compile and execute CUDA source code of FIG. 31A using a CPU and a non-CUDA-enabled GPU, in accordance with at least one embodiment;[0041] FIG. 32 illustrates an exemplary kernel translated by CUDA-to-HIP translation tool of FIG. 31C, in accordance with at least one embodiment;[0042] FIG. 33 illustrates non-CUDA-enabled GPU of FIG. 31C in greater detail, in accordance with at least one embodiment;[0043] FIG. 34 illustrates how threads of an exemplary CUDA grid are mapped to different compute units of FIG. 33, in accordance with at least one embodiment; and[0044] FIG. 35 illustrates how to migrate existing CUDA code to Data Parallel C++ code, in accordance with at least one embodiment.DETAILED DESCRIPTION[0045] FIG. 1 is a block diagram illustrating one or more application programming interfaces (APIs) or API 110 functions 112, 114, 116, 118 provided by a driver and/or runtime 104 to be performed as a result of invocation by a software program 102, in accordance with at least one embodiment.[0046] In at least one embodiment, APIs 110 are sets of software instructions that, if executed by a processor, cause one or more processors to perform one or more computational operations. In at least one embodiment, one or more APIs 110 are distributed or otherwise provided as a part of one or more software libraries 106, runtimes 104, drivers 104, or any other grouping of software and/or executable code further described herein. In at least one embodiment, one or more APIs 110 provide functionality to user-implemented software programs 102. In at least one embodiment, a software program 102 is a collection of software code, commands, instructions, or other sequences of text to instruct a computing device to perform one or more computational operations and/or invoke one or more other sets of instructions, such as APIs 110 or API 110 functions 112, 114, 116, 118, to be executed. In at least one embodiment, functionality provided by one or more APIs 110 includes software functions 112, 114, 116, 118 and/or one or more software functions 112, 114, 116, 118 to accelerate user-implemented software programs 102 using one or more parallel processing units (PPUs), such as graphics processing units (GPUs).
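By way of illustration only, the following minimal sketch shows how a user-implemented software program might invoke functions provided by such an API to facilitate parallel computing. It uses the CUDA driver API as one concrete, publicly documented example of this kind of API rather than any specific embodiment described herein, and it abbreviates error handling for brevity.

```c
/* A minimal sketch of a user program invoking API functions provided by a
 * driver to set up a PPU (here, a GPU) and allocate device memory. The CUDA
 * driver API is used as one concrete example; error handling is abbreviated. */
#include <cuda.h>
#include <stdio.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    CUdeviceptr buf;

    cuInit(0);                 /* initialize the driver API                 */
    cuDeviceGet(&dev, 0);      /* select the first GPU                      */
    cuCtxCreate(&ctx, 0, dev); /* create a context on that device           */
    cuMemAlloc(&buf, 1 << 20); /* API function: allocate 1 MiB on the PPU   */

    printf("device buffer at 0x%llx\n", (unsigned long long)buf);

    cuMemFree(buf);            /* release the allocation                    */
    cuCtxDestroy(ctx);         /* tear down the context                     */
    return 0;
}
```

In this pattern, each call corresponds to one callable function of the API; a program built this way links against the driver library that implements those functions, either statically at build time or dynamically at load time, as discussed below.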
[0047] In at least one embodiment, APIs 110 are hardware interfaces to one or more circuits to perform one or more computational operations. In at least one embodiment, one or more software APIs 110 described herein are implemented as one or more circuits to perform one or more techniques described below in conjunction with FIGS. 2A, 2B, and 3. In at least one embodiment, one or more software programs 102 comprise instructions that, if executed, cause one or more hardware devices and/or circuits to perform one or more techniques further described below in conjunction with FIGS. 2A, 2B and 3.[0048] In at least one embodiment, user-implemented software programs 102 utilize one or more APIs 110 to facilitate parallel computing, such as Compute Unified Device Architecture (CUDA), oneAPI, or any other API 110 further described herein. In at least one embodiment, one or more APIs to facilitate parallel computing provide a set of APIs 110, such as callable functions 112, 114, 116, 118, that individually perform one or more operations related to parallel computing. For example, in an embodiment, one or more APIs 110 to facilitate parallel computing provide functions 112, 114, 116, 118 to schedule one or more software instructions and/or operations to be performed on one or more parallel processing units (PPUs), such as graphics processing units (GPUs).[0049] In at least one embodiment, one or more user-implemented software programs 102 interact with one or more APIs 110 to facilitate parallel computing to perform one or more computing operations using one or more PPUs, such as GPUs. In at least one embodiment, one or more computing operations using one or more PPUs comprise at least one or more groups of computing operations to be accelerated by execution at least in part by said one or more PPUs. In at least one embodiment, one or more user-implemented software programs interact with one or more APIs 110 to facilitate parallel computing using a remote or local interface to said one or more APIs.[0050] In at least one embodiment, a remote interface 108 is a set of software instructions that, if executed, facilitate interaction between one or more user-implemented software programs 102 and one or more software libraries 106 providing one or more APIs 110 over a communication medium, such as a network. In at least one embodiment, one or more software libraries 106 are sets of instructions that, if executed, provide one or more functions, such as APIs or API functions, to perform one or more computational operations. In at least one embodiment, a library comprises one or more function implementations 112, 114, 116, 118 to be provided as a result of one or more calls through an interface 108 to one or more APIs 110. In at least one embodiment, one or more function implementations 112, 114, 116,
[0049] In at least one embodiment, one or more user-implemented software programs 102 interact with one or more APIs 110 to facilitate parallel computing to perform one or more computing operations using one or more PPUs, such as GPUs. In at least one embodiment, one or more computing operations using one or more PPUs comprise at least one or more groups of computing operations to be accelerated by execution at least in part by said one or more PPUs. In at least one embodiment, one or more user-implemented software programs interact with one or more APIs 110 to facilitate parallel computing using a remote or local interface to said one or more APIs.

[0050] In at least one embodiment, a remote interface 108 is a set of software instructions that, if executed, facilitate interaction between one or more user-implemented software programs 102 and one or more software libraries 106 providing one or more APIs 110 over a communication medium, such as a network. In at least one embodiment, one or more software libraries 106 are sets of instructions that, if executed, provide one or more functions, such as APIs or API functions, to perform one or more computational operations. In at least one embodiment, a library comprises one or more function implementations 112, 114, 116, 118 to be provided as a result of one or more calls through an interface 108 to one or more APIs 110. In at least one embodiment, one or more function implementations 112, 114, 116, 118 are sets of software instructions that, if executed, perform one or more APIs or API functions, such as computational operations. In at least one embodiment, a remote interface 108 facilitates performance of one or more APIs by a remote computing service, such as a computing resource services provider. In another embodiment, one or more libraries 106 comprising one or more APIs 110 are performed by any other computing host providing said one or more APIs 110 to facilitate computing by or in conjunction with one or more user-implemented software programs 102.

[0051] In at least one embodiment, a local interface 108 comprises software instructions that, if executed, facilitate interaction between a software program 102 and one or more APIs 110 or API 110 functions 112, 114, 116, 118 without remote or network communication. In at least one embodiment, a local interface 108 facilitates access by a software program 102 to one or more APIs 110 of a library 106 or libraries. In at least one embodiment, a local interface 108 is to be used by a user-implemented software program 102 when compiling said user-implemented software program 102 in conjunction with one or more software libraries 106 comprising one or more APIs 110. In at least one embodiment, one or more user-implemented software programs 102 are compiled statically in conjunction with pre-compiled software libraries 106 or uncompiled source code implementing one or more APIs 110. In at least one embodiment, one or more user-implemented software programs 102 are compiled dynamically and said one or more user-implemented software programs 102 link to one or more pre-compiled software libraries 106 comprising one or more APIs 110 and API 110 functions 112, 114, 116, 118 using a compiler or other linking tool, such as those further described herein.

[0052] In at least one embodiment, a driver or runtime 104 comprises a local or remote interface 108 to a library 106 implementing or otherwise providing one or more APIs 110. In at least one embodiment, one or more user-implemented software programs 102 perform one or more function calls, such as system and/or API function calls, to invoke or otherwise interact with one or more APIs 110 provided by one or more driver or runtime 104 libraries 106. In at least one embodiment, one or more user-implemented software programs 102 directly invoke one or more APIs 110 or API 110 functions 112, 114, 116, 118 provided by one or more libraries 106 in one or more drivers or runtimes 104 comprising said one or more APIs 110 by performing one or more function calls to a system loader, wherein said system loader then interacts with said one or more drivers or runtimes 104 to invoke said one or more APIs 110, as described below in conjunction with FIGS. 2A and 2B.
[0053] In at least one embodiment, one or more user-implemented software programs 102 perform one or more system calls to a system loader to obtain one or more addresses of one or more APIs 110, API 110 functions 112, 114, 116, 118, and/or implementations of API functions 112, 114, 116, 118 in one or more libraries 106 provided by one or more drivers or runtimes 104. In at least one embodiment, one or more user-implemented software programs 102 invoke one or more APIs 110 or API 110 functions 112, 114, 116, 118 based, at least in part, on one or more memory addresses or symbols provided by a system loader as a result of calls by said user-implemented software to said system loader to request addresses of one or more APIs 110 or API 110 functions 112, 114, 116, 118, as described below in conjunction with FIGS. 2A and 2B. In at least one embodiment, one or more user-implemented software programs 102 directly invoke one or more APIs 110 or API 110 functions 112, 114, 116, 118 based, at least in part, on one or more memory addresses or symbols provided as a result of one or more function calls to a driver or runtime 104 comprising or otherwise providing a library 106 implementing an API 110 and/or API 110 functions 112, 114, 116, 118.

[0054] In at least one embodiment, one or more drivers or runtimes 104 comprising or otherwise providing an interface 108 to one or more libraries 106 contain instructions that, when executed, perform one or more APIs 110, API 110 functions 112, 114, 116, 118, or other computational operations, such as functions to facilitate parallel computing or any other purpose further described herein. In at least one embodiment, one or more APIs 110, API 110 functions 112, 114, 116, 118 implemented or otherwise provided by one or more drivers or runtimes 104 comprising or facilitating interaction with one or more libraries 106 are updated to more recent versions in order to add functionality, fix software bugs, meet new requirements, or for any other software development purpose. In at least one embodiment, one or more user-developed software programs 102 invoke one or more APIs 110, API 110 functions 112, 114, 116, 118 directly or by performing one or more system calls to a system loader, as described below in conjunction with FIGS. 2A and 2B. In at least one embodiment, one or more user-developed software programs 102 invoke one or more APIs 110, API 110 functions 112, 114, 116, 118 by invoking an API 110 or API 110 function 112, 114, 116, 118 at a memory address received as a result of one or more API 110 calls to obtain said memory address.
[0055] In at least one embodiment, one or more function pointers are data values comprising an address of a specific API 110, API 110 function 112, 114, 116, 118, or other computing function implemented or otherwise provided by a driver or runtime 104 implementing one or more APIs 110. In at least one embodiment, one or more software programs 102 receive one or more function pointers corresponding to one or more APIs 110, API 110 functions 112, 114, 116, 118, or other computing functions implemented or otherwise provided by a driver or runtime 104 as a result of one or more function calls to an interface 108 and/or API 110. In at least one embodiment, in order to provide one or more pointers to memory addresses corresponding to one or more APIs 110, API 110 functions 112, 114, 116, 118, or other computing functions, a driver and/or runtime 104 provides at least one computing function to retrieve one or more memory addresses corresponding to one or more APIs 110, API 110 functions 112, 114, 116, 118, or other computing functions provided by said driver and/or runtime 104.

[0056] FIG. 2A is a block diagram illustrating a system loader 206 that exposes one or more application programming interfaces (APIs) or API functions, as described above in conjunction with FIG. 1 and further described herein, according to at least one embodiment. In at least one embodiment, a system loader 206 is a set of software instructions that, if executed, performs one or more computing operations to facilitate execution of one or more software programs. In at least one embodiment, a user-implemented software program 202, as described above in conjunction with FIG. 1 and further described herein, is data values and software instructions that, when executed, perform some function according to source code implementing said user-implemented software program 202. In at least one embodiment, a user-implemented software program 202 comprises instructions that, if executed, invoke or otherwise cause an API or API function call 204 to be performed. In at least one embodiment, an API or API function call 204 is one or more software instructions that, when executed, invoke one or more computing functions implemented or otherwise provided by one or more APIs, as described above in conjunction with FIG. 1 and further described herein.
[0057] In at least one embodiment, a user-implemented software program 202 performs an API function call 204 or API by interacting with a system loader 206. In at least one embodiment, a system loader 206 is data values and software instructions that, when executed, perform operating system functions such as invoking one or more functions provided by a driver implementing one or more APIs to facilitate parallel computing. In at least one embodiment, a system loader 206 interacts with an API driver 210 to get an address of an API function call 208 or API. In at least one embodiment, an API driver 210 is data values and software instructions that, when executed, perform one or more APIs or API functions as a result of one or more computing function calls and/or API calls to said API driver 210.

[0058] In at least one embodiment, a system loader 206 receives an address of one or more APIs or API function calls 208 as a result of performing one or more computing function calls, such as getProcAddress, cuGetProcAddress, or any other function to receive one or more memory addresses corresponding to one or more APIs and/or implementations of one or more function calls provided by one or more APIs, as described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, as a result of a user-implemented software program 202 performing or otherwise invoking an API or API function call directly by performing one or more system function calls to a system loader 206, said system loader 206 determines one or more memory addresses associated with one or more implementations of an API or API function called 204 by said user-implemented software program 202 and begins execution of instructions to perform said API or API function at said one or more memory addresses. In at least one embodiment, a user-implemented software program 202 performs one or more APIs or API function calls 204 without regard to which implementation of said one or more APIs or API functions is to be invoked in an API driver 210 by a system loader 206.
[0059] FIG. 2B is a block diagram illustrating a system loader 216 that does not expose APIs or API functions, according to at least one embodiment. In at least one embodiment, rather than using a system loader 216 to invoke an API or API function implemented by an API driver 220, as described above in conjunction with FIG. 1 and further described herein, a user-implemented software program 212 performs one or more system function calls 214 to a system loader to get one or more memory addresses associated with one or more APIs or API function implementations provided by an API driver 220, as described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, a system loader 216 responds to one or more system function calls requesting one or more memory addresses of one or more API function calls 214 by requesting 218 said one or more memory addresses from an API driver 220 implementing said one or more APIs or API function calls. In at least one embodiment, a user-implemented software program 212 performs one or more API function calls by invoking one or more software instructions stored at one or more memory address locations determined as a result of one or more system function calls 214 to a system loader 216 to determine said one or more memory address locations as a result of one or more function calls to an API driver 220. In at least one embodiment, a user-implemented software program 212 performs one or more APIs or API function calls by invoking one or more software instructions stored at one or more memory address locations determined as a result of one or more function calls directly to an API driver 220.

[0060] In at least one embodiment, a user-implemented software program 212 indicates, to an API, one or more versions of one or more software functions, such as other APIs or API functions, implemented or otherwise provided by an API driver 220 as described above in conjunction with FIG. 1 and further described herein, when requesting one or more memory addresses corresponding to said one or more APIs or API functions. In at least one embodiment, a user-implemented software program 212 receives, as a result of one or more calls to one or more APIs and/or one or more API function calls to a driver or runtime implementing or otherwise providing an API, such as an API to facilitate parallel computing, one or more memory addresses corresponding to a specific version and/or implementation of one or more APIs or API functions implemented or otherwise provided by an API driver 220.

[0061] In at least one embodiment, one or more APIs or API functions, such as functions provided by an API to facilitate parallel computing, or any other API and/or function further described herein, are implemented or otherwise provided by a user-mode software driver and/or a runtime software library, as described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, to facilitate determination of one or more memory addresses associated with or corresponding to one or more APIs or API functions, a user-mode software driver and/or runtime software library provides one or more additional functions and/or APIs to retrieve and/or indicate said one or more memory addresses. For example, in an embodiment, a driver implementing an API to facilitate parallel computing, such as CUDA, provides a function and/or API to get one or more memory addresses corresponding to one or more implementations of one or more other APIs and/or API functions and/or functions as follows:

    CUresult cuGetProcAddress(const char* symbol, void** funcPtr, int cudaVersion, uint64_t flags);

In at least one embodiment, one or more APIs, as described above in conjunction with FIG. 1 and further described herein, provide one or more software functions similar to cuGetProcAddress, such as a generic getProcAddress or function with any other name and/or definition, to get one or more memory addresses of one or more implementations of one or more APIs or API functions implemented or otherwise provided by a user-mode driver.
In at least one embodiment, a user-implemented software program or system loader, as described above, provides one or more parameters to a software function and/or API, such as getProcAddress or cuGetProcAddress.

[0062] In at least one embodiment, one or more parameters to a software function or API, such as getProcAddress or cuGetProcAddress, comprise a symbol. In at least one embodiment, a symbol is a data value comprising a name, pointer, or other value usable to identify a driver API function. In at least one embodiment, a name or other identifier provided by a symbol parameter is a base name of a driver API function. For example, in an API to facilitate parallel computing such as CUDA, a symbol value may be "cuMemAlloc" corresponding to an API or API function implemented by a driver named "cuMemAlloc" having one or more implementation versions.

[0063] In at least one embodiment, one or more parameters to a software function, such as getProcAddress or cuGetProcAddress, comprise a function pointer "funcPtr". In at least one embodiment, a function pointer is a data value comprising a memory address of or pointing to a driver implementation of an API or API function in memory. In at least one embodiment, a software function such as getProcAddress or cuGetProcAddress, when invoked, gets a function pointer value with a memory address corresponding to a driver-specific implementation of an API or API function requested in "symbol" having a version corresponding to a specific driver version indicated by "cudaVersion".

[0064] In at least one embodiment, one or more parameters to a software and/or API function, such as getProcAddress or cuGetProcAddress, comprise a driver version. In at least one embodiment, a driver version is a data value indicating a numeric value to identify a specific implementation or version of a driver that further implements or otherwise provides an API. In at least one embodiment, a driver version, such as "cudaVersion" corresponding to a specific version of CUDA as described herein, indicates a driver version comprising and/or providing an implementation of an API function indicated by "symbol". In at least one embodiment, indication of a specific driver version causes getProcAddress or cuGetProcAddress to determine one or more addresses of one or more specific implementations or versions of an API or API function indicated by "symbol" and set a memory address in a function pointer also passed as a parameter to getProcAddress or cuGetProcAddress. In at least one embodiment, a driver version provided as a parameter to getProcAddress or cuGetProcAddress causes a specific implementation of "symbol" to be searched for by a library providing getProcAddress or cuGetProcAddress.
If, in an embodiment, a driver version is less than or equal to a currently running driver version, getProcAddress or cuGetProcAddress will find a corresponding function or API indicated by "symbol".

[0065] In at least one embodiment, one or more parameters to a software function or API, such as getProcAddress or cuGetProcAddress, comprise one or more flags. In at least one embodiment, a flag is a data value indicating one or more options usable by a software function or API when searching for a specific implementation of an API or API function provided by a driver or other software. In at least one embodiment, a parameter comprising no specific flags will cause a function or API, such as getProcAddress or cuGetProcAddress, to search for a default and/or most recent implementation of an API or API function indicated by a "symbol" parameter.

[0066] In at least one embodiment, one or more software functions or APIs, such as getProcAddress or cuGetProcAddress, return a value indicating a status corresponding to a determination or locating of one or more addresses of an API or API function indicated by a "symbol" parameter. In at least one embodiment, one or more software functions, such as getProcAddress or cuGetProcAddress, return a success value, such as CUDA_SUCCESS or any other data value to indicate success, to indicate that an API matching a "symbol" parameter was found and a respective memory address was returned or otherwise set in a function pointer such as "funcPtr". In at least one embodiment, one or more software functions, such as getProcAddress or cuGetProcAddress, return a value indicating one or more invalid parameters, such as CUDA_ERROR_INVALID_VALUE, to indicate that one or more parameters provided to getProcAddress or cuGetProcAddress are null or otherwise invalid. In at least one embodiment, one or more software functions, such as getProcAddress or cuGetProcAddress, return a value indicating that a specific API function indicated by a "symbol" parameter was not found or no memory address could be located or calculated corresponding to a specific API or API function indicated by said "symbol" parameter. In at least one embodiment, if an API or API function indicated by a "symbol" parameter could not be located, a function such as getProcAddress or cuGetProcAddress returns a value indicating that said API function could not be located, such as CUDA_ERROR_NOT_FOUND or any other value to indicate failure.
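As a hedged illustration of the behavior described in paragraphs [0062] through [0066], the following sketch resolves a driver function through cuGetProcAddress using the prototype given above; the requested version number (11030, i.e., CUDA 11.3) and the error handling are assumptions of this example:

    /* A sketch, following the cuGetProcAddress prototype shown above, of
     * resolving "cuMemAlloc" and checking the documented status values. */
    #include <cuda.h>
    #include <stdio.h>

    typedef CUresult (*cuMemAlloc_pfn)(CUdeviceptr *dptr, size_t bytesize);

    int main(void)
    {
        void *entry = NULL;
        cuInit(0);

        /* Request an implementation of "cuMemAlloc" compatible with driver
         * version 11030, with no special flags (default search behavior). */
        CUresult status = cuGetProcAddress("cuMemAlloc", &entry, 11030, 0);

        if (status == CUDA_SUCCESS && entry != NULL) {
            cuMemAlloc_pfn pfnMemAlloc = (cuMemAlloc_pfn)entry;
            printf("resolved cuMemAlloc at %p\n", entry);
            (void)pfnMemAlloc;  /* callable like the driver function itself */
        } else {
            /* CUDA_ERROR_NOT_FOUND or CUDA_ERROR_INVALID_VALUE, per above */
            printf("cuGetProcAddress failed: %d\n", (int)status);
        }
        return 0;
    }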
[0067] In at least one embodiment, a runtime library implementing an API or API function, as described above in conjunction with FIG. 1 and further described herein, provides a function to get one or more memory addresses corresponding to one or more implementations or versions of one or more APIs or API functions as follows:

    __host__ cudaError_t CUDARTAPI cudaDriverGetEntryPoint(const char* symbol, void** funcPtr, uint64_t flags);

In at least one embodiment, one or more APIs may provide one or more software functions similar to cudaDriverGetEntryPoint, such as a generic getDriverEntryPoint, to get one or more memory addresses corresponding to one or more implementations or versions of one or more APIs or API functions implemented or otherwise provided by a runtime library.

[0068] In at least one embodiment, one or more parameters to an API, API function, or other software function such as getDriverEntryPoint or cuGetDriverEntryPoint comprise a symbol. In at least one embodiment, a symbol is a data value, such as a pointer, comprising a name of a driver-implemented API function to search for or determine one or more memory addresses corresponding to. In at least one embodiment, a name provided by a symbol parameter is a base name of a driver-implemented API function. For example, in an API to facilitate parallel computing such as CUDA, a symbol value may be "cuMemAlloc" corresponding to an API function implemented by a driver named "cuMemAlloc" having one or more driver version-specific implementations. In at least one embodiment, an API, API function, or software function, such as getDriverEntryPoint or cuGetDriverEntryPoint, determines a memory address or function pointer corresponding to a most recent driver implementation of an API or API function indicated by a "symbol" parameter.

[0069] In at least one embodiment, one or more parameters to an API, API function, or software function, such as getDriverEntryPoint or cuGetDriverEntryPoint, comprise a function pointer "funcPtr". In at least one embodiment, a function pointer is a data value comprising a memory address pointing to a current or most-recent driver implementation of an API or API function, such as those described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, an API, API function, or software function, such as getDriverEntryPoint or cuGetDriverEntryPoint, sets a function pointer value with a memory address corresponding to a current or most recent driver-specific implementation of an API or API function requested in "symbol" having a version corresponding to a current or most recent driver version.
[0070] In at least one embodiment, one or more parameters to an API, API function, or software function, such as getDriverEntryPoint or cuGetDriverEntryPoint, comprise one or more flags. In at least one embodiment, a flag passed as a parameter to getDriverEntryPoint or cuGetDriverEntryPoint is a data value indicating one or more options to consider when searching for a specific implementation of an API or API function in a driver that implements an API, as described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, a parameter comprising no specific flags will cause an API or API function, such as getDriverEntryPoint or cuGetDriverEntryPoint, to search for a default and/or most recent driver implementation of an API function indicated by a "symbol" parameter.

[0071] In at least one embodiment, one or more APIs, API functions, or software functions, such as getDriverEntryPoint or cuGetDriverEntryPoint, return a value indicating a status corresponding to a determination or locating of one or more addresses corresponding to an API or API function implementation in a driver, as described above in conjunction with FIG. 1 and further described herein, indicated by a "symbol" parameter passed to getDriverEntryPoint or cuGetDriverEntryPoint. In at least one embodiment, one or more APIs, API functions, or software functions, such as getDriverEntryPoint or cuGetDriverEntryPoint, return a success value, such as cudaSuccess corresponding to cuGetDriverEntryPoint, to indicate that an API or API function implementation matching a "symbol" parameter was found and a respective memory address was returned or otherwise set in a function pointer such as "funcPtr". In at least one embodiment, one or more APIs, API functions, or software functions, such as getDriverEntryPoint or cuGetDriverEntryPoint, return a value indicating one or more invalid parameters, such as cudaErrorInvalidValue corresponding to cuGetDriverEntryPoint, to indicate that one or more parameters provided to getDriverEntryPoint or cuGetDriverEntryPoint are null or otherwise invalid. In at least one embodiment, one or more software functions such as getDriverEntryPoint or cuGetDriverEntryPoint return a value indicating that a specific API or API function indicated by a "symbol" parameter was not found or no memory address could be located or calculated corresponding to a driver-implemented specific API or API function indicated by said "symbol" parameter. In at least one embodiment, if an API or API function indicated by a "symbol" parameter could not be located or is invalid, or is otherwise not available in a current driver implementation of an API, as described above in conjunction with FIG. 1 and further described herein, an API or API function, such as getDriverEntryPoint or cuGetDriverEntryPoint, returns a value indicating that said API or API function could not be located, such as cudaErrorNotFound corresponding to cuGetDriverEntryPoint.
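A comparable sketch for the runtime-side lookup, written against the cudaDriverGetEntryPoint prototype given in paragraph [0067] above, follows; the function name, flag value, and headers are taken from this document and may differ in any particular toolkit:

    /* A sketch, assuming the cudaDriverGetEntryPoint prototype shown above,
     * of resolving the driver's cuDeviceGetCount through a runtime library. */
    #include <cuda.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    typedef CUresult (*cuDeviceGetCount_pfn)(int *count);

    int main(void)
    {
        void *fn = NULL;
        /* Flag 0x0 requests a default implementation, per the flag
         * definitions discussed in this document. */
        cudaError_t status = cudaDriverGetEntryPoint("cuDeviceGetCount", &fn, 0x0);

        if (status == cudaSuccess && fn != NULL) {
            int count = 0;
            ((cuDeviceGetCount_pfn)fn)(&count);
            printf("devices: %d\n", count);
        } else if (status == cudaErrorNotFound) {
            printf("symbol not found in current driver\n");
        }
        return 0;
    }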
[0072] In at least one embodiment, to determine a memory address corresponding to specific driver implementations of one or more versions of one or more APIs or API functions, as described above in conjunction with FIG. 1 and further described herein, a driver maintains a table consisting of driver API or API function entries, where each entry consists of a set of driver functions that includes default implementations of driver functions, versioned implementations of driver functions, and specialized variants of driver functions. Each driver function, in an embodiment, has corresponding metadata such as version information including a driver version indicating when a specific API or API function was introduced, removal information indicating a driver version when a specific API or API function was removed, and a pointer to one or more memory addresses corresponding to a specific implementation of an API or API function.

[0073] In at least one embodiment, when one or more calls to a driver API or API function, such as getProcAddress or cuGetProcAddress, are made, a driver searches for a requested symbol, as described above, in a proc table and returns its address if a match is found. In at least one embodiment, a driver implements a hash table and precomputes all hashes based, at least in part, on symbol names, memory addresses, and/or other metadata associated with each API or API function corresponding to each symbol, as described above.
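A minimal C sketch of such a versioned proc-table lookup follows; the entry layout, field names, and selection policy are assumptions of this example, not the driver's actual data structures:

    /* A sketch of a versioned proc table: each entry records when an
     * implementation was introduced and removed, and lookup returns the
     * newest implementation visible to the requested driver version. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        const char *symbol;  /* base name, e.g. "cuMemAlloc"              */
        int introduced;      /* driver version that added this entry      */
        int removed;         /* driver version that removed it, 0 if live */
        void *impl;          /* address of this implementation            */
    } ProcEntry;

    static void *proc_table_lookup(const ProcEntry *table, size_t n,
                                   const char *symbol, int driverVersion)
    {
        void *best = NULL;
        int bestVersion = -1;
        for (size_t i = 0; i < n; ++i) {
            const ProcEntry *e = &table[i];
            if (strcmp(e->symbol, symbol) != 0)
                continue;                               /* wrong symbol      */
            if (e->introduced > driverVersion)
                continue;                               /* too new           */
            if (e->removed != 0 && e->removed <= driverVersion)
                continue;                               /* already removed   */
            if (e->introduced > bestVersion) {          /* keep newest match */
                bestVersion = e->introduced;
                best = e->impl;
            }
        }
        return best;  /* NULL mirrors a not-found status */
    }

In practice, a hash table keyed on precomputed symbol hashes, as described in paragraph [0073], would replace the linear scan shown here.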
[0074] As described above, in at least one embodiment, a driver API or API function, such as getProcAddress or cuGetProcAddress, accepts flags as a parameter or argument, where said flags may indicate specialized variants of driver-implemented API or API functions. In at least one embodiment, an example enumerated type indicating one or more flags to be provided as a parameter or argument to getProcAddress or cuGetProcAddress is as follows:

    typedef enum driverProcAddress_flags_enum {
        GET_PROC_ADDRESS_DEFAULT = 0,
        GET_PROC_ADDRESS_LEGACY_STREAM = 1 << 0,
        GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM = 1 << 1
    } driverProcAddress_flags;

[0075] In at least one embodiment, a flag value of GET_PROC_ADDRESS_DEFAULT or CU_GET_PROC_ADDRESS_DEFAULT indicates that a default driver implementation of a specific API or API function, as described above in conjunction with FIG. 1 and further described herein, is to be searched for by getProcAddress or cuGetProcAddress. In at least one embodiment, GET_PROC_ADDRESS_DEFAULT or CU_GET_PROC_ADDRESS_DEFAULT is equivalent to passing GET_PROC_ADDRESS_LEGACY_STREAM or CU_GET_PROC_ADDRESS_LEGACY_STREAM when API_PER_THREAD_DEFAULT_STREAM or CUDA_API_PER_THREAD_DEFAULT_STREAM is not set, and GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM or CU_GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM when API_PER_THREAD_DEFAULT_STREAM or CUDA_API_PER_THREAD_DEFAULT_STREAM is set. In at least one embodiment, GET_PROC_ADDRESS_LEGACY_STREAM or CU_GET_PROC_ADDRESS_LEGACY_STREAM causes getProcAddress or cuGetProcAddress to search for all symbols that match a requested symbol passed to or otherwise provided as an argument. In at least one embodiment, GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM or CU_GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM causes getProcAddress or cuGetProcAddress to search for all symbols that match a requested symbol passed to or otherwise provided as an argument to getProcAddress or cuGetProcAddress, including all ptds versions that match said symbol.

[0076] In at least one embodiment, a driver may implement or otherwise provide one or more inline functions to modify flag parameters or arguments in order to conform to specific behavior for a given implementation of said driver. In at least one embodiment, a driver may implement or otherwise provide a list of publicly exposed type definitions or typedefs in various header files available to one or more user-implemented software programs for each driver version or implementation version available corresponding to various APIs or API functions of an API to facilitate parallel computing, such as CUDA, or any other API further described herein.

[0077] As described above, in at least one embodiment, a runtime API or API function, such as driverGetEntryPoint or cudaDriverGetEntryPoint, accepts flags as a parameter or argument, where said flags may indicate specialized variants of driver-implemented APIs or API functions, as described above in conjunction with FIG. 1 and further described herein. In at least one embodiment, one or more flags may be defined as follows:
    #define enableDefault 0x0
    #define enableLegacyStream 0x1
    #define enablePerThreadDefaultStream 0x2

[0078] In at least one embodiment, a flag value of enableDefault or cudaEnableDefault indicates that a default driver implementation of a specific API or API function is to be searched for by a runtime API or API function such as driverGetEntryPoint or cudaDriverGetEntryPoint. In at least one embodiment, enableDefault or cudaEnableDefault is equivalent to passing enableLegacyStream or cudaEnableLegacyStream when API_PER_THREAD_DEFAULT_STREAM or CUDA_API_PER_THREAD_DEFAULT_STREAM is not set, and enablePerThreadDefaultStream or cudaEnablePerThreadDefaultStream when API_PER_THREAD_DEFAULT_STREAM or CUDA_API_PER_THREAD_DEFAULT_STREAM is set. In at least one embodiment, enableLegacyStream or cudaEnableLegacyStream causes runtime functions such as driverGetEntryPoint or cudaDriverGetEntryPoint to search all symbols that match a requested symbol passed as a parameter or argument to driverGetEntryPoint or cudaDriverGetEntryPoint except a corresponding ptds version. In at least one embodiment, enablePerThreadDefaultStream or cudaEnablePerThreadDefaultStream causes driverGetEntryPoint or cudaDriverGetEntryPoint to search for all symbols that match a requested symbol passed as a parameter or other argument, including one or more ptds versions. In at least one embodiment, if a ptds version of a function indicated by a symbol parameter or argument to driverGetEntryPoint or cudaDriverGetEntryPoint is not found, a default version of said function implemented by a current driver is returned or set in a function pointer parameter. In at least one embodiment, a runtime function driverGetEntryPoint or cudaDriverGetEntryPoint also returns ptds versions of a specific driver-implemented API or API function to support per-thread stream overloads.

[0079] In at least one embodiment, a runtime implementing an API or API function such as driverGetEntryPoint or cudaDriverGetEntryPoint dynamically loads all driver symbols it needs during initialization. In at least one embodiment, a runtime implementing an API or API function such as driverGetEntryPoint or cudaDriverGetEntryPoint utilizes one or more driver functions, such as getProcAddress or cuGetProcAddress, to determine one or more memory addresses corresponding to one or more driver symbols.
In at least one embodiment, a runtime implementing an API or API function such as driverGetEntryPoint or cudaDriverGetEntryPoint utilizes one or more hash tables, as described above in conjunction with a driver implementing one or more API functions, such as API functions to facilitate parallel computing or any other API functions as a part of any API further described herein.

[0080] In at least one embodiment, a driver or runtime implementing one or more functions to determine one or more addresses associated with one or more implementations of one or more APIs or API functions, such as functions provided by an API to facilitate parallel computing or any other API further described herein, may embed versioning information (such as "_v1", "_v2", etc.) in a symbol name itself rather than specifying a compatible driver version as a separate argument in a driver-specific implementation. In at least one embodiment, if a driver embeds versioning information, said driver does not have to maintain a map of driver functions and other metadata as described above. Instead, in an embodiment, a driver can dynamically load each symbol and get its address, as sketched below.
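A short sketch of that name-embedded versioning follows, using dlopen/dlsym as a stand-in loader; the library name "libcuda.so.1" and the "_v2" suffix convention are assumptions of this example:

    /* A sketch of resolving a versioned symbol by embedding the version in
     * the name (e.g. "cuMemAlloc_v2") rather than passing it separately.
     * Link with -ldl on systems that require it. */
    #include <dlfcn.h>
    #include <stdio.h>

    static void *resolve_versioned(void *lib, const char *base, int version)
    {
        char name[256];
        snprintf(name, sizeof name, "%s_v%d", base, version);
        return dlsym(lib, name);  /* NULL if that version is absent */
    }

    int main(void)
    {
        void *lib = dlopen("libcuda.so.1", RTLD_NOW);  /* assumed library name */
        if (!lib)
            return 1;
        void *fn = resolve_versioned(lib, "cuMemAlloc", 2);
        printf("cuMemAlloc_v2 %s\n", fn ? "found" : "not found");
        dlclose(lib);
        return 0;
    }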
[0081] In at least one embodiment, instead of a symbol passed as a parameter or argument to a runtime or driver, as described above, an ordinal value may be provided as an argument or parameter. In at least one embodiment, an ordinal value is a data value indicating a specific version or any other information about an API or API function to be searched by one or more driver or runtime functions to determine a memory address. In at least one embodiment, if an ordinal value is specified, a direct lookup in a linear table can be performed by a runtime or driver instead of utilizing a hash table as described above.

[0082] In at least one embodiment, a runtime or driver implementing one or more APIs or API functions, as described above in conjunction with FIG. 1 and further described herein, may accept, as an argument or parameter, one or more device identifiers. In at least one embodiment, a device identifier is a data value indicating an identification value or handle corresponding to one or more devices. In at least one embodiment, a device identifier allows for searching specific drivers corresponding to specific devices that may implement one or more versions of one or more APIs or API functions corresponding to an API to facilitate parallel computing or any other API further described herein.

[0083] FIG. 3 illustrates a process 300 to query one or more libraries for one or more memory locations storing application programming interface (API) or API function implementations or instructions that, if executed, perform one or more versions of one or more APIs or API functions, in accordance with at least one embodiment. In at least one embodiment, process 300 begins when a driver or runtime, as described above in conjunction with FIGS. 1, 2A, and 2B, receives one or more identifier 304 data values indicating one or more properties of an API or API function to be located, as described above in conjunction with FIGS. 2A and 2B. In at least one embodiment, an identifier comprises a specific function name and/or version identifier. In at least one embodiment, an identifier comprises information to indicate one or more APIs or API functions, or instructions that, if executed, perform one or more APIs or API functions, in one or more libraries, as described above in conjunction with FIG. 1.

[0084] In at least one embodiment, once a driver or runtime receives an identifier 304, as described above, said driver or runtime locates an API or API function 306 in a library comprising instructions that, if executed, perform said API or API function. In at least one embodiment, a driver or runtime locates an API or API function in a library based, at least in part, on one or more data values indicated to said driver or runtime to identify said API or API function, such as data values described above in conjunction with FIGS. 2A and 2B.

[0085] In at least one embodiment, if a driver or runtime locates 308 an implementation of an API or API function, such as software instructions that, if executed, perform an API or API function, said driver or runtime returns a pointer 310 to said implementation of said API or API function. In at least one embodiment, a pointer is a data value comprising an address of a first software instruction of a set of software instructions that, if executed, perform an API or API function.

[0086] In at least one embodiment, if a driver or runtime does not locate 308 an implementation of an API or API function, such as software instructions that, if executed, perform an API or API function, said driver or runtime returns a NULL or nil value 312. In at least one embodiment, a NULL or nil value is any data value indicating failure of a driver or runtime to locate an implementation of an API or API function. In at least one embodiment, once a driver or runtime either returns a pointer 310 or returns a NULL or nil value 312, a process 300 to query one or more libraries for one or more memory locations storing API or API function implementations ends 314.

[0087] In the following description, numerous specific details are set forth to provide a more thorough understanding of at least one embodiment. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.
Data Center

[0088] FIG. 4 illustrates an exemplary data center 400, in accordance with at least one embodiment. In at least one embodiment, data center 400 includes, without limitation, a data center infrastructure layer 410, a framework layer 420, a software layer 430 and an application layer 440. In at least one embodiment, a software layer 430 and/or application layer 440 comprise instructions to perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0089] In at least one embodiment, as shown in FIG. 4, data center infrastructure layer 410 may include a resource orchestrator 412, grouped computing resources 414, and node computing resources ("node C.R.s") 416(1)-416(N), where "N" represents any whole, positive integer. In at least one embodiment, node C.R.s 416(1)-416(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays ("FPGAs"), data processing units ("DPUs") in network devices, graphics processors, etc.), memory devices (e.g., dynamic read-only memory), storage devices (e.g., solid state or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 416(1)-416(N) may be a server having one or more of above-mentioned computing resources.

[0090] In at least one embodiment, grouped computing resources 414 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources 414 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.
[0091] In at least one embodiment, resource orchestrator 412 may configure or otherwise control one or more node C.R.s 416(1)-416(N) and/or grouped computing resources 414. In at least one embodiment, resource orchestrator 412 may include a software design infrastructure ("SDI") management entity for data center 400. In at least one embodiment, resource orchestrator 412 may include hardware, software or some combination thereof.

[0092] In at least one embodiment, as shown in FIG. 4, framework layer 420 includes, without limitation, a job scheduler 432, a configuration manager 434, a resource manager 436 and a distributed file system 438. In at least one embodiment, framework layer 420 may include a framework to support software 452 of software layer 430 and/or one or more application(s) 442 of application layer 440. In at least one embodiment, software 452 or application(s) 442 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 420 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter "Spark") that may utilize distributed file system 438 for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler 432 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 400. In at least one embodiment, configuration manager 434 may be capable of configuring different layers such as software layer 430 and framework layer 420, including Spark and distributed file system 438 for supporting large-scale data processing. In at least one embodiment, resource manager 436 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 438 and job scheduler 432. In at least one embodiment, clustered or grouped computing resources may include grouped computing resource 414 at data center infrastructure layer 410. In at least one embodiment, resource manager 436 may coordinate with resource orchestrator 412 to manage these mapped or allocated computing resources.

[0093] In at least one embodiment, software 452 included in software layer 430 may include software used by at least portions of node C.R.s 416(1)-416(N), grouped computing resources 414, and/or distributed file system 438 of framework layer 420. One or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.

[0094] In at least one embodiment, application(s) 442 included in application layer 440 may include one or more types of applications used by at least portions of node C.R.s 416(1)-416(N), grouped computing resources 414, and/or distributed file system 438 of framework layer 420. In at least one embodiment, one or more types of applications may include, without limitation, CUDA applications.
[0095] In at least one embodiment, any of configuration manager 434, resource manager 436, and resource orchestrator 412 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 400 from making possibly bad configuration decisions and possibly avoiding underutilized and/or poor performing portions of a data center.

Computer-Based Systems

[0096] The following figures set forth, without limitation, exemplary computer-based systems that can be used to implement at least one embodiment.

[0097] FIG. 5 illustrates a processing system 500, in accordance with at least one embodiment. In at least one embodiment, processing system 500 includes one or more processors 502 and one or more graphics processors 508, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 502 or processor cores 507. In at least one embodiment, processing system 500 is a processing platform incorporated within a system-on-a-chip ("SoC") integrated circuit for use in mobile, handheld, or embedded devices. In at least one embodiment, processing system 500 is to perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0098] In at least one embodiment, processing system 500 can include, or be incorporated within, a server-based gaming platform, a game console, a media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, processing system 500 is a mobile phone, smart phone, tablet computing device or mobile Internet device. In at least one embodiment, processing system 500 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In at least one embodiment, processing system 500 is a television or set top box device having one or more processors 502 and a graphical interface generated by one or more graphics processors 508.
[0099] In at least one embodiment, one or more processors 502 each include one or more processor cores 507 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 507 is configured to process a specific instruction set 509. In at least one embodiment, instruction set 509 may facilitate Complex Instruction Set Computing ("CISC"), Reduced Instruction Set Computing ("RISC"), or computing via a Very Long Instruction Word ("VLIW"). In at least one embodiment, processor cores 507 may each process a different instruction set 509, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, processor core 507 may also include other processing devices, such as a digital signal processor ("DSP").

[0100] In at least one embodiment, processor 502 includes cache memory ("cache") 504. In at least one embodiment, processor 502 can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 502. In at least one embodiment, processor 502 also uses an external cache (e.g., a Level 3 ("L3") cache or Last Level Cache ("LLC")) (not shown), which may be shared among processor cores 507 using known cache coherency techniques. In at least one embodiment, register file 506 is additionally included in processor 502 which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file 506 may include general-purpose registers or other registers.

[0101] In at least one embodiment, one or more processor(s) 502 are coupled with one or more interface bus(es) 510 to transmit communication signals such as address, data, or control signals between processor 502 and other components in processing system 500. In at least one embodiment, interface bus 510 can be a processor bus, such as a version of a Direct Media Interface ("DMI") bus. In at least one embodiment, interface bus 510 is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., "PCI," PCI Express ("PCIe")), memory buses, or other types of interface buses. In at least one embodiment, processor(s) 502 include an integrated memory controller 516 and a platform controller hub 530. In at least one embodiment, memory controller 516 facilitates communication between a memory device and other components of processing system 500, while platform controller hub ("PCH") 530 provides connections to Input/Output ("I/O") devices via a local I/O bus.
[0102] In at least one embodiment, memory device 520 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment, memory device 520 can operate as system memory for processing system 500, to store data 522 and instructions 521 for use when one or more processors 502 executes an application or process. In at least one embodiment, memory controller 516 also couples with an optional external graphics processor 512, which may communicate with one or more graphics processors 508 in processors 502 to perform graphics and media operations. In at least one embodiment, a display device 511 can connect to processor(s) 502. In at least one embodiment, display device 511 can include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device 511 can include a head mounted display ("HMD") such as a stereoscopic display device for use in virtual reality ("VR") applications or augmented reality ("AR") applications.

[0103] In at least one embodiment, platform controller hub 530 enables peripherals to connect to memory device 520 and processor 502 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 546, a network controller 534, a firmware interface 528, a wireless transceiver 526, touch sensors 525, a data storage device 524 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 524 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as PCI, or PCIe. In at least one embodiment, touch sensors 525 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 526 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution ("LTE") transceiver. In at least one embodiment, firmware interface 528 enables communication with system firmware, and can be, for example, a unified extensible firmware interface ("UEFI"). In at least one embodiment, network controller 534 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 510. In at least one embodiment, audio controller 546 is a multi-channel high definition audio controller. In at least one embodiment, processing system 500 includes an optional legacy I/O controller 540 for coupling legacy (e.g., Personal System 2 ("PS/2")) devices to processing system 500. In at least one embodiment, platform controller hub 530 can also connect to one or more Universal Serial Bus ("USB") controllers 542 that connect input devices, such as keyboard and mouse 543 combinations, a camera 544, or other USB input devices.
[0104] In at least one embodiment, an instance of memory controller 516 and platform controller hub 530 may be integrated into a discrete external graphics processor, such as external graphics processor 512. In at least one embodiment, platform controller hub 530 and/or memory controller 516 may be external to one or more processor(s) 502. For example, in at least one embodiment, processing system 500 can include an external memory controller 516 and platform controller hub 530, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s) 502.

[0105] FIG. 6 illustrates a computer system 600, in accordance with at least one embodiment. In at least one embodiment, computer system 600 may be a system with interconnected devices and components, an SOC, or some combination. In at least one embodiment, computer system 600 is formed with a processor 602 that may include execution units to execute an instruction. In at least one embodiment, computer system 600 may include, without limitation, a component, such as processor 602, to employ execution units including logic to perform algorithms for processing data. In at least one embodiment, computer system 600 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and like) may also be used. In at least one embodiment, computer system 600 may execute a version of WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. In at least one embodiment, computer system 600 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0106] In at least one embodiment, computer system 600 may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor (DSP), an SoC, network computers ("NetPCs"), set-top boxes, network hubs, wide area network ("WAN") switches, or any other system that may perform one or more instructions.
[0107] In at least one embodiment, computer system 600 may include, without limitation, processor 602 that may include, without limitation, one or more execution units 608 that may be configured to execute a Compute Unified Device Architecture ("CUDA") (CUDA® is developed by NVIDIA Corporation of Santa Clara, CA) program. In at least one embodiment, a CUDA program is at least a portion of a software application written in a CUDA programming language. In at least one embodiment, computer system 600 is a single processor desktop or server system. In at least one embodiment, computer system 600 may be a multiprocessor system. In at least one embodiment, processor 602 may include, without limitation, a CISC microprocessor, a RISC microprocessor, a VLIW microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In at least one embodiment, processor 602 may be coupled to a processor bus 610 that may transmit data signals between processor 602 and other components in computer system 600.

[0108] In at least one embodiment, processor 602 may include, without limitation, a Level 1 ("L1") internal cache memory ("cache") 604. In at least one embodiment, processor 602 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 602. In at least one embodiment, processor 602 may also include a combination of both internal and external caches. In at least one embodiment, a register file 606 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and instruction pointer register.

[0109] In at least one embodiment, execution unit 608, including, without limitation, logic to perform integer and floating point operations, also resides in processor 602. Processor 602 may also include a microcode ("ucode") read only memory ("ROM") that stores microcode for certain macro instructions. In at least one embodiment, execution unit 608 may include logic to handle a packed instruction set 609. In at least one embodiment, by including packed instruction set 609 in an instruction set of a general-purpose processor 602, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in a general-purpose processor 602. In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across a processor's data bus to perform one or more operations one data element at a time.
[0110] In at least one embodiment, execution unit 608 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 600 may include, without limitation, a memory 620. In at least one embodiment, memory 620 may be implemented as a DRAM device, an SRAM device, flash memory device, or other memory device. Memory 620 may store instruction(s) 619 and/or data 621 represented by data signals that may be executed by processor 602.

[0111] In at least one embodiment, a system logic chip may be coupled to processor bus 610 and memory 620. In at least one embodiment, the system logic chip may include, without limitation, a memory controller hub ("MCH") 616, and processor 602 may communicate with MCH 616 via processor bus 610. In at least one embodiment, MCH 616 may provide a high bandwidth memory path 618 to memory 620 for instruction and data storage and for storage of graphics commands, data and textures. In at least one embodiment, MCH 616 may direct data signals between processor 602, memory 620, and other components in computer system 600 and to bridge data signals between processor bus 610, memory 620, and a system I/O 622. In at least one embodiment, system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH 616 may be coupled to memory 620 through high bandwidth memory path 618 and graphics/video card 612 may be coupled to MCH 616 through an Accelerated Graphics Port ("AGP") interconnect 614.

[0112] In at least one embodiment, computer system 600 may use system I/O 622 that is a proprietary hub interface bus to couple MCH 616 to I/O controller hub ("ICH") 630. In at least one embodiment, ICH 630 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 620, a chipset, and processor 602. Examples may include, without limitation, an audio controller 629, a firmware hub ("flash BIOS") 628, a wireless transceiver 626, a data storage 624, a legacy I/O controller 623 containing a user input interface 625 and a keyboard interface, a serial expansion port 627, such as a USB, and a network controller 634. Data storage 624 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

[0113] In at least one embodiment, FIG. 6 illustrates a system, which includes interconnected hardware devices or "chips." In at least one embodiment, FIG. 6 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in FIG. 6 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of system 600 are interconnected using compute express link ("CXL") interconnects.
[0114] FIG. 7 illustrates a system 700, in accordance with at least one embodiment. In at least one embodiment, system 700 is an electronic device that utilizes a processor 710. In at least one embodiment, system 700 may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, an edge device communicatively coupled to one or more on-premise or cloud service providers, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device. In at least one embodiment, system 700 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0115] In at least one embodiment, system 700 may include, without limitation, processor 710 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 710 is coupled using a bus or interface, such as an I2C bus, a System Management Bus (“SMBus”), a Low Pin Count (“LPC”) bus, a Serial Peripheral Interface (“SPI”), a High Definition Audio (“HD A”) bus, a Serial Advance Technology Attachment (“SATA”) bus, a USB (versions 1, 2, 3), or a Universal Asynchronous Receiver/Transmitter (“UART”) bus. In at least one embodiment, FIG. 7 illustrates a system which includes interconnected hardware devices or “chips.” In at least one embodiment, FIG. 7 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in FIG. 7 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of FIG. 7 are interconnected using CXL interconnects.[0116] In at least one embodiment, FIG 7 may include a display 724, a touch screen 725, a touch pad 730, a Near Field Communications unit (“NFC”) 745, a sensor hub 740, a thermal sensor 746, an Express Chipset (“EC”) 735, a Trusted Platform Module (“TPM”)738, BlOS/firmware/flash memory (“BIOS, FW Flash”) 722, a DSP 760, a Solid State Disk (“SSD”) or Hard Disk Drive (“HDD”) 720, a wireless local area network unit (“WLAN”)750, a Bluetooth unit 752, a Wireless Wide Area Network unit (“WWAN”) 756, a Global Positioning System (“GPS”) 755, a camera (“USB 3.0 camera”) 754 such as a USB 3.0 camera, or a Low Power Double Data Rate (“LPDDR”) memory unit (“LPDDR3”) 715 implemented in, for example, LPDDR3 standard. These components may each be implemented in any suitable manner.[0117] In at least one embodiment, other components may be communicatively coupled to processor 710 through components discussed above. In at least one embodiment, an accelerometer 741, an Ambient Light Sensor (“ALS”) 742, a compass 743, and a gyroscope
744 may be communicatively coupled to sensor hub 740. In at least one embodiment, a thermal sensor 739, a fan 737, a keyboard 736, and a touch pad 730 may be communicatively coupled to EC 735. In at least one embodiment, a speaker 763, a headphones 764, and a microphone (“mic”) 765 may be communicatively coupled to an audio unit (“audio codec and class d amp”) 762, which may in turn be communicatively coupled to DSP 760. In at least one embodiment, audio unit 762 may include, for example and without limitation, an audio coder/decoder (“codec”) and a class D amplifier. In at least one embodiment, a SIM card (“SIM”) 757 may be communicatively coupled to WWAN unit 756. In at least one embodiment, components such as WLAN unit 750 and Bluetooth unit 752, as well as WWAN unit 756 may be implemented in a Next Generation Form Factor (“NGFF”).[0118] FIG. 8 illustrates an exemplary integrated circuit 800, in accordance with at least one embodiment. In at least one embodiment, exemplary integrated circuit 800 is an SoC that may be fabricated using one or more IP cores. In at least one embodiment, integrated circuit 800 includes one or more application processor(s) 805 (e.g., CPUs, DPUs), at least one graphics processor 810, and may additionally include an image processor 815 and/or a video processor 820, any of which may be a modular IP core. In at least one embodiment, integrated circuit 800 includes peripheral or bus logic including a USB controller 825, a UART controller 830, an SPI/SDIO controller 835, and an I2S/I2C controller 840. In at least one embodiment, integrated circuit 800 can include a display device 845 coupled to one or more of a high-definition multimedia interface (“HDMI”) controller 850 and a mobile industry processor interface (“MIPI”) display interface 855. In at least one embodiment, storage may be provided by a flash memory subsystem 860 including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 865 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 870. In at least one embodiment, exemplary integrated circuit 800 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0119] FIG. 9 illustrates a computing system 900, according to at least one embodiment; In at least one embodiment, computing system 900 includes a processing subsystem 901 having one or more processor(s) 902 and a system memory 904 communicating via an interconnection path that may include a memory hub 905. In at least one embodiment, memory hub 905 may be a separate component within a chipset component or may be
integrated within one or more processor(s) 902. In at least one embodiment, memory hub 905 couples with an I/O subsystem 911 via a communication link 906. In at least one embodiment, I/O subsystem 911 includes an I/O hub 907 that can enable computing system 900 to receive input from one or more input device(s) 908. In at least one embodiment, I/O hub 907 can enable a display controller, which may be included in one or more processor(s) 902, to provide outputs to one or more display device(s) 910A. In at least one embodiment, one or more display device(s) 910A coupled with I/O hub 907 can include a local, internal, or embedded display device. In at least one embodiment, computing system 900 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0120] In at least one embodiment, processing subsystem 901 includes one or more parallel processor(s) 912 coupled to memory hub 905 via a bus or other communication link 913. In at least one embodiment, communication link 913 may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCIe, or may be a vendor specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s) 912 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core processor. In at least one embodiment, one or more parallel processor(s) 912 form a graphics processing subsystem that can output pixels to one of one or more display device(s) 910A coupled via I/O Hub 907. In at least one embodiment, one or more parallel processor(s) 912 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 910B.[0121] In at least one embodiment, a system storage unit 914 can connect to I/O hub 907 to provide a storage mechanism for computing system 900. In at least one embodiment, an I/O switch 916 can be used to provide an interface mechanism to enable connections between I/O hub 907 and other components, such as a network adapter 918 and/or wireless network adapter 919 that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s) 920. In at least one embodiment, network adapter 918 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 919 can include one or more of a Wi-Fi, Bluetooth, NFC, or other network device that includes one or more wireless radios.
[0122] In at least one embodiment, computing system 900 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, that may also be connected to I/O hub 907. In at least one embodiment, communication paths interconnecting various components in FIG. 9 may be implemented using any suitable protocols, such as PCI based protocols (e.g., PCIe), or other bus or point-to-point communication interfaces and/or protocol(s), such as NVLink high speed interconnect, or interconnect protocols.[0123] In at least one embodiment, one or more parallel processor(s) 912 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (“GPU”). In at least one embodiment, one or more parallel processor(s) 912 incorporate circuitry optimized for general purpose processing. In at least embodiment, components of computing system 900 may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, one or more parallel processor(s) 912, memory hub 905, processor(s) 902, and I/O hub 907 can be integrated into an SoC integrated circuit. In at least one embodiment, components of computing system 900 can be integrated into a single package to form a system in package (“SIP”) configuration. In at least one embodiment, at least a portion of the components of computing system 900 can be integrated into a multi-chip module (“MCM”), which can be interconnected with other multi-chip modules into a modular computing system. In at least one embodiment, I/O subsystem 911 and display devices 910B are omitted from computing system 900.Processing Systems[0124] The following figures set forth, without limitation, exemplary processing systems that can be used to implement at least one embodiment.[0125] FIG. 10 illustrates an accelerated processing unit (“APU”) 1000, in accordance with at least one embodiment. In at least one embodiment, APU 1000 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, APU 1000 can be configured to execute an application program, such as a CUDA program. In at least one embodiment, APU 1000 includes, without limitation, a core complex 1010, a graphics complex 1040, fabric 1060, I/O interfaces 1070, memory controllers 1080, a display controller 1092, and a multimedia engine 1094. In at least one embodiment, APU 1000 may include, without limitation, any number of core complexes 1010, any number of graphics complexes 1050, any number of display controllers 1092, and any number of multimedia engines 1094 in any
combination. For explanatory purposes, multiple instances of like objects are denoted herein with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. In at least one embodiment, APU 1000 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0126] In at least one embodiment, core complex 1010 is a CPU, graphics complex 1040 is a GPU, and APU 1000 is a processing unit that integrates, without limitation, 1010 and 1040 onto a single chip. In at least one embodiment, some tasks may be assigned to core complex 1010 and other tasks may be assigned to graphics complex 1040. In at least one embodiment, core complex 1010 is configured to execute main control software associated with APU 1000, such as an operating system. In at least one embodiment, core complex 1010 is the master processor of APU 1000, controlling and coordinating operations of other processors. In at least one embodiment, core complex 1010 issues commands that control the operation of graphics complex 1040. In at least one embodiment, core complex 1010 can be configured to execute host executable code derived from CUDA source code, and graphics complex 1040 can be configured to execute device executable code derived from CUDA source code.[0127] In at least one embodiment, core complex 1010 includes, without limitation, cores 1020(1)-1020(4) and an L3 cache 1030. In at least one embodiment, core complex 1010 may include, without limitation, any number of cores 1020 and any number and type of caches in any combination. In at least one embodiment, cores 1020 are configured to execute instructions of a particular instruction set architecture (“ISA”). In at least one embodiment, each core 1020 is a CPU core.[0128] In at least one embodiment, each core 1020 includes, without limitation, a fetch/decode unit 1022, an integer execution engine 1024, a floating point execution engine 1026, and an L2 cache 1028. In at least one embodiment, fetch/decode unit 1022 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 1024 and floating point execution engine 1026. In at least one embodiment, fetch/decode unit 1022 can concurrently dispatch one micro instruction to integer execution engine 1024 and another micro-instruction to floating point execution engine 1026. In at least one embodiment, integer execution engine 1024 executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine 1026 executes, without limitation, floating point and vector operations. In at least one
embodiment, fetch-decode unit 1022 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 1024 and floating point execution engine 1026.[0129] In at least one embodiment, each core 1020(i), where i is an integer representing a particular instance of core 1020, may access L2 cache 1028(i) included in core 1020(i). In at least one embodiment, each core 1020 included in core complex 1010(j), where j is an integer representing a particular instance of core complex 1010, is connected to other cores 1020 included in core complex 1010(j) via L3 cache 1030(j) included in core complex 1010(j). In at least one embodiment, cores 1020 included in core complex 1010(j), where j is an integer representing a particular instance of core complex 1010, can access all of L3 cache 1030(j) included in core complex 10100. In at least one embodiment, L3 cache 1030 may include, without limitation, any number of slices.[0130] In at least one embodiment, graphics complex 1040 can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, graphics complex 1040 is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations associated with rendering an image to a display. In at least one embodiment, graphics complex 1040 is configured to execute operations unrelated to graphics. In at least one embodiment, graphics complex 1040 is configured to execute both operations related to graphics and operations unrelated to graphics.[0131] In at least one embodiment, graphics complex 1040 includes, without limitation, any number of compute units 1050 and an L2 cache 1042. In at least one embodiment, compute units 1050 share L2 cache 1042. In at least one embodiment, L2 cache 1042 is partitioned. In at least one embodiment, graphics complex 1040 includes, without limitation, any number of compute units 1050 and any number (including zero) and type of caches. In at least one embodiment, graphics complex 1040 includes, without limitation, any amount of dedicated graphics hardware.[0132] In at least one embodiment, each compute unit 1050 includes, without limitation, any number of SIMD units 1052 and a shared memory 1054. In at least one embodiment, each SIMD unit 1052 implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each compute unit 1050 may execute any number of thread blocks, but each thread block executes on a single compute unit 1050. In at
least one embodiment, a thread block includes, without limitation, any number of threads of execution. In at least one embodiment, a workgroup is a thread block. In at least one embodiment, each SIMD unit 1052 executes a different warp. In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in the warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory 1054.[0133] In at least one embodiment, fabric 1060 is a system interconnect that facilitates data and control transmissions across core complex 1010, graphics complex 1040, I/O interfaces 1070, memory controllers 1080, display controller 1092, and multimedia engine 1094. In at least one embodiment, APU 1000 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 1060 that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to APU 1000. In at least one embodiment, I/O interfaces 1070 are representative of any number and type of I/O interfaces (e.g., PCI , PCI- Extended (“PCI-X"), PCIe, gigabit Ethernet (“GBE”), USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 1070 In at least one embodiment, peripheral devices that are coupled to I/O interfaces 1070 may include, without limitation, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0134] In at least one embodiment, display controller AMD92 displays images on one or more display device(s), such as a liquid crystal display (“LCD”) device. In at least one embodiment, multimedia engine 1094 includes, without limitation, any amount and type of circuitry that is related to multimedia, such as a video decoder, a video encoder, an image signal processor, etc. In at least one embodiment, memory controllers 1080 facilitate data transfers between APU 1000 and a unified system memory 1090. In at least one embodiment, core complex 1010 and graphics complex 1040 share unified system memory 1090.[0135] In at least one embodiment, APU 1000 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 1080 and memory
devices (e.g., shared memory 1054) that may be dedicated to one component or shared among multiple components. In at least one embodiment, APU 1000 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches 1128, L3 cache 1030, and L2 cache 1042) that may each be private to or shared between any number of components (e.g., cores 1020, core complex 1010, SIMD units 1052, compute units 1050, and graphics complex 1040).[0136] FIG. 11 illustrates a CPU 1100, in accordance with at least one embodiment. In at least one embodiment, CPU 1100 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, CPU 1100 can be configured to execute an application program. In at least one embodiment, CPU 1100 is configured to execute main control software, such as an operating system. In at least one embodiment, CPU 1100 issues commands that control the operation of an external GPU (not shown). In at least one embodiment, CPU 1100 can be configured to execute host executable code derived from CUDA source code, and an external GPU can be configured to execute device executable code derived from such CUDA source code. In at least one embodiment, CPU 1100 includes, without limitation, any number of core complexes 1110, fabric 1160, I/O interfaces 1170, and memory controllers 1180. In at least one embodiment, CPU 1100 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0137] In at least one embodiment, core complex 1110 includes, without limitation, cores 1120(1)-1120(4) and an L3 cache 1130. In at least one embodiment, core complex 1110 may include, without limitation, any number of cores 1120 and any number and type of caches in any combination. In at least one embodiment, cores 1120 are configured to execute instructions of a particular ISA. In at least one embodiment, each core 1120 is a CPU core.[0138] In at least one embodiment, each core 1120 includes, without limitation, a fetch/decode unit 1122, an integer execution engine 1124, a floating point execution engine 1126, and an L2 cache 1128. In at least one embodiment, fetch/decode unit 1122 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 1124 and floating point execution engine 1126. In at least one embodiment, fetch/decode unit 1122 can concurrently dispatch one micro instruction to integer execution engine 1124 and another micro-instruction to floating point execution engine 1126. In at least one embodiment, integer execution engine 1124 executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine 1126 executes, without limitation, floating point and vector operations. In at least one
embodiment, fetch-decode unit 1122 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 1124 and floating point execution engine 1126.[0139] In at least one embodiment, each core 1120(i), where i is an integer representing a particular instance of core 1120, may access L2 cache 1128(i) included in core 1120(i). In at least one embodiment, each core 1120 included in core complex 1110(j), where j is an integer representing a particular instance of core complex 1110, is connected to other cores 1120 in core complex 1110(j) via L3 cache 1130(j) included in core complex 1110(j). In at least one embodiment, cores 1120 included in core complex 1110(j), where j is an integer representing a particular instance of core complex 1110, can access all of L3 cache 11300 included in core complex 1110(j). In at least one embodiment, L3 cache 1130 may include, without limitation, any number of slices.[0140] In at least one embodiment, fabric 1160 is a system interconnect that facilitates data and control transmissions across core complexes 1110(1)-1110(N) (where N is an integer greater than zero), I/O interfaces 1170, and memory controllers 1180. In at least one embodiment, CPU 1100 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 1160 that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to CPU 1100. In at least one embodiment, I/O interfaces 1170 are representative of any number and type of I/O interfaces (e.g., PCI , PCI-X, PCIe, GBE, USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 1170 In at least one embodiment, peripheral devices that are coupled to I/O interfaces 1170 may include, without limitation, displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0141] In at least one embodiment, memory controllers 1180 facilitate data transfers between CPU 1100 and a system memory 1190. In at least one embodiment, core complex 1110 and graphics complex 1140 share system memory 1190. In at least one embodiment, CPU 1100 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 1180 and memory devices that may be dedicated to one component or shared among multiple components. In at least one embodiment, CPU 1100 implements a cache subsystem that includes, without limitation, one or more cache memories
(e.g., L2 caches 1128 and L3 caches 1130) that may each be private to or shared between any number of components (e.g., cores 1120 and core complexes 1110).[0142] FIG. 12 illustrates an exemplary accelerator integration slice 1290, in accordance with at least one embodiment. As used herein, a “slice” comprises a specified portion of processing resources of an accelerator integration circuit. In at least one embodiment, the accelerator integration circuit provides cache management, memory access, context management, and interrupt management services on behalf of multiple graphics processing engines included in a graphics acceleration module. The graphics processing engines may each comprise a separate GPU. Alternatively, the graphics processing engines may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, the graphics acceleration module may be a GPU with multiple graphics processing engines. In at least one embodiment, the graphics processing engines may be individual GPUs integrated on a common package, line card, or chip. In at least one embodiment, accelerator integration slice 1290 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0143] An application effective address space 1282 within system memory 1214 stores process elements 1283. In one embodiment, process elements 1283 are stored in response to GPU invocations 1281 from applications 1280 executed on processor 1207. A process element 1283 contains process state for corresponding application 1280. A work descriptor (“WD”) 1284 contained in process element 1283 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1284 is a pointer to a job request queue in application effective address space 1282.[0144] Graphics acceleration module 1246 and/or individual graphics processing engines can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process state and sending WD 1284 to graphics acceleration module 1246 to start a job in a virtualized environment may be included.[0145] In at least one embodiment, a dedicated-process programming model is implementation-specific. In this model, a single process owns graphics acceleration module 1246 or an individual graphics processing engine. Because graphics acceleration module 1246 is owned by a single process, a hypervisor initializes an accelerator integration circuit
for an owning partition and an operating system initializes accelerator integration circuit for an owning process when graphics acceleration module 1246 is assigned.[0146] In operation, a WD fetch unit 1291 in accelerator integration slice 1290 fetches next WD 1284 which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 1246. Data from WD 1284 may be stored in registers 1245 and used by a memory management unit (“MMU”) 1239, interrupt management circuit 1247 and/or context management circuit 1248 as illustrated. For example, one embodiment of MMU 1239 includes segment/page walk circuitry for accessing segment/page tables 1286 within OS virtual address space 1285. Interrupt management circuit 1247 may process interrupt events (“INT”) 1292 received from graphics acceleration module 1246. When performing graphics operations, an effective address 1293 generated by a graphics processing engine is translated to a real address by MMU 1239.[0147] In one embodiment, a same set of registers 1245 are duplicated for each graphics processing engine and/or graphics acceleration module 1246 and may be initialized by a hypervisor or operating system. Each of these duplicated registers may be included in accelerator integration slice 1290. Exemplary registers that may be initialized by a hypervisor are shown in Table 1.Table 1 -Hypervisor Initialized Registers[0148] Exemplary registers that may be initialized by an operating system are shown inTable 2.Table 2 -Operating System Initialized Registers[0149] In one embodiment, each WD 1284 is specific to a particular graphics acceleration module 1246 and/or a particular graphics processing engine. It contains all information required by a graphics processing engine to do work or it can be a pointer to a memory location where an application has set up a command queue of work to be completed.[0150] FIGS. 13A-13B illustrate exemplary graphics processors, in accordance with at least one embodiment. In at least one embodiment, any of the exemplary graphics processors may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. In at least one embodiment, the exemplary graphics processors are for use within an SoC.[0151] FIG. 13A illustrates an exemplary graphics processor 1310 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. FIG. 13B illustrates an additional exemplary graphics processor 1340 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. In at least one embodiment, graphics processor 1310 of FIG. 13A is a low power graphics processor core. In at least one embodiment, graphics processor 1340 of FIG. 13B is a higher performance graphics processor core. In at least one embodiment, each
of graphics processors 1310, 1340 can be variants of graphics processor 810 of FIG. 8. In at least one embodiment, graphics processor 1310 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0152] In at least one embodiment, graphics processor 1310 includes a vertex processor 1305 and one or more fragment processor(s) 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1, and 1315N). In at least one embodiment, graphics processor 1310 can execute different shader programs via separate logic, such that vertex processor 1305 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1305 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 1315A-1315N use primitive and vertex data generated by vertex processor 1305 to produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s) 1315A-1315N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API.[0153] In at least one embodiment, graphics processor 1310 additionally includes one or more MMU(s) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A- 1330B. In at least one embodiment, one or more MMU(s) 1320A-1320B provide for virtual to physical address mapping for graphics processor 1310, including for vertex processor 1305 and/or fragment processor(s) 1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 1325A-1325B. In at least one embodiment, one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s) 805, image processors 815, and/or video processors 820 of FIG. 8, such that each processor 805-820 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 1330A-1330B enable graphics processor 1310 to interface with other IP cores within an SoC, either via an internal bus of the SoC or via a direct connection.[0154] In at least one embodiment, graphics processor 1340 includes one or more MMU(s) 1320A-1320B, caches 1325A-1325B, and circuit interconnects 1330A-1330B of graphics processor 1310 of FIG. 13 A. In at least one embodiment, graphics processor 1340 includes one or more shader core(s) 1355A-1355N (e.g., 1355A, 1355B, 1355C, 1355D,
1355E, 1355F, through 1355N-1, and 1355N), which provides for a unified shader core architecture in which a single core or type or core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 1340 includes an inter-core task manager 1345, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1355A-1355N and a tiling unit 1358 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.[0155] FIG. 14A illustrates a graphics core 1400, in accordance with at least one embodiment. In at least one embodiment, graphics core 1400 may be included within graphics processor 810 of FIG. 8. In at least one embodiment, graphics core 1400 may be a unified shader core 1355A-1355N as in FIG. 13B. In at least one embodiment, graphics core 1400 includes a shared instruction cache 1402, a texture unit 1418, and a cache/shared memory 1420 that are common to execution resources within graphics core 1400. In at least one embodiment, graphics core 1400 can include multiple slices 1401 A-1401N or partition for each core, and a graphics processor can include multiple instances of graphics core 1400. Slices 1401A-1401N can include support logic including a local instruction cache 1404A- 1404N, a thread scheduler 1406A-1406N, a thread dispatcher 1408A-1408N, and a set of registers 1410A-1410N. In at least one embodiment, slices 1401A-1401N can include a set of additional function units (“AFUs”) 1412A-1412N, floating-point units (“FPUs”) 1414A- 1414N, integer arithmetic logic units (“ALUs”) 1416-1416N, address computational units (“ACUs”) 1413A-1413N, double-precision floating-point units (“DPFPUs”) 1415A-1415N, and matrix processing units (“MPUs”) 1417A-1417N. In at least one embodiment, graphics core 1400 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0156] In at least one embodiment, FPUs 1414A-1414N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 1415A-1415N perform double precision (64-bit) floating point operations. In at least one embodiment,ALUs 1416A-1416N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs 1417A-1417N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one
embodiment, MPUs 1417-1417N can perform a variety of matrix operations to accelerate CUDA programs, including enabling support for accelerated general matrix to matrix multiplication (“GEMM”). In at least one embodiment, AFUs 1412A-1412N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.).[0157] FIG. 14B illustrates a general-purpose graphics processing unit (“GPGPU”) 1430, in accordance with at least one embodiment. In at least one embodiment, GPGPU 1430 is highly-parallel and suitable for deployment on a multi-chip module. In at least one embodiment, GPGPU 1430 can be configured to enable highly-parallel compute operations to be performed by an array of GPUs. In at least one embodiment, GPGPU 1430 can be linked directly to other instances of GPGPU 1430 to create a multi-GPU cluster to improve execution time for CUDA programs. In at least one embodiment, GPGPU 1430 includes a host interface 1432 to enable a connection with a host processor. In at least one embodiment, host interface 1432 is a PCIe interface. In at least one embodiment, host interface 1432 can be a vendor specific communications interface or communications fabric. In at least one embodiment, GPGPU 1430 receives commands from a host processor and uses a global scheduler 1434 to distribute execution threads associated with those commands to a set of compute clusters 1436A-1436H. In at least one embodiment, compute clusters 1436A-1436H share a cache memory 1438. In at least one embodiment, cache memory 1438 can serve as a higher-level cache for cache memories within compute clusters 1436A-1436H.[0158] In at least one embodiment, GPGPU 1430 includes memory 1444A-1444B coupled with compute clusters 1436A-1436H via a set of memory controllers 1442A-1442B. In at least one embodiment, memory 1444A-1444B can include various types of memory devices including DRAM or graphics random access memory, such as synchronous graphics random access memory (“SGRAM”), including graphics double data rate (“GDDR”) memory.[0159] In at least one embodiment, compute clusters 1436A-1436H each include a set of graphics cores, such as graphics core 1400 of FIG. 14A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions including suited for computations associated with CUDA programs. For example, in at least one embodiment, at least a subset of floating point units in each of compute clusters 1436A-1436H can be configured to perform 16-bit or 32-bit floating point
operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations.[0160] In at least one embodiment, multiple instances of GPGPU 1430 can be configured to operate as a compute cluster. Compute clusters 1436A-1436H may implement any technically feasible communication techniques for synchronization and data exchange. In at least one embodiment, multiple instances of GPGPU 1430 communicate over host interface 1432. In at least one embodiment, GPGPU 1430 includes an I/O hub 1439 that couples GPGPU 1430 with a GPU link 1440 that enables a direct connection to other instances of GPGPU 1430. In at least one embodiment, GPU link 1440 is coupled to a dedicated GPU-to- GPU bridge that enables communication and synchronization between multiple instances of GPGPU 1430. In at least one embodiment GPU link 1440 couples with a high speed interconnect to transmit and receive data to other GPGPUs 1430 or parallel processors. In at least one embodiment, multiple instances of GPGPU 1430 are located in separate data processing systems and communicate via a network device that is accessible via host interface 1432. In at least one embodiment GPU link 1440 can be configured to enable a connection to a host processor in addition to or as an alternative to host interface 1432. In at least one embodiment, GPGPU 1430 can be configured to execute a CUDA program.[0161] FIG. 15A illustrates a parallel processor 1500, in accordance with at least one embodiment. In at least one embodiment, various components of parallel processor 1500 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (“ASICs”), or FPGAs. In at least one embodiment, parallel processor 1500 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0162] In at least one embodiment, parallel processor 1500 includes a parallel processing unit 1502. In at least one embodiment, parallel processing unit 1502 includes an I/O unit 1504 that enables communication with other devices, including other instances of parallel processing unit 1502. In at least one embodiment, I/O unit 1504 may be directly connected to other devices. In at least one embodiment, I/O unit 1504 connects with other devices via use of a hub or switch interface, such as memory hub 1505. In at least one embodiment, connections between memory hub 1505 and I/O unit 1504 form a communication link. In at least one embodiment, I/O unit 1504 connects with a host interface 1506 and a memory crossbar 1516, where host interface 1506 receives commands directed to performing
processing operations and memory crossbar 1516 receives commands directed to performing memory operations.[0163] In at least one embodiment, when host interface 1506 receives a command buffer via I/O unit 1504, host interface 1506 can direct work operations to perform those commands to a front end 1508. In at least one embodiment, front end 1508 couples with a scheduler 1510, which is configured to distribute commands or other work items to a processing array 1512. In at least one embodiment, scheduler 1510 ensures that processing array 1512 is properly configured and in a valid state before tasks are distributed to processing array 1512. In at least one embodiment, scheduler 1510 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller implemented scheduler 1510 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 1512. In at least one embodiment, host software can prove workloads for scheduling on processing array 1512 via one of multiple graphics processing doorbells. In at least one embodiment, workloads can then be automatically distributed across processing array 1512 by scheduler 1510 logic within a microcontroller including scheduler 1510.[0164] In at least one embodiment, processing array 1512 can include up to “N” clusters (e.g., cluster 1514A, cluster 1514B, through cluster 1514N). In at least one embodiment, each cluster 1514A-1514N of processing array 1512 can execute a large number of concurrent threads. In at least one embodiment, scheduler 1510 can allocate work to clusters 1514A- 1514N of processing array 1512 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler 1510, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing array 1512. In at least one embodiment, different clusters 1514A-1514N of processing array 1512 can be allocated for processing different types of programs or for performing different types of computations.[0165] In at least one embodiment, processing array 1512 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing array 1512 is configured to perform general-purpose parallel compute operations. For example, in at least one embodiment, processing array 1512 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.
[0166] In at least one embodiment, processing array 1512 is configured to perform parallel graphics processing operations. In at least one embodiment, processing array 1512 can include additional logic to support execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing array 1512 can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit 1502 can transfer data from system memory via I/O unit 1504 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., a parallel processor memory 1522) during processing, then written back to system memory.[0167] In at least one embodiment, when parallel processing unit 1502 is used to perform graphics processing, scheduler 1510 can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters 1514A-1514N of processing array 1512. In at least one embodiment, portions of processing array 1512 can be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 1514A-1514N may be stored in buffers to allow intermediate data to be transmitted between clusters 1514A- 1514N for further processing.[0168] In at least one embodiment, processing array 1512 can receive processing tasks to be executed via scheduler 1510, which receives commands defining processing tasks from front end 1508. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler 1510 may be configured to fetch indices corresponding to tasks or may receive indices from front end 1508. In at least one embodiment, front end 1508 can be configured to ensure processing array 1512 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch- buffers, push buffers, etc.) is initiated.
[0169] In at least one embodiment, each of one or more instances of parallel processing unit 1502 can couple with parallel processor memory 1522. In at least one embodiment, parallel processor memory 1522 can be accessed via memory crossbar 1516, which can receive memory requests from processing array 1512 as well as I/O unit 1504. In at least one embodiment, memory crossbar 1516 can access parallel processor memory 1522 via a memory interface 1518. In at least one embodiment, memory interface 1518 can include multiple partition units (e.g., a partition unit 1520A, partition unit 1520B, through partition unit 1520N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 1522. In at least one embodiment, a number of partition units 1520A-1520N is configured to be equal to a number of memory units, such that a first partition unit 1520A has a corresponding first memory unit 1524A, a second partition unit 1520B has a corresponding memory unit 1524B, and an Nth partition unit 1520N has a corresponding Nth memory unit 1524N. In at least one embodiment, a number of partition units 1520A-1520N may not be equal to a number of memory devices.[0170] In at least one embodiment, memory units 1524A-1524N can include various types of memory devices, including DRAM or graphics random access memory, such as SGRAM, including GDDR memory. In at least one embodiment, memory units 1524A- 1524N may also include 3D stacked memory, including but not limited to high bandwidth memory (“HBM”). In at least one embodiment, render targets, such as frame buffers or texture maps may be stored across memory units 1524A-1524N, allowing partition units 1520A-1520N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory 1522. In at least one embodiment, a local instance of parallel processor memory 1522 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.[0171] In at least one embodiment, any one of clusters 1514A-1514N of processing array 1512 can process data that will be written to any of memory units 1524A-1524N within parallel processor memory 1522. In at least one embodiment, memory crossbar 1516 can be configured to transfer an output of each cluster 1514A-1514N to any partition unit 1520A- 1520N or to another cluster 1514A-1514N, which can perform additional processing operations on an output. In at least one embodiment, each cluster 1514A-1514N can communicate with memory interface 1518 through memory crossbar 1516 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 1516 has a connection to memory interface 1518 to communicate with I/O unit 1504, as well as a
connection to a local instance of parallel processor memory 1522, enabling processing units within different clusters 1514A-1514N to communicate with system memory or other memory that is not local to parallel processing unit 1502. In at least one embodiment, memory crossbar 1516 can use virtual channels to separate traffic streams between clusters 1514A-1514N and partition units 1520A-1520N.[0172] In at least one embodiment, multiple instances of parallel processing unit 1502 can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit 1502 can be configured to inter-operate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit 1502 can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit 1502 or parallel processor 1500 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.[0173] FIG. 15B illustrates a processing cluster 1594, in accordance with at least one embodiment. In at least one embodiment, processing cluster 1594 is included within a parallel processing unit. In at least one embodiment, processing cluster 1594 is one of processing clusters 1514A-1514N of FIG. 15. In at least one embodiment, processing cluster 1594 can be configured to execute many threads in parallel, where the term “thread” refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single instruction, multiple data (“SIMD”) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single instruction, multiple thread (“SIMT”) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster 1594. In at least one embodiment, processing cluster 1594 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0174] In at least one embodiment, operation of processing cluster 1594 can be controlled via a pipeline manager 1532 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 1532 receives instructions from scheduler 1510 of
FIG. 15 and manages execution of those instructions via a graphics multiprocessor 1534 and/or a texture unit 1536. In at least one embodiment, graphics multiprocessor 1534 is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster 1594. In at least one embodiment, one or more instances of graphics multiprocessor 1534 can be included within processing cluster 1594. In at least one embodiment, graphics multiprocessor 1534 can process data and a data crossbar 1540 can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 1532 can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar 1540.[0175] In at least one embodiment, each graphics multiprocessor 1534 within processing cluster 1594 can include an identical set of functional execution logic (e.g., arithmetic logic units, load/store units (“LSUs”), etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.[0176] In at least one embodiment, instructions transmitted to processing cluster 1594 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within graphics multiprocessor 1534. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor 1534. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed.In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor 1534. In at least one embodiment, when a thread group includes more threads than the number of processing engines within graphics multiprocessor 1534, processing can be performed over consecutive clock cycles. In at least
one embodiment, multiple thread groups can be executed concurrently on graphics multiprocessor 1534.[0177] In at least one embodiment, graphics multiprocessor 1534 includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor 1534 can forego an internal cache and use a cache memory (e.g., LI cache 1548) within processing cluster 1594. In at least one embodiment, each graphics multiprocessor 1534 also has access to Level 2 (“L2”) caches within partition units (e.g., partition units 1520A-1520N of FIG. 15A) that are shared among all processing clusters 1594 and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor 1534 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 1502 may be used as global memory. In at least one embodiment, processing cluster 1594 includes multiple instances of graphics multiprocessor 1534 that can share common instructions and data, which may be stored in LI cache 1548.[0178] In at least one embodiment, each processing cluster 1594 may include an MMU 1545 that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 1545 may reside within memory interface 1518 of FIG. 15. In at least one embodiment, MMU 1545 includes a set of page table entries (“PTEs”) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU 1545 may include address translation lookaside buffers (“TLBs”) or caches that may reside within graphics multiprocessor 1534 or LI cache 1548 or processing cluster 1594. In at least one embodiment, a physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss.[0179] In at least one embodiment, processing cluster 1594 may be configured such that each graphics multiprocessor 1534 is coupled to a texture unit 1536 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture LI cache (not shown) or from an LI cache within graphics multiprocessor 1534 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor 1534 outputs a processed task to data
[0179] In at least one embodiment, processing cluster 1594 may be configured such that each graphics multiprocessor 1534 is coupled to a texture unit 1536 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 1534 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor 1534 outputs a processed task to data crossbar 1540 to provide the processed task to another processing cluster 1594 for further processing or to store the processed task in an L2 cache, a local parallel processor memory, or a system memory via memory crossbar 1516. In at least one embodiment, a pre-raster operations unit (“preROP”) 1542 is configured to receive data from graphics multiprocessor 1534, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 1520A-1520N of FIG. 15). In at least one embodiment, preROP 1542 can perform optimizations for color blending, organize pixel color data, and perform address translations.[0180] FIG. 15C illustrates a graphics multiprocessor 1596, in accordance with at least one embodiment. In at least one embodiment, graphics multiprocessor 1596 is graphics multiprocessor 1534 of FIG. 15B. In at least one embodiment, graphics multiprocessor 1596 couples with pipeline manager 1532 of processing cluster 1594. In at least one embodiment, graphics multiprocessor 1596 has an execution pipeline including but not limited to an instruction cache 1552, an instruction unit 1554, an address mapping unit 1556, a register file 1558, one or more GPGPU cores 1562, and one or more LSUs 1566. GPGPU cores 1562 and LSUs 1566 are coupled with cache memory 1572 and shared memory 1570 via a memory and cache interconnect 1568. In at least one embodiment, graphics multiprocessor 1596 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0181] In at least one embodiment, instruction cache 1552 receives a stream of instructions to execute from pipeline manager 1532. In at least one embodiment, instructions are cached in instruction cache 1552 and dispatched for execution by instruction unit 1554. In at least one embodiment, instruction unit 1554 can dispatch instructions as thread groups (e.g., warps), with each thread of a thread group assigned to a different execution unit within GPGPU core 1562. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit 1556 can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by LSUs 1566.[0182] In at least one embodiment, register file 1558 provides a set of registers for functional units of graphics multiprocessor 1596. In at least one embodiment, register file 1558 provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores 1562, LSUs 1566) of graphics multiprocessor 1596. In at least one embodiment, register file 1558 is divided between each of functional units such that each
functional unit is allocated a dedicated portion of register file 1558. In at least one embodiment, register file 1558 is divided between different thread groups being executed by graphics multiprocessor 1596.[0183] In at least one embodiment, GPGPU cores 1562 can each include FPUs and/or integer ALUs that are used to execute instructions of graphics multiprocessor 1596. GPGPU cores 1562 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 1562 include a single precision FPU and an integer ALU while a second portion of GPGPU cores 1562 include a double precision FPU. In at least one embodiment, FPUs can implement IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 1596 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores 1562 can also include fixed or special function logic. In at least one embodiment, GPGPU cores 1562 are to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0184] In at least one embodiment, GPGPU cores 1562 include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores 1562 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores 1562 can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (“SPMD”) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction. For example, in at least one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.
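The SIMT-on-SIMD execution just described can be illustrated with a standard CUDA warp-level reduction, in which 32 SIMT threads execute the same instruction sequence on different data; the kernel itself is a hypothetical example and assumes a single 32-thread warp per block.

```cuda
// Illustrative warp-level reduction: 32 SIMT threads execute the same
// instruction sequence on different data, matching the execution model
// described above. The shuffle intrinsic is standard CUDA; the kernel
// is an example only. Launch as: warpSum<<<1, 32>>>(d_in, d_out);
__global__ void warpSum(const float *in, float *out) {
    float v = in[threadIdx.x];
    // Each step halves the number of contributing lanes; all 32 lanes
    // execute the same SIMD-style instruction in lockstep.
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    if (threadIdx.x == 0) *out = v;  // lane 0 holds the warp-wide sum
}
```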
[0185] In at least one embodiment, memory and cache interconnect 1568 is an interconnect network that connects each functional unit of graphics multiprocessor 1596 to register file 1558 and to shared memory 1570. In at least one embodiment, memory and cache interconnect 1568 is a crossbar interconnect that allows LSU 1566 to implement load and store operations between shared memory 1570 and register file 1558. In at least one embodiment, register file 1558 can operate at a same frequency as GPGPU cores 1562, thus data transfer between GPGPU cores 1562 and register file 1558 is very low latency. In at least one embodiment, shared memory 1570 can be used to enable communication between threads that execute on functional units within graphics multiprocessor 1596. In at least one embodiment, cache memory 1572 can be used as a data cache, for example, to cache texture data communicated between functional units and texture unit 1536. In at least one embodiment, shared memory 1570 can also be used as a program-managed cache. In at least one embodiment, threads executing on GPGPU cores 1562 can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory 1572.[0186] In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In at least one embodiment, a GPU may be integrated on the same package or chip as cores and communicatively coupled to cores over a processor bus/interconnect that is internal to a package or a chip. In at least one embodiment, regardless of the manner in which a GPU is connected, processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a WD. In at least one embodiment, the GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.[0187] FIG. 16 illustrates a graphics processor 1600, in accordance with at least one embodiment. In at least one embodiment, graphics processor 1600 includes a ring interconnect 1602, a pipeline front-end 1604, a media engine 1637, and graphics cores 1680A-1680N. In at least one embodiment, ring interconnect 1602 couples graphics processor 1600 to other processing units, including other graphics processors or one or more general-purpose processor cores. In at least one embodiment, graphics processor 1600 is one of many processors integrated within a multi-core processing system. In at least one embodiment, graphics processor 1600 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0188] In at least one embodiment, graphics processor 1600 receives batches of commands via ring interconnect 1602. In at least one embodiment, incoming commands are interpreted by a command streamer 1603 in pipeline front-end 1604. In at least one embodiment, graphics processor 1600 includes scalable execution logic to perform 3D geometry processing and media processing via graphics core(s) 1680A-1680N. In at least one
embodiment, for 3D geometry processing commands, command streamer 1603 supplies commands to geometry pipeline 1636. In at least one embodiment, for at least some media processing commands, command streamer 1603 supplies commands to a video front end 1634, which couples with a media engine 1637. In at least one embodiment, media engine 1637 includes a Video Quality Engine (“VQE”) 1630 for video and image post-processing and a multi-format encode/decode (“MFX”) engine 1633 to provide hardware-accelerated media data encode and decode. In at least one embodiment, geometry pipeline 1636 and media engine 1637 each generate execution threads for thread execution resources provided by at least one graphics core 1680A.[0189] In at least one embodiment, graphics processor 1600 includes scalable thread execution resources featuring modular graphics cores 1680A-1680N (sometimes referred to as core slices), each having multiple sub-cores 1650A-1650N, 1660A-1660N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor 1600 can have any number of graphics cores 1680A through 1680N. In at least one embodiment, graphics processor 1600 includes a graphics core 1680A having at least a first sub-core 1650A and a second sub-core 1660A. In at least one embodiment, graphics processor 1600 is a low power processor with a single sub-core (e.g., sub-core 1650A). In at least one embodiment, graphics processor 1600 includes multiple graphics cores 1680A-1680N, each including a set of first sub-cores 1650A-1650N and a set of second sub-cores 1660A-1660N. In at least one embodiment, each sub-core in first sub-cores 1650A-1650N includes at least a first set of execution units (“EUs”) 1652A-1652N and media/texture samplers 1654A-1654N. In at least one embodiment, each sub-core in second sub-cores 1660A-1660N includes at least a second set of execution units 1662A-1662N and samplers 1664A-1664N. In at least one embodiment, each sub-core 1650A-1650N, 1660A-1660N shares a set of shared resources 1670A-1670N. In at least one embodiment, shared resources 1670 include shared cache memory and pixel operation logic.[0190] FIG. 17 illustrates a processor 1700, in accordance with at least one embodiment. In at least one embodiment, processor 1700 may include, without limitation, logic circuits to perform instructions. In at least one embodiment, processor 1700 may perform instructions, including x86 instructions, ARM instructions, specialized instructions for ASICs, etc. In at least one embodiment, processor 1700 may include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. In at least one embodiment, MMX registers, available in
both integer and floating point forms, may operate with packed data elements that accompany SIMD and streaming SIMD extensions (“SSE”) instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as “SSEx”) technology may hold such packed data operands. In at least one embodiment, processor 1700 may perform instructions to accelerate CUDA programs. In at least one embodiment, processor 1700 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0191] In at least one embodiment, processor 1700 includes an in-order front end (“front end”) 1701 to fetch instructions to be executed and prepare instructions to be used later in a processor pipeline. In at least one embodiment, front end 1701 may include several units. In at least one embodiment, an instruction prefetcher 1726 fetches instructions from memory and feeds instructions to an instruction decoder 1728 which in turn decodes or interprets instructions. For example, in at least one embodiment, instruction decoder 1728 decodes a received instruction into one or more operations called “micro-instructions” or “micro operations” (also called “micro ops” or “uops”) for execution. In at least one embodiment, instruction decoder 1728 parses an instruction into an opcode and corresponding data and control fields that may be used by micro-architecture to perform operations. In at least one embodiment, a trace cache 1730 may assemble decoded uops into program ordered sequences or traces in a uop queue 1734 for execution. In at least one embodiment, when trace cache 1730 encounters a complex instruction, a microcode ROM 1732 provides uops needed to complete an operation.[0192] In at least one embodiment, some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete full operation. In at least one embodiment, if more than four micro-ops are needed to complete an instruction, instruction decoder 1728 may access microcode ROM 1732 to perform instruction. In at least one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder 1728. In at least one embodiment, an instruction may be stored within microcode ROM 1732 should a number of micro-ops be needed to accomplish operation. In at least one embodiment, trace cache 1730 refers to an entry point programmable logic array (“PLA”) to determine a correct micro-instruction pointer for reading microcode sequences to complete one or more instructions from microcode ROM 1732. In at least one embodiment, after microcode ROM 1732 finishes sequencing micro-ops for an instruction, front end 1701 of machine may resume fetching micro-ops from trace cache 1730.
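A rough software analogy for this decode flow is sketched below; the opcodes, micro-op names, and table layout are illustrative assumptions rather than the actual behavior of instruction decoder 1728 or microcode ROM 1732.

```cuda
#include <vector>

// Hypothetical sketch of the decode flow described above: simple
// instructions decode directly into a few micro-ops, while complex ones
// are expanded from a microcode-ROM-like table of canned sequences.
enum class Uop { Load, Store, Add, Mul, Branch };

struct MicrocodeROM {
    std::vector<std::vector<Uop>> sequences;  // indexed by entry point
};

std::vector<Uop> decode(unsigned opcode, const MicrocodeROM &rom) {
    switch (opcode) {
        case 0x01: return {Uop::Add};             // one micro-op
        case 0x02: return {Uop::Load, Uop::Add};  // two micro-ops
        default:
            // Complex instruction: more than a few micro-ops are needed,
            // so read a full sequence from the microcode ROM.
            if (rom.sequences.empty()) return {};
            return rom.sequences[opcode % rom.sequences.size()];
    }
}
```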
[0193] In at least one embodiment, out-of-order execution engine (“out of order engine”) 1703 may prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down a pipeline and get scheduled for execution. Out-of-order execution engine 1703 includes, without limitation, an allocator/register renamer 1740, a memory uop queue 1742, an integer/floating point uop queue 1744, a memory scheduler 1746, a fast scheduler 1702, a slow/general floating point scheduler (“slow/general FP scheduler”) 1704, and a simple floating point scheduler (“simple FP scheduler”) 1706. In at least one embodiment, fast scheduler 1702, slow/general floating point scheduler 1704, and simple floating point scheduler 1706 are also collectively referred to herein as “uop schedulers 1702, 1704, 1706.” Allocator/register renamer 1740 allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer 1740 renames logic registers onto entries in a register file. In at least one embodiment, allocator/register renamer 1740 also allocates an entry for each uop in one of two uop queues, memory uop queue 1742 for memory operations and integer/floating point uop queue 1744 for non-memory operations, in front of memory scheduler 1746 and uop schedulers 1702, 1704, 1706. In at least one embodiment, uop schedulers 1702, 1704, 1706 determine when a uop is ready to execute based on readiness of their dependent input register operand sources and availability of execution resources uops need to complete their operation. In at least one embodiment, fast scheduler 1702 of at least one embodiment may schedule on each half of a main clock cycle while slow/general floating point scheduler 1704 and simple floating point scheduler 1706 may schedule once per main processor clock cycle. In at least one embodiment, uop schedulers 1702, 1704, 1706 arbitrate for dispatch ports to schedule uops for execution.
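The readiness test applied by uop schedulers 1702, 1704, 1706 can be sketched as follows; the structures and the oldest-first arbitration policy are simplifying assumptions for illustration, not the actual scheduler design.

```cuda
#include <vector>

// Hypothetical sketch of the readiness test described above: a micro-op
// dispatches only when its source operands are ready and an execution
// resource (dispatch port) is free.
struct UopEntry {
    int src0, src1;  // physical register sources
    int port;        // execution port this uop needs
};

bool ready(const UopEntry &u,
           const std::vector<bool> &regReady,
           const std::vector<bool> &portFree) {
    return regReady[u.src0] && regReady[u.src1] && portFree[u.port];
}

// Pick the first ready uop in the queue (oldest-first arbitration).
int selectUop(const std::vector<UopEntry> &queue,
              const std::vector<bool> &regReady,
              const std::vector<bool> &portFree) {
    for (size_t i = 0; i < queue.size(); ++i)
        if (ready(queue[i], regReady, portFree)) return static_cast<int>(i);
    return -1;  // nothing can dispatch this cycle
}
```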
[0194] In at least one embodiment, execution block 1711 includes, without limitation, an integer register file/bypass network 1708, a floating point register file/bypass network (“FP register file/bypass network”) 1710, address generation units (“AGUs”) 1712 and 1714, fast ALUs 1716 and 1718, a slow ALU 1720, a floating point ALU (“FP”) 1722, and a floating point move unit (“FP move”) 1724. In at least one embodiment, integer register file/bypass network 1708 and floating point register file/bypass network 1710 are also referred to herein as “register files 1708, 1710.” In at least one embodiment, AGUs 1712 and 1714, fast ALUs 1716 and 1718, slow ALU 1720, floating point ALU 1722, and floating point move unit 1724 are also referred to herein as “execution units 1712, 1714, 1716, 1718, 1720, 1722, and 1724.” In at least one embodiment, an execution block may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination.[0195] In at least one embodiment, register files 1708, 1710 may be arranged between uop schedulers 1702, 1704, 1706, and execution units 1712, 1714, 1716, 1718, 1720, 1722, and 1724. In at least one embodiment, integer register file/bypass network 1708 performs integer operations. In at least one embodiment, floating point register file/bypass network 1710 performs floating point operations. In at least one embodiment, each of register files 1708, 1710 may include, without limitation, a bypass network that may bypass or forward just completed results that have not yet been written into a register file to new dependent uops. In at least one embodiment, register files 1708, 1710 may communicate data with each other. In at least one embodiment, integer register file/bypass network 1708 may include, without limitation, two separate register files, one register file for low-order thirty-two bits of data and a second register file for high-order thirty-two bits of data. In at least one embodiment, floating point register file/bypass network 1710 may include, without limitation, 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.[0196] In at least one embodiment, execution units 1712, 1714, 1716, 1718, 1720, 1722, 1724 may execute instructions. In at least one embodiment, register files 1708, 1710 store integer and floating point data operand values that micro-instructions need to execute. In at least one embodiment, processor 1700 may include, without limitation, any number and combination of execution units 1712, 1714, 1716, 1718, 1720, 1722, 1724. In at least one embodiment, floating point ALU 1722 and floating point move unit 1724 may execute floating point, MMX, SIMD, AVX and SSE, or other operations. In at least one embodiment, floating point ALU 1722 may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. In at least one embodiment, instructions involving a floating point value may be handled with floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs 1716, 1718. In at least one embodiment, fast ALUs 1716, 1718 may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU 1720 as slow ALU 1720 may include, without limitation, integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be
executed by AGUs 1712, 1714. In at least one embodiment, fast ALU 1716, fast ALU 1718, and slow ALU 1720 may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU 1716, fast ALU 1718, and slow ALU 1720 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. In at least one embodiment, floating point ALU 1722 and floating point move unit 1724 may be implemented to support a range of operands having bits of various widths. In at least one embodiment, floating point ALU 1722 and floating point move unit 1724 may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.[0197] In at least one embodiment, uop schedulers 1702, 1704, 1706 dispatch dependent operations before a parent load has finished executing. In at least one embodiment, as uops may be speculatively scheduled and executed in processor 1700, processor 1700 may also include logic to handle memory misses. In at least one embodiment, if a data load misses in a data cache, there may be dependent operations in flight in a pipeline that have left a scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations might need to be replayed and independent ones may be allowed to complete. In at least one embodiment, schedulers and replay mechanisms of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations.[0198] In at least one embodiment, the term “registers” may refer to on-board processor storage locations that may be used as part of instructions to identify operands. In at least one embodiment, registers may be those that may be usable from outside of a processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit. Rather, in at least one embodiment, a register may store data, provide data, and perform functions described herein. In at least one embodiment, registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In at least one embodiment, integer registers store 32-bit integer data. A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data.[0199] FIG. 18 illustrates a processor 1800, in accordance with at least one embodiment. In at least one embodiment, processor 1800 includes, without limitation, one or more
processor cores (“cores”) 1802A-1802N, an integrated memory controller 1814, and an integrated graphics processor 1808. In at least one embodiment, processor 1800 can include additional cores up to and including additional processor core 1802N represented by dashed lined boxes. In at least one embodiment, each of processor cores 1802A-1802N includes one or more internal cache units 1804A-1804N. In at least one embodiment, each processor core also has access to one or more shared cache units 1806. In at least one embodiment, processor 1800 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0200] In at least one embodiment, internal cache units 1804A-1804N and shared cache units 1806 represent a cache memory hierarchy within processor 1800. In at least one embodiment, cache memory units 1804A-1804N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as an L2, L3, Level 4 (“L4”), or other levels of cache, where a highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherency logic maintains coherency between various cache units 1806 and 1804A-1804N.[0201] In at least one embodiment, processor 1800 may also include a set of one or more bus controller units 1816 and a system agent core 1810. In at least one embodiment, one or more bus controller units 1816 manage a set of peripheral buses, such as one or more PCI or PCI express buses. In at least one embodiment, system agent core 1810 provides management functionality for various processor components. In at least one embodiment, system agent core 1810 includes one or more integrated memory controllers 1814 to manage access to various external memory devices (not shown).[0202] In at least one embodiment, one or more of processor cores 1802A-1802N include support for simultaneous multi-threading. In at least one embodiment, system agent core 1810 includes components for coordinating and operating processor cores 1802A-1802N during multi-threaded processing. In at least one embodiment, system agent core 1810 may additionally include a power control unit (“PCU”), which includes logic and components to regulate one or more power states of processor cores 1802A-1802N and graphics processor 1808.[0203] In at least one embodiment, processor 1800 additionally includes graphics processor 1808 to execute graphics processing operations. In at least one embodiment, graphics processor 1808 couples with shared cache units 1806, and system agent core 1810,
including one or more integrated memory controllers 1814. In at least one embodiment, system agent core 1810 also includes a display controller 1811 to drive graphics processor output to one or more coupled displays. In at least one embodiment, display controller 1811 may also be a separate module coupled with graphics processor 1808 via at least one interconnect, or may be integrated within graphics processor 1808.[0204] In at least one embodiment, a ring-based interconnect unit 1812 is used to couple internal components of processor 1800. In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor 1808 couples with ring interconnect 1812 via an I/O link 1813.[0205] In at least one embodiment, I/O link 1813 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1818, such as an eDRAM module. In at least one embodiment, each of processor cores 1802A-1802N and graphics processor 1808 use embedded memory modules 1818 as a shared LLC.[0206] In at least one embodiment, processor cores 1802A-1802N are homogeneous cores executing a common instruction set architecture. In at least one embodiment, processor cores 1802A-1802N are heterogeneous in terms of ISA, where one or more of processor cores 1802A-1802N execute a common instruction set, while one or more other cores of processor cores 1802A-1802N execute a subset of a common instruction set or a different instruction set. In at least one embodiment, processor cores 1802A-1802N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more cores having a lower power consumption. In at least one embodiment, processor 1800 can be implemented on one or more chips or as an SoC integrated circuit.[0207] FIG. 19 illustrates a graphics processor core 1900, in accordance with at least one embodiment described. In at least one embodiment, graphics processor core 1900 is included within a graphics core array. In at least one embodiment, graphics processor core 1900, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 1900 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple
graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core 1900 can include a fixed function block 1930 coupled with multiple sub-cores 1901A-1901F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. In at least one embodiment, graphics processor core 1900 is to comprise and/or perform, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0208] In at least one embodiment, fixed function block 1930 includes a geometry/fixed function pipeline 1936 that can be shared by all sub-cores in graphics processor 1900, for example, in lower performance and/or lower power graphics processor implementations. In at least one embodiment, geometry/fixed function pipeline 1936 includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers.[0209] In at least one embodiment, fixed function block 1930 also includes a graphics SoC interface 1937, a graphics microcontroller 1938, and a media pipeline 1939. Graphics SoC interface 1937 provides an interface between graphics core 1900 and other processor cores within an SoC integrated circuit. In at least one embodiment, graphics microcontroller 1938 is a programmable sub-processor that is configurable to manage various functions of graphics processor 1900, including thread dispatch, scheduling, and pre-emption. In at least one embodiment, media pipeline 1939 includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. In at least one embodiment, media pipeline 1939 implements media operations via requests to compute or sampling logic within sub-cores 1901A-1901F.[0210] In at least one embodiment, SoC interface 1937 enables graphics core 1900 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared LLC memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface 1937 can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core 1900 and CPUs within an SoC. In at least one embodiment, SoC interface 1937 can also implement power management controls for graphics core 1900 and enable an interface between a clock domain of graphics core 1900 and other clock domains within an SoC. In at least one embodiment, SoC interface 1937 enables receipt of command buffers from a command streamer and global thread
dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline 1939, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 1936, geometry and fixed function pipeline 1914) when graphics processing operations are to be performed.[0211] In at least one embodiment, graphics microcontroller 1938 can be configured to perform various scheduling and management tasks for graphics core 1900. In at least one embodiment, graphics microcontroller 1938 can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays 1902A-1902F, 1904A-1904F within sub-cores 1901A-1901F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core 1900 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on an appropriate graphics engine. In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller 1938 can also facilitate low-power or idle states for graphics core 1900, providing graphics core 1900 with an ability to save and restore registers within graphics core 1900 across low-power state transitions independently from an operating system and/or graphics driver software on a system.[0212] In at least one embodiment, graphics core 1900 may have greater than or fewer than illustrated sub-cores 1901A-1901F, up to N modular sub-cores. For each set of N sub-cores, in at least one embodiment, graphics core 1900 can also include shared function logic 1910, shared and/or cache memory 1912, a geometry/fixed function pipeline 1914, as well as additional fixed function logic 1916 to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic 1910 can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of N sub-cores within graphics core 1900. Shared and/or cache memory 1912 can be an LLC for N sub-cores 1901A-1901F within graphics core 1900 and can also serve as shared memory that is accessible by multiple sub-cores. In at least one embodiment, geometry/fixed function pipeline 1914 can be included instead of geometry/fixed function pipeline 1936 within fixed function block 1930 and can include same or similar logic units.
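The doorbell-based submission described in paragraph [0211] might be sketched as follows on the host side; every structure, register, and size here is a speculative placeholder for illustration, not an actual driver interface.

```cuda
#include <atomic>
#include <cstdint>

// Speculative sketch of doorbell-based workload submission: host software
// publishes a workload descriptor into a ring buffer, then writes a
// memory-mapped doorbell register to invoke the microcontroller's
// scheduler. All names and the 256-entry ring size are hypothetical.
struct WorkloadDescriptor {
    uint64_t commands;  // GPU address of the command buffer
    uint32_t engine;    // which graphics parallel engine to target
    uint32_t priority;
};

void submit(volatile uint32_t *doorbell,  // mapped doorbell register
            WorkloadDescriptor *ring, uint32_t &tail,
            const WorkloadDescriptor &wd) {
    ring[tail] = wd;  // publish the descriptor
    std::atomic_thread_fence(std::memory_order_release);
    tail = (tail + 1) & 0xff;  // advance within a 256-entry ring
    *doorbell = tail;          // ring the doorbell with the new tail
}
```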
[0213] In at least one embodiment, graphics core 1900 includes additional fixed function logic 1916 that can include various fixed function acceleration logic for use by graphics core 1900. In at least one embodiment, additional fixed function logic 1916 includes an additional geometry pipeline for use in position-only shading. In at least one embodiment, in position-only shading, at least two geometry pipelines exist: a full geometry pipeline within geometry/fixed function pipeline 1916, 1936, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic 1916. In at least one embodiment, a cull pipeline is a trimmed down version of a full geometry pipeline. In at least one embodiment, a full pipeline and a cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in at least one embodiment, cull pipeline logic within additional fixed function logic 1916 can execute position shaders in parallel with a main application and generally generates critical results faster than a full pipeline, as a cull pipeline fetches and shades position attributes of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, a cull pipeline can use generated critical results to compute visibility information for all triangles without regard to whether those triangles are culled. In at least one embodiment, a full pipeline (which in this instance may be referred to as a replay pipeline) can consume visibility information to skip culled triangles to shade only visible triangles that are finally passed to a rasterization phase.[0214] In at least one embodiment, additional fixed function logic 1916 can also include general purpose processing acceleration logic, such as fixed function matrix multiplication logic, for accelerating CUDA programs.[0215] In at least one embodiment, each graphics sub-core 1901A-1901F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. In at least one embodiment, graphics sub-cores 1901A-1901F include multiple EU arrays 1902A-1902F, 1904A-1904F, thread dispatch and inter-thread communication (“TD/IC”) logic 1903A-1903F, a 3D (e.g., texture) sampler 1905A-1905F, a media sampler 1906A-1906F, a shader processor 1907A-1907F, and shared local memory (“SLM”) 1908A-1908F. EU arrays 1902A-1902F, 1904A-1904F each include multiple execution units, which are GPGPUs capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader
programs. In at least one embodiment, TD/IC logic 1903A-1903F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitates communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D sampler 1905A-1905F can read texture or other 3D graphics related data into memory. In at least one embodiment, 3D sampler can read texture data differently based on a configured sample state and texture format associated with a given texture. In at least one embodiment, media sampler 1906A-1906F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core 1901A-1901F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores 1901A-1901F can make use of shared local memory 1908A-1908F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.[0216] FIG. 20 illustrates a parallel processing unit (“PPU”) 2000, in accordance with at least one embodiment. In at least one embodiment, PPU 2000 is configured with machine-readable code that, if executed by PPU 2000, causes PPU 2000 to perform some or all of processes and techniques described herein. In at least one embodiment, PPU 2000 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU 2000. In at least one embodiment, PPU 2000 is a GPU configured to implement a graphics rendering pipeline for processing three-dimensional (“3D”) graphics data in order to generate two-dimensional (“2D”) image data for display on a display device such as an LCD device. In at least one embodiment, PPU 2000 is utilized to perform computations such as linear algebra operations and machine-learning operations. FIG. 20 illustrates an example parallel processor for illustrative purposes only and should be construed as a non-limiting example of a processor architecture that may be implemented in at least one embodiment. In at least one embodiment, PPU 2000 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0217] In at least one embodiment, one or more PPUs 2000 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications. In at least one embodiment, one or more PPUs 2000 are configured to accelerate CUDA programs. In at least one embodiment, PPU 2000 includes, without limitation, an I/O unit 2006, a front-end unit 2010, a scheduler unit 2012, a work distribution unit 2014, a hub 2016, a crossbar (“Xbar”) 2020, one or more general processing clusters (“GPCs”) 2018, and one or more partition units (“memory partition units”) 2022. In at least one embodiment, PPU 2000 is connected to a host processor or other PPUs 2000 via one or more high-speed GPU interconnects (“GPU interconnects”) 2008. In at least one embodiment, PPU 2000 is connected to a host processor or other peripheral devices via a system bus or interconnect 2002. In at least one embodiment, PPU 2000 is connected to a local memory comprising one or more memory devices (“memory”) 2004. In at least one embodiment, memory devices 2004 include, without limitation, one or more dynamic random access memory (DRAM) devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as high-bandwidth memory (“HBM”) subsystems, with multiple DRAM dies stacked within each device.
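As a concrete, non-limiting instance of a CUDA program of the kind such PPUs accelerate, a minimal SAXPY (a basic linear algebra operation) follows; the grid-stride loop reflects the latency-hiding use of many parallel threads described above, and all sizes are illustrative.

```cuda
#include <cstdio>

// SAXPY: y = a*x + y. A grid-stride loop lets many parallel threads cover
// the array and hide memory latency, as described above.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x); cudaFree(y);
    return 0;
}
```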
[0218] In at least one embodiment, high-speed GPU interconnect 2008 may refer to a wire-based multi-lane communications link that is used by systems to scale and include one or more PPUs 2000 combined with one or more CPUs, supports cache coherence between PPUs 2000 and CPUs, and CPU mastering. In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect 2008 through hub 2016 to/from other units of PPU 2000 such as one or more copy engines, video encoders, video decoders, power management units, and other components which may not be explicitly illustrated in FIG. 20.[0219] In at least one embodiment, I/O unit 2006 is configured to transmit and receive communications (e.g., commands, data) from a host processor (not illustrated in FIG. 20) over system bus 2002. In at least one embodiment, I/O unit 2006 communicates with host processor directly via system bus 2002 or through one or more intermediate devices such as a memory bridge. In at least one embodiment, I/O unit 2006 may communicate with one or more other processors, such as one or more of PPUs 2000 via system bus 2002. In at least one embodiment, I/O unit 2006 implements a PCIe interface for communications over a PCIe bus. In at least one embodiment, I/O unit 2006 implements interfaces for communicating with external devices.[0220] In at least one embodiment, I/O unit 2006 decodes packets received via system bus 2002. In at least one embodiment, at least some packets represent commands configured to cause PPU 2000 to perform various operations. In at least one embodiment, I/O unit 2006
transmits decoded commands to various other units of PPU 2000 as specified by commands. In at least one embodiment, commands are transmitted to front-end unit 2010 and/or transmitted to hub 2016 or other units of PPU 2000 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly illustrated in FIG. 20). In at least one embodiment, I/O unit 2006 is configured to route communications between and among various logical units of PPU 2000.[0221] In at least one embodiment, a program executed by host processor encodes a command stream in a buffer that provides workloads to PPU 2000 for processing. In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. In at least one embodiment, buffer is a region in a memory that is accessible (e.g., read/write) by both a host processor and PPU 2000 — a host interface unit may be configured to access buffer in a system memory connected to system bus 2002 via memory requests transmitted over system bus 2002 by I/O unit 2006. In at least one embodiment, a host processor writes a command stream to a buffer and then transmits a pointer to the start of the command stream to PPU 2000 such that front-end unit 2010 receives pointers to one or more command streams and manages one or more command streams, reading commands from command streams and forwarding commands to various units of PPU 2000.[0222] In at least one embodiment, front-end unit 2010 is coupled to scheduler unit 2012 that configures various GPCs 2018 to process tasks defined by one or more command streams. In at least one embodiment, scheduler unit 2012 is configured to track state information related to various tasks managed by scheduler unit 2012 where state information may indicate which of GPCs 2018 a task is assigned to, whether task is active or inactive, a priority level associated with task, and so forth. In at least one embodiment, scheduler unit 2012 manages execution of a plurality of tasks on one or more of GPCs 2018.[0223] In at least one embodiment, scheduler unit 2012 is coupled to work distribution unit 2014 that is configured to dispatch tasks for execution on GPCs 2018. In at least one embodiment, work distribution unit 2014 tracks a number of scheduled tasks received from scheduler unit 2012 and work distribution unit 2014 manages a pending task pool and an active task pool for each of GPCs 2018. In at least one embodiment, pending task pool comprises a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC 2018; active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs 2018 such that as one of GPCs 2018 completes execution of a task, that task is evicted from active task pool for GPC 2018 and one of other
tasks from pending task pool is selected and scheduled for execution on GPC 2018. In at least one embodiment, if an active task is idle on GPC 2018, such as while waiting for a data dependency to be resolved, then the active task is evicted from GPC 2018 and returned to a pending task pool while another task in the pending task pool is selected and scheduled for execution on GPC 2018.[0224] In at least one embodiment, work distribution unit 2014 communicates with one or more GPCs 2018 via XBar 2020. In at least one embodiment, XBar 2020 is an interconnect network that couples many units of PPU 2000 to other units of PPU 2000 and can be configured to couple work distribution unit 2014 to a particular GPC 2018. In at least one embodiment, one or more other units of PPU 2000 may also be connected to XBar 2020 via hub 2016.[0225] In at least one embodiment, tasks are managed by scheduler unit 2012 and dispatched to one of GPCs 2018 by work distribution unit 2014. In at least one embodiment, GPC 2018 is configured to process a task and generate results. In at least one embodiment, results may be consumed by other tasks within GPC 2018, routed to a different GPC 2018 via XBar 2020, or stored in memory 2004. In at least one embodiment, results can be written to memory 2004 via partition units 2022, which implement a memory interface for reading and writing data to/from memory 2004. In at least one embodiment, results can be transmitted to another PPU 2000 or CPU via high-speed GPU interconnect 2008. In at least one embodiment, PPU 2000 includes, without limitation, a number U of partition units 2022 that is equal to a number of separate and distinct memory devices 2004 coupled to PPU 2000.[0226] In at least one embodiment, a host processor executes a driver kernel that implements an application programming interface (“API”) that enables one or more applications executing on host processor to schedule operations for execution on PPU 2000. In at least one embodiment, multiple compute applications are simultaneously executed by PPU 2000 and PPU 2000 provides isolation, quality of service (“QoS”), and independent address spaces for multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in the form of API calls) that cause a driver kernel to generate one or more tasks for execution by PPU 2000 and the driver kernel outputs tasks to one or more streams being processed by PPU 2000. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In at least one embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads can refer to a plurality of threads including instructions to perform a task and that exchange data through shared memory.
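The stream-oriented submission model just described corresponds to the standard CUDA runtime API; the following minimal sketch queues independent work on two streams, with a placeholder kernel and illustrative sizes.

```cuda
// Work (kernels, copies) enqueued on different streams may be processed
// concurrently by the PPU; work within one stream executes in order.
__global__ void work(float *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] += 1.0f;
}

int main() {
    const int n = 1 << 16;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Two independent tasks submitted via API calls to separate streams.
    work<<<n / 256, 256, 0, s0>>>(a, n);
    work<<<n / 256, 256, 0, s1>>>(b, n);

    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);
    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a); cudaFree(b);
    return 0;
}
```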
[0227] FIG. 21 illustrates a GPC 2100, in accordance with at least one embodiment. In at least one embodiment, GPC 2100 is GPC 2018 of FIG. 20. In at least one embodiment, each GPC 2100 includes, without limitation, a number of hardware units for processing tasks and each GPC 2100 includes, without limitation, a pipeline manager 2102, a pre-raster operations unit (“PROP”) 2104, a raster engine 2108, a work distribution crossbar (“WDX”) 2116, an MMU 2118, one or more Data Processing Clusters (“DPCs”) 2106, and any suitable combination of parts.[0228] In at least one embodiment, operation of GPC 2100 is controlled by pipeline manager 2102. In at least one embodiment, pipeline manager 2102 manages configuration of one or more DPCs 2106 for processing tasks allocated to GPC 2100. In at least one embodiment, pipeline manager 2102 configures at least one of one or more DPCs 2106 to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC 2106 is configured to execute a vertex shader program on a programmable streaming multiprocessor (“SM”) 2114. In at least one embodiment, pipeline manager 2102 is configured to route packets received from a work distribution unit to appropriate logical units within GPC 2100 and, in at least one embodiment, some packets may be routed to fixed function hardware units in PROP 2104 and/or raster engine 2108 while other packets may be routed to DPCs 2106 for processing by a primitive engine 2112 or SM 2114. In at least one embodiment, pipeline manager 2102 configures at least one of DPCs 2106 to implement a computing pipeline. In at least one embodiment, pipeline manager 2102 configures at least one of DPCs 2106 to execute at least a portion of a CUDA program. In at least one embodiment, GPC 2100 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0229] In at least one embodiment, PROP unit 2104 is configured to route data generated by raster engine 2108 and DPCs 2106 to a Raster Operations (“ROP”) unit in a partition unit, such as memory partition unit 2022 described in more detail above in conjunction with FIG. 20. In at least one embodiment, PROP unit 2104 is configured to perform optimizations for color blending, organize pixel data, perform address translations, and more. In at least one embodiment, raster engine 2108 includes, without limitation, a number of fixed function hardware units configured to perform various raster operations and, in at least one embodiment, raster engine 2108 includes, without limitation, a setup engine, a coarse raster
engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, a setup engine receives transformed vertices and generates plane equations associated with geometric primitive defined by vertices; plane equations are transmitted to a coarse raster engine to generate coverage information (e.g., an x, y coverage mask for a tile) for a primitive; the output of the coarse raster engine is transmitted to a culling engine where fragments associated with a primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to a fine raster engine to generate attributes for pixel fragments based on plane equations generated by a setup engine. In at least one embodiment, the output of raster engine 2108 comprises fragments to be processed by any suitable entity such as by a fragment shader implemented within DPC 2106.[0230] In at least one embodiment, each DPC 2106 included in GPC 2100 comprises, without limitation, an M-Pipe Controller (“MPC”) 2110; primitive engine 2112; one or more SMs 2114; and any suitable combination thereof. In at least one embodiment, MPC 2110 controls operation of DPC 2106, routing packets received from pipeline manager 2102 to appropriate units in DPC 2106. In at least one embodiment, packets associated with a vertex are routed to primitive engine 2112, which is configured to fetch vertex attributes associated with vertex from memory; in contrast, packets associated with a shader program may be transmitted to SM 2114.[0231] In at least one embodiment, SM 2114 comprises, without limitation, a programmable streaming processor that is configured to process tasks represented by a number of threads. In at least one embodiment, SM 2114 is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently and implements a SIMD architecture where each thread in a group of threads (e.g., a warp) is configured to process a different set of data based on same set of instructions. In at least one embodiment, all threads in group of threads execute same instructions. In at least one embodiment, SM 2114 implements a SIMT architecture wherein each thread in a group of threads is configured to process a different set of data based on same set of instructions, but where individual threads in group of threads are allowed to diverge during execution. In at least one embodiment, a program counter, a call stack, and an execution state is maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within a warp diverge. In another embodiment, a program counter, a call stack, and an execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. In at least one embodiment, an execution state is maintained for each individual thread and threads executing the same instructions may be converged and executed in parallel for better efficiency. At least one embodiment of SM 2114 is described in more detail in conjunction with FIG. 22.
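The divergence behavior described for the SIMT architecture can be illustrated with a short CUDA kernel in which threads of one warp take different branches and are serialized until reconvergence; the kernel is illustrative only.

```cuda
// Threads in one warp take different branches, so the warp executes each
// path serially while the inactive lanes are masked, then reconverges.
__global__ void divergent(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] % 2 == 0) {
        out[i] = in[i] * 2;  // even lanes execute this path first
    } else {
        out[i] = in[i] + 1;  // odd lanes execute while even lanes idle
    }
    // After the branch, the warp reconverges and proceeds in lockstep.
}
```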
[0232] In at least one embodiment, MMU 2118 provides an interface between GPC 2100 and a memory partition unit (e.g., partition unit 2022 of FIG. 20) and MMU 2118 provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In at least one embodiment, MMU 2118 provides one or more translation lookaside buffers (TLBs) for performing translation of virtual addresses into physical addresses in memory.[0233] FIG. 22 illustrates a streaming multiprocessor (“SM”) 2200, in accordance with at least one embodiment. In at least one embodiment, SM 2200 is SM 2114 of FIG. 21. In at least one embodiment, SM 2200 includes, without limitation, an instruction cache 2202; one or more scheduler units 2204; a register file 2208; one or more processing cores (“cores”) 2210; one or more special function units (“SFUs”) 2212; one or more LSUs 2214; an interconnect network 2216; a shared memory/L1 cache 2218; and any suitable combination thereof. In at least one embodiment, a work distribution unit dispatches tasks for execution on GPCs of parallel processing units (PPUs) and each task is allocated to a particular Data Processing Cluster (DPC) within a GPC and, if a task is associated with a shader program, then the task is allocated to one of SMs 2200. In at least one embodiment, scheduler unit 2204 receives tasks from a work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM 2200. In at least one embodiment, scheduler unit 2204 schedules thread blocks for execution as warps of parallel threads, wherein each thread block is allocated at least one warp. In at least one embodiment, each warp executes threads. In at least one embodiment, scheduler unit 2204 manages a plurality of different thread blocks, allocating warps to different thread blocks and then dispatching instructions from a plurality of different cooperative groups to various functional units (e.g., processing cores 2210, SFUs 2212, and LSUs 2214) during each clock cycle. In at least one embodiment, SM 2200 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0234] In at least one embodiment, “cooperative groups” may refer to a programming model for organizing groups of communicating threads that allows developers to express
granularity at which threads are communicating, enabling expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization amongst thread blocks for execution of parallel algorithms. In at least one embodiment, APIs of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., syncthreads() function). However, in at least one embodiment, programmers may define groups of threads at smaller than thread block granularities and synchronize within defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces. In at least one embodiment, cooperative groups enable programmers to define groups of threads explicitly at sub-block and multi-block granularities, and to perform collective operations such as synchronization on threads in a cooperative group. In at least one embodiment, a sub-block granularity is as small as a single thread. In at least one embodiment, a programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. In at least one embodiment, cooperative group primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks.
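The cooperative groups model described above is exposed in CUDA through the cooperative_groups header; the following sketch partitions a thread block into 32-thread tiles and performs a collective reduction within a tile. The kernel and sizes are illustrative examples.

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// A thread block is partitioned at sub-block granularity into 32-thread
// tiles; synchronization and reduction are collective operations on the
// tile rather than on the whole block.
__global__ void tileReduce(const float *in, float *out, int n) {
    cg::thread_block block = cg::this_thread_block();
    cg::thread_block_tile<32> tile = cg::tiled_partition<32>(block);

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;

    // Group-wide collective: butterfly reduction within the tile.
    for (int offset = tile.size() / 2; offset > 0; offset /= 2)
        v += tile.shfl_down(v, offset);

    if (tile.thread_rank() == 0)
        atomicAdd(out, v);  // one partial sum per tile
}
```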
[0235] In at least one embodiment, a dispatch unit 2206 is configured to transmit instructions to one or more of functional units and scheduler unit 2204 includes, without limitation, two dispatch units 2206 that enable two different instructions from same warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit 2204 includes a single dispatch unit 2206 or additional dispatch units 2206.

[0236] In at least one embodiment, each SM 2200 includes, without limitation, register file 2208 that provides a set of registers for functional units of SM 2200. In at least one embodiment, register file 2208 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of register file 2208. In at least one embodiment, register file 2208 is divided between different warps being executed by SM 2200 and register file 2208 provides temporary storage for operands connected to data paths of functional units. In at least one embodiment, each SM 2200 comprises, without limitation, a plurality of L processing cores 2210. In at least one embodiment, SM 2200 includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores 2210. In at least one embodiment, each processing core 2210 includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes, without limitation, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, floating point arithmetic logic units implement IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores 2210 include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores.

[0237] In at least one embodiment, tensor cores are configured to perform matrix operations. In at least one embodiment, one or more tensor cores are included in processing cores 2210. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4x4 matrix and performs a matrix multiply and accumulate operation D = A × B + C, where A, B, C, and D are 4x4 matrices.

[0238] In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices. In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. In at least one embodiment, 16-bit floating point multiply uses 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with other intermediate products for a 4x4x4 matrix multiply. Tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements, in at least one embodiment. In at least one embodiment, an API, such as a CUDA-C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at the CUDA level, a warp-level interface assumes 16x16 size matrices spanning all 32 threads of a warp.
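As a non-limiting sketch of the warp-level matrix interface referred to in paragraph [0238], the following kernel uses the CUDA WMMA API (nvcuda::wmma) to compute one 16x16 tile of D = A × B + C on tensor cores, with 16-bit floating point inputs and 32-bit floating point accumulation. The kernel name and the leading dimensions are hypothetical.

    // Illustrative sketch only: one warp computes a 16x16 output tile.
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void wmmaTile(const half* a, const half* b, float* c)
    {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;

        wmma::fill_fragment(cFrag, 0.0f);            // C tile initialized to zero
        wmma::load_matrix_sync(aFrag, a, 16);        // load 16x16 A tile, leading dim 16
        wmma::load_matrix_sync(bFrag, b, 16);        // load 16x16 B tile, leading dim 16
        wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);  // D = A x B + C on tensor cores
        wmma::store_matrix_sync(c, cFrag, 16, wmma::mem_row_major);
    }

Consistent with paragraph [0238], the 16x16 fragments here span all 32 threads of a warp, and the accumulator fragment carries 32-bit floating point values.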
[0239] In at least one embodiment, each SM 2200 comprises, without limitation, M SFUs 2212 that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In at least one embodiment, SFUs 2212 include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs 2212 include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and sample texture maps to produce sampled texture values for use in shader programs executed by SM 2200. In at least one embodiment, texture maps are stored in shared memory/L1 cache 2218. In at least one embodiment, texture units implement texture operations such as filtering operations using mip-maps (e.g., texture maps of varying levels of detail). In at least one embodiment, each SM 2200 includes, without limitation, two texture units.

[0240] In at least one embodiment, each SM 2200 comprises, without limitation, N LSUs 2214 that implement load and store operations between shared memory/L1 cache 2218 and register file 2208. In at least one embodiment, each SM 2200 includes, without limitation, interconnect network 2216 that connects each of the functional units to register file 2208 and LSU 2214 to register file 2208 and shared memory/L1 cache 2218. In at least one embodiment, interconnect network 2216 is a crossbar that can be configured to connect any of the functional units to any of the registers in register file 2208 and connect LSUs 2214 to register file 2208 and memory locations in shared memory/L1 cache 2218.

[0241] In at least one embodiment, shared memory/L1 cache 2218 is an array of on-chip memory that allows for data storage and communication between SM 2200 and a primitive engine and between threads in SM 2200. In at least one embodiment, shared memory/L1 cache 2218 comprises, without limitation, 128KB of storage capacity and is in a path from SM 2200 to a partition unit. In at least one embodiment, shared memory/L1 cache 2218 is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache 2218, L2 cache, and memory are backing stores.

[0242] In at least one embodiment, combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses. In at least one embodiment, capacity is used or is usable as a cache by programs that do not use shared memory; for example, if shared memory is configured to use half of capacity, then texture and load/store operations can use remaining capacity. In at least one embodiment, integration within shared memory/L1 cache 2218 enables shared memory/L1 cache 2218 to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data. In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed function GPUs are bypassed, creating a much simpler programming model. In at least one embodiment and in a general purpose parallel computation configuration, a work distribution unit assigns and distributes blocks of threads directly to DPCs. In at least one embodiment, threads in a block execute the same program, using a unique thread ID in a
calculation to ensure each thread generates unique results, using SM 2200 to execute a program and perform calculations, shared memory/L1 cache 2218 to communicate between threads, and LSU 2214 to read and write global memory through shared memory/L1 cache 2218 and a memory partition unit. In at least one embodiment, when configured for general purpose parallel computation, SM 2200 writes commands that scheduler unit 2204 can use to launch new work on DPCs.

[0243] In at least one embodiment, PPU is included in or coupled to a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), a PDA, a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and more. In at least one embodiment, PPU is embodied on a single semiconductor substrate. In at least one embodiment, PPU is included in an SoC along with one or more other devices such as additional PPUs, memory, a RISC CPU, an MMU, a digital-to-analog converter (“DAC”), and the like.

[0244] In at least one embodiment, PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, a graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In at least one embodiment, PPU may be an integrated GPU (“iGPU”) included in a chipset of a motherboard.

Software Constructions for General-Purpose Computing

[0245] The following figures set forth, without limitation, exemplary software constructs for implementing at least one embodiment.

[0246] FIG. 23 illustrates a software stack of a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform is a platform for leveraging hardware on a computing system to accelerate computational tasks. A programming platform may be accessible to software developers through libraries, compiler directives, and/or extensions to programming languages, in at least one embodiment. In at least one embodiment, a programming platform may be, but is not limited to, CUDA, Radeon Open Compute Platform (“ROCm”), OpenCL (OpenCL™ is developed by Khronos group), SYCL, or Intel oneAPI. In at least one embodiment, software stack 2300 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0247] In at least one embodiment, a software stack 2300 of a programming platform provides an execution environment for an application 2301. In at least one embodiment,
application 2301 may include any computer software capable of being launched on software stack 2300. In at least one embodiment, application 2301 may include, but is not limited to, an artificial intelligence (“AI”)/machine learning (“ML”) application, a high performance computing (“HPC”) application, a virtual desktop infrastructure (“VDI”), or a data center workload.

[0248] In at least one embodiment, application 2301 and software stack 2300 run on hardware 2307. Hardware 2307 may include one or more GPUs, CPUs, FPGAs, AI engines, and/or other types of compute devices that support a programming platform, in at least one embodiment. In at least one embodiment, such as with CUDA, software stack 2300 may be vendor specific and compatible with only devices from particular vendor(s). In at least one embodiment, such as with OpenCL, software stack 2300 may be used with devices from different vendors. In at least one embodiment, hardware 2307 includes a host connected to one or more devices that can be accessed to perform computational tasks via application programming interface (“API”) calls. A device within hardware 2307 may include, but is not limited to, a GPU, FPGA, AI engine, or other compute device (but may also include a CPU) and its memory, as opposed to a host within hardware 2307 that may include, but is not limited to, a CPU (but may also include a compute device) and its memory, in at least one embodiment.

[0249] In at least one embodiment, software stack 2300 of a programming platform includes, without limitation, a number of libraries 2303, a runtime 2305, and a device kernel driver 2306. Each of libraries 2303 may include data and programming code that can be used by computer programs and leveraged during software development, in at least one embodiment. In at least one embodiment, libraries 2303 may include, but are not limited to, pre-written code and subroutines, classes, values, type specifications, configuration data, documentation, help data, and/or message templates. In at least one embodiment, libraries 2303 include functions that are optimized for execution on one or more types of devices. In at least one embodiment, libraries 2303 may include, but are not limited to, functions for performing mathematical, deep learning, and/or other types of operations on devices. In at least one embodiment, libraries 2303 are associated with corresponding APIs 2302, which may include one or more APIs, that expose functions implemented in libraries 2303.

[0250] In at least one embodiment, application 2301 is written as source code that is compiled into executable code, as discussed in greater detail below in conjunction with FIGS. 28 - 30. Executable code of application 2301 may run, at least in part, on an execution
environment provided by software stack 2300, in at least one embodiment. In at least one embodiment, during execution of application 2301, code may be reached that needs to run on a device, as opposed to a host. In such a case, runtime 2305 may be called to load and launch requisite code on the device, in at least one embodiment. In at least one embodiment, runtime 2305 may include any technically feasible runtime system that is able to support execution of application 2301.

[0251] In at least one embodiment, runtime 2305 is implemented as one or more runtime libraries associated with corresponding APIs, which are shown as API(s) 2304. One or more of such runtime libraries may include, without limitation, functions for memory management, execution control, device management, error handling, and/or synchronization, among other things, in at least one embodiment. In at least one embodiment, memory management functions may include, but are not limited to, functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory. In at least one embodiment, execution control functions may include, but are not limited to, functions to launch a function (sometimes referred to as a “kernel” when a function is a global function callable from a host) on a device and set attribute values in a buffer maintained by a runtime library for a given function to be executed on a device.

[0252] Runtime libraries and corresponding API(s) 2304 may be implemented in any technically feasible manner, in at least one embodiment. In at least one embodiment, one (or any number of) API may expose a low-level set of functions for fine-grained control of a device, while another (or any number of) API may expose a higher-level set of such functions. In at least one embodiment, a high-level runtime API may be built on top of a low-level API. In at least one embodiment, one or more of runtime APIs may be language-specific APIs that are layered on top of a language-independent runtime API.

[0253] In at least one embodiment, device kernel driver 2306 is configured to facilitate communication with an underlying device. In at least one embodiment, device kernel driver 2306 may provide low-level functionalities upon which APIs, such as API(s) 2304, and/or other software relies. In at least one embodiment, device kernel driver 2306 may be configured to compile intermediate representation (“IR”) code into binary code at runtime. For CUDA, device kernel driver 2306 may compile Parallel Thread Execution (“PTX”) IR code that is not hardware specific into binary code for a specific target device at runtime (with caching of compiled binary code), which is also sometimes referred to as “finalizing” code, in at least one embodiment. Doing so may permit finalized code to run on a target
device, which may not have existed when source code was originally compiled into PTX code, in at least one embodiment. Alternatively, in at least one embodiment, device source code may be compiled into binary code offline, without requiring device kernel driver 2306 to compile IR code at runtime.

[0254] FIG. 24 illustrates a CUDA implementation of software stack 2300 of FIG. 23, in accordance with at least one embodiment. In at least one embodiment, a CUDA software stack 2400, on which an application 2401 may be launched, includes CUDA libraries 2403, a CUDA runtime 2405, a CUDA driver 2407, and a device kernel driver 2408. In at least one embodiment, CUDA software stack 2400 executes on hardware 2409, which may include a GPU that supports CUDA and is developed by NVIDIA Corporation of Santa Clara, CA. In at least one embodiment, CUDA software stack 2400 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0255] In at least one embodiment, application 2401, CUDA runtime 2405, and device kernel driver 2408 may perform similar functionalities as application 2301, runtime 2305, and device kernel driver 2306, respectively, which are described above in conjunction with FIG. 23. In at least one embodiment, CUDA driver 2407 includes a library (libcuda.so) that implements a CUDA driver API 2406. Similar to a CUDA runtime API 2404 implemented by a CUDA runtime library (cudart), CUDA driver API 2406 may, without limitation, expose functions for memory management, execution control, device management, error handling, synchronization, and/or graphics interoperability, among other things, in at least one embodiment. In at least one embodiment, CUDA driver API 2406 differs from CUDA runtime API 2404 in that CUDA runtime API 2404 simplifies device code management by providing implicit initialization, context (analogous to a process) management, and module (analogous to dynamically loaded libraries) management. In contrast to high-level CUDA runtime API 2404, CUDA driver API 2406 is a low-level API providing more fine-grained control of the device, particularly with respect to contexts and module loading, in at least one embodiment. In at least one embodiment, CUDA driver API 2406 may expose functions for context management that are not exposed by CUDA runtime API 2404. In at least one embodiment, CUDA driver API 2406 is also language-independent and supports, e.g., OpenCL in addition to CUDA runtime API 2404. Further, in at least one embodiment, development libraries, including CUDA runtime 2405, may be considered as separate from driver components, including user-mode CUDA driver 2407 and kernel-mode device driver 2408 (also sometimes referred to as a “display” driver).
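As a non-limiting sketch of the memory management and execution control functions described in paragraph [0251], exercised here through the CUDA runtime API discussed in paragraph [0255], the following host program allocates device memory, copies data, launches a kernel, and frees the memory; initialization and context management are implicit because the runtime API provides them. The kernel name, buffer names, and sizes are hypothetical.

    // Illustrative sketch only: CUDA runtime API memory management and launch.
    #include <cuda_runtime.h>

    __global__ void scale(float* x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main()
    {
        const int n = 1 << 20;
        float* hostBuf = new float[n]();   // zero-initialized host memory
        float* devBuf = nullptr;

        cudaMalloc(&devBuf, n * sizeof(float));                                   // allocate device memory
        cudaMemcpy(devBuf, hostBuf, n * sizeof(float), cudaMemcpyHostToDevice);   // host -> device
        scale<<<(n + 255) / 256, 256>>>(devBuf, n);                               // execution control (kernel launch)
        cudaMemcpy(hostBuf, devBuf, n * sizeof(float), cudaMemcpyDeviceToHost);   // device -> host
        cudaFree(devBuf);                                                         // deallocate device memory
        delete[] hostBuf;
        return 0;
    }

The same work could instead be expressed through the lower-level driver API, at the cost of explicit context and module management, which is the trade-off paragraph [0255] describes.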
[0256] In at least one embodiment, CUDA libraries 2403 may include, but are not limited to, mathematical libraries, deep learning libraries, parallel algorithm libraries, and/or signal/image/video processing libraries, which parallel computing applications such as application 2401 may utilize. In at least one embodiment, CUDA libraries 2403 may include mathematical libraries such as a cuBLAS library that is an implementation of Basic Linear Algebra Subprograms (“BLAS”) for performing linear algebra operations, a cuFFT library for computing fast Fourier transforms (“FFTs”), and a cuRAND library for generating random numbers, among others. In at least one embodiment, CUDA libraries 2403 may include deep learning libraries such as a cuDNN library of primitives for deep neural networks and a TensorRT platform for high-performance deep learning inference, among others.

[0257] FIG. 25 illustrates a ROCm implementation of software stack 2300 of FIG. 23, in accordance with at least one embodiment. In at least one embodiment, a ROCm software stack 2500, on which an application 2501 may be launched, includes a language runtime 2503, a system runtime 2505, a thunk 2507, and a ROCm kernel driver 2508. In at least one embodiment, ROCm software stack 2500 executes on hardware 2509, which may include a GPU that supports ROCm and is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, ROCm software stack 2500 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0258] In at least one embodiment, application 2501 may perform similar functionalities as application 2301 discussed above in conjunction with FIG. 23. In addition, language runtime 2503 and system runtime 2505 may perform similar functionalities as runtime 2305 discussed above in conjunction with FIG. 23, in at least one embodiment. In at least one embodiment, language runtime 2503 and system runtime 2505 differ in that system runtime 2505 is a language-independent runtime that implements a ROCr system runtime API 2504 and makes use of a Heterogeneous System Architecture (“HSA”) Runtime API. HSA runtime API is a thin, user-mode API that exposes interfaces to access and interact with an AMD GPU, including functions for memory management, execution control via architected dispatch of kernels, error handling, system and agent information, and runtime initialization and shutdown, among other things, in at least one embodiment. In contrast to system runtime 2505, language runtime 2503 is an implementation of a language-specific runtime API 2502 layered on top of ROCr system runtime API 2504, in at least one embodiment. In at least one embodiment, language runtime API may include, but is not limited to, a Heterogeneous-compute
Interface for Portability (“HIP”) language runtime API, a Heterogeneous Compute Compiler (“HCC”) language runtime API, or an OpenCL API, among others. HIP language in particular is an extension of C++ programming language with functionally similar versions of CUDA mechanisms, and, in at least one embodiment, a HIP language runtime API includes functions that are similar to those of CUDA runtime API 2404 discussed above in conjunction with FIG. 24, such as functions for memory management, execution control, device management, error handling, and synchronization, among other things.

[0259] In at least one embodiment, thunk (ROCt) 2507 is an interface 2506 that can be used to interact with underlying ROCm driver 2508. In at least one embodiment, ROCm driver 2508 is a ROCk driver, which is a combination of an AMDGPU driver and a HSA kernel driver (amdkfd). In at least one embodiment, AMDGPU driver is a device kernel driver for GPUs developed by AMD that performs similar functionalities as device kernel driver 2306 discussed above in conjunction with FIG. 23. In at least one embodiment, HSA kernel driver is a driver permitting different types of processors to share system resources more effectively via hardware features.

[0260] In at least one embodiment, various libraries (not shown) may be included in ROCm software stack 2500 above language runtime 2503 and provide functionality similar to that of CUDA libraries 2403, discussed above in conjunction with FIG. 24. In at least one embodiment, various libraries may include, but are not limited to, mathematical, deep learning, and/or other libraries such as a hipBLAS library that implements functions similar to those of CUDA cuBLAS, a rocFFT library for computing FFTs that is similar to CUDA cuFFT, among others.
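As a non-limiting sketch of the HIP language runtime API functions described in paragraph [0258], the following program performs the corresponding memory management, execution control, and synchronization steps through HIP; hipLaunchKernelGGL takes the grid dimension, block dimension, dynamic shared memory size, and stream as explicit arguments. The kernel name and sizes are hypothetical.

    // Illustrative sketch only: HIP analogues of the CUDA runtime calls above.
    #include <hip/hip_runtime.h>

    __global__ void scale(float* x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main()
    {
        const int n = 1 << 20;
        float* devBuf = nullptr;
        hipMalloc(&devBuf, n * sizeof(float));            // memory management, cf. cudaMalloc
        hipMemset(devBuf, 0, n * sizeof(float));
        hipLaunchKernelGGL(scale, dim3((n + 255) / 256),  // execution control: grid,
                           dim3(256), 0, 0, devBuf, n);   // block, shared mem, stream, args
        hipDeviceSynchronize();                           // synchronization
        hipFree(devBuf);                                  // cf. cudaFree
        return 0;
    }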
[0261] FIG. 26 illustrates an OpenCL implementation of software stack 2300 of FIG. 23, in accordance with at least one embodiment. In at least one embodiment, an OpenCL software stack 2600, on which an application 2601 may be launched, includes an OpenCL framework 2610, an OpenCL runtime 2606, and a driver 2607. In at least one embodiment, OpenCL software stack 2600 executes on hardware 2608 that is not vendor-specific. As OpenCL is supported by devices developed by different vendors, specific OpenCL drivers may be required to interoperate with hardware from such vendors, in at least one embodiment. In at least one embodiment, OpenCL software stack 2600 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0262] In at least one embodiment, application 2601, OpenCL runtime 2606, device kernel driver 2607, and hardware 2608 may perform similar functionalities as application 2301, runtime 2305, device kernel driver 2306, and hardware 2307, respectively, that are discussed above in conjunction with FIG. 23. In at least one embodiment, application 2601 further includes an OpenCL kernel 2602 with code that is to be executed on a device.

[0263] In at least one embodiment, OpenCL defines a “platform” that allows a host to control devices connected to the host. In at least one embodiment, an OpenCL framework provides a platform layer API and a runtime API, shown as platform API 2603 and runtime API 2605. In at least one embodiment, runtime API 2605 uses contexts to manage execution of kernels on devices. In at least one embodiment, each identified device may be associated with a respective context, which runtime API 2605 may use to manage command queues, program objects, and kernel objects, and share memory objects, among other things, for that device. In at least one embodiment, platform API 2603 exposes functions that permit device contexts to be used to select and initialize devices, submit work to devices via command queues, and enable data transfer to and from devices, among other things. In addition, OpenCL framework provides various built-in functions (not shown), including math functions, relational functions, and image processing functions, among others, in at least one embodiment.

[0264] In at least one embodiment, a compiler 2604 is also included in OpenCL framework 2610. Source code may be compiled offline prior to executing an application or online during execution of an application, in at least one embodiment. In contrast to CUDA and ROCm, OpenCL applications in at least one embodiment may be compiled online by compiler 2604, which is included to be representative of any number of compilers that may be used to compile source code and/or IR code, such as Standard Portable Intermediate Representation (“SPIR-V”) code, into binary code. Alternatively, in at least one embodiment, OpenCL applications may be compiled offline, prior to execution of such applications.
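As a non-limiting sketch of the platform layer API and runtime API described in paragraph [0263], and of the online compilation described in paragraph [0264], the following host program selects a device, creates a context and command queue, builds a kernel from source at run time, and submits work. The kernel source string, names, and sizes are hypothetical, and error checking is omitted for brevity.

    // Illustrative sketch only: OpenCL platform setup, online build, and dispatch.
    #include <CL/cl.h>

    int main(void)
    {
        cl_int err;
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);                               // platform layer API
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);     // context per device
        cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, NULL, &err);

        const char* src =
            "__kernel void scale(__global float* x) { x[get_global_id(0)] *= 2.0f; }";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);                 // online compilation
        cl_kernel kernel = clCreateKernel(prog, "scale", &err);

        size_t n = 1 << 20;
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, &err);
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);  // submit via command queue
        clFinish(queue);

        clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(prog);
        clReleaseCommandQueue(queue); clReleaseContext(ctx);
        return 0;
    }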
[0265] FIG. 27 illustrates software that is supported by a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform 2704 is configured to support various programming models 2703, middlewares and/or libraries 2702, and frameworks 2701 that an application 2700 may rely upon. In at least one embodiment, application 2700 may be an AI/ML application implemented using, for example, a deep learning framework such as MXNet, PyTorch, or TensorFlow, which may rely on libraries such as cuDNN, NVIDIA Collective Communications Library (“NCCL”), and/or NVIDIA Developer Data Loading Library (“DALI”) CUDA libraries to provide accelerated computing on underlying hardware. In at least one embodiment, programming platform 2704 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0266] In at least one embodiment, programming platform 2704 may be one of a CUDA, ROCm, or OpenCL platform described above in conjunction with FIG. 24, FIG. 25, and FIG. 26, respectively. In at least one embodiment, programming platform 2704 supports multiple programming models 2703, which are abstractions of an underlying computing system permitting expressions of algorithms and data structures. Programming models 2703 may expose features of underlying hardware in order to improve performance, in at least one embodiment. In at least one embodiment, programming models 2703 may include, but are not limited to, CUDA, HIP, OpenCL, C++ Accelerated Massive Parallelism (“C++ AMP”), Open Multi-Processing (“OpenMP”), Open Accelerators (“OpenACC”), and/or Vulkan Compute.

[0267] In at least one embodiment, libraries and/or middlewares 2702 provide implementations of abstractions of programming models 2703. In at least one embodiment, such libraries include data and programming code that may be used by computer programs and leveraged during software development. In at least one embodiment, such middlewares include software that provides services to applications beyond those available from programming platform 2704. In at least one embodiment, libraries and/or middlewares 2702 may include, but are not limited to, cuBLAS, cuFFT, cuRAND, and other CUDA libraries, or rocBLAS, rocFFT, rocRAND, and other ROCm libraries. In addition, in at least one embodiment, libraries and/or middlewares 2702 may include NCCL and ROCm Communication Collectives Library (“RCCL”) libraries providing communication routines for GPUs, a MIOpen library for deep learning acceleration, and/or an Eigen library for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers, and related algorithms.

[0268] In at least one embodiment, application frameworks 2701 depend on libraries and/or middlewares 2702. In at least one embodiment, each of application frameworks 2701 is a software framework used to implement a standard structure of application software. Returning to the AI/ML example discussed above, an AI/ML application may be implemented using a framework such as Caffe, Caffe2, TensorFlow, Keras, PyTorch, or MxNet deep learning frameworks, in at least one embodiment.
[0269] FIG. 28 illustrates compiling code to execute on one of programming platforms of FIGS. 23 - 26, in accordance with at least one embodiment. In at least one embodiment, a compiler 2801 receives source code 2800 that includes both host code as well as device code. In at least one embodiment, compiler 2801 is configured to convert source code 2800 into host executable code 2802 for execution on a host and device executable code 2803 for execution on a device. In at least one embodiment, source code 2800 may either be compiled offline prior to execution of an application, or online during execution of an application.

[0270] In at least one embodiment, source code 2800 may include code in any programming language supported by compiler 2801, such as C++, C, Fortran, etc. In at least one embodiment, source code 2800 may be included in a single-source file having a mixture of host code and device code, with locations of device code being indicated therein. In at least one embodiment, a single-source file may be a .cu file that includes CUDA code or a .hip.cpp file that includes HIP code. Alternatively, in at least one embodiment, source code 2800 may include multiple source code files, rather than a single-source file, into which host code and device code are separated.

[0271] In at least one embodiment, compiler 2801 is configured to compile source code 2800 into host executable code 2802 for execution on a host and device executable code 2803 for execution on a device. In at least one embodiment, compiler 2801 performs operations including parsing source code 2800 into an abstract syntax tree (AST), performing optimizations, and generating executable code. In at least one embodiment in which source code 2800 includes a single-source file, compiler 2801 may separate device code from host code in such a single-source file, compile device code and host code into device executable code 2803 and host executable code 2802, respectively, and link device executable code 2803 and host executable code 2802 together in a single file, as discussed in greater detail below with respect to FIG. 29.

[0272] In at least one embodiment, host executable code 2802 and device executable code 2803 may be in any suitable format, such as binary code and/or IR code. In the case of CUDA, host executable code 2802 may include native object code and device executable code 2803 may include code in PTX intermediate representation, in at least one embodiment. In the case of ROCm, both host executable code 2802 and device executable code 2803 may include target binary code, in at least one embodiment.
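As a non-limiting sketch of the single-source file described in paragraph [0270], the following .cu file mixes device code (marked by the __global__ specifier) with host code in one translation unit; a compiler such as NVCC may then separate, compile, and link the two as described in paragraph [0271], for example via a command line such as nvcc app.cu -o app. The file name, kernel name, and sizes are hypothetical.

    // app.cu -- illustrative sketch only: one file, two targets.
    __global__ void addOne(int* x) { x[threadIdx.x] += 1; }   // device code

    int main()                                                // host code
    {
        int* d = nullptr;
        cudaMalloc(&d, 32 * sizeof(int));
        addOne<<<1, 32>>>(d);        // location of device code indicated by launch syntax
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }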
[0273] FIG. 29 is a more detailed illustration of compiling code to execute on one of programming platforms of FIGS. 23 - 26, in accordance with at least one embodiment. In at least one embodiment, a compiler 2901 is configured to receive source code 2900, compile source code 2900, and output an executable file 2910. In at least one embodiment, source code 2900 is a single-source file, such as a .cu file, a .hip.cpp file, or a file in another format, that includes both host and device code. In at least one embodiment, compiler 2901 may be, but is not limited to, an NVIDIA CUDA compiler (“NVCC”) for compiling CUDA code in .cu files, or a HCC compiler for compiling HIP code in .hip.cpp files.

[0274] In at least one embodiment, compiler 2901 includes a compiler front end 2902, a host compiler 2905, a device compiler 2906, and a linker 2909. In at least one embodiment, compiler front end 2902 is configured to separate device code 2904 from host code 2903 in source code 2900. Device code 2904 is compiled by device compiler 2906 into device executable code 2908, which as described may include binary code or IR code, in at least one embodiment. Separately, host code 2903 is compiled by host compiler 2905 into host executable code 2907, in at least one embodiment. For NVCC, host compiler 2905 may be, but is not limited to, a general purpose C/C++ compiler that outputs native object code, while device compiler 2906 may be, but is not limited to, a Low Level Virtual Machine (“LLVM”)-based compiler that forks an LLVM compiler infrastructure and outputs PTX code or binary code, in at least one embodiment. For HCC, both host compiler 2905 and device compiler 2906 may be, but are not limited to, LLVM-based compilers that output target binary code, in at least one embodiment.

[0275] Subsequent to compiling source code 2900 into host executable code 2907 and device executable code 2908, linker 2909 links host and device executable code 2907 and 2908 together in executable file 2910, in at least one embodiment. In at least one embodiment, native object code for a host and PTX or binary code for a device may be linked together in an Executable and Linkable Format (“ELF”) file, which is a container format used to store object code.

[0276] FIG. 30 illustrates translating source code prior to compiling source code, in accordance with at least one embodiment. In at least one embodiment, source code 3000 is passed through a translation tool 3001, which translates source code 3000 into translated source code 3002. In at least one embodiment, a compiler 3003 is used to compile translated source code 3002 into host executable code 3004 and device executable code 3005 in a process that is similar to compilation of source code 2800 by compiler 2801 into host
executable code 2802 and device executable code 2803, as discussed above in conjunction with FIG. 28.

[0277] In at least one embodiment, a translation performed by translation tool 3001 is used to port source code 3000 for execution in a different environment than that in which it was originally intended to run. In at least one embodiment, translation tool 3001 may include, but is not limited to, a HIP translator that is used to “hipify” CUDA code intended for a CUDA platform into HIP code that can be compiled and executed on a ROCm platform. In at least one embodiment, translation of source code 3000 may include parsing source code 3000 and converting calls to API(s) provided by one programming model (e.g., CUDA) into corresponding calls to API(s) provided by another programming model (e.g., HIP), as discussed in greater detail below in conjunction with FIGS. 31A - 32. Returning to the example of hipifying CUDA code, calls to CUDA runtime API, CUDA driver API, and/or CUDA libraries may be converted to corresponding HIP API calls, in at least one embodiment. In at least one embodiment, automated translations performed by translation tool 3001 may sometimes be incomplete, requiring additional, manual effort to fully port source code 3000.
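As a non-limiting sketch of the API-call conversion attributed to a HIP translator in paragraph [0277], the following fragment shows CUDA runtime calls and, in comments, the corresponding HIP calls that a “hipify” pass may emit. The function and buffer names are hypothetical; only the call mapping is the point.

    // Illustrative sketch only: CUDA runtime calls before translation.
    #include <cuda_runtime.h>

    void stageBuffer(const float* hostBuf, size_t bytes)
    {
        float* devBuf = nullptr;
        cudaMalloc(&devBuf, bytes);
        cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);
        cudaFree(devBuf);
    }

    // After "hipifying", the same function body uses the corresponding HIP calls
    // (with #include <hip/hip_runtime.h>):
    //
    //     hipMalloc(&devBuf, bytes);
    //     hipMemcpy(devBuf, hostBuf, bytes, hipMemcpyHostToDevice);
    //     hipFree(devBuf);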
CONFIGURING GPUS FOR GENERAL-PURPOSE COMPUTING

[0278] The following figures set forth, without limitation, exemplary architectures for compiling and executing compute source code, in accordance with at least one embodiment.

[0279] FIG. 31A illustrates a system 31A00 configured to compile and execute CUDA source code 3110 using different types of processing units, in accordance with at least one embodiment. In at least one embodiment, system 31A00 includes, without limitation, CUDA source code 3110, a CUDA compiler 3150, host executable code 3170(1), host executable code 3170(2), CUDA device executable code 3184, a CPU 3190, a CUDA-enabled GPU 3194, a GPU 3192, a CUDA to HIP translation tool 3120, HIP source code 3130, a HIP compiler driver 3140, an HCC 3160, and HCC device executable code 3182.

[0280] In at least one embodiment, CUDA source code 3110 is a collection of human-readable code in a CUDA programming language. In at least one embodiment, CUDA code is human-readable code in a CUDA programming language. In at least one embodiment, a CUDA programming language is an extension of the C++ programming language that includes, without limitation, mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, device code is source code that, after compilation, is executable in parallel on a device. In at least one embodiment, a device may be a processor that is optimized for parallel instruction processing, such as CUDA-enabled GPU 3194, GPU 3192, or another GPGPU, etc. In at least one embodiment, host code is source code that, after compilation, is executable on a host. In at least one embodiment, a host is a processor that is optimized for sequential instruction processing, such as CPU 3190.

[0281] In at least one embodiment, CUDA source code 3110 includes, without limitation, any number (including zero) of global functions 3112, any number (including zero) of device functions 3114, any number (including zero) of host functions 3116, and any number (including zero) of host/device functions 3118. In at least one embodiment, global functions 3112, device functions 3114, host functions 3116, and host/device functions 3118 may be mixed in CUDA source code 3110. In at least one embodiment, each of global functions 3112 is executable on a device and callable from a host. In at least one embodiment, one or more of global functions 3112 may therefore act as entry points to a device. In at least one embodiment, each of global functions 3112 is a kernel. In at least one embodiment and in a technique known as dynamic parallelism, one or more of global functions 3112 defines a kernel that is executable on a device and callable from such a device. In at least one embodiment, a kernel is executed N (where N is any positive integer) times in parallel by N different threads on a device during execution.

[0282] In at least one embodiment, each of device functions 3114 is executed on a device and callable from such a device only. In at least one embodiment, each of host functions 3116 is executed on a host and callable from such a host only. In at least one embodiment, each of host/device functions 3118 defines both a host version of a function that is executable on a host and callable from such a host only and a device version of the function that is executable on a device and callable from such a device only.

[0283] In at least one embodiment, CUDA source code 3110 may also include, without limitation, any number of calls to any number of functions that are defined via a CUDA runtime API 3102. In at least one embodiment, CUDA runtime API 3102 may include, without limitation, any number of functions that execute on a host to allocate and deallocate device memory, transfer data between host memory and device memory, manage systems with multiple devices, etc. In at least one embodiment, CUDA source code 3110 may also include any number of calls to any number of functions that are specified in any number of other CUDA APIs. In at least one embodiment, a CUDA API may be any API that is designed for use by CUDA code. In at least one embodiment, CUDA APIs include, without
limitation, CUDA runtime API 3102, a CUDA driver API, APIs for any number of CUDA libraries, etc. In at least one embodiment and relative to CUDA runtime API 3102, a CUDA driver API is a lower-level API but provides finer-grained control of a device. In at least one embodiment, examples of CUDA libraries include, without limitation, cuBLAS, cuFFT, cuRAND, cuDNN, etc.

[0284] In at least one embodiment, CUDA compiler 3150 compiles input CUDA code (e.g., CUDA source code 3110) to generate host executable code 3170(1) and CUDA device executable code 3184. In at least one embodiment, CUDA compiler 3150 is NVCC. In at least one embodiment, host executable code 3170(1) is a compiled version of host code included in input source code that is executable on CPU 3190. In at least one embodiment, CPU 3190 may be any processor that is optimized for sequential instruction processing.

[0285] In at least one embodiment, CUDA device executable code 3184 is a compiled version of device code included in input source code that is executable on CUDA-enabled GPU 3194. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, binary code. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, IR code, such as PTX code, that is further compiled at runtime into binary code for a specific target device (e.g., CUDA-enabled GPU 3194) by a device driver. In at least one embodiment, CUDA-enabled GPU 3194 may be any processor that is optimized for parallel instruction processing and that supports CUDA. In at least one embodiment, CUDA-enabled GPU 3194 is developed by NVIDIA Corporation of Santa Clara, CA.

[0286] In at least one embodiment, CUDA to HIP translation tool 3120 is configured to translate CUDA source code 3110 to functionally similar HIP source code 3130. In at least one embodiment, HIP source code 3130 is a collection of human-readable code in a HIP programming language. In at least one embodiment, HIP code is human-readable code in a HIP programming language. In at least one embodiment, a HIP programming language is an extension of the C++ programming language that includes, without limitation, functionally similar versions of CUDA mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, a HIP programming language may include a subset of functionality of a CUDA programming language. In at least one embodiment, for example, a HIP programming language includes, without limitation, mechanism(s) to define global functions 3112, but such a HIP programming language may lack support for dynamic
parallelism and therefore global functions 3112 defined in HIP code may be callable from a host only.

[0287] In at least one embodiment, HIP source code 3130 includes, without limitation, any number (including zero) of global functions 3112, any number (including zero) of device functions 3114, any number (including zero) of host functions 3116, and any number (including zero) of host/device functions 3118. In at least one embodiment, HIP source code 3130 may also include any number of calls to any number of functions that are specified in a HIP runtime API 3132. In at least one embodiment, HIP runtime API 3132 includes, without limitation, functionally similar versions of a subset of functions included in CUDA runtime API 3102. In at least one embodiment, HIP source code 3130 may also include any number of calls to any number of functions that are specified in any number of other HIP APIs. In at least one embodiment, a HIP API may be any API that is designed for use by HIP code and/or ROCm. In at least one embodiment, HIP APIs include, without limitation, HIP runtime API 3132, a HIP driver API, APIs for any number of HIP libraries, APIs for any number of ROCm libraries, etc.

[0288] In at least one embodiment, CUDA to HIP translation tool 3120 converts each kernel call in CUDA code from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in CUDA code to any number of other functionally similar HIP calls. In at least one embodiment, a CUDA call is a call to a function specified in a CUDA API, and a HIP call is a call to a function specified in a HIP API. In at least one embodiment, CUDA to HIP translation tool 3120 converts any number of calls to functions specified in CUDA runtime API 3102 to any number of calls to functions specified in HIP runtime API 3132.

[0289] In at least one embodiment, CUDA to HIP translation tool 3120 is a tool known as hipify-perl that executes a text-based translation process. In at least one embodiment, CUDA to HIP translation tool 3120 is a tool known as hipify-clang that, relative to hipify-perl, executes a more complex and more robust translation process that involves parsing CUDA code using clang (a compiler front-end) and then translating resulting symbols. In at least one embodiment, properly converting CUDA code to HIP code may require modifications (e.g., manual edits) in addition to those performed by CUDA to HIP translation tool 3120.

[0290] In at least one embodiment, HIP compiler driver 3140 is a front end that determines a target device 3146 and then configures a compiler that is compatible with target device 3146 to compile HIP source code 3130. In at least one embodiment, target device
3146 is a processor that is optimized for parallel instruction processing. In at least one embodiment, HIP compiler driver 3140 may determine target device 3146 in any technically feasible fashion.

[0291] In at least one embodiment, if target device 3146 is compatible with CUDA (e.g., CUDA-enabled GPU 3194), then HIP compiler driver 3140 generates a HIP/NVCC compilation command 3142. In at least one embodiment and as described in greater detail in conjunction with FIG. 31B, HIP/NVCC compilation command 3142 configures CUDA compiler 3150 to compile HIP source code 3130 using, without limitation, a HIP to CUDA translation header and a CUDA runtime library. In at least one embodiment and in response to HIP/NVCC compilation command 3142, CUDA compiler 3150 generates host executable code 3170(1) and CUDA device executable code 3184.

[0292] In at least one embodiment, if target device 3146 is not compatible with CUDA, then HIP compiler driver 3140 generates a HIP/HCC compilation command 3144. In at least one embodiment and as described in greater detail in conjunction with FIG. 31C, HIP/HCC compilation command 3144 configures HCC 3160 to compile HIP source code 3130 using, without limitation, an HCC header and a HIP/HCC runtime library. In at least one embodiment and in response to HIP/HCC compilation command 3144, HCC 3160 generates host executable code 3170(2) and HCC device executable code 3182. In at least one embodiment, HCC device executable code 3182 is a compiled version of device code included in HIP source code 3130 that is executable on GPU 3192. In at least one embodiment, GPU 3192 may be any processor that is optimized for parallel instruction processing, is not compatible with CUDA, and is compatible with HCC. In at least one embodiment, GPU 3192 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, GPU 3192 is a non-CUDA-enabled GPU.

[0293] For explanatory purposes only, three different flows that may be implemented in at least one embodiment to compile CUDA source code 3110 for execution on CPU 3190 and different devices are depicted in FIG. 31A. In at least one embodiment, a direct CUDA flow compiles CUDA source code 3110 for execution on CPU 3190 and CUDA-enabled GPU 3194 without translating CUDA source code 3110 to HIP source code 3130. In at least one embodiment, an indirect CUDA flow translates CUDA source code 3110 to HIP source code 3130 and then compiles HIP source code 3130 for execution on CPU 3190 and CUDA-enabled GPU 3194. In at least one embodiment, a CUDA/HCC flow translates CUDA source
code 3110 to HIP source code 3130 and then compiles HIP source code 3130 for execution on CPU 3190 and GPU 3192.

[0294] A direct CUDA flow that may be implemented in at least one embodiment is depicted via dashed lines and a series of bubbles annotated A1-A3. In at least one embodiment and as depicted with bubble annotated A1, CUDA compiler 3150 receives CUDA source code 3110 and a CUDA compile command 3148 that configures CUDA compiler 3150 to compile CUDA source code 3110. In at least one embodiment, CUDA source code 3110 used in a direct CUDA flow is written in a CUDA programming language that is based on a programming language other than C++ (e.g., C, Fortran, Python, Java, etc.). In at least one embodiment and in response to CUDA compile command 3148, CUDA compiler 3150 generates host executable code 3170(1) and CUDA device executable code 3184 (depicted with bubble annotated A2). In at least one embodiment and as depicted with bubble annotated A3, host executable code 3170(1) and CUDA device executable code 3184 may be executed on, respectively, CPU 3190 and CUDA-enabled GPU 3194. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, binary code. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime.

[0295] An indirect CUDA flow that may be implemented in at least one embodiment is depicted via dotted lines and a series of bubbles annotated B1-B6. In at least one embodiment and as depicted with bubble annotated B1, CUDA to HIP translation tool 3120 receives CUDA source code 3110. In at least one embodiment and as depicted with bubble annotated B2, CUDA to HIP translation tool 3120 translates CUDA source code 3110 to HIP source code 3130. In at least one embodiment and as depicted with bubble annotated B3, HIP compiler driver 3140 receives HIP source code 3130 and determines that target device 3146 is CUDA-enabled.

[0296] In at least one embodiment and as depicted with bubble annotated B4, HIP compiler driver 3140 generates HIP/NVCC compilation command 3142 and transmits both HIP/NVCC compilation command 3142 and HIP source code 3130 to CUDA compiler 3150. In at least one embodiment and as described in greater detail in conjunction with FIG. 31B, HIP/NVCC compilation command 3142 configures CUDA compiler 3150 to compile HIP source code 3130 using, without limitation, a HIP to CUDA translation header and a CUDA runtime library. In at least one embodiment and in response to HIP/NVCC compilation command 3142, CUDA compiler 3150 generates host executable code 3170(1) and CUDA
device executable code 3184 (depicted with bubble annotated B5). In at least one embodiment and as depicted with bubble annotated B6, host executable code 3170(1) and CUDA device executable code 3184 may be executed on, respectively, CPU 3190 and CUDA-enabled GPU 3194. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, binary code. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime.

[0297] A CUDA/HCC flow that may be implemented in at least one embodiment is depicted via solid lines and a series of bubbles annotated C1-C6. In at least one embodiment and as depicted with bubble annotated C1, CUDA to HIP translation tool 3120 receives CUDA source code 3110. In at least one embodiment and as depicted with bubble annotated C2, CUDA to HIP translation tool 3120 translates CUDA source code 3110 to HIP source code 3130. In at least one embodiment and as depicted with bubble annotated C3, HIP compiler driver 3140 receives HIP source code 3130 and determines that target device 3146 is not CUDA-enabled.

[0298] In at least one embodiment, HIP compiler driver 3140 generates HIP/HCC compilation command 3144 and transmits both HIP/HCC compilation command 3144 and HIP source code 3130 to HCC 3160 (depicted with bubble annotated C4). In at least one embodiment and as described in greater detail in conjunction with FIG. 31C, HIP/HCC compilation command 3144 configures HCC 3160 to compile HIP source code 3130 using, without limitation, an HCC header and a HIP/HCC runtime library. In at least one embodiment and in response to HIP/HCC compilation command 3144, HCC 3160 generates host executable code 3170(2) and HCC device executable code 3182 (depicted with bubble annotated C5). In at least one embodiment and as depicted with bubble annotated C6, host executable code 3170(2) and HCC device executable code 3182 may be executed on, respectively, CPU 3190 and GPU 3192.

[0299] In at least one embodiment, after CUDA source code 3110 is translated to HIP source code 3130, HIP compiler driver 3140 may subsequently be used to generate executable code for either CUDA-enabled GPU 3194 or GPU 3192 without re-executing CUDA to HIP translation tool 3120. In at least one embodiment, CUDA to HIP translation tool 3120 translates CUDA source code 3110 to HIP source code 3130 that is then stored in memory. In at least one embodiment, HIP compiler driver 3140 then configures HCC 3160 to generate host executable code 3170(2) and HCC device executable code 3182 based on HIP
source code 3130. In at least one embodiment, HIP compiler driver 3140 subsequently configures CUDA compiler 3150 to generate host executable code 3170(1) and CUDA device executable code 3184 based on stored HIP source code 3130.

[0300] FIG. 31B illustrates a system 3104 configured to compile and execute CUDA source code 3110 of FIG. 31A using CPU 3190 and CUDA-enabled GPU 3194, in accordance with at least one embodiment. In at least one embodiment, system 3104 includes, without limitation, CUDA source code 3110, CUDA to HIP translation tool 3120, HIP source code 3130, HIP compiler driver 3140, CUDA compiler 3150, host executable code 3170(1), CUDA device executable code 3184, CPU 3190, and CUDA-enabled GPU 3194. In at least one embodiment, system 3104 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0301] In at least one embodiment and as described previously herein in conjunction with FIG. 31A, CUDA source code 3110 includes, without limitation, any number (including zero) of global functions 3112, any number (including zero) of device functions 3114, any number (including zero) of host functions 3116, and any number (including zero) of host/device functions 3118. In at least one embodiment, CUDA source code 3110 also includes, without limitation, any number of calls to any number of functions that are specified in any number of CUDA APIs.

[0302] In at least one embodiment, CUDA to HIP translation tool 3120 translates CUDA source code 3110 to HIP source code 3130. In at least one embodiment, CUDA to HIP translation tool 3120 converts each kernel call in CUDA source code 3110 from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in CUDA source code 3110 to any number of other functionally similar HIP calls.

[0303] In at least one embodiment, HIP compiler driver 3140 determines that target device 3146 is CUDA-enabled and generates HIP/NVCC compilation command 3142. In at least one embodiment, HIP compiler driver 3140 then configures CUDA compiler 3150 via HIP/NVCC compilation command 3142 to compile HIP source code 3130. In at least one embodiment, HIP compiler driver 3140 provides access to a HIP to CUDA translation header 3152 as part of configuring CUDA compiler 3150. In at least one embodiment, HIP to CUDA translation header 3152 translates any number of mechanisms (e.g., functions) specified in any number of HIP APIs to any number of mechanisms specified in any number of CUDA APIs. In at least one embodiment, CUDA compiler 3150 uses HIP to CUDA translation
header 3152 in conjunction with a CUDA runtime library 3154 corresponding to CUDA runtime API 3102 to generate host executable code 3170(1) and CUDA device executable code 3184. In at least one embodiment, host executable code 3170(1) and CUDA device executable code 3184 may then be executed on, respectively, CPU 3190 and CUDA-enabled GPU 3194. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, binary code. In at least one embodiment, CUDA device executable code 3184 includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime.

[0304] FIG. 31C illustrates a system 3106 configured to compile and execute CUDA source code 3110 of FIG. 31A using CPU 3190 and non-CUDA-enabled GPU 3192, in accordance with at least one embodiment. In at least one embodiment, system 3106 includes, without limitation, CUDA source code 3110, CUDA to HIP translation tool 3120, HIP source code 3130, HIP compiler driver 3140, HCC 3160, host executable code 3170(2), HCC device executable code 3182, CPU 3190, and GPU 3192. In at least one embodiment, system 3106 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.

[0305] In at least one embodiment and as described previously herein in conjunction with FIG. 31A, CUDA source code 3110 includes, without limitation, any number (including zero) of global functions 3112, any number (including zero) of device functions 3114, any number (including zero) of host functions 3116, and any number (including zero) of host/device functions 3118. In at least one embodiment, CUDA source code 3110 also includes, without limitation, any number of calls to any number of functions that are specified in any number of CUDA APIs.

[0306] In at least one embodiment, CUDA to HIP translation tool 3120 translates CUDA source code 3110 to HIP source code 3130. In at least one embodiment, CUDA to HIP translation tool 3120 converts each kernel call in CUDA source code 3110 from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in source code 3110 to any number of other functionally similar HIP calls.

[0307] In at least one embodiment, HIP compiler driver 3140 subsequently determines that target device 3146 is not CUDA-enabled and generates HIP/HCC compilation command 3144. In at least one embodiment, HIP compiler driver 3140 then configures HCC 3160 to execute HIP/HCC compilation command 3144 to compile HIP source code 3130. In at least
one embodiment, HIP/HCC compilation command 3144 configures HCC 3160 to use, without limitation, a HIP/HCC runtime library 3158 and an HCC header 3156 to generate host executable code 3170(2) and HCC device executable code 3182. In at least one embodiment, HIP/HCC runtime library 3158 corresponds to HIP runtime API 3132. In at least one embodiment, HCC header 3156 includes, without limitation, any number and type of interoperability mechanisms for HIP and HCC. In at least one embodiment, host executable code 3170(2) and HCC device executable code 3182 may be executed on, respectively, CPU 3190 and GPU 3192.

[0308] FIG. 32 illustrates an exemplary kernel translated by CUDA-to-HIP translation tool 3120 of FIG. 31C, in accordance with at least one embodiment. In at least one embodiment, CUDA source code 3110 partitions an overall problem that a given kernel is designed to solve into relatively coarse sub-problems that can independently be solved using thread blocks. In at least one embodiment, each thread block includes, without limitation, any number of threads. In at least one embodiment, each sub-problem is partitioned into relatively fine pieces that can be solved cooperatively in parallel by threads within a thread block. In at least one embodiment, threads within a thread block can cooperate by sharing data through shared memory and by synchronizing execution to coordinate memory accesses.

[0309] In at least one embodiment, CUDA source code 3110 organizes thread blocks associated with a given kernel into a one-dimensional, a two-dimensional, or a three-dimensional grid of thread blocks. In at least one embodiment, each thread block includes, without limitation, any number of threads, and a grid includes, without limitation, any number of thread blocks.

[0310] In at least one embodiment, a kernel is a function in device code that is defined using a “__global__” declaration specifier. In at least one embodiment, the dimension of a grid that executes a kernel for a given kernel call and associated streams are specified using a CUDA kernel launch syntax 3210. In at least one embodiment, CUDA kernel launch syntax 3210 is specified as “KernelName<<<GridSize, BlockSize, SharedMemorySize, Stream>>>(KernelArguments);”. In at least one embodiment, an execution configuration syntax is a “<<<...>>>” construct that is inserted between a kernel name (“KernelName”) and a parenthesized list of kernel arguments (“KernelArguments”). In at least one embodiment, CUDA kernel launch syntax 3210 includes, without limitation, a CUDA launch function syntax instead of an execution configuration syntax.
[0311] In at least one embodiment, “GridSize” is of a type dim3 and specifies the dimension and size of a grid. In at least one embodiment, type dim3 is a CUDA-defined structure that includes, without limitation, unsigned integers x, y, and z. In at least one embodiment, if z is not specified, then z defaults to one. In at least one embodiment, if y is not specified, then y defaults to one. In at least one embodiment, the number of thread blocks in a grid is equal to the product of GridSize.x, GridSize.y, and GridSize.z. In at least one embodiment, “BlockSize” is of type dim3 and specifies the dimension and size of each thread block. In at least one embodiment, the number of threads per thread block is equal to the product of BlockSize.x, BlockSize.y, and BlockSize.z. In at least one embodiment, each thread that executes a kernel is given a unique thread ID that is accessible within the kernel through a built-in variable (e.g., “threadIdx”).[0312] In at least one embodiment and with respect to CUDA kernel launch syntax 3210, “SharedMemorySize” is an optional argument that specifies a number of bytes in a shared memory that is dynamically allocated per thread block for a given kernel call in addition to statically allocated memory. In at least one embodiment and with respect to CUDA kernel launch syntax 3210, SharedMemorySize defaults to zero. In at least one embodiment and with respect to CUDA kernel launch syntax 3210, “Stream” is an optional argument that specifies an associated stream and defaults to zero to specify a default stream. In at least one embodiment, a stream is a sequence of commands (possibly issued by different host threads) that execute in order. In at least one embodiment, different streams may execute commands out of order with respect to one another or concurrently.[0313] In at least one embodiment, CUDA source code 3110 includes, without limitation, a kernel definition for an exemplary kernel “MatAdd” and a main function. In at least one embodiment, main function is host code that executes on a host and includes, without limitation, a kernel call that causes kernel MatAdd to execute on a device. In at least one embodiment and as shown, kernel MatAdd adds two matrices A and B of size NxN, where N is a positive integer, and stores the result in a matrix C. In at least one embodiment, main function defines a threadsPerBlock variable as 16 by 16 and a numBlocks variable as N/16 by N/16. In at least one embodiment, main function then specifies kernel call “MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);”. In at least one embodiment and as per CUDA kernel launch syntax 3210, kernel MatAdd is executed using a grid of thread blocks having a dimension N/16 by N/16, where each thread block has a dimension of 16 by 16. In at least one embodiment, each thread block includes 256 threads, a grid is created with
enough blocks to have one thread per matrix element, and each thread in such a grid executes kernel MatAdd to perform one pair-wise addition.[0314] In at least one embodiment, while translating CUDA source code 3110 to HIP source code 3130, CUDA to HIP translation tool 3120 translates each kernel call in CUDA source code 3110 from CUDA kernel launch syntax 3210 to a HIP kernel launch syntax 3220 and converts any number of other CUDA calls in source code 3110 to any number of other functionally similar HIP calls. In at least one embodiment, HIP kernel launch syntax 3220 is specified as “hipLaunchKernelGGL(KernelName, GridSize, BlockSize, SharedMemorySize, Stream, KernelArguments);”. In at least one embodiment, each of KernelName, GridSize, BlockSize, SharedMemorySize, Stream, and KernelArguments has the same meaning in HIP kernel launch syntax 3220 as in CUDA kernel launch syntax 3210 (described previously herein). In at least one embodiment, arguments SharedMemorySize and Stream are required in HIP kernel launch syntax 3220 and are optional in CUDA kernel launch syntax 3210.[0315] In at least one embodiment, a portion of HIP source code 3130 depicted in FIG. 32 is identical to a portion of CUDA source code 3110 depicted in FIG. 32 except for a kernel call that causes kernel MatAdd to execute on a device. In at least one embodiment, kernel MatAdd is defined in HIP source code 3130 with the same “__global__” declaration specifier with which kernel MatAdd is defined in CUDA source code 3110. In at least one embodiment, a kernel call in HIP source code 3130 is “hipLaunchKernelGGL(MatAdd, numBlocks, threadsPerBlock, 0, 0, A, B, C);”, while a corresponding kernel call in CUDA source code 3110 is “MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);”.[0316] FIG. 33 illustrates non-CUDA-enabled GPU 3192 of FIG. 31C in greater detail, in accordance with at least one embodiment. In at least one embodiment, GPU 3192 is developed by AMD Corporation of Santa Clara. In at least one embodiment, GPU 3192 can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, GPU 3192 is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations associated with rendering an image to a display. In at least one embodiment, GPU 3192 is configured to execute operations unrelated to graphics. In at least one embodiment, GPU 3192 is configured to execute both operations related to graphics and operations unrelated to graphics. In at least one embodiment, GPU 3192 can be configured to execute device code included in HIP source code 3130.
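For illustration of kernel MatAdd and the kernel call translation described above in conjunction with FIG. 32, one plausible sketch is given below; it is not reproduced from the disclosure, and the index names i and j are assumptions:
// Hypothetical form of kernel MatAdd consistent with paragraphs [0313]-[0315];
// N is assumed to be a compile-time constant, per paragraph [0313].
// Each thread performs one pair-wise addition of NxN matrices A and B into C.
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}
// Kernel call per CUDA kernel launch syntax 3210:
//     MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
// Same call after translation, per HIP kernel launch syntax 3220:
//     hipLaunchKernelGGL(MatAdd, numBlocks, threadsPerBlock, 0, 0, A, B, C);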
[0317] In at least one embodiment, GPU 3192 includes, without limitation, any number of programmable processing units 3320, a command processor 3310, an L2 cache 3322, memory controllers 3370, DMA engines 3380(1), system memory controllers 3382, DMA engines 3380(2), and GPU controllers 3384. In at least one embodiment, each programmable processing unit 3320 includes, without limitation, a workload manager 3330 and any number of compute units 3340. In at least one embodiment, command processor 3310 reads commands from one or more command queues (not shown) and distributes commands to workload managers 3330. In at least one embodiment, for each programmable processing unit 3320, associated workload manager 3330 distributes work to compute units 3340 included in programmable processing unit 3320. In at least one embodiment, each compute unit 3340 may execute any number of thread blocks, but each thread block executes on a single compute unit 3340. In at least one embodiment, a workgroup is a thread block.[0318] In at least one embodiment, each compute unit 3340 includes, without limitation, any number of SIMD units 3350 and a shared memory 3360. In at least one embodiment, each SIMD unit 3350 implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each SIMD unit 3350 includes, without limitation, a vector ALU 3352 and a vector register file 3354. In at least one embodiment, each SIMD unit 3350 executes a different warp. In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in the warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory 3360.[0319] In at least one embodiment, programmable processing units 3320 are referred to as “shader engines.” In at least one embodiment, each programmable processing unit 3320 includes, without limitation, any amount of dedicated graphics hardware in addition to compute units 3340. In at least one embodiment, each programmable processing unit 3320 includes, without limitation, any number (including zero) of geometry processors, any number (including zero) of rasterizers, any number (including zero) of render back ends, workload manager 3330, and any number of compute units 3340.
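For illustration only, and not as part of the disclosed embodiments, the wavefront (warp) width discussed above can be queried at runtime through the HIP runtime API; this sketch assumes device 0 and omits error handling:
#include <hip/hip_runtime.h>
#include <cstdio>
int main()
{
    hipDeviceProp_t prop;
    hipGetDeviceProperties(&prop, 0);   // properties of device 0
    printf("wavefront size: %d\n", prop.warpSize);
    return 0;
}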
[0320] In at least one embodiment, compute units 3340 share L2 cache 3322. In at least one embodiment, L2 cache 3322 is partitioned. In at least one embodiment, a GPU memory 3390 is accessible by all compute units 3340 in GPU 3192. In at least one embodiment, memory controllers 3370 and system memory controllers 3382 facilitate data transfers between GPU 3192 and a host, and DMA engines 3380(1) enable asynchronous memory transfers between GPU 3192 and such a host. In at least one embodiment, memory controllers 3370 and GPU controllers 3384 facilitate data transfers between GPU 3192 and other GPUs 3192, and DMA engines 3380(2) enable asynchronous memory transfers between GPU 3192 and other GPUs 3192.[0321] In at least one embodiment, GPU 3192 includes, without limitation, any amount and type of system interconnect that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to GPU 3192. In at least one embodiment, GPU 3192 includes, without limitation, any number and type of I/O interfaces (e.g., PCIe) that are coupled to any number and type of peripheral devices. In at least one embodiment, GPU 3192 may include, without limitation, any number (including zero) of display engines and any number (including zero) of multimedia engines. In at least one embodiment, GPU 3192 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers (e.g., memory controllers 3370 and system memory controllers 3382) and memory devices (e.g., shared memories 3360) that may be dedicated to one component or shared among multiple components. In at least one embodiment, GPU 3192 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 cache 3322) that may each be private to or shared between any number of components (e.g., SIMD units 3350, compute units 3340, and programmable processing units 3320).[0322] FIG. 34 illustrates how threads of an exemplary CUDA grid 3420 are mapped to different compute units 3340 of FIG. 33, in accordance with at least one embodiment. In at least one embodiment and for explanatory purposes only, grid 3420 has a GridSize of BX by BY by 1 and a BlockSize of TX by TY by 1. In at least one embodiment, grid 3420 therefore includes, without limitation, (BX * BY) thread blocks 3430 and each thread block 3430 includes, without limitation, (TX * TY) threads 3440. Threads 3440 are depicted in FIG. 34 as squiggly arrows.[0323] In at least one embodiment, grid 3420 is mapped to programmable processing unit 3320(1) that includes, without limitation, compute units 3340(1)-3340(C). In at least one
embodiment and as shown, (BJ * BY) thread blocks 3430 are mapped to compute unit 3340(1), and the remaining thread blocks 3430 are mapped to compute unit 3340(2). In at least one embodiment, each thread block 3430 may include, without limitation, any number of warps, and each warp is mapped to a different SIMD unit 3350 of FIG. 33.[0324] In at least one embodiment, warps in a given thread block 3430 may synchronize together and communicate through shared memory 3360 included in associated compute unit 3340. For example and in at least one embodiment, warps in thread block 3430(BJ,1) can synchronize together and communicate through shared memory 3360(1). For example and in at least one embodiment, warps in thread block 3430(BJ+1,1) can synchronize together and communicate through shared memory 3360(2).[0325] FIG. 35 illustrates how to migrate existing CUDA code to Data Parallel C++ code, in accordance with at least one embodiment. Data Parallel C++ (DPC++) may refer to an open, standards-based alternative to single-architecture proprietary languages that allows developers to reuse code across hardware targets (CPUs and accelerators such as GPUs and FPGAs) and also perform custom tuning for a specific accelerator. DPC++ uses similar and/or identical C and C++ constructs in accordance with ISO C++ which developers may be familiar with. DPC++ incorporates standard SYCL from The Khronos Group to support data parallelism and heterogeneous programming. SYCL refers to a cross-platform abstraction layer that builds on underlying concepts, portability and efficiency of OpenCL that enables code for heterogeneous processors to be written in a “single-source” style using standard C++. SYCL may enable single source development where C++ template functions can contain both host and device code to construct complex algorithms that use OpenCL acceleration, and then re-use them throughout their source code on different types of data.[0326] In at least one embodiment, a DPC++ compiler is used to compile DPC++ source code which can be deployed across diverse hardware targets. In at least one embodiment, a DPC++ compiler is used to generate DPC++ applications that can be deployed across diverse hardware targets and a DPC++ compatibility tool can be used to migrate CUDA applications to a multiplatform program in DPC++. In at least one embodiment, a DPC++ base tool kit includes a DPC++ compiler to deploy applications across diverse hardware targets; a DPC++ library to increase productivity and performance across CPUs, GPUs, and FPGAs; a DPC++ compatibility tool to migrate CUDA applications to multi-platform applications; and any suitable combination thereof.
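As a minimal illustrative sketch of the SYCL “single-source” style noted above (all names are assumptions, not taken from the disclosure), host code and device code can share one standard C++ translation unit:
#include <CL/sycl.hpp>
int main()
{
    int data[16] = {};
    {
        sycl::queue q;                                    // host code: select a device queue
        sycl::buffer<int, 1> buf(data, sycl::range<1>(16));
        q.submit([&](sycl::handler &cgh) {                // host code: submit work
            auto acc = buf.get_access<sycl::access::mode::write>(cgh);
            cgh.parallel_for(sycl::range<1>(16), [=](sycl::id<1> i) {
                acc[i] = static_cast<int>(i[0]);          // device code, same source file
            });
        });
    }                                                     // buffer destruction copies results back
    return 0;
}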
[0327] In at least one embodiment, a DPC++ programming model is utilized to simplify one or more aspects relating to programming CPUs and accelerators by using modern C++ features to express parallelism with a programming language called Data Parallel C++. DPC++ programming language may be utilized to enable code reuse for hosts (e.g., a CPU) and accelerators (e.g., a GPU or FPGA) using a single source language, with execution and memory dependencies being clearly communicated. Mappings within DPC++ code can be used to transition an application to run on a hardware or set of hardware devices that best accelerates a workload. A host may be available to simplify development and debugging of device code, even on platforms that do not have an accelerator available.[0328] In at least one embodiment, CUDA source code 3500 is provided as an input to a DPC++ compatibility tool 3502 to generate human readable DPC++ 3504. In at least one embodiment, human readable DPC++ 3504 includes inline comments generated by DPC++ compatibility tool 3502 that guide a developer on how and/or where to modify DPC++ code to complete coding and tuning to desired performance 3506, thereby generating DPC++ source code 3508. In at least one embodiment, DPC++ 3504 comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0329] In at least one embodiment, CUDA source code 3500 is or includes a collection of human-readable source code in a CUDA programming language. In at least one embodiment, CUDA source code 3500 is human-readable source code in a CUDA programming language. In at least one embodiment, a CUDA programming language is an extension of the C++ programming language that includes, without limitation, mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, device code is source code that, after compilation, is executable on a device (e.g., GPU or FPGA) and may include one or more parallelizable workflows that can be executed on one or more processor cores of a device. In at least one embodiment, a device may be a processor that is optimized for parallel instruction processing, such as CUDA-enabled GPU, GPU, or another GPGPU, etc. In at least one embodiment, host code is source code that, after compilation, is executable on a host. In at least one embodiment, some or all of host code and device code can be executed in parallel across a CPU and GPU/FPGA. In at least one embodiment, a host is a processor that is optimized for sequential instruction processing, such as CPU. CUDA source code 3500 described in connection with FIG. 35 may be in accordance with those discussed elsewhere in this document.
[0330] In at least one embodiment, DPC++ compatibility tool 3502 refers to an executable tool, program, application, or any other suitable type of tool that is used to facilitate migration of CUDA source code 3500 to DPC++ source code 3508. In at least one embodiment, DPC++ compatibility tool 3502 is a command-line-based code migration tool available as part of a DPC++ tool kit that is used to port existing CUDA sources to DPC++. In at least one embodiment, DPC++ compatibility tool 3502 converts some or all source code of a CUDA application from CUDA to DPC++ and generates a resulting file that is written at least partially in DPC++, referred to as human readable DPC++ 3504. In at least one embodiment, human readable DPC++ 3504 includes comments that are generated by DPC++ compatibility tool 3502 to indicate where user intervention may be necessary. In at least one embodiment, user intervention is necessary when CUDA source code 3500 calls a CUDA API that has no analogous DPC++ API; other examples where user intervention is required are discussed later in greater detail.[0331] In at least one embodiment, a workflow for migrating CUDA source code 3500 (e.g., application or portion thereof) includes creating one or more compilation database files; migrating CUDA to DPC++ using a DPC++ compatibility tool 3502; completing migration and verifying correctness, thereby generating DPC++ source code 3508; and compiling DPC++ source code 3508 with a DPC++ compiler to generate a DPC++ application. In at least one embodiment, a compatibility tool provides a utility that intercepts commands used when Makefile executes and stores them in a compilation database file. In at least one embodiment, a file is stored in JSON format. In at least one embodiment, an intercept-build command converts a Makefile command to a DPC++ compatibility command.[0332] In at least one embodiment, intercept-build is a utility script that intercepts a build process to capture compilation options, macro defs, and include paths, and writes this data to a compilation database file. In at least one embodiment, a compilation database file is a JSON file. In at least one embodiment, DPC++ compatibility tool 3502 parses a compilation database and applies options when migrating input sources. In at least one embodiment, use of intercept-build is optional, but highly recommended for Make or CMake based environments. In at least one embodiment, a migration database includes commands, directories, and files: command may include necessary compilation flags; directory may include paths to header files; file may include paths to CUDA files.[0333] In at least one embodiment, DPC++ compatibility tool 3502 migrates CUDA code (e.g., applications) written in CUDA to DPC++ by generating DPC++ wherever possible. In
at least one embodiment, DPC++ compatibility tool 3502 is available as part of a tool kit. In at least one embodiment, a DPC++ tool kit includes an intercept-build tool. In at least one embodiment, an intercept-build tool creates a compilation database that captures compilation commands to migrate CUDA files. In at least one embodiment, a compilation database generated by an intercept-build tool is used by DPC++ compatibility tool 3502 to migrate CUDA code to DPC++. In at least one embodiment, non-CUDA C++ code and files are migrated as is. In at least one embodiment, DPC++ compatibility tool 3502 generates human readable DPC++ 3504 which may be DPC++ code that, as generated by DPC++ compatibility tool 3502, cannot be compiled by DPC++ compiler and requires additional plumbing for verifying portions of code that were not migrated correctly, and may involve manual intervention, such as by a developer. In at least one embodiment, DPC++ compatibility tool 3502 provides hints or tools embedded in code to help developers manually migrate additional code that could not be migrated automatically. In at least one embodiment, migration is a one-time activity for a source file, project, or application.[0334] In at least one embodiment, DPC++ compatibility tool 3502 is able to successfully migrate all portions of CUDA code to DPC++ and there may simply be an optional step for manually verifying and tuning performance of DPC++ source code that was generated. In at least one embodiment, DPC++ compatibility tool 3502 directly generates DPC++ source code 3508 which is compiled by a DPC++ compiler without requiring or utilizing human intervention to modify DPC++ code generated by DPC++ compatibility tool 3502. In at least one embodiment, DPC++ compatibility tool generates compile-able DPC++ code which can be optionally tuned by a developer for performance, readability, maintainability, other various considerations; or any combination thereof.[0335] In at least one embodiment, one or more CUDA source files are migrated to DPC++ source files at least partially using DPC++ compatibility tool 3502. In at least one embodiment, CUDA source code includes one or more header files which may include CUDA header files. In at least one embodiment, a CUDA source file includes a <cuda.h> header file and a <stdio.h> header file which can be used to print text. In at least one embodiment, a portion of a vector addition kernel CUDA source file may be written as or related to:
#include <cuda.h>
#include <stdio.h>
#define VECTOR_SIZE 256
__global__ void VectorAddKernel(float* A, float* B, float* C)
{
    A[threadIdx.x] = threadIdx.x + 1.0f;
    B[threadIdx.x] = threadIdx.x + 1.0f;
    C[threadIdx.x] = A[threadIdx.x] + B[threadIdx.x];
}
int main()
{
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, VECTOR_SIZE*sizeof(float));
    cudaMalloc(&d_B, VECTOR_SIZE*sizeof(float));
    cudaMalloc(&d_C, VECTOR_SIZE*sizeof(float));
    VectorAddKernel<<<1, VECTOR_SIZE>>>(d_A, d_B, d_C);
    float Result[VECTOR_SIZE] = { };
    cudaMemcpy(Result, d_C, VECTOR_SIZE*sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);
    for (int i=0; i<VECTOR_SIZE; i++) {
        if (i % 16 == 0) {
            printf("\n");
        }
        printf("%f ", Result[i]);
    }
    return 0;
}
[0336] In at least one embodiment and in connection with CUDA source file presented above, DPC++ compatibility tool 3502 parses a CUDA source code and replaces header files with appropriate DPC++ and SYCL header files. In at least one embodiment, DPC++ header files include helper declarations. In CUDA, there is a concept of a thread ID and correspondingly, in DPC++ or SYCL, for each element there is a local identifier.[0337] In at least one embodiment and in connection with CUDA source file presented above, there are two vectors A and B which are initialized and a vector addition result is put into vector C as part of VectorAddKernel(). In at least one embodiment, DPC++ compatibility tool 3502 converts CUDA thread IDs used to index work elements to SYCL standard addressing for work elements via a local ID as part of migrating CUDA code to DPC++ code. In at least one embodiment, DPC++ code generated by DPC++ compatibility tool 3502 can be optimized - for example, by reducing dimensionality of an nd_item, thereby increasing memory and/or processor utilization.[0338] In at least one embodiment and in connection with CUDA source file presented above, memory allocation is migrated. In at least one embodiment, cudaMalloc() is migrated to a unified shared memory SYCL call malloc_device() to which a device and context is passed, relying on SYCL concepts such as platform, device, context, and queue. In at least one embodiment, a SYCL platform can have multiple devices (e.g., host and GPU devices); a
device may have multiple queues to which jobs can be submitted; each device may have a context; and a context may have multiple devices and manage shared memory objects.[0339] In at least one embodiment and in connection with CUDA source file presented above, a main() function invokes or calls VectorAddKernel() to add two vectors A and B together and store result in vector C. In at least one embodiment, CUDA code to invoke VectorAddKernel() is replaced by DPC++ code to submit a kernel to a command queue for execution. In at least one embodiment, a command group handler cgh passes data, synchronization, and computation that is submitted to the queue, and parallel_for is called for a number of global elements and a number of work items in that work group where VectorAddKernel() is called.[0340] In at least one embodiment and in connection with CUDA source file presented above, CUDA calls to copy device memory and then free memory for vectors A, B, and C are migrated to corresponding DPC++ calls. In at least one embodiment, C++ code (e.g., standard ISO C++ code for printing a vector of floating point variables) is migrated as is, without being modified by DPC++ compatibility tool 3502. In at least one embodiment, DPC++ compatibility tool 3502 modifies CUDA APIs for memory setup and/or host calls to execute kernel on the acceleration device. In at least one embodiment and in connection with CUDA source file presented above, a corresponding human readable DPC++ 3504 (e.g., which can be compiled) is written as or related to:
#include <CL/sycl.hpp>
#include <dpct/dpct.hpp>
#define VECTOR_SIZE 256
void VectorAddKernel(float* A, float* B, float* C, sycl::nd_item<3> item_ct1)
{
    A[item_ct1.get_local_id(2)] = item_ct1.get_local_id(2) + 1.0f;
    B[item_ct1.get_local_id(2)] = item_ct1.get_local_id(2) + 1.0f;
    C[item_ct1.get_local_id(2)] = A[item_ct1.get_local_id(2)] + B[item_ct1.get_local_id(2)];
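    // (Illustrative annotation, not part of the generated output: the local ID
    //  item_ct1.get_local_id(2) here plays the role that the built-in thread ID
    //  threadIdx.x plays in the CUDA kernel above.)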
}
int main()
{
    float *d_A, *d_B, *d_C;
    d_A = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device(),
                                       dpct::get_default_context());
    d_B = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device(),
                                       dpct::get_default_context());
    d_C = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device(),
                                       dpct::get_default_context());
    dpct::get_default_queue_wait().submit([&](sycl::handler &cgh) {
        cgh.parallel_for(
            sycl::nd_range<3>(sycl::range<3>(1, 1, 1) *
                                  sycl::range<3>(1, 1, VECTOR_SIZE),
                              sycl::range<3>(1, 1, VECTOR_SIZE)),
            [=](sycl::nd_item<3> item_ct1) {
                VectorAddKernel(d_A, d_B, d_C, item_ct1);
            });
    });
    float Result[VECTOR_SIZE] = { };
    dpct::get_default_queue_wait()
        .memcpy(Result, d_C, VECTOR_SIZE * sizeof(float))
        .wait();
    sycl::free(d_A, dpct::get_default_context());
    sycl::free(d_B, dpct::get_default_context());
    sycl::free(d_C, dpct::get_default_context());
    for (int i=0; i<VECTOR_SIZE; i++) {
        if (i % 16 == 0) {
            printf("\n");
        }
        printf("%f ", Result[i]);
    }
    return 0;
}
[0341] In at least one embodiment, human readable DPC++ 3504 refers to output generated by DPC++ compatibility tool 3502 and may be optimized in one manner or another. In at least one embodiment, human readable DPC++ 3504 generated by DPC++ compatibility tool 3502 can be manually edited by a developer after migration for maintainability, performance, or other considerations. In at least one embodiment, DPC++ code generated by DPC++ compatibility tool 3502, such as the DPC++ disclosed above, can be optimized by removing repeat calls to get_current_device() and/or get_default_context() for each malloc_device() call. In at least one embodiment, DPC++ code generated above uses a 3-dimensional nd_range which can be refactored to use only a single dimension, thereby
reducing memory usage. In at least one embodiment, a developer can manually edit DPC++ code generated by DPC++ compatibility tool 3502 to replace uses of unified shared memory with accessors. In at least one embodiment, DPC++ compatibility tool 3502 has an option to change how it migrates CUDA code to DPC++ code. In at least one embodiment, DPC++ compatibility tool 3502 is verbose because it is using a general template to migrate CUDA code to DPC++ code that works for a large number of cases.[0342] In at least one embodiment, a CUDA to DPC++ migration workflow includes steps to: prepare for migration using intercept-build script; perform migration of CUDA projects to DPC++ using DPC++ compatibility tool 3502; review and edit migrated source files manually for completion and correctness; and compile final DPC++ code to generate a DPC++ application. In at least one embodiment, manual review of DPC++ source code may be required in one or more scenarios including but not limited to: migrated API does not return error code (CUDA code can return an error code which can then be consumed by the application but SYCL uses exceptions to report errors, and therefore does not use error codes to surface errors); CUDA compute capability dependent logic is not supported by DPC++; statement could not be removed. In at least one embodiment, scenarios in which DPC++ code requires manual intervention may include, without limitation: error code logic replaced with (*,0) code or commented out; equivalent DPC++ API not available; CUDA compute capability-dependent logic; hardware-dependent API (clock()); missing features or unsupported APIs; execution time measurement logic; handling built-in vector type conflicts; migration of cuBLAS API; and more.[0343] In at least one embodiment, one or more techniques described herein utilize a oneAPI programming model. In at least one embodiment, a oneAPI programming model refers to a programming model for interacting with various compute accelerator architectures. In at least one embodiment, oneAPI refers to an application programming interface (API) designed to interact with various compute accelerator architectures. In at least one embodiment, a oneAPI programming model utilizes a DPC++ programming language. In at least one embodiment, a DPC++ programming language refers to a high-level language for data parallel programming productivity. In at least one embodiment, a DPC++ programming language is based at least in part on C and/or C++ programming languages. In at least one embodiment, a oneAPI programming model is a programming model such as those developed by Intel Corporation of Santa Clara, CA. In at least one embodiment, oneAPI
and/or a oneAPI programming model comprises and/or performs, at least in part, various components and/or operations described above in conjunction with FIGS. 1-3.[0344] In at least one embodiment, oneAPI and/or oneAPI programming model is utilized to interact with various accelerator, GPU, processor, and/or variations thereof, architectures. In at least one embodiment, oneAPI includes a set of libraries that implement various functionalities. In at least one embodiment, oneAPI includes at least a oneAPI DPC++ library, a oneAPI math kernel library, a oneAPI data analytics library, a oneAPI deep neural network library, a oneAPI collective communications library, a oneAPI threading building blocks library, a oneAPI video processing library, and/or variations thereof.[0345] In at least one embodiment, a oneAPI DPC++ library, also referred to as oneDPL, is a library that implements algorithms and functions to accelerate DPC++ kernel programming. In at least one embodiment, oneDPL implements one or more standard template library (STL) functions. In at least one embodiment, oneDPL implements one or more parallel STL functions. In at least one embodiment, oneDPL provides a set of library classes and functions such as parallel algorithms, iterators, function object classes, range-based API, and/or variations thereof. In at least one embodiment, oneDPL implements one or more classes and/or functions of a C++ standard library. In at least one embodiment, oneDPL implements one or more random number generator functions.[0346] In at least one embodiment, a oneAPI math kernel library, also referred to as oneMKL, is a library that implements various optimized and parallelized routines for various mathematical functions and/or operations. In at least one embodiment, oneMKL implements one or more basic linear algebra subprograms (BLAS) and/or linear algebra package (LAPACK) dense linear algebra routines. In at least one embodiment, oneMKL implements one or more sparse BLAS linear algebra routines. In at least one embodiment, oneMKL implements one or more random number generators (RNGs). In at least one embodiment, oneMKL implements one or more vector mathematics (VM) routines for mathematical operations on vectors. In at least one embodiment, oneMKL implements one or more Fast Fourier Transform (FFT) functions.[0347] In at least one embodiment, a oneAPI data analytics library, also referred to as oneDAL, is a library that implements various data analysis applications and distributed computations. In at least one embodiment, oneDAL implements various algorithms for preprocessing, transformation, analysis, modeling, validation, and decision making for data
analytics, in batch, online, and distributed processing modes of computation. In at least one embodiment, oneDAL implements various C++ and/or Java APIs and various connectors to one or more data sources. In at least one embodiment, oneDAL implements DPC++ API extensions to a traditional C++ interface and enables GPU usage for various algorithms.[0348] In at least one embodiment, a oneAPI deep neural network library, also referred to as oneDNN, is a library that implements various deep learning functions. In at least one embodiment, oneDNN implements various neural network, machine learning, and deep learning functions, algorithms, and/or variations thereof.[0349] In at least one embodiment, a oneAPI collective communications library, also referred to as oneCCL, is a library that implements various applications for deep learning and machine learning workloads. In at least one embodiment, oneCCL is built upon lower-level communication middleware, such as message passing interface (MPI) and libfabrics. In at least one embodiment, oneCCL enables a set of deep learning specific optimizations, such as prioritization, persistent operations, out of order executions, and/or variations thereof. In at least one embodiment, oneCCL implements various CPU and GPU functions.[0350] In at least one embodiment, a oneAPI threading building blocks library, also referred to as oneTBB, is a library that implements various parallelized processes for various applications. In at least one embodiment, oneTBB is utilized for task-based, shared parallel programming on a host. In at least one embodiment, oneTBB implements generic parallel algorithms. In at least one embodiment, oneTBB implements concurrent containers. In at least one embodiment, oneTBB implements a scalable memory allocator. In at least one embodiment, oneTBB implements a work-stealing task scheduler. In at least one embodiment, oneTBB implements low-level synchronization primitives. In at least one embodiment, oneTBB is compiler-independent and usable on various processors, such as GPUs, PPUs, CPUs, and/or variations thereof.[0351] In at least one embodiment, a oneAPI video processing library, also referred to as oneVPL, is a library that is utilized for accelerating video processing in one or more applications. In at least one embodiment, oneVPL implements various video decoding, encoding, and processing functions. In at least one embodiment, oneVPL implements various functions for media pipelines on CPUs, GPUs, and other accelerators. In at least one embodiment, oneVPL implements device discovery and selection in media centric and video
analytics workloads. In at least one embodiment, oneVPL implements API primitives for zero-copy buffer sharing.[0352] In at least one embodiment, a oneAPI programming model utilizes a DPC++ programming language. In at least one embodiment, a DPC++ programming language is a programming language that includes, without limitation, functionally similar versions of CUDA mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, a DPC++ programming language may include a subset of functionality of a CUDA programming language. In at least one embodiment, one or more CUDA programming model operations are performed using a oneAPI programming model using a DPC++ programming language.[0353] It should be noted that, while example embodiments described herein may relate to a CUDA programming model, techniques described herein can be utilized with any suitable programming model, such as HIP, oneAPI (e.g., using oneAPI-based programming to perform or implement a method disclosed herein), and/or variations thereof.[0354] In at least one embodiment, one or more components of systems and/or processors disclosed above can communicate with one or more CPUs, ASICs, GPUs, FPGAs, or other hardware, circuitry, or integrated circuit components that include, e.g., an upscaler or upsampler to upscale an image, an image blender or image blender component to blend, mix, or add images together, a sampler to sample an image (e.g., as part of a DSP), a neural network circuit that is configured to perform an upscaler to upscale an image (e.g., from a low resolution image to a high resolution image), or other hardware to modify or generate an image, frame, or video to adjust its resolution, size, or pixels; one or more components of systems and/or processors disclosed above can use components described in this disclosure to perform methods, operations, or instructions that generate or modify an image.[0355] At least one embodiment of the disclosure can be described in view of the following clauses:1. A processor comprising: one or more circuits to perform an application programming interface (API) to identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.2. The processor of clause 1, wherein the API is to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating a location in
memory of one or more instructions of a function based, at least in part, on a version of the function indicated to the API.3. The processor of clause 1 or 2, wherein the API is to receive one or more data values to indicate the one or more versions.4. The processor of any of clauses 1-3, wherein the API is to receive one or more first data values to indicate a base name and one or more second data values to indicate the one or more versions.5. The processor of any of clauses 1-4, wherein the one or more libraries are runtime libraries to be performed by the one or more circuits.6. The processor of any of clauses 1-5, wherein the one or more libraries are drivers to be performed by the one or more circuits.7. A system comprising: one or more processors to perform an application programming interface (API) to identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.8. The system of clause 7, wherein the API is to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating one or more memory locations of one or more instructions to perform the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the API.9. The system of clause 7 or 8, further comprising one or more data values indicating a base name and version number to be used by the API to identify the one or more versions.10. The system of any of clauses 7-9, wherein the API is to receive one or more parameters comprising data to indicate at least a name value and a numerical value, the name value and the numerical value to be used by the API to identify the one or more versions of the one or more portions of the one or more libraries.11. The system of any of clauses 7-10, wherein the one or more libraries are drivers to be performed by the one or more processors.
12. The system of any of clauses 7-11, wherein the one or more libraries are runtime libraries to be performed by the one or more processors.13. A machine-readable medium having stored thereon one or more application programming interfaces (APIs), which if performed at least in part by one or more processors, cause the one or more processors to at least: identify one or more versions of one or more portions of one or more libraries to be used in conjunction with the one or more APIs.14. The machine-readable medium of clause 13, further comprising one or more instructions that, if performed by the one or more processors, cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the one or more APIs, the data values comprising information to indicate a name usable to identify the one or more versions.15. The machine-readable medium of clause 13 or 14, further comprising one or more instructions that, if performed by the one or more processors, cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries based, at least in part, on one or more data values indicated to the one or more APIs, the data values comprising information to indicate a numerical value usable to identify the one or more versions.16. The machine-readable medium of any of clauses 13-15, wherein the one or more APIs are to identify the one or more versions based, at least in part, on one or more parameters indicated to the one or more APIs.17. The machine-readable medium of any of clauses 13-16, wherein the one or more APIs are to cause the one or more processors to identify the one or more versions of the one or more portions of the one or more libraries by at least indicating a location in memory of one or more instructions.18. The machine-readable medium of any of clauses 13-17, wherein the one or more libraries are drivers to be performed by the one or more processors.19. A method comprising:
identifying, in response to an application programming interface (API), one or more versions of one or more portions of one or more libraries to be used in conjunction with the API.20. The method of clause 19, wherein the one or more versions are to be identified based, at least in part, on one or more parameters to the API, the one or more parameters comprising data to indicate at least a string usable to identify the one or more versions.21. The method of clause 19 or 20, wherein the one or more versions are to be identified based, at least in part, on one or more parameters to the API, the one or more parameters comprising data to indicate at least a numerical value usable to identify the one or more versions.22. The method of any of clauses 19-21, further comprising identifying the one or more versions by indicating a location in memory of one or more instructions of the one or more versions of one or more portions of one or more libraries based, at least in part, on one or more data values indicated to the API.23. The method of any of clauses 19-22, wherein the one or more portions comprise one or more sets of instructions to be performed by one or more software programs in conjunction with the API.24. The method of any of clauses 19-23, wherein the one or more libraries are runtime libraries comprising instructions that, if executed, perform the API.25. The method of any of clauses 19-24, wherein the one or more libraries are a driver and the driver comprises one or more instructions to perform the API.[0356] Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims.
[0357] Use of terms “a” and “an” and “the” and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (meaning “including, but not limited to,”) unless otherwise noted. Term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein, and each separate value is incorporated into specification as if it were individually recited herein. Use of term “set” (e.g., “a set of items”) or “subset,” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term “subset” of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal.[0358] Conjunctive language, such as phrases of form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. For instance, in illustrative example of a set having three members, conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). A number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase “based on” means “based at least in part on” and not “based solely on.”[0359] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or
combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (e.g., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein. A set of non-transitory computer-readable storage media, in at least one embodiment, comprises multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors — for example, a non-transitory computer-readable storage medium stores instructions and a main central processing unit (“CPU”) executes some of instructions while a graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions.[0360] Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations.
[0361] Use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure.[0362] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.[0363] In description and claims, terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular examples, “connected” or “coupled” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. “Coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.[0364] Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as “processing,” “computing,” “calculating,” “determining,” or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system’s registers and/or memories into other data similarly represented as physical quantities within computing system’s memories, registers or other such information storage, transmission or display devices.[0365] In a similar manner, term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory and transform that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, “processor” may be a CPU or a GPU. A “computing platform” may comprise one or more processors. As used herein, “software” processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. Terms “system” and “method” are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system.
[0366] In at least one embodiment, an arithmetic logic unit is a set of combinational logic circuitry that takes one or more inputs to produce a result. In at least one embodiment, an arithmetic logic unit is used by a processor to implement mathematical operations such as addition, subtraction, or multiplication. In at least one embodiment, an arithmetic logic unit is used to implement logical operations such as logical AND/OR or XOR. In at least one embodiment, an arithmetic logic unit is stateless, and made from physical switching components such as semiconductor transistors arranged to form logical gates. In at least one embodiment, an arithmetic logic unit may operate internally as a stateful logic circuit with an associated clock. In at least one embodiment, an arithmetic logic unit may be constructed as an asynchronous logic circuit with an internal state not maintained in an associated register set. In at least one embodiment, an arithmetic logic unit is used by a processor to combine operands stored in one or more registers of the processor and produce an output that can be stored by the processor in another register or a memory location.[0367] In at least one embodiment, as a result of processing an instruction retrieved by the processor, the processor presents one or more inputs or operands to an arithmetic logic unit, causing the arithmetic logic unit to produce a result based at least in part on an instruction code provided to inputs of the arithmetic logic unit. In at least one embodiment, the instruction codes provided by the processor to the ALU are based at least in part on the instruction executed by the processor. In at least one embodiment, combinational logic in the ALU processes the inputs and produces an output which is placed on a bus within the processor. In at least one embodiment, the processor selects a destination register, memory location, output device, or output storage location on the output bus so that clocking the processor causes the results produced by the ALU to be sent to the desired location.[0368] In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. Process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In some implementations, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In another implementation, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. References may also be made to providing, outputting, transmitting, sending, or
presenting analog or digital data. In various examples, process of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism.[0369] Although discussion above sets forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. [0370] Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims. |
The present neutron sensing device includes a first substantially planar array of flash memory cells, a second substantially planar array of flash memory cells having an edge adjacent an edge of the first substantially planar array of flash memory cells, and a third substantially planar array of flash memory cells having a first edge adjacent an edge of the first substantially planar array of flash memory cells and a second edge adjacent an edge of the second substantially planar array of flash memory cells. The plane of the second substantially planar array of flash memory cells is at an angle relative to the plane of the first substantially planar array of flash memory cells, and the plane of the third substantially planar array of flash memory cells is at an angle relative to the plane of the first substantially planar array of flash memory cells, and is at an angle relative to the plane of the second substantially planar array of flash memory cells, all such angles being indicated as 90° in the preferred embodiment. |
1. Apparatus for sensing neutron flow comprising: a first substantially planar array of flash memory cells; and a second substantially planar array of memory cells, the plane of the second substantially planar array of memory cells being at an angle relative to the plane of the first substantially planar array of memory cells. 2. The apparatus of claim 1 wherein at least one of the first and second substantially planar arrays is mounted on a neutron-absorbing substrate.3. The apparatus of claim 1 wherein the first and second substantially planar arrays are mounted on neutron-absorbing substrates.4. The apparatus of claim 1 wherein the angle between the plane of the first substantially planar array of memory cells and the plane of the second substantially planar array of memory cells is substantially 90[deg.].5. The apparatus of claim 1 and further comprising a third substantially planar array of memory cells, the plane of the third substantially planar array of memory cells being at an angle relative to the plane of the first substantially planar array of memory cells and being at an angle relative to the plane of the second substantially planar array of memory cells.6. The apparatus of claim 5 wherein the angle between the plane of the first substantially planar array of memory cells and the plane of the second substantially planar array of memory cells is substantially 90[deg.].7. The apparatus of claim 5 wherein the angle between the plane of the first substantially planar array of memory cells and the plane of the second substantially planar array of memory cells is substantially 90[deg.], and the angle between the plane of the second substantially planar array of memory cells and the plane of the third substantially planar array of memory cells is substantially 90[deg.].8. The apparatus of claim 7 wherein the angle between the plane of the first substantially planar array of memory cells and the plane of the third substantially planar array of memory cells is substantially 90[deg.].9. The apparatus of claim 8 wherein the memory cells are flash memory cells.10. Apparatus for sensing neutron flow comprising: a first substantially planar array of flash memory cells; a second substantially planar array of flash memory cells having an edge adjacent an edge of the first substantially planar array of flash memory cells; and a third substantially planar array of flash memory cells having a first edge adjacent an edge of the first substantially planar array of flash memory cells and a second edge adjacent an edge of the second substantially planar array of flash memory cells; the plane of the second substantially planar array of flash memory cells being at an angle relative to the plane of the first substantially planar array of flash memory cells; the plane of the third substantially planar array of flash memory cells being at an angle relative to the plane of the first substantially planar array of flash memory cells and being at an angle relative to the plane of the second substantially planar array of flash memory cells. 11. The apparatus of claim 10 wherein the first, second and third substantially planar arrays are mounted on neutron-absorbing substrates.12. The apparatus of claim 11 wherein the angle between the plane of the first substantially planar array of flash memory cells and the plane of the second substantially planar array of flash memory cells is substantially 90[deg.].13. 
The apparatus of claim 11 wherein the angle between the plane of the first substantially planar array of flash memory cells and the plane of the second substantially planar array of flash memory cells is substantially 90[deg.], and the angle between the plane of the second substantially planar array of flash memory cells and the plane of the third substantially planar array is substantially 90[deg.].14. The apparatus of claim 13 wherein the angle between the plane of the first substantially planar array of flash memory cells and the plane of the third substantially planar array of flash memory cells is substantially 90[deg.].15. The apparatus of claim 14 wherein each of the first, second and third planar arrays of flash memory cells is substantially rectangular in configuration. |
BACKGROUND OF THE INVENTION1. Technical FieldThis invention relates generally to semiconductor devices, and more particularly, to a neutron detecting device.2. Background ArtU.S. Pat. No. 6,075,261 entitled NEUTRON DETECTING SEMICONDUCTOR DEVICE, invented by Hossain et al., assigned to the Assignee of this invention, discloses a neutron detecting device which is formed by providing an array of flash memory cells, with neutron-reactant material over the memory cells. Upon being penetrated by a neutron, the neutron-reactant material emits one or more particles capable of inducing a state change in a memory cell. For example, as disclosed in that patent, the state of the flash memory transistor illustrated and described therein is an on-state or a logical 1 state, associated with a negative charge on the floating gate and an inversion layer beneath the floating gate. In such case, the neutron-reactant material, upon being penetrated by a neutron, emits one or more particles which pass through the inversion layer, sufficiently reducing the charge in the channel region of the transistor to remove the inversion layer and change the state of the memory cell to an off-state or logical 0 state.The neutron detecting device includes a memory arrangement which includes a plurality of flash memory cells in the form of an array, as described above. Typically, the initial, undisturbed state of each memory cell is set to a logical 1. During a detection cycle, the state of each cell is read to determine whether such state has changed, indicating detection of neutrons in accordance with the above mechanism. The proportion of cells which have changed state compared to the overall number of cells in the array can be used to determine the presence and intensity of a neutron field. In a typical embodiment, the percentage of state changes can range from, for example, 0.001% to 0.1% of the total number of memory cells in the array. After a chosen time interval during which the reading of the cells takes place as described above, all of the memory cells are reset to logical 1 in preparation for the next detection cycle.In such a device, a reading of intensity of the neutron field as indicated by the device is dependent on the orientation of the device relative to the path of travel of the neutrons of the neutron field, as will now be described and illustrated with regard to FIGS. 1 and 2.FIG. 1 illustrates a neutron field 20 which includes a plurality of neutrons 22 flowing in the direction indicated. It will be understood that the neutrons 22 illustrated are a portion of a large neutron field 20, which field 20 extends sidewardly of FIG. 1 and also perpendicular to the plane of FIG. 1. The neutrons 22 are indicated as generally equally spaced apart a distance A for purposes of simplicity in this example. FIG. 2 illustrates portions of the subject matter of FIG. 1 enlarged for clarity.With the memory cell array 24 (mounted on a substrate 26) oriented as shown in FIGS. 1A, 2A, the plane of the array 24 is substantially perpendicular to the direction of the flow of neutrons 22 ([theta] indicates the angle between the plane of the array 24 and the direction of flow of neutrons 22, in this case [theta]1=90[deg.]). In this situation, the spacing of the neutrons 22 impinging on the array 24 is substantially the same as the spacing A. A reading of intensity I of the neutron field 20 taken in accordance with the above procedure will indicate an intensity of, for example, I1. 
If the memory cell array 24 is oriented in the same neutron field 20 as shown at FIGS. 1B, 2B (array 24 rotated counterclockwise relative to FIGS. 1A, 2A), with the plane of the array 24 not substantially perpendicular to the direction of flow of neutrons 22 but at an angle [theta]2 relative thereto, the spacing B of the neutrons 22 impinging on the array 24 is greater than the spacing A in the previous example. With this being the case, over a given period of time, the array 24 will be exposed to a smaller number of neutrons 22 than in the example of FIGS. 1A, 2A, decreasing the percentage of state changes in the array 24 as compared to the example of FIGS. 1A, 2A. Indeed, it will be seen that, with reference to FIG. 2B, the reading of intensity with the memory cell array 24 oriented as shown at FIGS. 1B, 2B is I = k sin [theta], where k is a constant and [theta] is the angle between the direction of flow of neutrons 22 and the plane of the array 24.Likewise, if the memory cell array 24 is oriented as shown at FIGS. 1C, 2C (array 24 rotated clockwise relative to FIGS. 1A, 2A), with the plane of the array 24 not substantially perpendicular to the direction of flow of neutrons 22 but at an angle [theta]3 relative thereto, the spacing C of the neutrons 22 impinging on the array 24 is greater than the spacing A in the example of FIGS. 1A, 2A. With this being the case, over a given period of time, the array 24 will be exposed to a lower number of neutrons 22 than in the example of FIGS. 1A, 2A, decreasing the percentage of state changes in the array 24 as compared to the example of FIGS. 1A, 2A.Indeed, the above cited formula indicates a maximum intensity reading at [theta]=90[deg.] (sin [theta]=1, FIGS. 1A, 2A), which will readily be seen to be the case in reviewing FIGS. 1 and 2 in their entirety.Thus, it will be seen that the level of intensity of the neutron field 20 indicated by the present device is dependent on the orientation of the device relative to the direction of flow of the neutrons 22.In addition, while a level of intensity is read with the array 24 in a variety of positions relative to the direction of flow of neutrons 22, no indication is given as to the direction of neutron flow, i.e., the direction of the source of neutrons relative to the array 24.Therefore, what is needed is a neutron detecting device which is capable of properly measuring the intensity of a neutron field and indicating the direction of the source of neutrons.DISCLOSURE OF THE INVENTIONThe present invention is an apparatus for sensing neutron flow. The apparatus includes a first substantially planar array of flash memory cells, a second substantially planar array of flash memory cells having an edge adjacent an edge of the first substantially planar array of flash memory cells, and a third substantially planar array of flash memory cells having a first edge adjacent an edge of the first substantially planar array of flash memory cells and a second edge adjacent an edge of the second substantially planar array of flash memory cells. 
The plane of the second substantially planar array of flash memory cells is at an angle relative to the plane of the first substantially planar array of flash memory cells, and the plane of the third substantially planar array of flash memory cells is at an angle relative to the plane of the first substantially planar array of flash memory cells and is at an angle relative to the plane of the second substantially planar array of flash memory cells.The present invention is better understood upon consideration of the detailed description below, in conjunction with the accompanying drawings. As will become readily apparent to those skilled in the art from the following description, there is shown and described an embodiment of this invention simply by way of illustration of the best mode to carry out the invention. As will be realized, the invention is capable of other embodiments and its several details are capable of modification in various obvious respects, all without departing from the scope of the invention. Accordingly, the drawings and detailed description will be regarded as illustrative in nature and not as restrictive.BRIEF DESCRIPTION OF THE DRAWINGSThe novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, and further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:FIG. 1 is a view illustrating various orientations of a memory cell array relative to the path of the neutrons of a neutron field;FIG. 2 is a view of enlarged portions of FIG. 1;FIG. 3 is a perspective view illustrating an embodiment of the present invention;FIG. 4 is a view illustrating the orientations of several memory cell arrays of the present device relative to the path of neutrons of a neutron field;FIG. 5 is a view illustrating the orientations of several memory cell arrays of the present device relative to the path of neutrons of a neutron field, with the device in an orientation different from that shown in FIG. 4;FIG. 6 is a view illustrating an orientation of a pair of arrays of the device relative to the path of neutrons of a neutron field; andFIG. 7 is a geometric representation of the subject matter of FIG. 6.BEST MODE(S) FOR CARRYING OUT THE INVENTIONReference is now made in detail to a specific embodiment of the present invention which illustrates the best mode presently contemplated by the inventors for practicing the invention.FIG. 3 illustrates an embodiment of the present invention. As shown therein, the neutron detecting device 30 is in the form of a cube and includes first, second, third, fourth, fifth and sixth substantially planar arrays 32, 34, 36, 38, 40, 42 of flash memory cells at the faces of the cube. The arrays 32, 34, 36 are mounted on respective neutron-absorbing substrates 33, 35, 37, and the arrays 38, 40, 42 are also mounted on respective neutron-absorbing substrates (not shown in FIG. 3 for clarity), with the arrays 32, 34, 36, 38, 40, 42 being fixed in position relative to each other. Each of the arrays 32, 34, 36, 38, 40, 42 is substantially rectangular in configuration, i.e., in this particular embodiment, substantially square in configuration, and the arrays 32, 34, 36, 38, 40, 42 are in this embodiment substantially equal in area. 
The arrays 32-42 are arranged so that each edge of each array lies along an edge of another array, with the angle between the planes of such arrays being 90[deg.]. For example, the array 32 has an edge 32A adjacent and lying along an edge 34A of the array 34, the angle between the plane of the array 32 and the plane of the array 34 being 90[deg.]. The array 36 has an edge 36A adjacent and lying along an edge 34B of the array 34, and an edge 36B adjacent and lying along an edge 32B of the array 32, the angle between the plane of the array 36 and the plane of the array 34 being 90[deg.], and the angle between the plane of the array 36 and the plane of the array 32 being 90[deg.].With reference to both FIGS. 3 and 4, assuming the presence of a neutron field 50 wherein the direction of travel of the neutrons 52 is directly toward the device 30 from the position of the observer of FIG. 3, i.e., substantially perpendicular to and into the plane of the drawing of FIG. 3, neutrons 52 will strike the array 32 at angle [beta]1a relative to the plane of the array 32, at angle [beta]2a relative to the plane of the array 34, and at angle [beta]3a relative to the plane of the array 36 (FIG. 4). With the device 30 so positioned relative to the direction of travel of the neutrons 52, a reading of level of intensity is taken at each of the arrays 32, 34, 36, resulting in intensity readings I1a for array 32, I2a for array 34, and I3a for array 36. The intensity level indicated at each of the arrays 38, 40, 42 will be zero because of the neutron-absorbing substrates associated with each of the arrays 32-42, which absorb neutrons passing through an array, preventing them from reaching another array of the device 30.Next, the device 30 is rotated in a manner so that only two of the arrays indicate an intensity level, that is, all of the other arrays indicate zero intensity level. For example, with reference to FIGS. 5 and 6, the device 30 is rotated until the intensity level indicated at the array 32 is zero (the intensity level indicated at each of the arrays 38, 40, 42 also being zero because of the neutron-absorbing substrates), leaving only arrays 34, 36 indicating an intensity level (it will be understood that one is careful not to rotate and position the device 30 so that an array other than arrays 34, 36, for example array 42, indicates an intensity level, the point being to arrive at a device position where only two of the arrays, in this example arrays 34, 36, provide a reading of intensity level while all the other arrays, in this example arrays 32, 38, 40, 42, indicate an intensity level of zero). This situation is illustrated in FIG. 5, wherein neutrons 52 will not strike the array 32 ([beta]1b=0, sin [beta]1b=0), FIG. 5A, will strike the array 34 at an angle [beta]2b relative to the plane of the array 34, FIG. 5B, and will strike the array 36 at an angle [beta]3b relative to the plane of the array 36, FIG. 5C. See also FIG. 6.It will be seen that the ratio of the intensities indicated by the arrays 34, 36, i.e., I2b:I3b, is readily determined. That is, in accordance with the above discussion, since I = k sin [beta] for each of the arrays 34, 36, I2b = k sin [beta]2b for array 34 and I3b = k sin [beta]3b for array 36, and the ratio of the sines of [beta]2b, [beta]3b, i.e., sin [beta]2b:sin [beta]3b, can be readily determined, this ratio being the same as the ratio I2b:I3b.Upon noting that the plane of the array 34 is at an angle of 90[deg.] relative to the plane of the array 36, it will be realized that [beta]3b=90[deg.]-[beta]2b (see FIGS. 6 and 7). 
Knowing the value M of the ratio of the sines of the angles [beta]2b, [beta]3b, one can determine the value of [beta]2b, as will now be described.sin [beta]2b / sin [beta]3b = M; sin [beta]2b = K/L (opposite/hypotenuse); sin [beta]3b = J/L (opposite/hypotenuse); thus M = (K/L) / (J/L) = K/J.As will be noted in FIG. 7, the ratio K/J is the tangent of [beta]2b. Thus, arctangent M = [beta]2b.One is thus able to determine the unique, particular, single angle of direction of travel of neutrons 52 relative to the array 34, and thus relative to the device 30 itself. Based on this information, an indicator provided on the device 30 can visually indicate the direction (relative to the device 30) from which the neutrons 52 are traveling.Once the direction of travel of the neutrons 52 is determined, the device 30 can be rotated so that a single array (for example array 34, FIG. 3) is positioned with its plane substantially perpendicular to the direction of neutron 52 travel, using the indicator described above. With such a single array 34 so positioned (in turn resulting in all the other arrays 32, 36, 38, 40, 42 being positioned so that they are not exposed to neutron flow), a direct reading of the intensity of the neutron field 50 can be taken by this single array 34, with sin [beta] for the array 34 being 1 (maximum intensity read by array 34).In accordance with the above description, the direction of a source of neutrons can be readily determined, and the intensity of the neutron field can be read in a proper manner, consistent from one reading to the next. Furthermore, it will be realized that one skilled in the art could use a tensor approach and direction cosines to determine the direction of neutron travel based on intensity information of three arrays exposed to neutrons, while holding the device 30 in place.The foregoing description of the embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Other modifications or variations are possible in light of the above teachings.The embodiment was chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled. |
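As a worked check of the direction-finding arithmetic above, the following Python sketch recovers [beta]2b from the two intensity readings via arctangent M = [beta]2b. It assumes ideal, noise-free readings, and the function name is illustrative rather than from the disclosure:

```python
import math

# With I = k sin(beta) at each array, and the two array planes at 90 deg to
# each other (beta3b = 90 deg - beta2b), the ratio M = I2b/I3b equals
# sin(beta2b)/cos(beta2b) = tan(beta2b), so beta2b = arctan(M).

def neutron_direction(i2b: float, i3b: float) -> float:
    """Return beta2b in degrees from the readings of arrays 34 and 36."""
    m = i2b / i3b                      # ratio of intensities = ratio of sines
    return math.degrees(math.atan(m))  # arctangent M = beta2b

# Example with an assumed true angle of 30 deg and an arbitrary constant k:
k, beta2b = 2.5, 30.0
i2b = k * math.sin(math.radians(beta2b))         # reading at array 34
i3b = k * math.sin(math.radians(90.0 - beta2b))  # reading at array 36
assert abs(neutron_direction(i2b, i3b) - beta2b) < 1e-9
```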
Briefly, in accordance with an embodiment of the invention, a method and apparatus to transfer information are provided, wherein the method includes monitoring activity on a bus during a transfer of information from a device using the bus and generating a direct memory access (DMA) request based on the bus activity. |
Claims 1. A method, comprising: monitoring activity on a bus during a transfer of information from a device using the bus; and generating a direct memory access (DMA) request based on the activity on the bus. 2. The method of claim 1, wherein generating comprises generating the DMA request if a signal on the bus indicates that the transfer of information from the device using the bus is complete. 3. The method of claim 1, wherein generating comprises generating the DMA request if a signal on the bus transitions from a first level to a second level. 4. The method of claim 1, wherein generating comprises generating the DMA request if a signal on the bus is at a predetermined level. 5. The method of claim 1, wherein monitoring comprises monitoring the bus to detect a DMA event. 6. The method of claim 5, wherein generating further comprises generating the DMA request in response to the DMA event. 7. The method of claim 5, further comprising generating the DMA request in response to a predetermined number of DMA events. 8. The method of claim 5, further comprising generating the DMA request in response to the DMA event, wherein the DMA request is generated a predetermined amount of time after the DMA event. 9. The method of claim 5, wherein the DMA event is an event indicating that the transfer of information from the device using the bus is complete. 10. A method, comprising: using a direct memory access (DMA) controller to transfer information from a non-DMA device. 11. The method of claim 10, further comprising monitoring a bus coupled to the non-DMA device to determine if information is ready to be transferred from the non-DMA device. 12. The method of claim 11, further comprising generating a DMA request if a signal on the bus coupled to the non-DMA device indicates that information is ready to be transferred from the non-DMA device. 13. An apparatus, comprising: a first device adapted to determine if a transfer of information from a second device is complete and adapted to generate a direct memory access (DMA) request if the transfer of the information from the second device is complete. 14. The apparatus of claim 13, further comprising a bus coupled to the second device, wherein the first device monitors a signal on the bus to determine if the transfer of information from the second device is complete. 15. The apparatus of claim 13, wherein the second device is a non-DMA device. 16. The apparatus of claim 13, further comprising a DMA controller coupled to the first device, wherein the DMA controller is adapted to receive the DMA request and adapted to transfer information from or to the second device in response to the DMA request. 17. The apparatus of claim 16, wherein the DMA controller has at least two DMA request input terminals to receive the DMA request and wherein the second device is a non-DMA device and the second device is not connected to any of the DMA request input terminals of the DMA controller. 18. A system, comprising: a processor; a wireless transceiver coupled to the processor; a bus coupled to the processor; a first device coupled to the bus; and a second device adapted to monitor the bus to determine if a transfer of information from the first device is complete and adapted to generate a direct memory access (DMA) request if the transfer of the information from the first device is complete. 19. The system of claim 18, further comprising a DMA controller coupled to the second device. 20. 
The system of claim 19, wherein the DMA controller has a DMA request input terminal to receive the DMA request and wherein the first device is not connected to any of the DMA request input terminals of the DMA controller. |
METHOD AND APPARATUS TO TRANSFER INFORMATION BACKGROUND A microprocessor in a computing system may initiate and control the transfer of information within the system. A microprocessor may operate at a relatively greater speed than other components within the system. Accordingly, the microprocessor may incur a significant amount of idle time while waiting for data to be transferred between two relatively slower peripheral devices after initiating the transfer of data. Thus, there is a continuing need for alternate ways to transfer information. BRIEF DESCRIPTION OF THE DRAWINGS The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The claimed subject matter, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which: FIG. 1 is a block diagram of a computing system in accordance with an embodiment of the claimed subject matter; FIG. 2 is a block diagram of a direct memory access (DMA) request generator in accordance with an embodiment of the claimed subject matter; and FIG. 3 is a block diagram illustrating a portable communication device in accordance with an embodiment of the claimed subject matter. It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements. DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that the claimed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the claimed subject matter. Embodiments of the claimed subject matter may include an apparatus for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computing device selectively activated or reconfigured by a program stored in the device. Such a program may be stored on a storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, electromechanical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), flash memory, magnetic or optical cards, or any other type of media suitable for storing electronic instructions and data. In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. 
Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. Turning to FIG. 1, an embodiment of a computing system 100 is illustrated. Computing system 100 may be used in a variety of applications such as, for example, a personal digital assistant (PDA), a two-way pager, a cellular phone, a portable computer, a desktop computer, a workstation, a server, or video equipment. It should be pointed out, however, that the scope and application of the claimed subject matter is in no way limited to these examples. In this embodiment, computing system 100 may comprise a processor 110 that may be connected to an external bus controller 120, a communication bus controller 130, an internal bus controller 140, a direct memory access (DMA) controller 150, and a DMA request generator 160. DMA controller 150 may be connected to external bus controller 120, communication bus controller 130, internal bus controller 140, and DMA request generator 160. External bus controller 120 may be connected to a bus 170; communication bus controller 130 may be connected to a bus 180; and internal bus controller 140 may be connected to a bus 190. DMA request generator 160 may be connected to buses 170, 180, and 190. Computing system 100 may further comprise an internal memory 270 connected to bus 190. Although not shown in the embodiment illustrated in FIG. 1, in alternate embodiments, processor 110 may be directly connected to buses 170, 180, and 190. In addition, in alternate embodiments, DMA controller 150 may be directly connected to buses 170, 180, and 190. In addition, computing system 100 may comprise devices to interface to peripheral devices (not shown) such as, for example, a digital camera, a display, a keyboard, a memory device, a printer, an audio device, etc. These peripheral devices may also be referred to as Input/Output (I/O) devices or external devices. In the embodiment illustrated in FIG. 1, computing system 100 may include the following devices to interface to peripheral devices: an external memory controller 210, a display controller 220, a camera controller 230, an audio controller 240, a serial peripheral interface (SPI) 250, and a universal asynchronous receiver transmitter (UART) 260. These interface devices may be integrated ("on-chip") with the peripheral devices, or in alternate embodiments, may be discrete components. The interface devices may also be referred to as peripheral devices. External memory controller 210, display controller 220, camera controller 230, and audio controller 240 may be connected to bus 170. SPI 250 and UART 260 may be connected to bus 180. Although the scope of the claimed subject matter is not limited in this respect, buses 170, 180, and 190 may be data paths comprising, for example, a collection of data lines to transmit information from one part of computing system 100 to another. Processor 110 may comprise, for example, one or more microprocessors, digital signal processors, microcontrollers, or the like. 
Processor 110 may execute a software process such as, for example, a software program or an operating system, wherein the software process may use digital information such as, for example, data and/or instructions. Internal memory 270 may be referred to as a storage device and may be adapted to store information such as, for example, instructions or data used by an operating system or a software program that may be executed by processor 110. In some embodiments, internal memory 270 may be a volatile memory such as, for example, a static random access memory (SRAM) or a dynamic random access memory (DRAM), although the scope of the claimed subject matter is not limited in this respect. In alternate embodiments, internal memory 270 may be nonvolatile memory such as, for example, an electrically programmable read-only memory (EPROM), an electrically erasable and programmable read only memory (EEPROM), or a flash memory (NAND or NOR type, including multiple bits per cell). It should be noted that herein, the terms data and information may be used interchangeably. For example, data may also refer to both data and/or instructions. In addition, the terms information and data may refer to a single bit of information or more than one bit of information. In some embodiments, bus controllers 120, 130, and 140 may be used with processor 110 or DMA controller 150 to control the transfer of information within computing system 100. Bus controllers 120, 130, and 140 may include buffers, queues, or registers to store information and may also comprise circuitry adapted to generate control, address, and data signals to control the transfer of information in computing system 100. For example, bus controllers 120, 130, and 140 may generate control signals, address signals, and data signals that may be associated with a particular write or read operation to the various devices in computing system 100. As stated above, processor 110 may also be used with bus controllers 120, 130, and 140 to control the transfer of information. For example, processor 110 may provide data, address, and control information to bus controllers 120, 130, and 140 to initiate a transfer of information between the various peripheral and internal devices of computing system 100. DMA controller 150 may be used with bus controllers 120, 130, and 140 to control the transfer of information between memory devices in computing system 100 or control the transfer of information between a memory device and a peripheral device in computing system 100. DMA controller 150 may perform a transfer of information to or from a memory device without using processor 110. Transfers using DMA controller 150 may be referred to as DMA transfers. DMA controller 150 may have a predetermined number of DMA channels, wherein each channel may be dedicated to a specific device or devices in computing system 100. DMA controller 150 may include a predetermined number of DMA request input terminals to receive DMA requests from memory devices or peripheral devices in computing system 100. In response to receiving a DMA request, DMA controller 150 may initiate a DMA transfer. If a peripheral device or memory device is adapted to transmit a DMA request to one of the DMA request input terminals, then the peripheral device or memory device may be referred to as a DMA device and may be said to have a DMA interface. 
The DMA interface of a DMA device may provide handshaking between DMA controller 150 and the DMA device to transfer information to and from the DMA device. Non-DMA devices may be devices that have no DMA interface, e.g., these devices may not have access to a DMA request input terminal. In some embodiments, a non-DMA device may use processor 110, rather than DMA controller 150, to transfer information to and from the non-DMA device. Internal memory 270, SPI 250, UART 260, and controllers 210, 220, 230, and 240 may be configured as either DMA devices or non-DMA devices. As an example, camera controller 230 may be connected to a camera (not shown). Camera controller 230 may have a DMA interface, i.e., in this example, camera controller 230 may be adapted to send a DMA request to a DMA request input terminal of DMA controller 150 via bus 170 and external bus controller 120. In this example, the camera and camera controller 230 may be referred to as a DMA device having a DMA interface. DMA controller 150 may be used to transfer a block of data from the camera to internal memory 270. In this example, prior to a DMA request, processor 110 may supply to DMA controller 150 the following: a source address, a destination address, and the size of the data transfer. The source address may be the location of the block of data in the camera and the destination address may be the location where the data is to be placed in internal memory 270 during the DMA transfer. Camera controller 230 may be configured to trigger the DMA transfer by generating a DMA request. The DMA request may be transferred from camera controller 230 to one of the DMA request input terminals via bus 170 and external bus controller 120. In response to the DMA request, DMA controller 150 may transmit a signal to processor 110 indicating that DMA controller 150 is to take control of buses 170 and 190. After processor 110 releases control of buses 170 and 190, DMA controller 150 may transmit a DMA acknowledge signal to camera controller 230. During the DMA transfer, buses 170 and 190 may be driven by DMA controller 150, not processor 110, and DMA controller 150 may generate the appropriate signals to perform the DMA transfer. During a DMA transfer, data may be transferred directly from the camera to internal memory 270, or in alternate embodiments, data may go through DMA controller 150. In this embodiment, during the DMA transfer, the block of data may be initially transferred from the camera to external bus controller 120 via bus 170, the block of data may then be transferred from external bus controller 120 to internal bus controller 140, and the block of data may then be transferred from internal bus controller 140 to internal memory 270 via bus 190. DMA request generator 160 may be connected to buses 170, 180 and 190 to monitor activity on these buses. DMA request generator 160 may be connected to one or more of the DMA request input terminals of DMA controller 150. DMA request generator 160 may monitor activity of a signal transferred on a bus during a transfer of information to or from a device using the bus and DMA request generator 160 may generate a DMA request based on the bus activity. In some embodiments, DMA request generator 160 may monitor activity on bus 170 during a transfer of information to or from controller 210, controller 220, controller 230, or controller 240. 
In addition, DMA request generator 160 may monitor activity on bus 180 during a transfer of information to or from SPI 250 or UART 260. Further, DMA request generator 160 may monitor activity on bus 190 during a transfer of information to or from internal memory 270. DMA request generator 160 may be adapted to detect DMA events and may generate a DMA trigger in response to the DMA event. In some embodiments, DMA request generator 160 may be connected to external dedicated pins (not shown) to detect a DMA event. DMA events may be predefined events. For example, although the scope of the claimed subject matter is not limited in this respect, the completion of the transfer of a block of information from a device may be defined as a DMA event. Alternatively, a request to transfer information from a non-DMA device may be a DMA event. DMA request generator 160 may monitor a bus coupled to the device to determine if the DMA event occurred, e.g., if the transfer of the block of information from the device is complete. In response to the detection of a DMA event, DMA request generator 160 may generate a DMA request and may transfer this request to one of the DMA request input terminals of DMA controller 150. In other words, DMA request generator 160 may be adapted to monitor a bus to determine if a transfer of a block of information from a device is complete and may be adapted to generate a DMA request if the transfer of the block of information from the device is complete. In response to receiving the DMA request from DMA request generator 160, DMA controller 150 may respond in many ways. For example, DMA controller 150 may initiate a DMA transfer of the block of information to another device, or in alternate embodiments, DMA controller 150 may initiate another transfer of another block of information from the device. To determine if the transfer of a block of information from a device is complete, DMA request generator 160 may monitor one or many signals on a bus. For example, DMA request generator 160 may monitor chip select (CS) signals of peripheral or memory devices, access signals (e.g., read or write signals) transmitted over the bus, or address signals on the bus. In some embodiments, DMA controller 150 may be used to transfer information from a non-DMA device. For example, if camera controller 230 is a non-DMA device, then DMA request generator 160 may monitor bus activity on bus 170 to determine if information is to be transferred from camera controller 230. If a signal on bus 170 coupled to camera controller 230 indicates that information is ready to be transferred from camera controller 230, then DMA request generator 160 may generate a DMA request to initiate a DMA transfer from camera controller 230 using DMA controller 150. In some embodiments, DMA request generator 160 may control the timing of the transfer of a DMA request to DMA controller 150. For example, DMA request generator 160 may transmit a DMA request to DMA controller 150 after a predetermined delay or a predetermined amount of time after detecting a DMA event or after receiving a DMA trigger. In alternate embodiments, DMA request generator 160 may transfer a DMA request to DMA controller 150 immediately after receiving a DMA trigger. Or, DMA request generator 160 may transfer a DMA request to DMA controller 150 after detecting a predetermined number of DMA events. 
By controlling the timing of sending a DMA request to DMA controller 150, DMA request generator 160 may control and balance the flow of information in computing system 100. Turning to FIG. 2, an embodiment of DMA request generator 160 is illustrated in accordance with an embodiment of the claimed subject matter. In this embodiment, DMA request generator 160 may comprise a trigger generator 370, a request generator 380 connected to trigger generator 370, and a control device 390 connected to trigger generator 370 and request generator 380. Trigger generator 370 may be connected to buses 170, 180, and 190 to monitor activity on these buses. Trigger generator 370 may generate a DMA trigger in response to activity on buses 170, 180, and 190. The DMA trigger may be transferred to request generator 380. Request generator 380 may be connected to one or more of the DMA request input terminals of DMA controller 150. In some embodiments, in response to a DMA trigger, request generator 380 may immediately transmit a DMA request to one of the DMA request input terminals to initiate a DMA transfer. In alternate embodiments, request generator 380 may transmit a DMA request to DMA controller 150 a predetermined amount of time after receiving a DMA trigger. In other embodiments, request generator 380 may transmit a DMA request to DMA controller 150 after receiving multiple DMA triggers. For example, request generator 380 may be configured to transmit a DMA request to DMA controller 150 after receiving at least three DMA triggers. Control device 390 may be adapted to control and configure trigger generator 370 and request generator 380. In some embodiments, control device 390 may be connected to processor 110 to receive configuration information from processor 110. For example, processor 110 may define what information trigger generator 370 monitors on buses 170, 180, and 190. In addition, processor 110 may define under what conditions and when request generator 380 generates a DMA request. Referring to both FIGS. 1 and 2, as an example, two blocks of information may be transferred from a camera (not shown) which may be coupled to camera controller 230. The two blocks of information may be transferred to internal memory 270. Two separate transfer operations may be used to transfer the two blocks of information, wherein each transfer includes transmitting a block of information in stages from the camera to internal memory 270. For example, during an initial stage, a block of information may initially be transferred to camera controller 230. During the next stage, the block of information may be transferred from camera controller 230 to external bus controller 120 via bus 170. In the following stage, the block of information may be transferred from external bus controller 120 to internal bus controller 140. In a final stage, the block of information may be transferred from internal bus controller 140 to internal memory 270 via bus 190. Camera controller 230 may be a relatively slow device compared to, for example, controllers 120, 130, and 140, processor 110, DMA controller 150, DMA request generator 160, and internal memory 270. 
Accordingly, the transfer of information from camera controller 230 to external bus controller 120 may be relatively slow compared to, for example, the transfer of information between external bus controller 120 and internal bus controller 140 or compared to the transfer of information between internal bus controller 140 and internal memory 270. In some embodiments, while the initial block of information is transferred from camera controller 230 to external bus controller 120, DMA request generator 160 may monitor bus 170 to determine if the transfer of the initial block of information from camera controller 230 is complete. During this transfer, DMA controller 150 may be free to perform a DMA transfer between other devices in computing system 100 since DMA request generator 160 is monitoring the transfer of information between camera controller 230 and external bus controller 120. For example, during the transfer of the initial block of information between camera controller 230 and external bus controller 120, rather than having DMA controller 150 in an idle state waiting for this transfer to complete, DMA controller 150 may be used to assist the transfer of information between, for example, SPI 250 and internal memory 270. DMA request generator 160 may monitor bus 170 to determine if the transfer of the initial block of information from camera controller 230 is complete. If the transfer of the initial block of information from camera controller 230 is complete, DMA request generator 160 may transmit a DMA request to DMA controller 150 to initiate a DMA transfer of the second block of information from camera controller 230. DMA request generator 160 may monitor one or many signals on bus 170. For example, DMA request generator 160 may monitor a chip select (CS) signal transmitted to a CS input terminal of camera controller 230. If the CS signal transferred to camera controller 230 is asserted low during a read operation, then trigger generator 370 may be configured to detect a rising edge of the CS signal to determine if the transfer of information from camera controller 230 is complete. If trigger generator 370 detects a rising edge of the CS signal, then trigger generator 370 may generate a DMA trigger and transmit the DMA trigger to request generator 380. In other words, if trigger generator 370 detects that the CS signal transitions from a relatively lower voltage level to a relatively higher voltage level, then trigger generator 370 may transmit a DMA trigger to request generator 380. In alternate embodiments, DMA request generator 160 may monitor access signals (e.g., a read signal or a write signal) transferred to input terminals of camera controller 230 via bus 170. For example, if a read signal transmitted to camera controller 230 is asserted low during a read operation, then trigger generator 370 may be configured to detect a rising edge of the read signal to determine if the transfer of information from camera controller 230 is complete. In other embodiments, DMA request generator 160 may monitor address or data signals transferred to input terminals of camera controller 230 via bus 170. One or more of the address or data signals transmitted to camera controller 230 may provide an indication of when the transfer of information from camera controller 230 is complete. 
For example, the level or value of one or more signals may be compared to a predetermined level or value to determine if the transfer of information from camera controller 230 is complete. Trigger generator 370 may be configured to perform the comparison to determine if the transfer of information from camera controller 230 is complete. If the level or value equals the predetermined level or value, then trigger generator 370 may be configured to generate a DMA trigger. Turning to FIG. 3, a portable communication device 400 in accordance with an embodiment of the claimed subject matter is described. Portable communication device 400 may include a processor 410 that may be connected to a bus controller 420, a bus controller 430, a DMA controller 450, and a DMA request generator 460. DMA controller 450 may be connected to bus controller 420, bus controller 430, and DMA request generator 460. Bus controller 420 may be connected to a bus 470 and bus controller 430 may be connected to a bus 480. DMA request generator 460 may be connected to buses 470 and 480. Portable communication device 400 may further comprise a memory 570 connected to bus 480. A wireless transceiver 500 may be connected to an antenna 510 and bus 470. In addition, portable communication device 400 may include interface devices 520 and 530, both of which may be connected to bus 470. Referring to FIGS. 1 and 3, the operation of interface devices 520 and 530 may be similar to the operation of SPI 250, UART 260, controller 210, controller 220, controller 230, or controller 240. The operation of bus controller 420 may be similar to the operation of external bus controller 120 or communication bus controller 130. The operation of bus controller 430 may be similar to the operation of internal bus controller 140. The operation of processor 410, DMA controller 450, and DMA request generator 460 may be similar to the operations of processor 110, DMA controller 150, and DMA request generator 160, respectively. Portable communication device 400 may use wireless transceiver 500 with antenna 510 to transmit and receive messages to and from a wireless communication network with a radio frequency (RF) signal. Although the scope of the claimed subject matter is not limited in this respect, portable communication device 400 may use one of the following communication air interface protocols to transmit and receive messages: Code Division Multiple Access (CDMA), cellular radiotelephone communication systems, Global System for Mobile Communications (GSM) cellular radiotelephone systems, North American Digital Cellular (NADC) cellular radiotelephone systems, Time Division Multiple Access (TDMA) systems, Extended-TDMA (E-TDMA) cellular radiotelephone systems, third generation (3G) systems like Wide-band CDMA (WCDMA), CDMA-2000, and the like. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. |
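The trigger/request behavior described above for DMA request generator 160 (with its trigger generator 370 and request generator 380) can be sketched in software. The following Python model is only an illustration of the edge-detection and counted-trigger ideas; the class and method names, and the three-trigger policy, are assumptions, not the patented circuit:

```python
# Sketch of the trigger generator / request generator behavior: detect a
# rising edge on a chip-select (CS) signal sampled from the bus (CS asserted
# low during a read, so a rising edge marks transfer completion), and assert
# a DMA request only after a configured number of DMA triggers.

class DmaRequestGenerator:
    def __init__(self, triggers_per_request: int = 1):
        self._last_cs = 1                  # CS idles high (asserted low)
        self._triggers = 0
        self._per_request = triggers_per_request

    def sample_cs(self, cs: int) -> bool:
        """Sample CS each bus cycle; return True when a DMA request fires."""
        rising_edge = self._last_cs == 0 and cs == 1  # transfer complete
        self._last_cs = cs
        if rising_edge:
            self._triggers += 1            # one DMA trigger per DMA event
        if self._triggers >= self._per_request:
            self._triggers = 0
            return True                    # assert the DMA request terminal
        return False

# Example: fire a request only after three completed transfers (DMA events).
gen = DmaRequestGenerator(triggers_per_request=3)
requests = [gen.sample_cs(cs) for cs in [1, 0, 1, 0, 1, 0, 1]]
assert requests.count(True) == 1 and requests[-1] is True
```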
Disclosed are examples of a device and method of fabricating a device including a first top contact, a second top contact, adjacent the first top contact, a first mesa disposed below the first top contact and a second mesa disposed below the second top contact. A first plate of a metal-insulator-metal (MIM) capacitor is disposed below the first top contact and electrically coupled to the first top contact. A first insulator of the MIM capacitor is disposed on the first plate. A second plate of the MIM capacitor is disposed on the first insulator and electrically coupled to the second top contact. A second insulator of the MIM capacitor is disposed on the second plate. A third plate of the MIM capacitor is disposed on the second insulator and electrically coupled to the first top contact. |
CLAIMSWHAT IS CLAIMED IS:1. An apparatus comprising: a first top contact; a second top contact, adjacent the first top contact; a first mesa disposed below the first top contact; a second mesa disposed below the second top contact; a first plate of a metal-insulator-metal (MIM) capacitor disposed below the first top contact and electrically coupled to the first top contact; a first insulator of the MIM capacitor disposed on the first plate; a second plate of the MIM capacitor disposed on the first insulator and electrically coupled to the second top contact; a second insulator of the MIM capacitor disposed on the second plate; and a third plate of the MIM capacitor disposed on the second insulator and electrically coupled to the first top contact.2. The apparatus of claim 1, further comprising: a first partial via disposed between the first top contact and the first mesa, wherein the first plate and the third plate are electrically coupled to the first top contact through the first partial via; and a second partial via disposed between the second top contact and the second mesa, wherein the second plate is electrically coupled to the second top contact through the second partial via.3. The apparatus of claim 1, wherein the first top contact is directly disposed on the first mesa and the second top contact is directly disposed on the second mesa.4. The apparatus of claim 1, wherein the first mesa and the second mesa are formed in a first inter-metal dielectric (IMD) layer.5. The apparatus of claim 4, further comprising: a second inter-metal dielectric (IMD) layer, wherein the first top contact and the second top contact are at least partially disposed in the second IMD layer.6. The apparatus of claim 5, wherein the first top contact and the second top contact are in a same metal layer in the second IMD layer.7. The apparatus of claim 6, further comprising: a lower metal layer, wherein the first IMD layer is disposed on the lower metal layer.8. The apparatus of claim 4, wherein the first insulator comprises a high dielectric constant (high-k) dielectric material, and wherein the first IMD layer comprises a low dielectric constant (low-k) dielectric material.9. The apparatus of claim 5, wherein the second IMD layer comprises a low dielectric constant (low-k) dielectric material.10. The apparatus of claim 1, wherein the first plate, the second plate, the third plate, the first insulator and the second insulator are disposed between the first top contact and the second top contact.11. The apparatus of claim 1 , wherein the first plate and the third plate are coupled to a first power connection, and the second plate is coupled to a second power connection.12. The apparatus of claim 11, wherein the first power connection is configured to be at a positive potential and wherein the second power connection is configured to be at a negative potential or ground.13. The apparatus of claim 1, further comprising:
a second MIM capacitor, wherein the second MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the first plate and the third plate are coupled to the first top contact.14. The apparatus of claim 1, further comprising: a third MIM capacitor, wherein the third MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the second plate is coupled to the second top contact.15. The apparatus of claim 1, wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, an access point, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station and a device in an automotive vehicle.16. A method of fabricating an apparatus, the method comprising: forming a first mesa; forming a second mesa adjacent the first mesa; depositing a first plate of a metal-insulator-metal (MIM) capacitor between the first mesa and the second mesa, wherein a portion of the first plate extends to the first mesa; depositing a first insulator of the MIM capacitor on the first plate, wherein a portion of the first insulator extends to the first mesa and the second mesa; depositing a second plate of the MIM capacitor on the first insulator between the first mesa and the second mesa, wherein a portion of the second plate extends to the second mesa; depositing a second insulator of the MIM capacitor on the second plate, wherein a portion of the second insulator extends to the first mesa and the second mesa; depositing a third plate of the MIM capacitor on the second insulator between the first mesa and the second mesa, wherein a portion of the third plate extends to the first mesa;
forming a first top contact, wherein the first mesa is disposed below the first contact and the first plate and the second plate are electrically coupled to the first top contact; and forming a second top contact, wherein the second mesa is disposed below the second contact and the second plate is electrically coupled to the second contact.17. The method of claim 16, further comprising: disposing a first partial via between the first top contact and the first mesa, wherein the first plate and the third plate are electrically coupled to the first top contact through the first partial via; and disposing a second partial via between the second top contact and the second mesa, wherein the second plate is electrically coupled to the second top contact through the second partial via.18. The method of claim 16, wherein the first top contact is directly disposed on the first mesa and the second top contact is directly disposed on the second mesa.19. The method of claim 16, wherein the first mesa and the second mesa are formed in a first inter-metal dielectric (IMD) layer.20. The method of claim 19, further comprising: forming a second inter-metal dielectric (IMD) layer, wherein the first top contact and the second top contact are at least partially disposed in the second IMD layer.21. The method of claim 20, wherein the first top contact and the second top contact are in a same metal layer in the second IMD layer.22. The method of claim 21, further comprising: disposing a lower metal layer, wherein the first IMD layer is on the lower metal layer.23. The method of claim 19, wherein the first insulator comprises a high dielectric constant (high-k) dielectric material, and wherein the first IMD layer comprises a low dielectric constant (low-k) dielectric material.24. The method of claim 20, wherein the second IMD layer comprises a low dielectric constant (low-k) dielectric material.25. The method of claim 16, wherein the first plate, the second plate, the third plate, the first insulator and the second insulator are disposed between the first top contact and the second top contact.26. The method of claim 16, wherein the first plate and the third plate are coupled to a first power connection, and the second plate is coupled to a second power connection.27. The method of claim 26, wherein the first power connection is configured to be at a positive potential and wherein the second power connection is configured to be at a negative potential or ground.28. The method of claim 16, further comprising: forming a second MIM capacitor, wherein the second MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the first plate and the third plate are coupled to the first top contact.29. The method of claim 16, further comprising: forming a third MIM capacitor, wherein the third MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the second plate is coupled to the second top contact.30. The method of claim 16, wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, an access point, a fixed location terminal, a tablet computer, a
computer, a wearable device, an Internet of things (loT) device, a laptop computer, a server, a base station and a device in an automotive vehicle. |
METAL-INSULATOR-METAL CAPACITOR WITH TOP CONTACT

FIELD OF DISCLOSURE

[0001] This disclosure relates generally to semiconductor devices including capacitors, and more specifically, but not exclusively, to metal-insulator-metal (MIM) capacitors and fabrication techniques thereof.

BACKGROUND

[0002] High performance computation (HPC) processors, such as those for artificial intelligence (AI), are large and use capacitors for power decoupling to reduce power-supply IR drop during high performance, high frequency computations. Multiple-plate MIM capacitors can be used to decouple the power supply lines (Vdd) to improve processor performance. The MIM capacitors may also have other uses. However, conventional MIM capacitors may provide insufficient decoupling performance for HPC processors and other high-performance systems.

[0003] Accordingly, there is a need for systems, apparatus, and methods that overcome the deficiencies of conventional capacitor configurations, including the methods, systems and apparatuses provided herein.

SUMMARY

[0004] The following presents a simplified summary relating to one or more aspects and/or examples associated with the apparatus and methods disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or examples, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects and/or examples or to delineate the scope associated with any particular aspect and/or example. Accordingly, the following summary has the sole purpose of presenting certain concepts relating to one or more aspects and/or examples relating to the apparatus and methods disclosed herein in a simplified form to precede the detailed description presented below.

[0005] In accordance with the various aspects disclosed herein, at least one aspect includes an apparatus including: a first top contact; a second top contact, adjacent the first top contact; a first mesa disposed below the first top contact; a second mesa disposed below the second top contact; a first plate of a metal-insulator-metal (MIM) capacitor disposed below the first top contact and electrically coupled to the first top contact; a first insulator of the MIM capacitor disposed on the first plate; a second plate of the MIM capacitor disposed on the first insulator and electrically coupled to the second top contact; a second insulator of the MIM capacitor disposed on the second plate; and a third plate of the MIM capacitor disposed on the second insulator and electrically coupled to the first top contact.

[0006] In accordance with the various aspects disclosed herein, at least one aspect includes a method of fabricating a device. The method may include: forming a first mesa; forming a second mesa adjacent the first mesa; depositing a first plate of a metal-insulator-metal (MIM) capacitor between the first mesa and the second mesa, wherein a portion of the first plate extends to the first mesa; depositing a first insulator of the MIM capacitor on the first plate, wherein a portion of the first insulator extends to the first mesa and the second mesa; depositing a second plate of the MIM capacitor on the first insulator between the first mesa and the second mesa, wherein a portion of the second plate extends to the second mesa; depositing a second insulator of the MIM capacitor on the second plate, wherein a portion of the second insulator extends to the first mesa and the second mesa; depositing a third plate of the MIM capacitor on the second insulator between the first mesa and the second mesa, wherein a portion of the third plate extends to the first mesa; forming a first top contact, wherein the first mesa is disposed below the first top contact and the first plate and the third plate are electrically coupled to the first top contact; and forming a second top contact, wherein the second mesa is disposed below the second top contact and the second plate is electrically coupled to the second top contact.

[0007] Other features and advantages associated with the apparatus and methods disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.
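The decoupling benefit summarized above can be put in rough quantitative terms with the ideal parallel-plate relation C = ε0·εr·A/d: in an N-plate MIM stack whose alternating plates share two terminals, the N − 1 dielectric gaps add in parallel, so capacitance density scales roughly with N − 1 (a 3-plate stack roughly doubles, and a 4-plate stack roughly triples, the density of a 2-plate capacitor). The short Python sketch below is a minimal illustration of this scaling only; the HfO2-like relative permittivity (about 19) and the 5 nm insulator thickness are illustrative assumptions, not values taken from this disclosure.

# Minimal sketch (values illustrative, not from this disclosure): capacitance
# density of an ideal N-plate MIM stack. Each adjacent plate pair forms one
# parallel-plate capacitor, and the N - 1 gaps are electrically in parallel.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mim_density_ff_per_um2(n_plates, eps_r, t_insulator_nm):
    """Capacitance per unit area (fF/um^2) of an ideal n-plate MIM stack."""
    if n_plates < 2:
        raise ValueError("a capacitor needs at least two plates")
    c_per_m2 = (n_plates - 1) * EPS0 * eps_r / (t_insulator_nm * 1e-9)  # F/m^2
    return c_per_m2 * 1e3  # F/m^2 -> fF/um^2 (x1e15 fF/F, x1e-12 m^2/um^2)

# Assumed values: HfO2-like eps_r ~ 19, 5 nm insulator thickness.
for n in (2, 3, 4):
    print(n, "plates:", round(mim_density_ff_per_um2(n, 19.0, 5.0), 1), "fF/um^2")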
BRIEF DESCRIPTION OF THE DRAWINGS

[0008] A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, which are presented solely for illustration and not limitation of the disclosure.

[0009] FIG. 1 illustrates a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0010] FIG. 2 illustrates a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0011] FIGS. 3A-3K illustrate portions of a process for fabricating a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0012] FIG. 4 illustrates a top view of a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0013] FIG. 5 illustrates a portion of a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0014] FIG. 6 illustrates a portion of a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0015] FIG. 7 illustrates a mobile device in accordance with at least one aspect of the disclosure.

[0016] FIG. 8 illustrates various electronic devices which may utilize one or more aspects of the disclosure.

[0017] FIG. 9 illustrates a flow chart for fabricating a device including a MIM capacitor in accordance with one or more aspects of the disclosure.

[0018] Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description. In accordance with common practice, the features depicted by the drawings may not be drawn to scale. Accordingly, the dimensions of the depicted features may be arbitrarily expanded or reduced for clarity. In accordance with common practice, some of the drawings are simplified for clarity. Thus, the drawings may not depict all components of a particular apparatus or method. Further, like reference numerals denote like features throughout the specification and figures.

DETAILED DESCRIPTION

[0019] Aspects of the present disclosure are illustrated in the following description and related drawings directed to specific aspects. Alternate aspects may be devised without departing from the scope of the teachings herein. Additionally, well-known elements of the illustrative aspects herein may not be described in detail or may be omitted so as not to obscure the relevant details of the teachings in the present disclosure.

[0020] In certain described example implementations, instances are identified where various component structures and portions of operations can be taken from known, conventional techniques and then arranged in accordance with one or more exemplary aspects. In such instances, internal details of the known, conventional component structures and/or portions of operations may be omitted to help avoid potential obfuscation of the concepts illustrated in the illustrative aspects disclosed herein.

[0021] Further, it should be noted that terms or phrases such as “lower”, “upper”, “left”, “right”, “below”, “above”, “horizontal”, “vertical”, “top”, “bottom”, “side”, “sidewall”, etc. are used for convenience. Unless otherwise specifically indicated, such terms/phrases are not intended to indicate absolute orientations or directions. Also, the terms “on” and “in contact with” may be used synonymously unless otherwise specifically indicated.

[0022] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
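As context for the PDN IR-drop discussion in paragraphs [0023] and [0024] below, the role of decoupling capacitance can be framed with a simple lumped model: the static drop across the power distribution network is V = I·R, while a fast load step served from local decoupling charge produces a droop of approximately ΔV = I·Δt/C, so more capacitance per area (as the multi-plate MIM stacks described herein provide) directly reduces droop. The following is a minimal sketch under those lumped assumptions; all numeric values are illustrative, not taken from this disclosure.

# Minimal lumped-model sketch (all values illustrative): the two supply-noise
# terms that on-die decoupling addresses -- static IR drop through the PDN
# resistance, and transient droop when a current step is served from decap.

def static_ir_drop_mv(i_load_a, r_pdn_mohm):
    """Static IR drop in mV: V = I * R, with I in A and R in milliohms."""
    return i_load_a * r_pdn_mohm  # A * mOhm = mV

def transient_droop_mv(i_step_a, dt_ns, c_decap_nf):
    """Droop in mV if decap alone supplies a current step for dt: dV = I*dt/C."""
    return 1e3 * i_step_a * dt_ns / c_decap_nf  # A * ns / nF = V, so x1e3 = mV

# Assumed numbers: 10 A load, 5 mOhm PDN; 10 A step for 1 ns from 100 nF decap.
print(static_ir_drop_mv(10.0, 5.0))          # -> 50.0 mV static drop
print(transient_droop_mv(10.0, 1.0, 100.0))  # -> 100.0 mV transient droop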
[0023] As discussed in the background, in high performance computing integrated circuit (IC) design, a large decoupling capacitor can be used for VDD decoupling to reduce IR drop from the front side. Further, top metal layer (TME) MIM capacitors are less effective for power decoupling and suffer larger IR drop.

[0024] IC-level power distribution network (PDN) IR drop from the front side of the back end of line (BEOL) presents additional problems for IC scaling of 5 nm technologies. PDN IR drop erodes the performance gains of reduced-scale technologies, even as technology scaling continues to shrink area and improve performance. Current process integration techniques do not allow for improved PDN IR drop as technology scales. High-density MIM capacitors with multi-plate (e.g., 3- to 4-plate) configurations are beneficial for decoupling; however, these configurations can present increased process challenges during fabrication. Various aspects disclosed and discussed in further detail herein provide for MIM capacitors and fabrication processes that facilitate fabrication of multi-plate MIM capacitors with no limit on the number of MIM metal plates. The metal and via processes are compatible with conventional metal modules.

[0025] FIG. 1 illustrates a partial cross-sectional view of device 100 including a multiplate MIM capacitor 150 in accordance with one or more aspects of the disclosure. In some aspects, the device 100 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 100 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 100 may include a first inter-metal dielectric (IMD) layer 130 and a second IMD layer 135, which each may comprise one or more layers of dielectric material. The first IMD layer 130 is disposed on a first metal layer Mx. A second metal layer Mx+1 is at least partially embedded in the second IMD layer 135 along with one or more vias 165 or partial vias 115 and 125. The second IMD layer 135 may have top contacts formed in the Mx+1 metal layer, such as a first top contact 110 and a second top contact 120. The first top contact 110 may be coupled to a first partial via 115 to facilitate electrical connection to a first plate 152 of MIM capacitor 150 and a third plate 156 of MIM capacitor 150. The first partial via 115 may be disposed on a first mesa 131 formed in the first IMD layer 130. In some aspects the first mesa 131 may have tapered sides and be formed as a conical
structure so that when viewed from a top view it will have a generally circular shape. In some aspects, the second top contact 120 may have a similar structure to the first top contact 110. The second top contact 120 may be coupled to a second partial via 125 to facilitate electrical connection to a second plate 154 of the MIM capacitor 150. The second partial via 125 may be disposed on a second mesa 132 formed in the first IMD layer 130. In some aspects the second mesa 132 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape. It will be appreciated that the various aspects disclosed are not limited to this example configuration and can include other geometric shapes. For example, the structures forming the first mesa 131 and/or the second mesa 132 may have a trapezoidal, rectangular, square, or oval shape when viewed from the top.

[0026] The multiplate MIM capacitor 150 is illustrated in a 3-plate configuration and includes the second plate 154, which is separated from the first plate 152 by a first insulator 153 of the MIM capacitor 150. The second plate 154 is separated from the third plate 156 by a second insulator 155. As discussed in the foregoing, the first plate 152 and the third plate 156 are coupled to the first top contact 110 and the second plate 154 is coupled to the second top contact 120, which allows for both connections to MIM capacitor 150 to be located on the same side. Additionally, in some aspects the first plate 152, the first insulator 153, the second plate 154, the second insulator 155, and the third plate 156 may extend beyond the top contacts 110 and 120 and even beyond the via 165. Other configurations are discussed herein. Accordingly, the various aspects disclosed are not limited to the illustrated example configurations.

[0027] The second IMD layer 135 may have additional structures formed from the Mx+1 metal layer, such as metal trace 160, which may be coupled to metal trace 140 formed in metal layer Mx using via 165. The metal traces 140 and 160 and via 165 may be coupled to a positive potential, a ground potential, a digital signal, an analog signal, or any other suitable signal for routing in the device 100.

[0028] Further, in some aspects, additional separate MIM capacitors may be formed. For example, MIM capacitor 170 may be coupled to the first top contact 110. Likewise, MIM capacitor 180 may be coupled to the second top contact 120. In the configuration with MIM capacitor 180 there will be a physical plate separation (by
plate patterning) in the region below 125 or 115 (not illustrated in FIG. 1, but see, e.g., example configurations illustrated in part in FIG. 5 and FIG. 6). In this optional aspect, it will be appreciated that another top contact (not illustrated) may be adjacent to top contact 110 and coupled to the center plate of MIM capacitor 170. Likewise, still another top contact (not illustrated) may be adjacent the second top contact (but not visible in the cross-sectional view) and coupled to the top and bottom plates of MIM capacitor 180. In further aspects, one or both of the other contacts may be located in the Mx metal layer or another metal layer different from Mx+1. Accordingly, it is possible to have a portion of the MIM capacitors having top contacts on a same metal layer and others having contacts on different metal layers.

[0029] It will be appreciated that the various plates (e.g., 152, 154, and 156) and other metal layers and structures (e.g., 110, 115, 120, 125, 140, 160 and 165) may be any highly conductive material, such as copper (Cu), aluminum (Al), silver (Ag), gold (Au), titanium (Ti), nickel (Ni), tungsten (W), ruthenium (Ru), cobalt (Co), or alloys or combinations thereof. It will be appreciated that in some aspects, the Mx and Mx+1 metals may be different. The insulators (e.g., 153 and 155) may be a high dielectric constant (high-k) material, such as hafnium oxide (HfOx) or similar materials. The first IMD layer 130 and the second IMD layer 135 each may be a low dielectric constant (low-k) material, such as silicon dioxide (SiO2) or its fluorine-doped and carbon-doped forms, as well as spin-on organic polymeric dielectrics such as polyimide (PI), benzocyclobutene (BCB), polytetrafluoroethylene (PTFE) and/or silicone-based polymeric dielectrics. It will be appreciated that the illustrated configuration and example materials are provided merely to aid in explanation of the various aspects and should not be construed to limit the various aspects disclosed. For example, although a 3-plate configuration is illustrated, the various aspects of the disclosure allow for four or more plates in the MIM capacitors.

[0030] In some aspects of the disclosure, the first top contact 110 may be coupled to a first power connection, which may be coupled to a power supply (not illustrated). The second top contact 120 may be coupled to a second power connection, which may also be coupled to the power supply. In some aspects, the power supply may be
located remote from the first power connection and the second power connection. In some aspects, the power supply may be local to, or even in direct contact with, the first power connection and the second power connection. In some aspects, the first power connection may be configured to be at a positive potential (e.g., Vdd). The second power connection may be configured to be at a negative potential (e.g., Vss) or ground. In other aspects, these may be reversed, so that the first power connection is configured to be at Vss or ground and the second power connection is configured to be at Vdd. The first power connection and the second power connection may be formed, at least in part, from portions of the metal layer Mx+1 or may be coupled to the top contacts using vias coupling the top contacts to other metal layers. It will be appreciated that having the MIM capacitors in close proximity to a power input provides for improved decoupling and performance of the power distribution network. It will be appreciated that various aspects disclosed herein are not limited to decoupling capacitor applications and may be used in any conventional capacitor application.

[0031] FIG. 2 illustrates a partial cross-sectional view of device 200 including a multiplate MIM capacitor 250 in accordance with one or more aspects of the disclosure. In some aspects, the device 200 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 200 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 200 may include a first inter-metal dielectric (IMD) layer 230 and a second IMD layer 235, each of which may comprise one or more layers of dielectric material. The first IMD layer 230 is disposed on a first metal layer Mx. A second metal layer Mx+1 is at least partially embedded in the second IMD layer 235 along with one or more vias 265. The second IMD layer 235 may have MIM capacitor top contacts formed in the Mx+1 metal layer, such as a first top contact 210 and a second top contact 220. The first top contact 210 may be coupled directly to a first plate 252 of MIM capacitor 250 and a third plate 256 of MIM capacitor 250. The top contact 210 may be disposed on a first mesa 231 formed in the second IMD layer 235. In some aspects the first mesa 231 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape. In some aspects, the second top contact 220 may have a
similar structure to the first top contact 210. The second top contact 220 may be coupled to a second plate 254 of the MIM capacitor 250. The second top contact 220 may be disposed on a second mesa 232 formed in the second IMD layer 235. In some aspects the second mesa 232 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape.

[0032] The multiplate MIM capacitor 250 is illustrated in a 3-plate configuration and includes the second plate 254, which is separated from the first plate 252 by a first insulator 253 of the MIM capacitor 250. The second plate 254 is separated from the third plate 256 by a second insulator 255. As discussed in the foregoing, the first plate 252 and the third plate 256 are coupled to the first top contact 210 and the second plate 254 is coupled to the second top contact 220, which allows for both connections to MIM capacitor 250 to be located on the same side. Additionally, in some aspects the first plate 252, the first insulator 253, the second plate 254, the second insulator 255, and the third plate 256 may extend beyond the top contacts 210 and 220 and even beyond the via 265. Other configurations are discussed herein. Accordingly, the various aspects disclosed are not limited to the illustrated example configurations.

[0033] The second IMD layer 235 may have additional structures formed from the Mx+1 metal layer, such as metal trace 260, which may be coupled to metal trace 240 formed in metal layer Mx using via 265. The metal traces 240 and 260 and via 265 may be coupled to a positive potential, a ground potential, a digital signal, an analog signal, or any other suitable signal for routing in the device 200.

[0034] Further, in some aspects, additional separate MIM capacitors may be formed. For example, optional MIM capacitor 270 may be coupled to the first top contact 210. Likewise, optional MIM capacitor 280 may be coupled to the second top contact 220. An example configuration is illustrated in part in FIG. 5. It will be appreciated that another top contact (not illustrated) may be adjacent to top contact 210 and coupled to the center plate of MIM capacitor 270. Likewise, still another top contact (not illustrated) may be adjacent the second top contact 220 (but not visible in the cross-sectional view) and coupled to the top and bottom plates of MIM capacitor 280. In further aspects, one or both of the other contacts may be located in the Mx metal layer or another metal layer different from Mx+1. Accordingly, it is possible to
have a portion of the MIM capacitors having top contacts on a same metal layer and others having contacts on different metal layers.

[0035] It will be appreciated that the various plates (e.g., 252, 254, and 256) and other metal layers and structures (e.g., 210, 220, 240, 260 and 265) may be any highly conductive material, such as copper (Cu), aluminum (Al), silver (Ag), gold (Au), titanium (Ti), nickel (Ni), or alloys or combinations thereof. The insulators (e.g., 253 and 255) may be a high dielectric constant (high-k) material. The first IMD layer 230 and second IMD layer 235 may each be a low dielectric constant (low-k) material, such as silicon dioxide (SiO2) or its fluorine-doped and carbon-doped forms, as well as spin-on organic polymeric dielectrics such as polyimide (PI), benzocyclobutene (BCB), polytetrafluoroethylene (PTFE) and/or silicone-based polymeric dielectrics. It will be appreciated that the illustrated configuration and example materials are provided merely to aid in explanation of the various aspects and should not be construed to limit the various aspects disclosed. For example, although a 3-plate configuration is illustrated, the various aspects of the disclosure allow for four or more plates in the MIM capacitors.

[0036] In accordance with the various aspects disclosed herein, at least one aspect includes an apparatus including a multiplate MIM capacitor (e.g., 150, 250). The apparatus includes a first top contact (110, 210); a second top contact (120, 220), adjacent the first top contact; a first plate (152, 252) of a metal-insulator-metal (MIM) capacitor (150, 250) disposed below the first top contact and electrically coupled to the first top contact; a first insulator (153, 253) of the MIM capacitor (150, 250) disposed on the first plate (152, 252); a second plate (154, 254) of the MIM capacitor disposed on the first insulator (153, 253) and electrically coupled to the second top contact (120, 220); a second insulator (155, 255) of the MIM capacitor disposed on the second plate (154, 254); and a third plate (156, 256) of the MIM capacitor (150, 250) disposed on the second insulator and electrically coupled to the first top contact (110, 210). It will be appreciated that the various aspects disclosed provide various technical advantages. For example, in at least some aspects, having both MIM contacts adjacent and on a same side allows for improved manufacturing and is compatible with standard metal and via processes. Other technical advantages will be recognized from various aspects disclosed herein, and these technical advantages
are merely provided as examples and should not be construed to limit any of the various aspects disclosed herein.

[0037] Other embodiments of this aspect include one or more of the following features. The apparatus may include: a first mesa (131, 231) disposed below the first top contact; and a second mesa (132, 232) disposed below the second top contact. In some aspects, a first partial via (115) is disposed between the first top contact (110) and the first mesa, where the first plate and the third plate may be electrically coupled to the first top contact (110) through the first partial via (115); and a second partial via (125) is disposed between the second top contact (120) and the second mesa (132), where the second plate (154) is electrically coupled to the second top contact (120) through the second partial via. In other aspects, the first top contact (210) is directly disposed on the first mesa (231) and the second top contact (220) is directly disposed on the second mesa (232). The first top contact (110, 210) and the second top contact (120, 220) are at least partially disposed in the second IMD layer (135, 235). The first top contact (110, 210) and the second top contact (120, 220) are in a same metal layer (Mx+1) in the second IMD layer (135, 235). The first IMD layer (130, 230) is disposed on the lower metal layer (Mx). Additional aspects will be appreciated from the various aspects disclosed herein.

[0038] In order to fully illustrate aspects of the design of the present disclosure, methods of fabrication are presented. Other methods of fabrication are possible, and the discussed fabrication methods are presented only to aid understanding of the concepts disclosed herein.

[0039] FIGS. 3A-3K illustrate example portions of a process for fabricating a device 300, such as the devices illustrated in FIGS. 1 and 2, in accordance with one or more aspects of the disclosure. FIGS. 3A-3K generally illustrate cross-sectional views of the various stages of fabrication.

[0040] FIG. 3A illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3A, the process can begin with an inter-metal dielectric (IMD) layer 330 being deposited on a metal layer Mx.

[0041] FIG. 3B illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3B, the process can
continue with the IMD layer 330 deposited on the metal layer Mx. In this portion, the IMD layer 330 is patterned and etched to form a first mesa 331 and a second mesa 332. In some aspects, the IMD layer 330 may be formed by depositing one layer that will have a thickness greater than or equal to the height of the first mesa 331 and the second mesa 332. In alternative aspects, IMD layer 330 may be formed by depositing more than one layer, which may then be patterned and etched to form the first mesa 331 and the second mesa 332.

[0042] FIG. 3C illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3C, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed. In this portion, a first metal 381 for the MIM capacitor 350 (not fully formed) is deposited over the IMD layer 330 including the first mesa 331 and the second mesa 332. The first metal 381 is patterned and etched, and part is used to form the first plate 352, which is still connected to other portions of first metal 381. In some aspects, the first plate 352 and the MIM capacitor 350 may extend beyond the MIM capacitor 350 node region (e.g., to the opposite side of the first mesa 331). Further, as illustrated, the first metal 381 extends over the first mesa 331, while it has been removed from the second mesa 332. In some aspects, where there are additional MIM capacitors, it will be appreciated that other plates for other MIM capacitors can be formed at this time from first metal 381. Further, in some aspects the first metal 381 may be patterned to form other metal structures. Likewise, it will be appreciated that the fabrication process can proceed simultaneously for the MIM capacitor 350 node region (e.g., the region where top contacts for MIM capacitor 350 are formed) and the regular via region.

[0043] FIG. 3D illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3D, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed and the first metal 381 deposited. In this portion, a first insulator layer 391 (e.g., high-k dielectric) for the MIM capacitor 350 is deposited over the first metal 381 and the IMD layer 330 including the first mesa 331 and the second mesa 332. In some aspects, the first insulator 353 is formed from a portion of the first insulator layer 391. It will be appreciated that in some aspects
the various insulator (dielectric) layers (e.g., 391) and metal layers (e.g., 381) are formed by conformal deposition. Accordingly, the surface profile of the subsequent layer will generally follow the previous layer's surface profile. For ease of illustration, the various surface profiles have been illustrated as simple geometric shapes. However, these illustrations should not be construed to be limiting of the various aspects disclosed herein. Further, it will be appreciated that the illustrated aspects represent only a portion of the structure.

[0044] FIG. 3E illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3E, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed. Additionally, the first metal 381 and the first insulator layer 391 are deposited. In this portion, a second metal 382 for the MIM capacitor 350 (not fully formed) is deposited over the first insulator layer 391 including over the first mesa 331 and the second mesa 332. The second metal 382 is patterned and etched, and part is used to form the second plate 354, which is still connected to other portions of second metal 382. Further, as illustrated, the second metal 382 extends over the second mesa 332, while it has been removed from the first mesa 331. In some aspects, where there are additional MIM capacitors, it will be appreciated that other plates for other MIM capacitors can be formed at this time from second metal 382.

[0045] FIG. 3F illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3F, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed. Additionally, the first metal 381, the first insulator layer 391, and the second metal 382 are deposited. In this portion, a second insulator layer 392 (e.g., high-k dielectric) for the MIM capacitor 350 is deposited over the second metal 382 and exposed portions of the first insulator layer 391, which are deposited over the IMD layer 330 including the first mesa 331 and the second mesa 332. In some aspects, the second insulator 355 is formed from a portion of the second insulator layer 392. In other aspects, the second insulator layer 392 forms the second insulator and/or other insulator structures that may extend beyond the MIM capacitor 350 node region. Further, it will be appreciated that
sections of the illustration where the first insulator layer 391 and the second insulator layer 392 overlap may be referred to as 391+392, as these portions may be represented as a common insulator element for convenience of illustration.

[0046] FIG. 3G illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3G, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed. Additionally, the first metal 381, the first insulator layer 391, the second metal 382, and the second insulator layer 392 are deposited. In the illustrated aspect, where the first insulator layer 391 and the second insulator layer 392 overlap, these sections may be referred to as 391+392. In this portion of the process, a third metal 383 for the MIM capacitor 350 is deposited over the second insulator layer 392 including over the first mesa 331 and the second mesa 332. The third metal 383 is patterned and etched, and part is used to form the third plate 356, which is still connected to other portions of third metal 383. Further, as illustrated, the third metal 383 extends over the first mesa 331, while it has been removed from the second mesa 332. In some aspects, where there are additional MIM capacitors, it will be appreciated that other plates for other MIM capacitors can be formed at this time from third metal 383.

[0047] FIG. 3H illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3H, the process can continue with the IMD layer 330 deposited on the metal layer Mx and with the first mesa 331 and the second mesa 332 formed. Additionally, the first metal 381, the first insulator layer 391, the second metal 382, the second insulator layer 392, and the third metal 383 are deposited. In this portion, a second IMD layer 335 is deposited over the third metal 383 and exposed portions of the second insulator layer 392. A chemical mechanical polish (CMP) is performed to remove excess material and planarize the top surface of the first mesa and the second mesa along with other portions of the device 300. As illustrated, extensions of the first plate 352 and the third plate 356 are exposed adjacent the top of the first mesa 331, along with the combined first insulator 353 and second insulator 355. The first plate 352 and the third plate 356 are separated by the first insulator 353 and the second insulator 355 at the first mesa 331. Further, an extension of the second plate 354 is exposed
adjacent the top of the second mesa 332. The extension of the second plate 354 is disposed between the first insulator 353 and the second insulator 355 at the second mesa 332.

[0048] FIG. 3I illustrates a top view of a portion of the device 300, at the portion of the fabrication process illustrated in FIG. 3H, in accordance with one or more aspects of the disclosure. As shown in FIG. 3I, the first plate 352 and the third plate 356 are exposed on a side wall adjacent the top of the first mesa 331. The first plate 352 and the third plate 356 are separated by the first insulator 353 and the second insulator 355 at the exposed top of the first mesa 331. The second plate 354 is exposed on the sidewalls adjacent the exposed top of the second mesa 332 and is disposed between the first insulator 353 and the second insulator 355 at the second mesa 332. In some aspects, the top cross-section of the first mesa 331 and the second mesa 332 may have a generally circular shape; however, it will be appreciated that the cross-sections of the first mesa 331 and the second mesa 332 are not limited to the circular shape and any geometric configuration can be used for the first mesa 331 and/or the second mesa 332. For example, the mesas could have an oval, a square, or a rectangular cross-section when viewed from the top.

[0049] FIG. 3J illustrates a portion of a fabrication process of the device 300 in accordance with one or more aspects of the disclosure. As shown in FIG. 3J, the process can continue from FIG. 3H. In some aspects, the device 300 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 300 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 300 may include first IMD layer 330, which may comprise one or more layers of dielectric material. The first IMD layer 330 is disposed on a first metal layer Mx. In this portion of the process, partial via 315 is deposited on the first mesa 331, which allows for the partial via 315 to make electrical contact to the first plate 352 and the third plate 356. The partial via 325 is deposited on the second mesa 332, which allows for the partial via 325 to make electrical contact to the second plate 354. Further, via 365 is formed to make electrical contact with metal trace 340, which is in the Mx metal layer. Additionally, a second metal layer Mx+1 is deposited over the partial vias 315 and 325, via 365
and exposed portions of the second IMD layer 335. The second metal layer Mx+1 is patterned and etched to form a first top contact 310, a second top contact 320, and a metal trace 360. The first top contact 310 is electrically coupled to the first plate 352 and the third plate 356 through partial via 315. The second top contact 320 is electrically coupled to the second plate 354 through partial via 325. One or more additional layers may be added to the second IMD layer 335, which allows the first top contact 310, the second top contact 320 and metal trace 360 to be at least partially embedded in the second IMD layer 335 along with via 365 and partial vias 315 and 325.

[0050] Accordingly, it will be appreciated that the device 300 is similar to the device 100, discussed above. The second IMD layer 335 has top contacts formed in the Mx+1 metal layer, such as the first top contact 310 and the second top contact 320. The first top contact 310 may be coupled to a first partial via 315 to facilitate electrical connection to the first plate 352 of MIM capacitor 350 and the third plate 356 of MIM capacitor 350. The first partial via 315 may be disposed on the first mesa 331 formed in the first IMD layer 330. In some aspects the first mesa 331 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape. In some aspects, the second top contact 320 may have a similar structure to the first top contact 310. The second top contact 320 may be coupled to a second partial via 325 to facilitate electrical connection to a second plate 354 of the MIM capacitor 350. The second partial via 325 may be disposed on a second mesa 332 formed in the first IMD layer 330. In some aspects the second mesa 332 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape.

[0051] The multiplate MIM capacitor 350 is illustrated in a 3-plate configuration and includes the second plate 354, which is separated from the first plate 352 by the first insulator 353 of the MIM capacitor 350. The second plate 354 is separated from the third plate 356 by the second insulator 355. As discussed in the foregoing, the first plate 352 and the third plate 356 are coupled to the first top contact 310 and the second plate 354 is coupled to the second top contact 320, which allows for both connections to MIM capacitor 350 to be located on the same side.
[0052] The second IMD layer 335 may have additional structures formed from the Mx+1 metal layer, such as metal trace 360, which may be coupled to metal trace 340 formed in metal layer Mx by via 365. The metal traces 340 and 360 and via 365 may be coupled to a positive potential, a ground potential, a digital signal, an analog signal, or any other suitable signal for routing in the device 300.

[0053] FIG. 3K illustrates a portion of a fabrication process of the device 302 in accordance with one or more aspects of the disclosure. As shown in FIG. 3K, the process can continue from FIG. 3H. In some aspects, the device 302 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 302 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 302 may include first IMD layer 330, which may comprise one or more layers of dielectric material. The first IMD layer 330 is disposed on a first metal layer Mx. In this portion of the process, via 365 is formed to make electrical contact with metal trace 340, which is in the Mx metal layer. Additionally, a second metal layer Mx+1 is deposited over the first mesa 331, the second mesa 332, via 365 and portions of the second IMD layer 335. The second metal layer Mx+1 is patterned and etched to form a first top contact 310, a second top contact 320, and a metal trace 360. The first top contact 310 is electrically coupled to the first plate 352 and the third plate 356 through direct contact. The second top contact 320 is electrically coupled to the second plate 354 through direct contact. One or more additional layers may be added to the second IMD layer 335, which allows the first top contact 310, the second top contact 320, and metal trace 360 to be at least partially embedded in the second IMD layer 335 along with via 365.

[0054] Accordingly, it will be appreciated that the device 302 is similar to the device 200, discussed above. The second IMD layer 335 has top contacts formed in the Mx+1 metal layer, such as the first top contact 310 and the second top contact 320. The first top contact 310 may be directly electrically coupled to the first plate 352 of MIM capacitor 350 and the third plate 356 of MIM capacitor 350. In some aspects the first mesa 331 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape. In some aspects, the second top contact 320 may have a similar structure to the first top
contact 310. The second top contact 320 may be directly electrically coupled to the second plate 354 of the MIM capacitor 350. In some aspects the second mesa 332 may have tapered sides and be formed as a conical structure so that when viewed from a top view it will have a generally circular shape.

[0055] The multiplate MIM capacitor 350 is illustrated in a 3-plate configuration and includes the second plate 354, which is separated from the first plate 352 by the first insulator 353 of the MIM capacitor 350. The second plate 354 is separated from the third plate 356 by the second insulator 355. As discussed in the foregoing, the first plate 352 and the third plate 356 are coupled to the first top contact 310 and the second plate 354 is coupled to the second top contact 320, which allows for both connections to MIM capacitor 350 to be located on the same side.

[0056] The second IMD layer 335 may have additional structures formed from the Mx+1 metal layer, such as metal trace 360, which may be coupled to metal trace 340 formed in metal layer Mx by via 365. The metal traces 340 and 360 and via 365 may be coupled to a positive potential, a ground potential, a digital signal, an analog signal, or any other suitable signal for routing in the device 302.

[0057] FIG. 4 illustrates a portion of a device 400 in accordance with one or more aspects of the disclosure. In some aspects, the device 400 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 400 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 400 may include a multiplate MIM capacitor 450, which is similar in structure to the MIM capacitors previously discussed (e.g., 150, 250, 350). The multiplate MIM capacitor 450 has a first plate 452, a second plate 454, and a third plate 456. Portions of the metal forming the first plate 452 and the third plate 456 extend up the sidewalls of the first mesa 431 and are illustrated as circular rings around the first mesa 431. The first plate 452 and the third plate 456 are separated by the first insulator 453 and the second insulator 455, which are illustrated as a combined concentric circle around the first mesa 431, but not specifically illustrated as layers between the perspective view of the first plate 452, the second plate 454, and the third plate 456. Portions of the metal forming the second plate 454 extend up the sidewalls of the second mesa 432 and are illustrated as a circular ring around the second mesa 432. The second plate 454 is disposed
between the first insulator 453 and the second insulator 455, which are illustrated as concentric circles around the second mesa 432. In some aspects, as illustrated, the first plate 452, the second plate 454, and the third plate 456 extend beyond the first mesa 431 and second mesa 432, and in some aspects extend around the via 465. However, it will be appreciated that the various aspects disclosed are not limited to the illustrated configuration.

[0058] FIG. 5 illustrates a portion of a device 500 in accordance with one or more aspects of the disclosure. In some aspects, the device 500 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 500 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 500 may include a multiplate MIM capacitor 550, which is similar in structure to the MIM capacitors previously discussed (e.g., 150, 250, 350). The multiplate MIM capacitor 550 has a first plate 552, a second plate 554, and a third plate 556. Portions of the metal forming the first plate 552 and the third plate 556 extend up the sidewalls of the first mesa 531 and are illustrated as circular rings around the first mesa 531. The first plate 552 and the third plate 556 are separated by the first insulator 553 and the second insulator 555, which are illustrated as a combined concentric circle around the first mesa 531. Portions of the metal forming the second plate 554 extend up the sidewalls of the second mesa 532 and are illustrated as a circular ring around the second mesa 532. The second plate 554 is disposed between the first insulator 553 and the second insulator 555, which are illustrated as concentric circles around the second mesa 532. In some aspects, as illustrated, the first plate 552, the second plate 554, and the third plate 556 may be disposed between the first mesa 531 and second mesa 532. Additional MIM capacitors (e.g., MIM capacitor 570 and MIM capacitor 580) may be coupled to one of the contact points around the mesas. For example, the MIM capacitor 570 may be coupled to the first plate 552 and the third plate 556 at the first mesa 531. Likewise, the MIM capacitor 580 may be coupled to the second plate 554 at the second mesa 532. Additionally, a contact point 585 for the first and third plates of the MIM capacitor 580 may be offset from the second mesa 532. Accordingly, it will be appreciated that the various aspects disclosed are not limited to the illustrated configurations.
[0059] FIG. 6 illustrates a portion of a device 600 in accordance with one or more aspects of the disclosure. In some aspects, the device 600 may be a die, an integrated circuit, a package, and the like. Additionally, it will be appreciated that the device 600 may include multiple components in an integrated device, of which only a portion is illustrated. As illustrated, the device 600 may include a multiplate MIM capacitor 650, which is similar in structure to the MIM capacitors previously discussed (e.g., 150, 250, 350). The multiplate MIM capacitor 650 has a first plate 652, a second plate 654, and a third plate 656. Portions of the metal forming the first plate 652 and the third plate 656 extend up the sidewalls of a first mesa 631 and are illustrated as semi-circular rings around the first mesa 631. The first plate 652 and the third plate 656 are separated by the first insulator 653 and the second insulator 655, which are illustrated as a combined concentric circle around the first mesa 631. It will be appreciated, as illustrated, that the first insulator 653 and the second insulator 655 may be combined at the first mesa 631, as the second plate 654 does not extend into this portion. Alternatively, in some aspects, only one of the first insulator 653 and the second insulator 655 may extend onto the first mesa 631 sidewalls. Portions of the metal forming the second plate 654 extend up the sidewalls of the second mesa 632 and are illustrated as a semi-circular ring around the second mesa 632. The second plate 654 is disposed between the first insulator 653 and the second insulator 655, which are illustrated as concentric semi-circles around the second mesa 632. In some aspects, as illustrated, the first plate 652, the second plate 654, and the third plate 656 may be disposed between the first mesa 631 and second mesa 632. Additional MIM capacitors (e.g., MIM capacitor 670 and MIM capacitor 680) may be coupled to one of the contact points around the mesas. For example, in the illustrated cross-sectional detail, the MIM capacitor 670 has a first plate 672, a second plate 674, and a third plate 676. Portions of the metal forming the first plate 672 and the third plate 676 extend up the sidewalls of the first mesa 631 and are illustrated as semi-circular rings around the first mesa 631. The first plate 672 and the third plate 676 portions on the sidewall of mesa 631 are separated by the first insulator 673 and/or the second insulator 675, which are illustrated as a combined concentric circle around the first mesa 631. In this configuration, the first plate 672 and the third plate 676 of MIM capacitor 670 are separated from the first plate 652
and the third plate 656 of MIM capacitor 650. It will be appreciated that various plates and insulators of the MIM capacitor 680 may also be separated from the plates and insulators of MIM capacitor 650. However, in alternative configurations, one or more of the plates and insulators may be common. For example, in some aspects, the metal plates of MIM capacitor 650 and MIM capacitor 670 may be separated, but one or more of the insulators (e.g., 653/673 and 655/675) may be continuous layers between MIM capacitor 650 and MIM capacitor 670. In further alternative aspects, the MIM capacitor 650 may be rotated by 180 degrees (in the top drawing) or horizontally flipped (in the lower drawing). In this configuration, the left part of mesa 631 (not illustrated) will have the first and third metal plates and the right part of mesa 631 (not illustrated) will have the second metal plate, instead of the mesa 631 (as illustrated) having the first and third metal plates on both its left and right parts. Accordingly, various other configurations will be appreciated by those skilled in the art, and the various aspects disclosed are not limited to the illustrated configurations.

[0060] FIG. 7 illustrates an exemplary mobile device in accordance with some examples of the disclosure. Referring now to FIG. 7, a block diagram of a mobile device that is configured according to exemplary aspects is depicted and generally designated mobile device 700. In some aspects, mobile device 700 may be configured as a wireless communication device. As shown, mobile device 700 includes processor 701. Processor 701 may be communicatively coupled to memory 732 over a link, which may be a die-to-die or chip-to-chip link. Mobile device 700 also includes display 728 and display controller 726, with display controller 726 coupled to processor 701 and to display 728.

[0061] In some aspects, FIG. 7 may include coder/decoder (CODEC) 734 (e.g., an audio and/or voice CODEC) coupled to processor 701; speaker 736 and microphone 738 coupled to CODEC 734; and wireless circuits 740 (which may include a modem, RF circuitry, filters, etc., which may be implemented using one or more devices including a multiplate MIM capacitor, as disclosed herein) coupled to wireless antenna 742 and to processor 701.

[0062] In a particular aspect, where one or more of the above-mentioned blocks are present, processor 701, display controller 726, memory 732, CODEC 734, and wireless
circuits 740 can be included in a system-in-package or system-on-chip device 722, which may be implemented using one or more devices including a multiplate MIM capacitor, as disclosed herein. Input device 730 (e.g., a physical or virtual keyboard), display 728, speaker 736, microphone 738, wireless antenna 742, and power supply 744 (e.g., buried) may be external to system-on-chip device 722 and may be coupled to a component of system-on-chip device 722, such as an interface or a controller.

[0063] It should be noted that although FIG. 7 depicts a mobile device 700, processor 701 and memory 732 may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices.

[0064] FIG. 8 illustrates various electronic devices that may be integrated with any of the aforementioned integrated device or semiconductor device in accordance with various examples of the disclosure. For example, a mobile phone device 802, a laptop computer device 804, and a fixed location terminal device 806 may each be considered generally user equipment (UE) and may include a device 800 including a MIM capacitor as described herein. The device 800 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 802, 804, 806 illustrated in FIG. 8 are merely exemplary. Other electronic devices may also feature the device 800 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), an Internet of
things (IoT) device or any other device that stores or retrieves data or computer instructions or any combination thereof.[0065] It will be appreciated from the foregoing that there are various methods for fabricating devices including a multiplate MIM capacitor as disclosed herein. FIG. 9 illustrates a flowchart of an example method 900 for fabricating a device including a metal-insulator-metal (MIM) capacitor in accordance with at least one aspect disclosed. In block 902, the fabrication process can include forming a first mesa (e.g., 131, 231). In block 904, the fabrication process can further include forming a second mesa, adjacent the first mesa. In block 906, the fabrication process can include depositing a first plate of a metal-insulator-metal (MIM) capacitor (e.g., 150, 250) between the first mesa and the second mesa, where a portion of the first plate extends to the first mesa. In block 908, the fabrication process can include depositing a first insulator (e.g., 153, 253) of the MIM capacitor on the first plate, where a portion of the first insulator extends to the first mesa and the second mesa. In block 910, the fabrication process can include depositing a second plate (e.g., 154, 254) of the MIM capacitor on the first insulator, where a portion of the second plate extends to the second mesa. In block 912, the fabrication process can include depositing a second insulator (e.g., 155, 255) of the MIM capacitor on the second plate, where a portion of the second insulator extends to the first mesa and the second mesa. In block 914, the fabrication process can include depositing a third plate (e.g., 156, 256) of the MIM capacitor on the second insulator between the first mesa and the second mesa, wherein a portion of the third plate extends to the first mesa. In block 916, the fabrication process can include forming a first top contact, where the first mesa is disposed below the first top contact and the first plate and the third plate are electrically coupled to the first top contact. In block 918, the fabrication process can include forming a second top contact, wherein the second mesa is disposed below the second top contact and the second plate is electrically coupled to the second top contact.[0066] It will be appreciated from the foregoing disclosure that additional processes for fabricating the various aspects disclosed herein will be apparent to those skilled in the art, and a literal rendition of the processes discussed above will not be provided or illustrated in the included drawings.
[0067] It will be appreciated that various aspects disclosed herein can be described as functional equivalents to the structures, materials and/or devices described and/or recognized by those skilled in the art. For example, in one aspect, an apparatus may comprise a means for performing the various functionalities discussed above. It will be appreciated that the aforementioned aspects are merely provided as examples and the various aspects claimed are not limited to the specific references and/or illustrations cited as examples.[0068] One or more of the components, processes, features, and/or functions illustrated in FIGs. 1-9 may be rearranged and/or combined into a single component, process, feature, or function or incorporated in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGs. 1-9 and the corresponding description in the present disclosure are not limited to dies and/or integrated circuits (ICs). In some implementations, FIGs. 1-9 and their corresponding description may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an IC package, a wafer, a semiconductor device, a package on package (PoP) device, and/or an interposer.[0069] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any details described herein as “exemplary” are not to be construed as advantageous over other examples. Likewise, the term “examples” does not mean that all examples include the discussed feature, advantage or mode of operation. Furthermore, a particular feature and/or structure can be combined with one or more other features and/or structures. Moreover, at least a portion of the apparatus described herein can be configured to perform at least a portion of a method described herein.[0070] It should be noted that the terms "connected," "coupled," or any variant thereof, mean any connection or coupling, either direct or indirect, between elements, and can encompass the presence of an intermediate element between two elements that are "connected" or "coupled" together via the intermediate element unless the connection is expressly disclosed as being directly connected.
[0071] Any reference herein to an element using a designation such as "first," "second," and so forth does not limit the quantity and/or order of those elements. Rather, these designations are used as a convenient method of distinguishing between two or more elements and/or instances of an element. Also, unless stated otherwise, a set of elements can comprise one or more elements.[0072] Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0073] Nothing stated or illustrated in this application is intended to dedicate any component, action, feature, benefit, advantage, or equivalent to the public, regardless of whether the component, action, feature, benefit, advantage, or the equivalent is recited in the claims.[0074] In the detailed description above it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the example clauses have more features than are explicitly mentioned in each clause. Rather, the various aspects of the disclosure may include fewer than all features of an individual example clause disclosed. Therefore, the following clauses should hereby be deemed to be incorporated in the description, wherein each clause by itself can stand as a separate example. Although each dependent clause may refer to a specific combination with one of the other clauses, the aspect(s) of that dependent clause are not limited to the specific combination. It will be appreciated that other example clauses can also include a combination of the dependent clause aspect(s) with the subject matter of any other dependent clause or independent clause or a combination of any feature with other dependent and independent clauses. The various aspects disclosed herein expressly include these combinations, unless it is explicitly expressed or can be readily inferred that a specific combination is not intended (e.g., contradictory aspects, such as defining an element as both an insulator and a conductor). Furthermore, it is also intended
that aspects of a clause can be included in any other independent clause, even if the clause is not directly dependent on the independent clause.[0075] Implementation examples are described in the following numbered clauses:[0076] Clause 1. An apparatus comprising: a first top contact; a second top contact, adjacent the first top contact; a first mesa disposed below the first top contact; a second mesa disposed below the second top contact; a first plate of a metal-insulator-metal (MIM) capacitor disposed below the first top contact and electrically coupled to the first top contact; a first insulator of the MIM capacitor disposed on the first plate; a second plate of the MIM capacitor disposed on the first insulator and electrically coupled to the second top contact; a second insulator of the MIM capacitor disposed on the second plate; and a third plate of the MIM capacitor disposed on the second insulator and electrically coupled to the first top contact.[0077] Clause 2. The apparatus of clause 1, further comprising: a first partial via disposed between the first top contact and the first mesa, wherein the first plate and the third plate are electrically coupled to the first top contact through the first partial via; and a second partial via disposed between the second top contact and the second mesa, wherein the second plate is electrically coupled to the second top contact through the second partial via.[0078] Clause 3. The apparatus of clause 1, wherein the first top contact is directly disposed on the first mesa and the second top contact is directly disposed on the second mesa.[0079] Clause 4. The apparatus of any of clauses 1 to 3, wherein the first mesa and the second mesa are formed in a first inter-metal dielectric (IMD) layer.[0080] Clause 5. The apparatus of clause 4, further comprising: a second inter-metal dielectric (IMD) layer, wherein the first top contact and the second top contact are at least partially disposed in the second IMD layer.[0081] Clause 6. The apparatus of clause 5, wherein the first top contact and the second top contact are in a same metal layer in the second IMD layer.[0082] Clause 7. The apparatus of clause 6, further comprising: a lower metal layer, wherein the first IMD layer is disposed on the lower metal layer.[0083] Clause 8. The apparatus of any of clauses 4 to 7, wherein the first insulator comprises a high dielectric constant (high-k) dielectric material, and wherein the first IMD layer comprises a low dielectric constant (low-k) dielectric material.
[0084] Clause 9. The apparatus of any of clauses 5 to 8, wherein the second IMD layer comprises a low dielectric constant (low-k) dielectric material.[0085] Clause 10. The apparatus of any of clauses 1 to 9, wherein the first plate, the second plate, the third plate, the first insulator and the second insulator are disposed between the first top contact and the second top contact.[0086] Clause 11. The apparatus of any of clauses 1 to 10, wherein the first plate and the third plate are coupled to a first power connection, and the second plate is coupled to a second power connection.[0087] Clause 12. The apparatus of clause 11, wherein the first power connection is configured to be at a positive potential and wherein the second power connection is configured to be at a negative potential or ground.[0088] Clause 13. The apparatus of any of clauses 1 to 12, further comprising: a second MIM capacitor, wherein the second MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the first plate and the third plate are coupled to the first top contact.[0089] Clause 14. The apparatus of any of clauses 1 to 13, further comprising: a third MIM capacitor, wherein the third MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the second plate is coupled to the second top contact.[0090] Clause 15. The apparatus of any of clauses 1 to 14, wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, an access point, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station and a device in an automotive vehicle.[0091] Clause 16. A method of fabricating an apparatus, the method comprising: forming a first mesa; forming a second mesa adjacent the first mesa; depositing a first plate of a metal-insulator-metal (MIM) capacitor between the first mesa and the second mesa, wherein a portion of the first plate extends to the first mesa; depositing a first insulator of the MIM capacitor on the first plate, wherein a portion of the first insulator extends to the first mesa and the second mesa; depositing a second plate of the MIM capacitor on the first insulator between the first mesa and the second
mesa, wherein a portion of the second plate extends to the second mesa; depositing a second insulator of the MIM capacitor on the second plate, wherein a portion of the second insulator extends to the first mesa and the second mesa; depositing a third plate of the MIM capacitor on the second insulator between the first mesa and the second mesa, wherein a portion of the third plate extends to the first mesa; forming a first top contact, wherein the first mesa is disposed below the first top contact and the first plate and the third plate are electrically coupled to the first top contact; and forming a second top contact, wherein the second mesa is disposed below the second top contact and the second plate is electrically coupled to the second top contact.[0092] Clause 17. The method of clause 16, further comprising: disposing a first partial via between the first top contact and the first mesa, wherein the first plate and the third plate are electrically coupled to the first top contact through the first partial via; and disposing a second partial via between the second top contact and the second mesa, wherein the second plate is electrically coupled to the second top contact through the second partial via.[0093] Clause 18. The method of clause 16, wherein the first top contact is directly disposed on the first mesa and the second top contact is directly disposed on the second mesa.[0094] Clause 19. The method of any of clauses 16 to 18, wherein the first mesa and the second mesa are formed in a first inter-metal dielectric (IMD) layer.[0095] Clause 20. The method of clause 19, further comprising: forming a second inter-metal dielectric (IMD) layer, wherein the first top contact and the second top contact are at least partially disposed in the second IMD layer.[0096] Clause 21. The method of clause 20, wherein the first top contact and the second top contact are in a same metal layer in the second IMD layer.[0097] Clause 22. The method of clause 21, further comprising: disposing a lower metal layer, wherein the first IMD layer is on the lower metal layer.[0098] Clause 23. The method of any of clauses 19 to 22, wherein the first insulator comprises a high dielectric constant (high-k) dielectric material, and wherein the first IMD layer comprises a low dielectric constant (low-k) dielectric material.[0099] Clause 24. The method of any of clauses 20 to 23, wherein the second IMD layer comprises a low dielectric constant (low-k) dielectric material.
[0100] Clause 25. The method of any of clauses 16 to 24, wherein the first plate, the second plate, the third plate, the first insulator and the second insulator are disposed between the first top contact and the second top contact.[0101] Clause 26. The method of any of clauses 16 to 25, wherein the first plate and the third plate are coupled to a first power connection, and the second plate is coupled to a second power connection.[0102] Clause 27. The method of clause 26, wherein the first power connection is configured to be at a positive potential and wherein the second power connection is configured to be at a negative potential or ground.[0103] Clause 28. The method of any of clauses 16 to 27, further comprising: forming a second MIM capacitor, wherein the second MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the first plate and the third plate are coupled to the first top contact.[0104] Clause 29. The method of any of clauses 16 to 28, further comprising: forming a third MIM capacitor, wherein the third MIM capacitor has a second plate disposed between a first plate and a third plate and wherein the second plate is coupled to the second top contact.[0105] Clause 30. The method of any of clauses 16 to 29, wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, an access point, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station and a device in an automotive vehicle.[0106] It should furthermore be noted that methods, systems, and apparatus disclosed in the description or in the claims can be implemented by a device comprising means for performing the respective actions and/or functionalities of the methods disclosed.[0107] Furthermore, in some examples, an individual action can be subdivided into one or more sub-actions or contain one or more sub-actions. Such sub-actions can be contained in the disclosure of the individual action and be part of the disclosure of the individual action.
[0108] While the foregoing disclosure shows illustrative examples of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions and/or actions of the method claims in accordance with the examples of the disclosure described herein need not be performed in any particular order. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and examples disclosed herein. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
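As a back-of-envelope illustration of why the multiplate structure described above is attractive, note that with the first and third plates tied to one top contact and the second plate tied to the other, the stack behaves as two parallel-plate capacitors connected in parallel. A minimal sketch, assuming ideal parallel-plate behavior and neglecting fringing (the symbols A, k, d1 and d2 below are illustrative and do not appear in the disclosure above):

\[
C_{12} = \frac{\varepsilon_0 k A}{d_1}, \qquad C_{23} = \frac{\varepsilon_0 k A}{d_2}, \qquad C_{\text{total}} = C_{12} + C_{23},
\]

where A is the plate overlap area, k is the relative permittivity of the high-k insulators, and d1 and d2 are the thicknesses of the first and second insulators. For equal insulator thicknesses, the three-plate stack yields roughly twice the capacitance per unit footprint of a conventional two-plate MIM capacitor, which is consistent with the alternating plate-to-contact coupling recited in clauses 1 and 11.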
A method and apparatus for enabling z-axis offset of narrow metal tie straps in lead frames used for packaging integrated circuits, to prevent bowing or distortion. The tie strap is offset while stress relief mechanisms are simultaneously formed on both the front and back sides of the lead frame. Those mechanisms include indentations along the long or primary axis of each tie strap, coupled with depressions across the top surface both at the center of the lead frame and between the base of the offset and the chip attach locations, to prevent bowing in small pad and no pad lead frames in particular.
1. A method for offsetting a narrow ductile metal part wherein the offset section is stress relieved, the method comprising the steps of:providing a form die with protruding rib inserts along the primary axis of said metal part, providing a forming punch with protrusions horizontal to said die protrusion ribs, positioning said metal part on the surface of said form die, and aligning said forming punch to the die, then applying pressure to said punch to form an off-set plane with stress relief depressions in the metal by the die protrusion ribs and the protrusions. 2. A method as in claim 1 wherein said metal part is a lead frame for interconnecting a semiconductor device.3. A method as in claim 1 wherein said inserts and said forming punch comprise steel.4. A method as in claim 1 wherein the applied pressure is in the range of 300 to 500 pounds force.5. A method as in claim 1 wherein said metal parts comprise copper.6. A method as in claim 1 wherein said form die is positioned in a hydraulic press.7. A method as in claim 1 wherein said form die is positioned in an electrically driven press.8. A method as in claim 1 wherein stress on said metal part is relieved by longitudinal indentations on the back side and concurrently by horizontal depressions on the front side arrayed between the bottom of the down-set and the center of the metal part.9. A method as in claim 1 which is applicable to metal parts having different shapes and sizes.10. A method as in claim 1 wherein the stress relief mechanisms of indentations are permanent.11. A method as in claim 1 wherein said stress relief mechanisms are accurate and reproducible.12. A method for offsetting a tie strap in a lead frame part wherein the offset section is stress relieved, the method comprising the steps of:providing a form die comprising steel with protruding rib inserts along the primary axis of said tie strap, providing a forming punch comprising steel with protrusions horizontal to said die protrusion ribs, positioning said lead frame part on the surface of said form die, then aligning said forming punch to the die positioned in a press, applying 300 to 500 pounds force, and simultaneously creating an off-set plane while creating permanent stress relief depressions in the tie strap comprising longitudinal indentations on the back side and horizontal depressions on the front side arrayed between the bottom of the offset and the center of the metal part. 13. A method for offsetting a tie strap in a lead frame part wherein the offset section is stress relieved, the method comprising the steps of:providing a form die with a protruding rib having a length with a primary axis extending along the primary axis of said tie strap; providing a forming punch; positioning said lead frame part on the surface of said form die; aligning said forming punch to said die; and applying force to said punch to press the protruding rib against the tie strap. 14. The method of claim 13, wherein said step of providing a forming punch comprises providing a forming punch with protrusions horizontal to said die protrusion ribs.15. The method of claim 13, wherein said step of providing a form die comprises providing a steel form die.16. The method of claim 13, wherein said step of providing a forming punch comprises providing a steel forming punch.17. The method of claim 13 wherein said step of applying force to said punch comprises applying approximately 300 to 500 pounds force to said punch.18.
The method of claim 13, wherein said lead frame part is a small pad lead frame.19. The method of claim 18, wherein said small pad lead frame comprises a small circular pad.20. The method of claim 13, wherein said lead frame part is a lead-on-chip lead frame. |
This application claims the benefit of Provisional application No. 60/157,780 filed Oct. 5, 1999.FIELD OF THE INVENTIONThis invention relates generally to the field of metal forming, and more particularly to the forming of lead frames used in the assembly of microelectronic devices.BRIEF DESCRIPTION OF PRIOR ARTIntegrated circuit devices, having an integrated circuit chip and a lead frame which are sealed within a protective enclosure, find wide use in products, among which are consumer electronics, computers, automobiles, telecommunications and military applications. A means to electrically interconnect an integrated circuit chip to circuitry external to the device frequently takes the form of a lead frame. The lead frame is formed from a highly electrically and thermally conductive material, such as copper or copper alloys. The lead frame is stamped or etched into a plurality of leads, and a central area, called a chip pad, on which the integrated circuit chip is attached. The chip is electrically connected to the leads, usually by wire bonding, and the device is encapsulated to provide mechanical and environmental protection.Lead frames typically include a solid chip pad somewhat larger than the chip, to which the integrated circuit chip is attached by an adhesive or alloy. However, currently many lead frames 101 are fabricated with one or more small circular pads 102 as shown in FIG. 1, or simply strips of metal to which the chip is attached, and the large chip pad is eliminated. The chip pads 102 are connected to outer support rails 105 by thin etched or stamped extensions of the metal, called tie straps 106. Support rails 105 also hold together one or more lead frames in a strip until encapsulation is completed.Those lead frames having one or more small pads 102 as in FIG. 1 are typically referred to as S-pad or small pad lead frames. A small, circular pad is positioned approximately mid-way from the edge of the tie strap to the center of the lead frame where the tie straps intersect 108. The chip is positioned atop the frame and the unpatterned side of the chip attached to the pads by an adhesive. An outline of a chip position is represented by the dashed line 103.As shown in FIG. 2a, lead frames 201 may be attached to the active patterned surface of the chip 203, as in devices referred to as LOC or lead-on-chip, and illustrated in FIGS. 2a and 2b. A chip is attached to a flat portion of the lead frame itself, and most often to a down-set or offset portion 209 of the frame.Lead frames having a reduced chip pad area were developed in response to a failure mechanism in surface mount packages often referred to as "pop corning". Moisture ingress into the plastic package is trapped between the chip and the metal chip pad, and when subjected to a rapid thermal excursion, such as solder attachment to a printed wiring board, the vapor pressure causes the plastic package to bulge and sometimes crack. This failure mechanism can be avoided by eliminating the large solid metal die pad.In the process of lead frame down-setting, a selected strip of metal is elongated in a die under pressure from either a hydraulic or electrically driven press while the support metal remains planar. The metal in two or more tie straps is forced downward to form angled bends and is pressed toward the center of the lead frame by using a forming punch to press the tie straps against the die surface.
The more ductile metal, following the path of least resistance, moves toward the center where the tie straps converge, and, lacking a relief mechanism, the metal strip bows. FIG. 3a is a schematic of a lead frame 301 with bowed tie strap 308 at the center where the tie straps converge. The semiconductor chip 302 is attached to the small pads 303 by adhesive 304 only at localized areas owing to the non-planar attach area. The schematic in FIG. 3a demonstrates a device bowed in a concave direction. Convex bowing is an equally significant issue for the assembly of semiconductor devices.In conventional lead frames having a rigid or solid chip attach pad in the center, stress is relieved in the large pad, and bowing is much less of a problem. However, in the case of small pad frames where only small relief areas are provided, the narrow metal frequently is distorted at the location where pressure converges.However, small or no pad lead frames are not without significant manufacturing challenges. One of the more difficult issues has been warping or distortion of the long thin tie straps which occurs during offsetting of the chip mount area. Either convex or concave bowing of the tie strap and chip attach areas prevents the chip from seating correctly.Owing to the reduced contact area for chip attachment, it is imperative that the chip attach area be planar and allow the small amount of adhesive on the pads to contact and firmly seat the chip. Bowed or distorted lead frames result not only in yield loss, but also present a reliability concern, both from mechanical stresses on wire bonds and from a diminished thermal transport path. A solution to this issue has been sought by the industry since the inception of small pad and no pad packaged devices.SUMMARY OF THE INVENTIONAs thinner integrated circuit packages, and consequently thinner lead frames, are demanded by the industry, warping and distortion of the lead frame chip attach area have become a more prevalent issue. Further, the heavily favored lead frame materials are alloys of copper because of their excellent thermal conductivity and ease of processing, but the malleability and ductility of these copper alloys allow greater warping than other more rigid metallic alloys.It is an object of this invention to provide an essentially no cost, permanent, and consistent means of eliminating bowing and distortion of lead frame tie straps resulting from the z-axis down-setting process.It is an object of the invention to provide tooling which is common to a large family of devices, and the methodology for altering the tooling design to be applicable to many other lead frame sizes and shapes.The chip attach area of lead frames is offset from the plane of the lead frame in order to minimize the length and looping height of bonding wires by positioning the chip surface at more nearly the same level as the lead frame bond fingers to which bond wires are attached. Offset or z-axis down-set is accomplished by positioning the stamped or etched lead frame in a press having a down-set die and a forming tool. The material to be offset is elongated by the contact and pressure from the forming tool, thereby stretching the metal and increasing its length with respect to the surrounding planar portion of the lead frame. The metal is elongated or stretched along the surface of the forming die from opposite sides of the lead frame and converges toward the center of the lead frame.
This results in the central area bowing in either a concave or convex direction, either of which is unacceptable.The present invention provides a set of inserts to be positioned in the down-set die which have a protrusion above the die surface in the center of each tie strap. In addition, a series of protrusions are fabricated on the down-set forming punch. The insert tooling forms an indentation on the backside of the tie strap which allows the lead frame material to be pushed upward, and to move to the outer edge of the tie strap. In the same operation, protrusions on the forming punch create small lateral impressions on the top surface of the tie strap which control the flow of material being pushed toward the center of the lead frame during down-setting. Simultaneously creating small controlled indentations in the top and bottom surfaces, as well as along both the longitudinal and horizontal axes, allows relief for the deformed lead frame material, and results in a flat, planar chip attach area.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a plane view of a prior art "S" or small pad lead frame.FIG. 2a is a cross section of a known LOC package.FIG. 2b is a plane view of a known LOC type lead frame.FIG. 3a is a cross sectional view of a prior art down-set "S" pad lead frame.FIG. 3b is a cross sectional view of a down-set "S" pad lead frame fabricated using the method of the current invention.FIG. 4a is a planar schematic view of a die insert of the current invention.FIG. 4a' is a cross sectional view of a die insert of the current invention.FIG. 4b is a schematic view of the die surface with insert locations.FIG. 5 provides a schematic of a forming punch tool with protrusions and locations.FIG. 6 is a plane view of the top surface of an "S" pad lead frame having impressions from forming punch protrusions.FIG. 7 is a plane view of the bottom surface of an "S" pad lead frame having indentations from the die inserts.FIG. 8 shows the relative positions of stress relief mechanisms on an "S" pad lead frame.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSIn accordance with the present invention, there is provided a method to eliminate bowing or distortion of lead frames resulting from offsetting the tie straps, including the first step of providing the necessary forming tools. Detailed descriptions of the preferred embodiment are provided herein. It is to be understood, however, that the present invention may be embodied in various forms. Therefore, specific details disclosed herein are not to be interpreted as limiting, but rather as a basis for teaching one skilled in the art to employ the present invention in virtually any appropriate detailed system, structure or manner.A preferred embodiment of the present invention uses conventional hydraulic or electrically driven presses for offsetting lead frames, in combination with a novel offset tooling set. The improved tooling set of the current invention simultaneously relieves stress on the bottom surface and the top surface of the lead frame tie strap during the down-setting process and avoids build-up and bowing of the metal toward the center of the work-piece. FIG. 3b illustrates the chip 312 attached to small chip pads 313 in a lead frame 311 processed using the forming tooling of the current invention, wherein the tie strap, the small pads 313 and the central area 318 where tie straps converge are flat after offset forming.FIG.
4a and 4a' are planar and cross sectional schematic representations of one of a set of tools to be inserted into the surface of the down-set die. Each insert tool 401 comprises a steel device having a rib 402 which protrudes about 0.0005 inches above the surface of the die, and corners 404 shaped to conform to those of a cavity in the die. FIG. 4b shows the locations of cavities 411 in the die surface 410. One rib shaped tool 401 is provided to be positioned in each cavity 411 between the angled down-set areas 412, and the small pad of the lead frame (not shown). The preferred embodiment provides a cavity 411 in the die for each pad of a four (4) small circular pad device.Length and width of the protrusions 402 on the insert tools 401 are determined by the specific lead frame dimensions. The length is derived from the location of the chip pads and the base of the down-set, and will extend inside those boundaries. In the preferred embodiment the tool is about 0.12 inches long. Width of the protruding rib is dependent upon the width of the lead frame tie strap, and the width 403 of the protrusion is about 0.0002 inches wide for a tie strap of nominal width 0.012 inches.Protrusions 402 on the inserts force the ductile lead frame metal to create an indentation on the bottom side along the primary axis of the tie strap, and to flow toward the outer edge of the lead frame metal, rather than simply moving toward the center of the device.The surface of the forming punch, as shown in FIG. 5, has a series of down-set angles 504 which press the lead frame tie straps onto the die surface, thereby forming a down-set area out of plane with the support straps. The down-set angle is 30 degrees and the depth of the down-set in the preferred embodiment is about 0.009 inches. The forming punch of the current invention includes five (5) protrusions 502/503 of about 0.0005 inches height on the surface of the tool. There are four (4) small protrusions 502, each of which contacts one of the tie straps, and a centrally located large protrusion 503 which contacts all of the tie straps. These protrusions correspond to the locations where impressions are formed on the top surface of the tie strap. An array of four (4) protrusions 502, about 0.015 to 0.016 inches on a side, creates a horizontal depression, about 0.015 inches long, in the tie strap near the edge of each small circular chip pad, on the side toward the angled down-set. These impressions extend the full width of the tie strap, and are perpendicular to the backside indentations. The fifth protrusion 503 on the forming tool is a square positioned at the center of the tool so that one side intersects each of the tie straps, and makes a depression from a location near the edge of the circular chip pad across the center of the lead frame to near the pad on the opposite tie strap. Protrusions on the forming punch are about 0.0003 inches in height, and spacing is related to the location and size of the circular die pads. In the preferred embodiment, the square protrusion is about 0.13 inches, and will form an impression extending radially about 0.065 inches inward from the center of the tie strap intersection.The impressions on the top surface of the lead frame horizontal to the length of the tie strap, coupled with the longitudinal indentation on the backside of the tie strap, allow the metal to be moved laterally toward the edges on the back side, and provide a horizontal relief stop for the metal as it moves toward the center on the top side.FIG.
6 illustrates the top surface 610 of a small pad lead frame having depressions 602/603 as created by the forming tool protrusions 502/503 in FIG. 5. Each impression extends the width of the tie strap, and is about 0.0002 to 0.0003 inches deep. Shaded areas 602/603 represent the depressed area caused by protrusions 502/503 on the forming tool. The outline of protrusions 502/503 is represented by dashed lines 502a/503a in FIG. 6.FIG. 7 is a plane view of the back surface of the lead frame 710. Indentations 711 along the primary axis of the tie strap 712 result from pressing the tie straps against ribs 402 on the form die insert as shown in FIG. 4a. Indentations 711 are in the range of 0.00035 to 0.0005 inches in depth, and are located between the down-set angle termination 715 and the circular chip pad 713. Indentations are designed to terminate prior to the onset of impressions 602 on the top surface in FIG. 6.FIG. 8 illustrates the relative locations of the top surface depressions 802/803 and the bottom surface indentations 811 with respect to the termination of the down-set angle 815 and the circular chip pad 813 in one half of a lead frame 820 cross section. The center of the lead frame where tie straps converge is noted by the dashed lines 821.It should be understood that within a standardized lead pitch and for a specified number of leads on a lead frame, the small pad lead frame accommodates a large range of chip sizes. For example, a 132 pin quad flat pack lead frame may be used for many chip sizes and types, and therefore, the embodiment of this invention is applicable to a large number of integrated circuit devices.It should further be understood that precise dimensions of the indentations and impressions are dependent upon the lead frame dimensions, and that tooling dimensions are different for different lead frames, but the relative locations are similar, and therefore the invention is applicable to the entire family of small pad lead frames.It should further be understood that the preferred embodiment described a tooling set and application including four (4) small circular chip pads, but the invention is applicable to devices having a single circular or other shaped chip pad by omitting the center protrusion on the forming press. It is further applicable to LOC (lead-on-chip) or COL (chip-on-lead) lead frames where there are no chip pads, or to conventional pad devices where distortion of the tie strap is a problem. Such devices include those having a deep down-set angle, long tie straps or very thin or ductile material.It should also be understood that because the stress relief mechanisms of forming depressions in both the front and back side of the lead frame are created during the forming process, the solution is permanent, as opposed to a solution where the stress is corrected but may return as a function of memory in the metal.Because the stress relief mechanisms are created by hard tooling with limited tolerance ranges, the relief mechanisms are reproducible and limited by tooling tolerances.
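To see why a relief mechanism is needed at all, it helps to estimate how much extra metal the down-set introduces. A minimal worked example, assuming an idealized straight ramp at the stated 30 degree down-set angle and 0.009 inch depth (this trigonometric model is illustrative and not part of the original disclosure):

\[
L_{\text{slant}} = \frac{d}{\sin\theta} = \frac{0.009}{\sin 30^\circ} = 0.018\ \text{in}, \qquad
L_{\text{run}} = \frac{d}{\tan\theta} = \frac{0.009}{\tan 30^\circ} \approx 0.0156\ \text{in},
\]

so each angled bend must supply roughly \(L_{\text{slant}} - L_{\text{run}} \approx 0.0024\) inches of stretched metal. With two bends per tie strap and opposing tie straps feeding the same intersection, several thousandths of an inch of excess length converges on the center of the lead frame, which is ample to produce the concave or convex bowing described above unless the indentations and depressions give that material somewhere else to go.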
PROBLEM TO BE SOLVED: To provide advanced paging capabilities for secure enclave page caches.SOLUTION: Embodiments include multiple hardware threads or processing cores, and a cache to store secure data for a shared page address allocated to a secure enclave accessible by the hardware threads. A decode stage decodes a first instruction specifying the shared page address as an operand, and execution units mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either of the hardware threads to access the shared page. A second instruction is decoded; the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave are recorded; and the recorded number of hardware threads is decremented when a hardware thread exits the secure enclave.
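The bookkeeping described in this abstract amounts to a per-enclave reference count that gates page eviction. The following is a minimal C sketch of that semantics only; the structure and function names are hypothetical illustrations, not part of the claimed hardware, which performs this tracking in the ETRACK/exit micro-architecture rather than in software:

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-enclave tracking state (in hardware this lives in the
 * enclave's control structure, not in ordinary memory). */
struct enclave_tracker {
    atomic_int active_threads;  /* hardware threads currently inside the enclave */
};

/* Begin tracking: record the number of threads currently executing in the
 * enclave (the role the ETRACK instruction plays). */
void track_begin(struct enclave_tracker *t, int threads_inside)
{
    atomic_store(&t->active_threads, threads_inside);
}

/* Each thread leaving the enclave decrements the recorded count and flushes
 * its TLB translations for enclave pages. */
void on_enclave_exit(struct enclave_tracker *t)
{
    /* flush_tlb_for_enclave();  hypothetical, models the required TLB flush */
    atomic_fetch_sub(&t->active_threads, 1);
}

/* Eviction (the role of EWB) is permitted only once the count reaches zero,
 * i.e., no stale translation to the page can remain in any TLB. */
bool may_evict_page(struct enclave_tracker *t)
{
    return atomic_load(&t->active_threads) == 0;
}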
1. A processor comprising: a first hardware thread and a second hardware thread; an enclave page cache to store secure data in cache lines for a shared page address assigned to a corresponding secure enclave accessible by the first and second hardware threads; a decode stage to decode a first instruction for execution by the processor, the first instruction specifying the shared page address as an operand; and one or more execution units, responsive to the decoded first instruction, to mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either the first or second hardware thread to access secure data corresponding to the shared page address.2. The processor of claim 1, wherein the first instruction is an EBLOCK instruction specifying the shared page address to prevent the creation of a new translation corresponding to the shared page address in any translation lookaside buffer (TLB).3. The processor of claim 1 or 2, wherein the decode stage is further to decode a second instruction for execution by the processor, the second instruction specifying the secure enclave as an operand, and wherein the one or more execution units, responsive to the decoded second instruction, are to record the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave.4. The processor of claim 3, wherein the second instruction is an ETRACK instruction specifying the secure enclave to record the number of hardware threads currently executing in the secure enclave.5. The processor of claim 4, wherein the one or more execution units, responsive to the decoded second instruction, are to decrement the recorded number of hardware threads currently executing in the secure enclave when any of the hardware threads exits the secure enclave.6. The processor of claim 4, wherein the one or more execution units, responsive to the decoded first instruction, are to decrement the recorded number of hardware threads currently executing in the secure enclave when any of the hardware threads exits the secure enclave.7. A processor comprising: a first hardware thread and a second hardware thread; an enclave page cache to store secure data in cache lines for a shared page address assigned to a corresponding secure enclave accessible by the first and second hardware threads; a decode stage to decode a first instruction for execution by the processor, the first instruction specifying the secure enclave as an operand; and one or more execution units, responsive to the decoded first instruction, to record the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave.8. The processor of claim 7, wherein the first instruction is an ETRACK instruction specifying the secure enclave to record the number of hardware threads currently executing in the secure enclave.9. The processor of claim 8, wherein the one or more execution units, responsive to the decoded first instruction, are to decrement the recorded number of hardware threads currently executing in the secure enclave when any of the hardware threads exits the secure enclave.10. The processor of any one of claims 7 to 9, wherein the decode stage is further to decode a second instruction for execution by the processor, the second instruction specifying the shared page address as an operand, and wherein the one or more execution units, responsive to the decoded second instruction, are to mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either the first or second hardware thread to access secure data corresponding to the shared page address.11. The processor of claim 10, wherein the second instruction is an EBLOCK instruction that specifies the shared page address to prevent the creation of a new translation corresponding to the shared page address in any TLB.12. The processor of claim 11, wherein the one or more execution units, responsive to the decoded second instruction, are to decrement the recorded number of hardware threads currently executing in the secure enclave when any of the hardware threads exits the secure enclave.13. The processor of any one of claims 7 to 12, wherein the decode stage is further to decode a second instruction for execution by the processor, the second instruction specifying the shared page address as an operand, and wherein the one or more execution units, responsive to the decoded second instruction, are to evict and write back secure data in the enclave page cache corresponding to the shared page address when the recorded number of hardware threads currently executing in the secure enclave reaches zero.14. The processor of claim 13, wherein the second instruction is an enclave write back (EWB) instruction that specifies the shared page address to evict and write back the shared page from the enclave page cache.15. The processor of claim 14, wherein the second instruction fails when the recorded number of currently executing hardware threads in the secure enclave does not reach zero.16. The processor of claim 14, wherein the second instruction waits for execution until the recorded number of hardware threads currently executing in the secure enclave reaches zero.17. A method comprising: executing a first hardware thread and a second hardware thread in a multi-threaded processor; storing secure data in a cache line for a shared page address assigned to a corresponding secure enclave accessible by the first and second hardware threads; decoding a first instruction for execution by the processor, wherein the first instruction specifies the shared page address as an operand; and, in response to decoding the first instruction, blocking creation of a new translation for either the first or second hardware thread to access secure data corresponding to the shared page address by marking an entry corresponding to an enclave page cache mapping for the shared page address.18. The method of claim 17, wherein the first instruction is an EBLOCK instruction that specifies the shared page address to prevent the creation of a new translation corresponding to the shared page address in any TLB.19. The method of claim 17 or 18, further comprising recording the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave in response to decoding the first instruction.20. The method of claim 19, wherein, in response to decoding the first instruction, the recorded number of hardware threads currently executing in the secure enclave is decremented when any of the hardware threads exits the secure enclave.21. The method of claim 20, wherein, when the corresponding hardware thread exits the secure enclave, a translation corresponding to the shared page address is flushed from a TLB corresponding to any of the hardware threads.22. The method of any one of claims 17 to 21, further comprising: decoding a second instruction for execution by the processor, wherein the second instruction specifies the secure enclave as an operand; and, in response to decoding the second instruction, recording the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave.23. The method of claim 22, wherein, in response to decoding the second instruction, the recorded number of hardware threads currently executing in the secure enclave is decremented when any of the hardware threads exits the secure enclave.24. The method of claim 22, further comprising: decoding a third instruction for execution by the processor, wherein the third instruction specifies the shared page address as an operand; and, in response to decoding the third instruction, evicting and writing back secure data in the enclave page cache corresponding to the shared page address when the recorded number of hardware threads currently executing in the secure enclave reaches zero.25. The method of claim 24, wherein, in response to decoding the third instruction for execution by the processor, the secure data in the enclave page cache corresponding to the shared page address is encrypted prior to writing the secure data back to memory or non-volatile storage.26. The method of claim 24, wherein the third instruction fails when the recorded number of currently executing hardware threads in the secure enclave does not reach zero.27. The method of claim 24, wherein the third instruction waits for execution until the recorded number of currently executing hardware threads in the secure enclave reaches zero.28. A method comprising: executing a first hardware thread and a second hardware thread in a multi-threaded processor; storing secure data in a cache line for a shared page address assigned to a corresponding secure enclave accessible by the first and second hardware threads; decoding a first instruction for execution by the processor, wherein the first instruction specifies the secure enclave as an operand; and, in response to decoding the first instruction, recording the hardware threads currently accessing secure data in an enclave page cache corresponding to the secure enclave.29. The method of claim 28, wherein the first instruction is an ETRACK instruction specifying the secure enclave to record the number of hardware threads currently executing in the secure enclave.30. The method of claim 28 or 29, wherein, in response to decoding the first instruction, the recorded number of hardware threads currently executing in the secure enclave is decremented when any of the hardware threads exits the secure enclave.31. The method of claim 30, wherein, in response to decoding the first instruction, creation of a new translation corresponding to the shared page address in any TLB is prevented.32. The method of claim 30, further comprising: decoding a second instruction for execution by the processor, wherein the second instruction specifies the shared page address as an operand; and, in response to decoding the second instruction, evicting and writing back secure data in the enclave page cache corresponding to the shared page address when the recorded number of hardware threads currently executing in the secure enclave reaches zero.33. The method of claim 32, wherein, in response to decoding the second instruction for execution by the processor, the secure data in the enclave page cache corresponding to the shared page address is encrypted prior to writing the secure data back to memory or non-volatile storage.34. The method of claim 33, wherein the second instruction fails when the recorded number of currently executing hardware threads in the secure enclave does not reach zero.35. A processing system comprising: a memory; and a processor comprising: a first hardware thread and a second hardware thread; an enclave page cache to store secure data in cache lines for a shared page address assigned to a corresponding secure enclave accessible by the first and second hardware threads; a decode stage to decode a first instruction for execution by the processor, the first instruction specifying the shared page address as an operand; one or more execution units, responsive to the decoded first instruction, to mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either the first or second hardware thread to access secure data corresponding to the shared page address; the decode stage further to decode a second instruction for execution by the processor, the second instruction specifying the secure enclave as an operand; and the one or more execution units, responsive to the decoded second instruction, to record the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave, and to decrement the recorded number of hardware threads currently executing in the secure enclave when any of the hardware threads exits the secure enclave.36. The processing system of claim 35, wherein, when the corresponding hardware thread exits the secure enclave, the translation corresponding to the shared page address is flushed from a TLB corresponding to any of the hardware threads.37. The processing system of claim 36, wherein the decode stage is further to decode a third instruction for execution by the processor, the third instruction specifying the shared page address as an operand, and wherein the one or more execution units, responsive to the decoded third instruction, are to evict and write back secure data in the enclave page cache corresponding to the shared page address when the recorded number of hardware threads currently executing in the secure enclave reaches zero.38. The processing system of claim 37, wherein the third instruction fails when the recorded number of currently executing hardware threads in the secure enclave does not reach zero.
Instructions and logic to provide advanced paging capabilities for secure enclave page cachesThe present disclosure relates to the field of processing logic, microprocessors, and associated instruction set architectures that, when executed by a processor or other processing logic, perform logical, mathematical, or other functional operations. In particular, the present disclosure relates to instructions and logic to provide advanced paging capabilities for secure enclave page caches.Applications and high-performance networks supporting new usage models and services, such as voice, video, transactions, and private data, present new challenges in the area of security. The need to protect data during storage or transport for confidentiality and integrity is important, but so is maintaining secure access to protected code and/or data; supporting high-speed encryption operations and storage adds complexity and, ultimately, cost.One of the techniques for creating and maintaining a secure, protected or isolated compartment or environment is known as establishing an enclave. An enclave is a set of information and processing capabilities protected as a group. The information and processing capabilities may include networks, hosts or applications.Processing techniques commonly used to access data and/or instructions through a cache, in support of virtual memory, include translating linear addresses to physical memory addresses according to the mappings found in the page table, with the translation performed quickly in hardware using, for example, a translation look-aside buffer (TLB). Entries in the TLB may be associated with one or more specific processor cores, hardware threads, or logical processors. Thus, data that may be accessed in the cache may be protected from access by unauthorized processor cores, hardware threads or logical processors.Permissions and changes to mappings in physical memory and/or page tables are typically managed by the operating system (OS), but when memory contents are protected, as for example with enclaves, the OS may not have the permission or trust to gain access to the actual protected contents, i.e., the enclave has private memory.Thus, ensuring the security and/or integrity of private memory contents, and managing the technical constraints of a limited amount of physical memory when the OS cannot be trusted, presents a unique set of security and performance problems.To date, security solutions that address these issues, as well as the potential performance-limiting issues and the design, verification, and other complexity involved, have not been adequately explored.FIG. 6 is a block diagram of an embodiment of a system that executes instructions to provide advanced paging capabilities for a secure enclave page cache.FIG. 7 is a block diagram illustrating another embodiment of a system that executes instructions to provide advanced paging capabilities for a secure enclave page cache.FIG. 7 is a block diagram illustrating another embodiment of a system that executes instructions to provide advanced paging capabilities for a secure enclave page cache.FIG. 6 is a block diagram of an embodiment of a processor that executes instructions to provide advanced paging capabilities for a secure enclave page cache.FIG. 5 illustrates packed data types according to one embodiment.FIG. 5 illustrates packed data types according to one embodiment.FIG.
FIG. 3D illustrates an instruction encoding to provide advanced paging capabilities for a secure enclave page cache according to one embodiment. FIG. 3E illustrates an instruction encoding to provide advanced paging capabilities for a secure enclave page cache according to another embodiment. FIG. 3F illustrates an instruction encoding to provide advanced paging capabilities for a secure enclave page cache according to another embodiment. FIG. 3G illustrates an instruction encoding to provide advanced paging capabilities for a secure enclave page cache according to another embodiment. FIG. 3H illustrates an instruction encoding to provide advanced paging capabilities for a secure enclave page cache according to another embodiment. FIG. 4A illustrates elements of one embodiment of a processor micro-architecture to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 4B illustrates elements of another embodiment of a processor micro-architecture to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 5 is a block diagram of one embodiment of a processor to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 6 is a block diagram of one embodiment of a computer system to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 7 is a block diagram of another embodiment of a computer system to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 8 is a block diagram of another embodiment of a computer system to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 9 is a block diagram of one embodiment of a system-on-chip to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 10 is a block diagram of an embodiment of a processor to execute instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 11 is a block diagram of one embodiment of an IP core development system that provides advanced paging capabilities for a secure enclave page cache. FIG. 12 illustrates one embodiment of an architecture emulation system that provides advanced paging capabilities for a secure enclave page cache. FIG. 13 illustrates one embodiment of a system to translate instructions that provide advanced paging capabilities for a secure enclave page cache. FIG. 14 illustrates one embodiment of a processing system for using instructions to provide advanced paging capabilities for a secure enclave page cache. FIG. 15 illustrates one embodiment of an apparatus in a processor for using instructions to provide advanced paging capabilities for a secure enclave page cache. FIG. 16 is a flow diagram for one embodiment of a process to provide advanced paging capabilities for a secure enclave page cache. FIG. 17 is a flow diagram for an alternative embodiment of a process to provide advanced paging capabilities for a secure enclave page cache. FIG. 18 is a flow diagram for another embodiment of a process to provide advanced paging capabilities for a secure enclave page cache.
FIG. 19 is a flow diagram for another embodiment of a process to provide advanced paging capabilities for a secure enclave page cache.

The present invention is illustrated by way of example, and not limitation, in the figures of the accompanying drawings.

The following description discloses instructions and processing logic to provide advanced paging capabilities for a secure enclave page cache within or in association with a processor, computer system, or other processing apparatus.

Private or protected data, such as data belonging to an enclave, may be held in a special cache, or a portion of a cache, that holds only such data; when the private or protected data is decrypted, it is accessible only to the processor cores, hardware threads, or logical processors authorized to access it. Such enclave private memory may be referred to as enclave page cache (EPC) memory.

As with other physical memory, the EPC can be made to support a larger private or protected address space by paging data and/or code in and out as needed. Page-mapping changes are typically managed by the OS, but in an enclave the OS does not necessarily have access to, or trust with respect to, the contents of the enclave's private memory.

Since entries in the TLB are associated with one or more particular processor cores, hardware threads, or logical processors, those cores, threads, or processors must not be permitted to modify a page while the page is being paged out to memory or non-volatile storage. Thus, in order to change the mapping of pages to an enclave, such as evicting a page from the enclave or loading a new page, while EPC memory contents are being encrypted and written back, or a new page is being loaded from memory and decrypted, or TLB entries are being flushed and replaced, the system may need to temporarily place one or more processor cores, hardware threads, or logical processors that are accessing enclave resources into a halted or suppressed state, or otherwise stop the execution of any applications in the enclave, that is, to "quiesce" them. Hardware protection mechanisms may need to be used to protect the pages in the EPC, to ensure the security and/or integrity of the private memory contents, and to help manage a limited amount of physical private memory when the OS cannot be trusted.

One example of an approach involving secure enclaves is described in U.S. patent application Ser. No. 13/527,547, entitled "Method and Apparatus for Providing Secure Application Execution," filed June 19, 2012.

Whenever a page of EPC memory is to be evicted, all processor cores or logical processors that use that EPC memory may need to be signaled, and all of those processor cores or logical processors may be required to exit the enclave so that the page contents can be replaced and the TLB entry or entries flushed. Furthermore, guaranteeing in hardware that these requirements are met, in order to protect the privacy of the enclave, can involve considerable design and validation complexity.

It will be appreciated that if the paging process can be broken into multiple stages, such as encrypting and writing back the EPC memory contents, loading and decrypting a new page from memory, and flushing and replacing TLB entries, so that a processor core or logical processor is interrupted only briefly between one or more of the stages, then the performance degradation due to the paging process may be reduced.

Disclosed herein are instructions and logic to provide advanced paging capabilities for a secure enclave page cache.
Some embodiments include multiple hardware threads, logical processors, or processing cores, and an enclave page cache to store secure data for a shared page address allocated to a secure enclave accessible by the hardware threads, logical processors, or processing cores. A decode stage decodes a first instruction (e.g., an EBLOCK instruction, discussed in more detail below), the first instruction specifying the shared page address as an operand. One or more execution units mark an entry corresponding to the enclave page cache mapping for the shared page address to block creation of a new TLB translation for any of said hardware threads, logical processors, or processing cores to access the shared page. A second instruction (e.g., an ETRACK instruction, also discussed in more detail below) is decoded for execution, the second instruction specifying the secure enclave as an operand, and one or more execution units record the hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave. Whenever any of the hardware threads exits the secure enclave, the recorded number of hardware threads is decremented.

The OS may then send an inter-processor interrupt (IPI) to any hardware threads, logical processors, or processing cores currently accessing secure data in the enclave page cache corresponding to the secure enclave. As each hardware thread, logical processor, or processing core acknowledges the IPI and exits the secure enclave, its TLB entry or entries are flushed and the recorded number of hardware threads is decremented. When the recorded number of hardware threads reaches zero, it is safe for the OS to evict the page or pages, encrypt the secure data, and write it back to memory or non-volatile storage. The OS may complete the eviction and write-back using a third instruction (e.g., an EWB instruction, also discussed in more detail below). Since the secure enclave's protection of its data may not trust the OS, one embodiment of the third instruction may fail if the recorded number of hardware threads has not reached zero. In an alternative embodiment, the third instruction may wait to execute until the recorded number of hardware threads reaches zero.

Management of permissions, physical memory, and/or mapping changes may still be handled by the OS, but it will be appreciated that when the memory contents are protected, as in an enclave, the OS may not be permitted or trusted to access the actual protected contents, because the enclave has private memory. Therefore, ensuring the security and/or integrity of the private memory contents, and managing the technical constraints of a limited amount of physical memory when the OS cannot be trusted, can be accomplished in stages, using instructions and processing logic to provide advanced paging capabilities for a secure enclave page cache, without requiring elaborate hardware support and/or design effort.
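To make the staged flow described above concrete, the following C sketch models how an OS page-out path might sequence the steps. It is illustrative only: the types and the functions eblock, etrack, ewb, and send_ipi_to_enclave_threads are hypothetical, simplified stand-ins for the instructions and bookkeeping described in the text, not an actual API or the claimed implementation.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the hardware state described in the text. */
struct epc_page { bool blocked; };
struct enclave  { unsigned tracked_threads; };

static void eblock(struct epc_page *p)   /* EBLOCK-like step: block    */
{ p->blocked = true; }                   /* new TLB translations       */

static void etrack(struct enclave *e)    /* ETRACK-like step: snapshot */
{ (void)e; /* tracked_threads already holds the resident count */ }

static void send_ipi_to_enclave_threads(struct enclave *e)
{ e->tracked_threads = 0; /* toy model: threads acknowledge the IPI,
                             exit, and flush their TLB entries      */ }

static bool ewb(const struct enclave *e, const struct epc_page *p)
{   /* EWB-like step: fails unless the page is blocked and the
       recorded thread count has drained to zero.                   */
    return p->blocked && e->tracked_threads == 0;
}

/* Evict one EPC page in stages, so that other logical processors
 * are interrupted only briefly between stages.                      */
static bool evict_epc_page(struct enclave *e, struct epc_page *p)
{
    eblock(p);                        /* stage 1: block new translations  */
    etrack(e);                        /* stage 2: record resident threads */
    send_ipi_to_enclave_threads(e);   /* stage 3: ask threads to exit     */
    while (e->tracked_threads != 0)
        ;                             /* a real OS would block, not spin  */
    return ewb(e, p);                 /* stage 4: encrypt and write back  */
}

int main(void)
{
    struct enclave  e = { .tracked_threads = 2 };
    struct epc_page p = { .blocked = false };
    printf("eviction %s\n", evict_epc_page(&e, &p) ? "succeeded" : "failed");
    return 0;
}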
In the following description, numerous specific details are set forth, such as processing logic, processor types, micro-architectural conditions, events, enablement mechanisms, and the like, in order to provide a more thorough understanding of embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like are not shown in detail in order to avoid unnecessarily obscuring embodiments of the present invention.

Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present invention can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present invention are applicable to any processor or machine that performs data manipulations. However, the invention is not limited to processors or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations, and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the following description provides examples for the purposes of illustration, and the accompanying drawings show various examples. However, these examples should not be construed in a limiting sense, as they are merely intended to provide examples of embodiments of the present invention rather than an exhaustive list of all possible implementations of those embodiments.

Although the examples below describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present invention can be accomplished by way of data and/or instructions stored on a machine-readable, tangible medium, which, when performed by a machine, cause the machine to perform functions consistent with at least one embodiment of the invention. In one embodiment, functions associated with embodiments of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. Embodiments of the present invention may be provided as a computer program product or software that may include a machine- or computer-readable medium having stored thereon instructions that may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present invention. Alternatively, steps of embodiments of the present invention might be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.

Instructions used to program logic to perform embodiments of the invention can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g.
a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROM), magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of ways. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory, or a magnetic or optical storage device such as a disc, may be the machine-readable medium that stores information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.

In modern processors, a number of different execution units are used to process and execute a variety of code and instructions. Not all instructions are created equal, as some are quicker to complete while others can take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there are certain instructions that have greater complexity and require more in terms of execution time and processor resources, for example, floating point instructions, load/store operations, data moves, and so on.

As more computer systems are used in Internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O).
In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which include processor logic and circuits used to implement one or more instruction sets. Accordingly, processors with different micro-architectures can share at least a portion of a common instruction set. For example, Intel® Pentium® 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, Calif. implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs. For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers, and one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB), and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.

In one embodiment, an instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operand(s) on which that operation is to be performed. Some instruction formats may be further divided and defined by instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently. In one embodiment, an instruction is expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.

Scientific, financial, auto-vectorized general-purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms, and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements. SIMD technology may be used in processors that can logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as a "packed" data type or a "vector" data type, and operands of this data type are referred to as packed data operands or vector operands.
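As a concrete illustration of the packed layout just described, the following C snippet (a sketch offered by way of example only, assuming a little-endian host) treats a 64-bit value as four independent 16-bit lanes and performs a lane-wise addition in a scalar loop, which is the operation a single packed-add instruction would perform in one step.

#include <stdint.h>
#include <stdio.h>

/* A 64-bit "register" viewed as four packed 16-bit data elements.
 * On a little-endian host, lane[0] = bits 15:0, ..., lane[3] = bits 63:48. */
typedef union {
    uint64_t u64;
    uint16_t lane[4];
} packed64;

/* Lane-wise add: what one packed-add instruction does in a single step. */
static packed64 packed_add16(packed64 a, packed64 b)
{
    packed64 r;
    for (int i = 0; i < 4; i++)
        r.lane[i] = (uint16_t)(a.lane[i] + b.lane[i]);  /* wraps per lane */
    return r;
}

int main(void)
{
    packed64 a = { .lane = { 1, 2, 3, 4 } };
    packed64 b = { .lane = { 10, 20, 30, 40 } };
    packed64 c = packed_add16(a, b);
    printf("%u %u %u %u\n", (unsigned)c.lane[0], (unsigned)c.lane[1],
           (unsigned)c.lane[2], (unsigned)c.lane[3]);  /* 11 22 33 44 */
    return 0;
}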
In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or "packed data instruction" or "vector instruction"). In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.

SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors such as the ARM Cortex® family of processors having an instruction set including Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled significant improvements in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).

In one embodiment, destination and source registers/data are generic terms representing the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having names or functions other than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be first and second source storage registers or other storage areas, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination register.

FIG. 1A is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction in accordance with one embodiment of the present invention. System 100 includes a component, such as a processor 102, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiments described herein. System 100 is representative of processing systems based on the PENTIUM® III, PENTIUM® 4, Xeon™, Itanium®, XScale™, and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, Calif., although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux®, for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.

Embodiments are not limited to computer systems.
Alternative embodiments of the present invention can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.

FIG. 1A is a block diagram of a computer system 100 formed with a processor 102 that includes one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present invention. One embodiment may be described in the context of a single-processor desktop or server system, but alternative embodiments can be included in a multiprocessor system. System 100 is an example of a "hub" system architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102 can be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that can transmit data signals between the processor 102 and other components in the system 100. The elements of system 100 perform their conventional functions that are well known to those familiar with the art.

In one embodiment, the processor 102 includes a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. Alternatively, in another embodiment, the cache memory can reside external to the processor 102. Other embodiments can also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 can store different types of data in various registers including integer registers, floating point registers, status registers, and an instruction pointer register.

Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. The processor 102 also includes a microcode (ucode) ROM that stores microcode for certain macroinstructions. For one embodiment, execution unit 108 includes logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications can be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This can eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.
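The performance point above (one full-width operation replacing a per-element loop) can be seen in ordinary user code. Below is a brief sketch using the standard SSE2 intrinsics from emmintrin.h; it assumes an SSE2-capable x86 target and is offered only as an illustration of packed operation, not as part of the disclosed design.

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two source operands of eight packed 16-bit elements each. */
    int16_t a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    int16_t b[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };
    int16_t c[8];

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);

    /* One instruction adds all eight 16-bit lanes at once. */
    __m128i vc = _mm_add_epi16(va, vb);

    _mm_storeu_si128((__m128i *)c, vc);
    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);   /* prints: 9 9 9 9 9 9 9 9 */
    printf("\n");
    return 0;
}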
Alternate embodiments of an execution unit 108 can also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 includes a memory 120. Memory 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another memory device. Memory 120 can store instructions and/or data represented by data signals that can be executed by the processor 102.

A system logic chip 116 is coupled to the processor bus 110 and memory 120. The system logic chip 116 in the illustrated embodiment is a memory controller hub (MCH). The processor 102 can communicate with the MCH 116 via a processor bus 110. The MCH 116 provides a high-bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data, and textures. The MCH 116 directs data signals between the processor 102, memory 120, and other components in the system 100, and bridges the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 can provide a graphics port for coupling to a graphics controller 112. The MCH 116 is coupled to memory 120 through a memory interface 118. The graphics card 112 is coupled to the MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114.

System 100 uses a proprietary hub interface bus 122 to couple the MCH 116 to the I/O controller hub (ICH) 130. The ICH 130 provides direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 120, chipset, and processor 102. Some examples are the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. The data storage device 124 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device.

For another embodiment of a system, an instruction in accordance with one embodiment can be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks, such as a memory controller or graphics controller, can also be located on a system on a chip.

FIG. 1B illustrates a data processing system 140 that implements the principles of one embodiment of the present invention. It will be readily appreciated by one of skill in the art that the embodiments described herein can be used with alternative processing systems without departure from the scope of embodiments of the invention.

Computer system 140 comprises a processing core 159 capable of performing at least one instruction in accordance with one embodiment. For one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC, or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine-readable medium in sufficient detail, may be suitable to facilitate said manufacture.

Processing core 159 comprises an execution unit 142, a set of register file(s) 145, and a decoder 144. Processing core 159 also includes additional circuitry (not shown) that is not necessary to the understanding of embodiments of the present invention. Execution unit 142 is used for executing instructions received by processing core 159.
In addition to performing typical processor instructions, execution unit 142 can perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 includes instructions for performing embodiments of the invention and other packed instructions. Execution unit 142 is coupled to register file 145 by an internal bus. Register file 145 represents a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data is not critical. Execution unit 142 is coupled to decoder 144. Decoder 144 is used for decoding instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder is used to interpret the opcode of the instruction, which indicates what operation should be performed on the corresponding data indicated within the instruction.

Processing core 159 is coupled with bus 141 for communicating with various other system devices, which may include, but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include, but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth® wireless UART 157, and I/O expansion interface 158.

One embodiment of data processing system 140 provides for mobile, network, and/or wireless communications and a processing core 159 capable of performing SIMD operations including a text string comparison operation. Processing core 159 may be programmed with various audio, video, imaging, and communications algorithms, including discrete transformations such as the Walsh-Hadamard transform, the fast Fourier transform (FFT), the discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation, or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).
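For a sense of the discrete-transform kernels listed above, the following sketch implements an in-place, length-8 fast Walsh-Hadamard transform in plain C. It is only an illustrative scalar reference for the kind of butterfly-structured workload that a SIMD-capable core such as the one described would accelerate; it is not drawn from the disclosure itself.

#include <stdio.h>

/* In-place fast Walsh-Hadamard transform (unnormalized) for a
 * power-of-two length n; each stage is a layer of +/- butterflies. */
static void fwht(int x[], int n)
{
    for (int len = 1; len < n; len <<= 1) {
        for (int i = 0; i < n; i += len << 1) {
            for (int j = i; j < i + len; j++) {
                int a = x[j], b = x[j + len];
                x[j]       = a + b;   /* sum lane        */
                x[j + len] = a - b;   /* difference lane */
            }
        }
    }
}

int main(void)
{
    int x[8] = { 1, 0, 1, 0, 0, 1, 1, 0 };
    fwht(x, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", x[i]);  /* prints: 4 2 0 -2 0 2 0 2 */
    printf("\n");
    return 0;
}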
FIG. 1C illustrates another alternative embodiment of a data processing system capable of executing instructions to provide advanced paging capabilities for a secure enclave page cache. In accordance with one alternative embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. The input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 is capable of performing operations including instructions in accordance with one embodiment. Processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine-readable medium in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160, including processing core 170.

For one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register file(s) 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163, including instructions in accordance with one embodiment, for execution by execution unit 162. For alternative embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165B to decode instructions of instruction set 163. Processing core 170 also includes additional circuitry (not shown) that is not necessary to the understanding of embodiments of the present invention.

In operation, the main processor 166 executes a stream of data processing instructions that control data processing operations of a general type, including interactions with the cache memory 167 and the input/output system 168. Embedded within the stream of data processing instructions are SIMD coprocessor instructions. The decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, the main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 171, from which they are received by any attached SIMD coprocessor. In this case, the SIMD coprocessor 161 will accept and execute any received SIMD coprocessor instructions intended for it.

Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communication. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. For one embodiment of processing core 170, main processor 166 and SIMD coprocessor 161 are integrated into a single processing core 170 comprising an execution unit 162, a set of register file(s) 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.

FIG. 2 is a block diagram of the micro-architecture for a processor 200 that includes logic circuits to perform instructions in accordance with one embodiment of the present invention. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as data types such as single and double precision integer and floating point data types. In one embodiment, the in-order front end 201 is the part of the processor 200 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The front end 201 may include several units. In one embodiment, the instruction prefetcher 226 fetches instructions from memory and feeds them to an instruction decoder 228, which in turn decodes or interprets them.
For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 230 takes decoded uops and assembles them into program-ordered sequences or traces in the uop queue 234 for execution. When the trace cache 230 encounters a complex instruction, the microcode ROM 232 provides the uops needed to complete the operation.

Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 228 accesses the microcode ROM 232 to do the instruction. For one embodiment, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 228. In another embodiment, an instruction can be stored within the microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. The trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the microcode ROM 232. After the microcode ROM 232 finishes sequencing micro-ops for an instruction, the front end 201 of the machine resumes fetching micro-ops from the trace cache 230.

The out-of-order execution engine 203 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logical registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: the memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. The uop schedulers 202, 204, 206 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 202 of one embodiment can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

Register files 208, 210 sit between the schedulers 202, 204, 206 and the execution units 212, 214, 216, 218, 220, 222, 224 in the execution block 211. There is a separate register file 208, 210 for integer and floating point operations, respectively. Each register file 208, 210 of one embodiment also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. The integer register file 208 and the floating point register file 210 are also capable of communicating data with each other.
For one embodiment, the integer register file 208 is split into two separate register files, one register file for the low-order 32 bits of data and a second register file for the high-order 32 bits of data. The floating point register file 210 of one embodiment has 128-bit-wide entries, because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 211 contains the execution units 212, 214, 216, 218, 220, 222, 224, where the instructions are actually executed. This section includes the register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 200 of one embodiment is comprised of a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, and floating point move unit 224. For one embodiment, the floating point execution blocks 222, 224 execute floating point, MMX, SIMD, and SSE, or other operations. The floating point ALU 222 of one embodiment includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, the ALU operations go to the high-speed ALU execution units 216, 218. The fast ALUs 216, 218 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 220, because the slow ALU 220 includes integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 212, 214. For one embodiment, the integer ALUs 216, 218, 220 are described in the context of performing integer operations on 64-bit data operands. In alternative embodiments, the ALUs 216, 218, 220 can be implemented to support a variety of data bits, including 16, 32, 128, 256, etc. Similarly, the floating point units 222, 224 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 222, 224 can operate on 128-bit-wide packed data operands in conjunction with SIMD and multimedia instructions.

In one embodiment, the uops schedulers 202, 204, 206 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 200, the processor 200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed, and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instructions that provide advanced paging capabilities for a secure enclave page cache.

The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit.
Rather, a register of an embodiment is capable of storing and providing data, and of performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store 32-bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64-bit-wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit-wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data are either contained in the same register file or in different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.

In the examples of the following figures, a number of data operands are described. FIG. 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention. FIG. 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128-bit-wide operands. The packed byte format 310 of this example is 128 bits long and contains sixteen packed byte data elements. A byte is defined here as 8 bits of data. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15. Thus, all available bits are used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in parallel.

Generally, a data element is an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register is 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register is 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in FIG. 3A are 128 bits long, embodiments of the present invention can also operate with 64-bit-wide, 256-bit-wide, 512-bit-wide, or other sized operands. The packed word format 320 of this example is 128 bits long and contains eight packed word data elements; each packed word contains sixteen bits of information. The packed doubleword format 330 of FIG. 3A is 128 bits long and contains four packed doubleword data elements; each packed doubleword data element contains thirty-two bits of information. A packed quadword is 128 bits long and contains two packed quadword data elements.
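The per-element bit ranges described above follow a simple rule: element i of width w bits occupies bits w*i + w - 1 down to w*i of the operand. The short C sketch below is illustrative only; it prints these ranges for the three formats and extracts one byte element from a 128-bit operand stored as a byte array.

#include <stdint.h>
#include <stdio.h>

/* Element i of width w bits occupies bits [w*i + w - 1 : w*i]. */
static void print_lane_ranges(int width_bits, int total_bits)
{
    for (int i = 0; i < total_bits / width_bits; i++)
        printf("element %2d: bits %3d..%3d\n",
               i, width_bits * i + width_bits - 1, width_bits * i);
}

/* Extract byte element i from a 128-bit operand held as 16 bytes,
 * assuming a little-endian layout (byte 0 = bits 7..0).            */
static uint8_t packed_byte_element(const uint8_t reg[16], int i)
{
    return reg[i];
}

int main(void)
{
    print_lane_ranges(8, 128);   /* packed byte 310  */
    print_lane_ranges(16, 128);  /* packed word 320  */
    print_lane_ranges(32, 128);  /* packed dword 330 */

    uint8_t reg[16] = { 0x10, 0x32, 0x54 };  /* remaining bytes zero */
    printf("byte 1 = 0x%02X\n", (unsigned)packed_byte_element(reg, 1));
    return 0;
}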
FIG. 3B illustrates alternative in-register data storage formats. Each packed data can include more than one independent data element. Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contains fixed-point data elements. For an alternative embodiment, one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One alternative embodiment of packed half 341 is one hundred twenty-eight bits long, containing eight 16-bit data elements. One embodiment of packed single 342 is one hundred twenty-eight bits long and contains four 32-bit data elements. One embodiment of packed double 343 is one hundred twenty-eight bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96 bits, 160 bits, 192 bits, 224 bits, 256 bits, 512 bits, or more.

FIG. 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and so on, and finally bit 120 through bit 127 for byte 15. Thus, all available bits are used in the register. This storage arrangement can increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element is the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero are stored in a SIMD register. Signed packed word representation 347 is similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element is the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 is similar to the unsigned packed doubleword in-register representation 348. Note that the necessary sign bit is the thirty-second bit of each doubleword data element.
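Because the top bit of each element is the sign indicator in the signed representations above, the sign of element i of width w can be tested by inspecting bit w*i + w - 1 of the operand. A small illustrative C check (not part of the disclosure) for the packed word case:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Test the sign indicator (most significant bit) of 16-bit element i
 * in a signed packed word operand held as eight lanes; this is bit
 * 16*i + 15 of the full 128-bit operand.                              */
static bool packed_word_is_negative(const int16_t lanes[8], int i)
{
    return ((uint16_t)lanes[i] >> 15) != 0;
}

int main(void)
{
    int16_t lanes[8] = { 100, -7, 0, 32767, -32768, 5, -1, 42 };
    for (int i = 0; i < 8; i++)
        printf("lane %d: %s\n", i,
               packed_word_is_negative(lanes, i) ? "negative" : "non-negative");
    return 0;
}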
FIG. 3D is a depiction of one embodiment of an operation encoding (opcode) format 360, having thirty-two or more bits, and register/memory operand addressing modes corresponding with a type of opcode format described in the "Intel® 64 and IA-32 Intel Architecture Software Developer's Manual Combined Volumes 2A and 2B: Instruction Set Reference A-Z," which is available from Intel Corporation of Santa Clara, Calif. on the world-wide-web (www) at intel.com/products/processor/manuals/. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. For one embodiment, destination operand identifier 366 is the same as source operand identifier 364, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 366 is the same as source operand identifier 365, whereas in other embodiments they are different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 is overwritten by the results of the instruction, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. For one embodiment, operand identifiers 364 and 365 may be used to identify 32-bit or 64-bit source and destination operands.

FIG. 3E is a depiction of another alternative operation encoding (opcode) format 370, having forty or more bits. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. For one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. For one embodiment, destination operand identifier 376 is the same as source operand identifier 374, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 376 is the same as source operand identifier 375, whereas in other embodiments they are different. In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375, and one or more operands identified by the operand identifiers 374 and 375 is overwritten by the results of the instruction, whereas in other embodiments operands identified by identifiers 374 and 375 are written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.

Turning next to FIG. 3F, in some alternative embodiments, 64-bit (or 128-bit, or 256-bit, or 512-bit or more) single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. For the type of CDP instruction of alternative embodiments, operations may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor can operate on 8-, 16-, 32-, and 64-bit values. For one embodiment, an instruction is performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383. In some embodiments, Zero (Z), negative (N), carry (C), and overflow (V) detection can be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.
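The MOD-field addressing used by formats 360 and 370 above comes from the x86 ModRM byte, whose three fields select the addressing form. The decode sketch below is a simplified illustration (it ignores SIB and displacement handling entirely) rather than a complete x86 decoder.

#include <stdint.h>
#include <stdio.h>

/* The x86 ModRM byte: mod (bits 7..6), reg (bits 5..3), rm (bits 2..0).
 * mod selects among memory forms (00, 01, 10) and the register form (11). */
struct modrm {
    uint8_t mod, reg, rm;
};

static struct modrm decode_modrm(uint8_t byte)
{
    struct modrm m;
    m.mod = (byte >> 6) & 0x3;
    m.reg = (byte >> 3) & 0x7;
    m.rm  = byte & 0x7;
    return m;
}

int main(void)
{
    /* 0xC1 = 11 000 001 binary: register-to-register form, reg=0, rm=1. */
    struct modrm m = decode_modrm(0xC1);
    printf("mod=%d reg=%d rm=%d (%s)\n", m.mod, m.reg, m.rm,
           m.mod == 3 ? "register operand" : "memory operand");
    return 0;
}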
Turning next to FIG. 3G, which is a depiction of another alternative operation encoding (opcode) format 397 to provide advanced paging capabilities for a secure enclave page cache according to another embodiment, corresponding with a type of opcode format described in the "Intel® Advanced Vector Extensions Programming Reference," which is available from Intel Corporation of Santa Clara, Calif. on the world-wide-web (www) at intel.com/products/processor/manuals/.

The original x86 instruction set provided for a 1-byte opcode with various formats of address syllable and immediate operand contained in additional bytes whose presence was known from the first "opcode" byte. Additionally, there were certain byte values that were reserved as modifiers to the opcode (called prefixes, as they had to be placed before the instruction). When the original palette of 256 opcode bytes (including these special prefix values) was exhausted, a single byte was dedicated as an escape to a new set of 256 opcodes. As vector instructions (e.g., SIMD) were added, a need for more opcodes was generated, and the "two byte" opcode map also was insufficient, even when expanded through the use of prefixes. To this end, new instructions were added in additional maps that use two bytes plus an optional prefix as an identifier.

Additionally, in order to facilitate additional registers in 64-bit mode, an additional prefix (called "REX") may be used in between the prefixes and the opcode (and any escape bytes necessary to determine the opcode). In one embodiment, the REX may have four "payload" bits to indicate use of additional registers in 64-bit mode. In other embodiments, it may have fewer or more than four bits. The general format of at least one instruction set (which corresponds generally with format 360 and/or format 370) is illustrated generically by the following: [prefixes] [rex] escape [escape2] opcode modrm (etc.).

Opcode format 397 corresponds with opcode format 370 and comprises an optional VEX prefix byte 391 (beginning with hex value C4 in one embodiment) to replace most other commonly used legacy instruction prefix bytes and escape codes. For example, the following illustrates an embodiment using two fields to encode an instruction, which may be used when a second escape code is present in the original instruction, or when extra bits (e.g., the XB and W fields) in the REX field need to be used. In the embodiment illustrated below, legacy escapes are represented by a new escape value, legacy prefixes are fully compressed as part of the "payload" bytes, legacy prefixes are reclaimed and available for future expansion, the second escape code is compressed in a "map" field with future map or feature space available, and new features are added (e.g., increased vector length and an additional source register specifier).

An instruction according to one embodiment may be encoded by one or more of fields 391 and 392. Up to four operand locations per instruction may be identified by field 391 in combination with source operand identifiers 374 and 375 and in combination with an optional scale-index-base (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395. For one embodiment, VEX prefix bytes 391 may be used to identify 32-bit or 64-bit source and destination operands and/or 128-bit or 256-bit SIMD register or memory operands. For one embodiment, the functionality provided by opcode format 397 may be redundant with opcode format 370, whereas in other embodiments they are different.
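To make the C4 three-byte VEX layout concrete, here is a small decoder sketch in C. The bit positions shown (the inverted R/X/B bits and the five map-select bits in the first payload byte; the W bit, the four inverted vvvv register-specifier bits, L, and the two pp prefix-compression bits in the second) follow the published AVX encoding, but the snippet is offered only as an illustration, not as a production decoder or as part of the disclosed design.

#include <stdint.h>
#include <stdio.h>

/* Decoded fields of a three-byte VEX prefix (C4 xx xx). The R, X, B,
 * and vvvv fields are stored inverted in the encoding, so we flip them. */
struct vex3 {
    uint8_t R, X, B;   /* extra register-number bits (REX-like)       */
    uint8_t map;       /* m-mmmm: 1 = 0F, 2 = 0F38, 3 = 0F3A maps     */
    uint8_t W;         /* operand-size / opcode-extension bit         */
    uint8_t vvvv;      /* additional source register specifier        */
    uint8_t L;         /* vector length: 0 = 128-bit, 1 = 256-bit     */
    uint8_t pp;        /* compressed legacy prefix: none, 66, F3, F2  */
};

static struct vex3 decode_vex3(uint8_t p1, uint8_t p2)
{
    struct vex3 v;
    v.R    = (~p1 >> 7) & 1;
    v.X    = (~p1 >> 6) & 1;
    v.B    = (~p1 >> 5) & 1;
    v.map  = p1 & 0x1F;
    v.W    = (p2 >> 7) & 1;
    v.vvvv = (~p2 >> 3) & 0xF;
    v.L    = (p2 >> 2) & 1;
    v.pp   = p2 & 0x3;
    return v;
}

int main(void)
{
    /* Example payload bytes following a C4 escape byte. */
    struct vex3 v = decode_vex3(0xE1, 0x4D);
    printf("map=%d W=%d vvvv=%d L=%d pp=%d\n",
           v.map, v.W, v.vvvv, v.L, v.pp);  /* map=1 W=0 vvvv=6 L=1 pp=1 */
    return 0;
}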
Opcode formats 370 and 397 are partially specified by MOD field 373 and optional (SIB) identifier 393, optional displacement identifier 394 and optional immediate byte 395, register to register, memory to Allows addressing of registers, register by memory, register by register, register by immediate, register to memory.Next, FIG. 3H shows another alternative operational coding (opcode) format 398 for providing advanced paging capabilities for a secure enclave page cache in accordance with another embodiment. Opcode format 398 corresponds to opcode formats 370 and 397 and is an optional EVEX prefix byte 396 (one embodiment that replaces most other commonly used legacy instruction prefix bytes and escape codes to provide additional functionality. Starting with the hexadecimal value 62). Instructions in accordance with an embodiment may be encoded by one or more of fields 396 and 392. Up to four operand locations and masks per instruction, combined with field 396, source operand identifiers 374 and 375, and optional scale index based (SIB) identifier 393, optional displacement identifier 394, and optional immediate byte 395 It may be identified by a combination of In one embodiment, the EVEX prefix byte 396 may be used to identify 32-bit or 64-bit source and destination operands and / or 128-bit, 256-bit or 512-bit SIMD registers or memory operands. For one embodiment, the functionality provided by opcode format 398 may be redundant with opcode format 370 or 397, while they are different in other embodiments. Opcode format 398 is a register-to-register, memory-to-part, partially specified by MOD field 373 and optional (SIB) identifier 393, optional displacement identifier 394 and optional immediate byte 395, together with a mask. Allows addressing of registers, register by memory, register by register, register by immediate, register to memory. The general format of at least one instruction set (generally corresponding to format 360 and / or format 370) is generically indicated by: evex1 RXB mmmmm WvvvLpp evex4 op code modrm [sib] [displacement] [immediate value].In one embodiment, instructions encoded according to EVEX format 398 should be selected or selected from, for example, a user configurable mask register, or additional operands, or 128 bit, 256 bit or 512 bit vector registers. Additional new features, such as additional registers, may have additional "payload" bits that may be used to provide advanced paging capabilities for a secure enclave page cache.For example, VEX format 397 can be used to provide advanced paging capabilities for secure enclave page caches by implicit mask, while EVEX format 398 is an explicit user configurable mask of secure enclave page caches. Can be used to provide advanced paging capabilities. 
In addition, the VEX format 397 can be used to provide advanced paging capabilities for secure enclave page caching in 128 bit or 256 bit vector registers, while the EVEX format 398 is 128 bits, 256 bits, 512 It may be used to provide advanced paging capabilities for secure enclave page caching in bit or larger (or smaller) vector registers.An example of instructions for providing advanced paging capabilities for a secure enclave page cache is illustrated by the following example.A paging process (e.g., secure enclave page cache memory content is encrypted and written back to a new page from memory by using the above enclave instructions to provide advanced paging capabilities for secure enclave page cache) Can be divided into multiple stages, where the TLB is loaded and decoded, TLB entries are flushed and replaced, etc., where the processor core or logical processor is interrupted for only a short time between one or more stages Will be recognized. Thus, the performance degradation due to the paging process may be reduced while ensuring the security of secure enclave data and without requiring undue complexity and design effort.Some embodiments store secure data for a shared page address assigned to a secure enclave accessible by multiple hardware threads, logical processors or processing cores and the hardware threads, logical processors or processing cores And an enclave page cache. One embodiment of the EBLOCK instruction specifies the shared page address as an operand. One or more execution units for any of multiple hardware threads, logical processors or processing cores to access the shared page by marking an entry corresponding to the enclave page cache mapping to the shared page address Block creation of new TLB translations. One embodiment of the ETRACK instruction specifies a secure enclave as an operand, and one or more execution units record hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave. For example, in one embodiment, the enclave may have two or more counters, referred to herein as "epoch" counters, which are currently accessing secure data in the current epoch of the secure enclave. The number of hardware threads may be recorded and then copied to the most recent previous epoch counter, and a new epoch without a hardware thread may be initialized as a new current epoch. In an alternative embodiment, the EBLOCK & TRACK instruction specifies the shared page address as an operand. One or more execution units for any of multiple hardware threads, logical processors or processing cores to access the shared page by marking an entry corresponding to the enclave page cache mapping to the shared page address Block creation of a new TLB translation, record the logical processor or hardware thread currently accessing the secure enclave corresponding to the page memory address Addr1, and either the logical processor or hardware thread when coming out of the secure enclave Reduce the number of In one or more alternative embodiments, the epoch counter constantly tracks hardware threads, logical processors or processing cores executing in or accessing secure data associated with the secure enclave. .The OS may then send an inter-processor interrupt (IPI) to any hardware thread, logical processor or processing core currently accessing secure data in the enclave page cache corresponding to the secure enclave. 
Each hardware thread, logical processor or processing core currently accessing secure data corresponding to a secure enclave has entered the secure enclave by an EENTER or ERESUME instruction specifying the secure enclave, when the number of epochs is hard It would have been associated with a wear thread, logical processor or processing core. When a hardware thread, logical processor or processing core approves an IPI and exits the secure enclave, those TLB translation (s) are flushed. Each time a hardware thread from the previous epoch exits the secure enclave (eg, by an EEXIT or AEX instruction), the number of hardware threads recorded in the previous epoch counter is reduced.When the number of recorded hardware threads reaches zero, it is safe for the OS to leave one or more pages, encrypt the data, and write them back to memory or non-volatile storage. In one embodiment, the OS may complete the eviction with an EWRITE BACK or EWB instruction specifying the shared page address as an operand, encrypt secure data, and write the page back to non-volatile storage. Because enclave protection of secure data may not trust the OS, one embodiment of the EWRITE BACK or EWB instruction may fail when the number of recorded hardware threads from the immediately preceding epoch does not reach zero. In another alternative embodiment, an EWRITE BACK or EWB instruction may wait for execution or result in an exception until the number of recorded hardware threads reaches zero. The OS allocates free storage to the new page of the secure enclave in response to one embodiment of the ELOAD instruction specifying the new shared page address as an operand, and decrypts secure data for the new page. It is also good.The management of permissions, changes in physical memory and / or mappings may still be managed by the OS, but as with secure enclaves, when the memory content is protected, the OS will actually protect the content of the enclave private memory It will be appreciated that no permission or trust to gain access is obtained. Ensuring the security and / or integrity of private memory content, and the technical constraints of using limited amount of physical memory to support larger protected enclave private memory space when the OS can not be trusted Managing in a step-by-step manner using instructions and processing logic to provide advanced paging capabilities for secure enclave page caching without requiring elaborate hardware support and / or design effort. It can be achieved.FIG. 4A is a block diagram illustrating an in-order pipeline and register renaming stage, out-of-order issue / execution pipeline in accordance with at least one embodiment of the present invention. FIG. 4B is a block diagram illustrating in-order architecture core and register renaming logic, out-of-order issue / execution logic included in a processor in accordance with at least one embodiment of the present invention. Solid boxes in FIG. 4A indicate in-order pipelines, and dashed boxes indicate register renaming and out-of-order issue / execution pipelines. Similarly, solid boxes in FIG. 4B indicate in-order architecture logic and dashed boxes indicate register rename logic and out-of-order issue / execution logic.In FIG. 4A, processor pipeline 400 has fetch stage 402, length decode stage 404, decode stage 406, allocation stage 408, rename stage 410, and schedule (also known as dispatch or issue) stage 412. 
Register read / memory read stage 414, execute stage 416, write back / memory write stage 418, exception handling stage 422, and commit stage 424.In FIG. 4B, the arrows indicate coupling between two or more units, and the direction of the arrows indicates the direction of data flow between those units. FIG. 4B shows processor core 490, which includes front end unit 430 coupled to execution engine unit 450, both of which are coupled to memory unit 470.Core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, core 490 may be a special purpose core such as, for example, a network or communication core, a compression engine, a graphic score, and the like.Front end unit 430 includes a branch prediction unit 432 coupled to instruction cache unit 434, which is coupled to instruction translation lookaside buffer (TLB) 436, which is an instruction translation lookaside buffer (TLB) 436 is an instruction Coupled to fetch unit 438, instruction fetch unit 438 is coupled to decode unit 440. The decoding unit or decoder may decode the instructions to generate one or more micro-operations, microcode entry points, micro-instructions, other instructions, or other control signals as output, which are the original instructions. Or otherwise reflect the original instruction or are derived from the original instruction. The decoder may be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. Instruction cache unit 434 is further coupled to level 2 (L2) cache unit 476 in memory unit 470. Decoding unit 440 is coupled to rename / allocator unit 452 in execution engine unit 450.Execution engine unit 450 includes a rename / allocator unit 452 coupled to a set of retired unit 454 and one or more scheduler unit (s) 456. Scheduler unit (s) 456 represent any number of different schedulers, including reservation stations, central instruction windows, and the like. Scheduler unit (s) 456 are coupled to physical register file (s) unit (s) 458. Each of the physical register file (s) unit 458 represents one or more physical register files, different ones of which store one or more different data types, the data types being, for example, scalar integers, scalar floating Such as a decimal point, packed integer, packed floating point, vector integer, vector floating point, etc. Status (eg, instruction pointer which is the address of the next instruction to be executed). Register Renaming and Out-of-Order Execution Various Aspects (eg Reorder Buffer (s) and Retirement Register File (s) Use, Future File (s), History Buffer (s) Physical register file (s) unit (s) 458 can be used with retire unit 454 to indicate multiple) and use of retire register file (s), use of register map and pool of registers, etc. It is piled up. In general, architectural registers are visible from outside the processor or from the programmer's perspective. The register is not limited to any known specific type of circuit. Various different types of registers are preferred as long as data can be stored and provided as described herein. 
Examples of suitable registers include, but are not limited to, dedicated physical registers, physical registers dynamically assigned using register renaming, combinations of dedicated physical registers and dynamically assigned physical registers, and the like. Retireer unit 454 and physical register file (s) unit (s) 458 are coupled to execution cluster (s) 460. Execution cluster (s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. Execution unit 462 performs various operations (eg, shift, add, subtract, multiply) on various types of data (eg, scalar floating point, packed integer, packed floating point, vector integer, vector floating point) You may go. While some embodiments may include several execution units dedicated to a particular function or set of functions, other embodiments include only one execution unit or all of them have full functionality. It may include multiple execution units to perform. Scheduler unit (s) 456, physical register file (s) unit (s) 458, and execution cluster (s) 460 are also shown as potentially possible. Because certain embodiments create separate pipelines for particular types of data / operations (eg, scalar integer pipeline, scalar floating point / packed integer / packed floating point / vector integer / The vector floating point pipeline and / or memory access pipelines each have their own scheduler unit, physical register file (s) unit and / or execution cluster, and in the case of separate memory access pipelines In the implementation of the particular embodiment, only the execution cluster of this pipeline has memory access unit (s) 464). Additionally, it should be understood that when separate pipelines are used, one or more of these pipelines may be out-of-order issue / execution and the rest may be in-order It is.The set of memory access units 464 is coupled to memory unit 470, memory unit 470 includes data TLB unit 472, data TLB unit 472 is coupled to data cache unit 474, and data cache unit 474 is a level 2 (L2) cache unit. It is combined with 476. In one exemplary embodiment, memory access unit 464 may include a load unit, a save address unit, and a save data unit, each of which is coupled to data TLB unit 472 in memory unit 470. L2 cache unit 476 is coupled to one or more other levels of cache and ultimately to main memory.As an example, the exemplary register renaming out-of-order execution / execution core architecture may implement pipeline 400 as follows. 
1) instruction fetch 438 performs fetch and length decoding stages 402 and 404, 2) decoding unit 440 performs decoding stage 406, 3) rename / allocator unit 452 performs allocation stage 408 and renaming stage 410 , 4) scheduler unit (s) 456 execute schedule stage 412, 5) physical register file (s) unit (s) 458 and memory unit 470 execute register read / memory read stage 414 Execution cluster 460 performs execution stage 416, 6) memory unit 470 and physical register file (s) unit (s) 458 execute write back / memory write stage 418, exception handling Stage May include various units for 22, 8) retire unit 454 and the physical register file (s) units (s) 458 to perform a commit phase 424.The core 490 includes one or more instruction sets (eg, x86 instruction sets (more recent versions with some extensions added), MIPS Technologies's MIPS Instructions Set in Sunnyvale, CA, Sunny, CA) ARM's ARM instruction set (with any additional extensions such as NEON) may be supported by Vert ARM Holdings.The core may support multithreading (performing two or more parallel sets of actions or threads), it is time slice multithreading, simultaneous multithreading (single physical core, but its physical core is simultaneous multithreading) Provide a logical core for each of the threads that do a), or a combination thereof (eg performing simultaneous multithreading after time slice fetching and decoding, such as, for example, Intel's Hyper-Threading Technology etc.) It should be understood that it may be done in any way.Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. Although the illustrated processor embodiment includes separate instruction and data cache units 434/474 and shared L2 cache unit 476, an alternative embodiment is a single internal cache for both instructions and data, eg, level 1 (L1) It may have an internal cache or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache external to the core and / or processor. Alternatively, all caches may be external to the core and / or processor.FIG. 5 is a block diagram of a single core processor and multi-core processor 500 with integrated memory controller and graphics in accordance with an embodiment of the present invention. The solid box in FIG. 5 shows a processor 500 having a single core 502A, a system agent 510, and a set of one or more bus controller units 516, while the optional addition of the dashed box is , An alternative processor 500 including a plurality of cores 502A-N, a set of one or more integrated memory controller unit (s) 514 in system agent unit 510, and integrated graphics logic 508.The memory hierarchy includes one or more levels of cache within the core, a set of one or more shared cache units 506, and external memory (not shown) coupled to the set of integrated memory controller units 514. Including. The set of shared cache units 506 may be configured as a final level cache (last level cache, such as one or more intermediate level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache level cache (LLC), and / or a combination thereof. 
In one embodiment, a ring based interconnect unit 512 interconnects the integrated graphics logic 508, the set of shared cache units 506, and the system agent unit 510, but alternative embodiments interconnect such units Any number of known techniques may be used to connect.In some embodiments, one or more of cores 502A-N are capable of multithreading. System agent 510 includes components that cause cores 502A-N to operate in concert. System agent unit 510 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or contain the necessary logic and components to adjust the power state of the cores 502A-N and integrated graphics logic 508. The display unit is for driving one or more externally connected displays.Cores 502A-N may be homogeneous or heterogeneous in terms of architecture and / or instruction set. For example, some of the cores 502A-N may be in-order and others may be out-of-order. As another example, two or more of cores 502A-N can execute the same set of instructions, while others can only execute a subset of that set of instructions or execute different sets of instructions May be possible.Processors may be, for example, CoreTM i3, i5, i7, 2 Duo and Quad, Quad, XeonTM, Itanium (Itanium) available from Intel Corporation of Santa Clara, California. The processor may be a general purpose processor, such as a) (trademark), XScale (trademark), or a StrongARM (trademark) processor. Alternatively, the processor may be of another company such as, for example, ARM Holdings, MIPS. The processor may be a special purpose processor such as, for example, a network or communication processor, a compression engine, a graphics processor, a co-processor, or an embedded processor. The processor may be implemented on one or more chips. The processor 500 may be part of and / or implemented on one or more substrates using any of a number of process technologies such as, for example, BiCMOS, CMOS or NMOS.FIGS. 6-8 are exemplary systems suitable for including the processor 500, and FIG. 9 is an exemplary system on a chip (SoC) that may include one or more of the cores 502. Laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set top boxes, micro Other system designs and configurations known in the art for controllers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices that can incorporate the processor (s) and / or other execution logic disclosed herein are generally suitable.Referring now to FIG. 6, a block diagram of a system 600 in accordance with one embodiment of the present invention is shown. System 600 may include one or more processors 610, 615, which are coupled to a graphics memory controller hub (GMCH) 620. The optional nature of the additional processor 615 is shown in dashed lines in FIG.Each processor 610, 615 may be any version of processor 500. However, it should be noted that no integrated graphics logic and integrated memory control unit would be present in the processor 610, 615. In FIG. 6, it is shown that GMCH 620 may be coupled to memory 640, which may be, for example, dynamic random access memory (DRAM). For at least one embodiment, a DRAM may be associated with a non-volatile cache.GMCH 620 may be a chipset or part of a chipset. 
GMCH 620 may communicate with processor (s) 610, 615 to control the interaction of processor (s) 610, 615 with memory 640. Additionally, GMCH 620 may act as an accelerated bus interface between processor (s) 610, 615 and other components of system 600. For at least one embodiment, GMCH 620 communicates with processor (s) 610, 615 via a multidrop bus, such as, for example, a frontside bus (FSB) 695.Additionally, GMCH 620 is coupled to display 645 (eg, a flat panel display, etc.). GMCH 620 may include an integrated graphics accelerator. GMCH 620 is further coupled to an input / output (I / O) controller hub (ICH) 650 that may be used to couple various peripheral devices to system 600. For example, in the embodiment of FIG. 6, an external graphics device 660, which may be a separate graphics device coupled to the ICH 650, and another peripheral device 670 are shown.Alternatively, additional processors or different processors may be present in system 600. For example, additional processor (s) 615 may be the same as processor 610, additional processor (s), additional processor (s) that may be disparate or asymmetric with respect to processor 610. , Accelerators (e.g., graphics accelerators or digital signal processing (DSP) units, etc.), field programmable gate arrays, or any other processor. There may be various differences between the physical resources 610, 615 in terms of a range of metrics of advantage, including architecture characteristics, micro-architecture characteristics, thermal characteristics, power consumption characteristics, and the like. These differences may effectively represent asymmetry and heterogeneity between the processors 610, 615. For at least one embodiment, various processors 610, 615 may be present in the same die package.Referring now to FIG. 7, a block diagram of a second system 700 in accordance with an embodiment of the present invention is shown. As shown in FIG. 7, multiprocessor system 700 is a point-to-point interconnect system and includes a first processor 770 and a second processor 780 coupled via point-to-point interconnect 750. As with one or more of processors 610, 615, each of processors 770 and 780 may be any version of processor 500.Although only two processors 770, 780 are illustrated, it should be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.Processors 770 and 780 are shown to include integrated memory controller units 772 and 782, respectively. In addition, processor 770 includes point-to-point (PP) interfaces 776 and 778 as part of its bus controller unit. Similarly, the second processor 780 includes PP interfaces 786 and 788. Processors 770, 780 may exchange information via point-to-point (PP) interface 750 using PP interface circuits 778, 788. As shown in FIG. 7, IMCs 772 and 782 couple processors to their respective memories, ie, memory 732 and memory 734, which may be part of main memory locally attached to each processor .Each of the processors 770, 780 may exchange information with the chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798. 
Chipset 790 may also exchange information with high performance graphics circuitry 738 via high performance graphics interface 739.The processor entered a low power mode by having a shared cache (not shown) included in either processor or connected to the processor outside of both processors but through a PP interconnect Sometimes, local cache information of either or both processors may be stored in the shared cache.Chipset 790 may be coupled to first bus 716 via interface 796. In one embodiment, the first bus 716 may be a Peripheral Component Interconnect (PCI) bus, for example a bus such as a PCI Express bus or another third generation I / O interconnect bus Although the scope of the invention is not so limited.As shown in FIG. 7, various I / O devices 714 may be coupled to the first bus 716, with the bus bridge 718 coupling the first bus 716 to the second bus 720. In one embodiment, the second bus 720 may be a low pin count (LPC) bus. A variety of devices may be coupled to the second bus 720, including, for example, a keyboard and / or mouse 722, a communication device 727, and a storage unit 728 such as, for example, a disk drive or other mass storage device. In an embodiment, storage unit 728 may include instructions / code and data 730. Additionally, an acoustic I / O 724 may be coupled to the second bus 720. It should be noted that other architectures are also possible. For example, instead of the point-to-point architecture of FIG. 7, the system may implement a multidrop bus or other such architecture.Referring now to FIG. 8, a block diagram of a third system 800 in accordance with an embodiment of the present invention is shown. Similar components in FIGS. 7 and 8 have similar reference numbers, and certain aspects of FIG. 7 are omitted in FIG. 8 to avoid obscuring other aspects of FIG.FIG. 8 illustrates that processors 870, 880 may include integrated memory and I / O control logic ("CL") 872 and 882, respectively. For at least one embodiment, CLs 872, 882 may include integrated memory controller units such as those described above in connection with FIGS. In addition, CL 872, 882 may include I / O control logic. In FIG. 8, not only is memory 832, 834 coupled to CL 872, 882, but I / O device 814 is also coupled to control logic 872, 882. Legacy I / O devices 815 are coupled to chipset 890.Referring now to FIG. 9, a block diagram of a SoC 900 in accordance with an embodiment of the present invention is shown. Components similar to FIG. 5 have similar reference numbers. In addition, the dashed box is an optional feature in the more advanced SoC. In FIG. 9, an interconnect unit (s) 902 includes an application processor 910 including a set of one or more cores 502A-N and a shared cache unit (s) 506, a system agent unit 510, and a bus. Controller unit (s) 516, integrated memory controller unit (s) 514, integrated graphics logic 508, image processor 924 for providing stationary and / or video camera functions, hardware acoustic acceleration provided Processor 926, and a set of one or more media processors 920 that may include a video processor 928 to provide video encoding / decoding acceleration, and static random access memory (SRAM) A unit 930, a direct memory access (DMA) unit 932 is coupled to a display unit 940 for coupling to one or more external display.FIG. 10 shows a processor that includes a central processing unit (CPU) and a graphics processing unit (GPU) that can execute at least one instruction in accordance with one embodiment. 
In one embodiment, instructions for performing an operation in accordance with at least one embodiment may be executed by a CPU. In another embodiment, the instructions may be executed by a GPU. In yet another embodiment, the instructions may be performed by a combination of operations performed by the GPU and CPU. For example, in one embodiment, instructions in accordance with one embodiment may be received at the GPU and decoded for execution. However, one or more operations in the decoded instruction may be performed by the CPU, and the result is returned to the GPU for final retirement of the instruction. Conversely, in some embodiments, the CPU may act as a primary processor and the GPU may act as a co-processor.In some embodiments, instructions that benefit from a highly parallel throughput processor may be executed by the GPU, while instructions that benefit from processor performance that benefits from a deeply pipelined architecture are CPU May be performed by For example, graphics, scientific applications, financial applications and other parallel workloads may benefit from the performance of GPUs and may be executed accordingly, such as operating system kernels or application code etc. A continuous application may be preferable to the CPU.In FIG. 10, a processor 1000 includes a CPU 1005, a GPU 1010, an image processor 1015, a video processor 1020, a USB controller 1025, a UART controller 1030, an SPI / SDIO controller 1035, a display device 1040, and a high resolution multimedia interface. (High-Definition Multimedia Interface: HDMI (registered trademark)) controller 1045, MIPI controller 1050, flash memory controller 1055, dual data rate (DDR) controller 1060, security engine 1065, I 2 S / I 2 C (integrated chip-to-chip sound / integrated circuit grated Interchip Sound / Inter-Integrated Circuit)) and an interface 1070. Other logic and circuitry may be included in the processor of FIG. 10, including more CPUs or GPUs and other peripheral interface controllers.One or more aspects of at least one embodiment may be realized by means of representational data stored on a machine readable medium representative of various logic in a processor, which data is read by a machine Have the machine create the logic to perform the techniques described herein. Such a representation, known as an "IP core", is created to actually create that logic or processor by being stored on a tangible machine readable medium ("tape") and supplied to various customers or manufacturing facilities. It may be loaded into the machine. For example, the IP core, such as the CortexTM family of processors developed by ARM Holdings, and the Loongson IP core developed by the Institute of Computer Technology (ICT) of the Chinese Academy of Sciences, may be At processors produced by those customers or licensors licensed or sold to various customers or licensors such as Texas Instruments, Qualcomm, Apple, or Samsung. It may be realized.FIG. 11 shows a block diagram illustrating the development of an IP core according to one embodiment. The storage unit 1130 includes simulation software 1120 and / or a hardware or software model 1110. In one embodiment, data representing an IP core design may be provided to storage 1130 via memory 1140 (eg, hard disk), wired connection (eg, Internet) 1150 or wireless connection 1160. 
The IP core information generated by the simulation tool and model may then be sent to the fabrication facility, where an IP core may be created by a third party to perform at least one instruction according to at least one embodiment. .In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and may be translated or emulated in a processor of a different type or architecture (e.g., ARM) It is also good. Thus, instructions in accordance with one embodiment may be executed on any processor or processor type, including ARM, x86, MIPS, GPU, or other processor type or architecture.FIG. 12 illustrates, according to one embodiment, how instructions of the first type are emulated by processors of different types. In FIG. 12, program 1205 includes several instructions that may perform the same function or substantially the same function as the instructions according to one embodiment. However, the instructions of program 1205 may be instructions of a type and / or format that is different or incompatible with processor 1215, which may prevent instructions of type program 1205 from being executed natively by processor 1215. It means that. However, with the help of emulation logic 1210, the instructions of program 1205 are converted into instructions that can be executed natively by processor 1215. In one embodiment, the emulation logic is embodied in hardware. In another embodiment, the emulation logic is embodied in a tangible machine readable medium that includes software for converting instructions of a type of program 1205 into types that can be executed natively by processor 1215. In another embodiment, the emulation logic is a combination of fixed function or programmable hardware and a program stored on a tangible machine readable medium. In one embodiment, the processor includes emulation logic, while in other embodiments the emulation logic resides outside the processor and is provided by a third party. In one embodiment, a processor can load emulation logic embodied in a tangible machine-readable medium, including software, by executing microcode or firmware included in or associated with the processor.FIG. 13 is a block diagram contrasting the use of a software instruction converter to convert a binary instruction of a source instruction set to a binary instruction of a target instruction set according to an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, but alternatively, the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 13 shows that high level language 1302 programs can be compiled using x86 compiler 1304 to generate x86 binary code 1306 that can be executed natively by processor 1316 having at least one x86 instruction set core. Indicates A processor 1316 having at least one x86 instruction set core is (1) intrinsic to the instruction set of the Intel x86 instruction set core to obtain substantially the same result as an Intel processor having at least one x86 instruction set core (2) by executing or otherwise processing object code versions or other software of an application targeted to run on an Intel processor with at least one x86 instruction set core Represent any processor that can perform substantially the same function as an Intel processor with at least one x86 instruction set core. 
The x86 compiler 1304 is for generating x86 binary code 1306 (eg, object code) that can be executed by additional link processing or without additional link processing in a processor 1316 having at least one x86 instruction set core Represents an operable compiler. Similarly, FIG. 13 shows processor 1314 without at least one x86 instruction set core (e.g., Sunnyvale, Calif.) As programs in high level language 1302 are compiled using alternative instruction set compiler 1308. An alternative instruction set binary code 1310 is generated that can be executed natively by MIPS Technologies' MIPS instruction set and / or a processor with a core that executes the ARM instruction set of Sunnyvale's ARM Holdings Inc. Indicates to get. The instruction converter 1312 is used to convert the x86 binary code 1306 into code that can be executed natively by the processor 1314 without the x86 instruction set core. This translated code is not considered the same as the alternative instruction set binary code 1310. Because it is difficult to make an instruction converter that can do that. However, the translated code will perform general operations and will consist of instructions from the alternative instruction set. Thus, the instruction converter 1312 may be software, firmware, hardware that allows a processor or other electronic device without an x86 instruction set processor or core to execute the x86 binary code 1306 through emulation, simulation or any other process. Ware or a combination thereof.FIG. 14 shows an embodiment of a processing system 1401 for using instructions to provide advanced paging capabilities for the secure enclave page cache EPC 1460. System 1401 includes system memory 1490 and processor 1402. Processor 1402 includes a first hardware thread or logical processor 1420 and a second hardware thread or logical processor 1430. Although processor 1402 is shown as including two logical processors, each representing a single hardware thread, for simplicity, it will be appreciated that the present invention is not so limited. For example, it is typical for a processor such as processor 1402 or other processors shown herein to have several logical processor cores, which are some physical resources (e.g. EPC 1460) and / or The logic processors or processor cores may or may not share circuitry (eg, SE unit 1470), and have multiple hardware threads that can execute software threads simultaneously or simultaneously.In addition, processor 1402 includes a secure enclave (SE) unit 1470 and an enclave page cache EPC 1460. For some embodiments, the EPC 1460 may be part of a larger cache unit, such as, for example, one or more Level 1 caches 1440 and 1450, or Level 2 cache (not shown). In another embodiment, the EPC 1460 comprises a plurality of hardware for storing secure data for addresses of shared pages 1442, 1444 and 1456 assigned to secure enclaves accessible by hardware threads, logical processors or processing cores. It may be a separate structure or distributed structure (eg, cache 1440 and cache 1450) shared by wear threads, logical processors or processing cores.Similarly, SE unit 1470 may be used to store an encryption unit, an integrity protection unit, an access control unit, a range register, an enclave page cache mapping, and at least two previous epochs and a current epoch. And epoch counter storage locations, and may include separate structures or distributed structures (eg, SE units 1427 and 1437) shared by multiple hardware threads, logical processors or processing cores. 
In addition, SE unit 1470 supports enclave instructions to provide advanced paging capabilities for secure enclave page caches.In this example, logic processor 1420 includes a decoding stage 1422, a reading stage 1424, one or more execution units (eg, execution unit 1426), and a writing stage 1428. In addition, the logical processor 1420 has a TLB 1425, in which translations for accessing the EPC 1460 may be installed. Logic processor 1430 includes a decode stage 1432, a read stage 1434, one or more execution units (eg, execution unit 1436), and a write stage 1438. In addition, the logical processor 1430 has a TLB 1435, in which translations for accessing the EPC 1460 may be installed. Embodiments of logical processors 1420 and 1430 are further illustrated at other pipeline stages (eg, pipeline 400) to execute enclave instructions to provide advanced paging capabilities for secure enclave page cache EPC 1460. May be included.By using enclave instructions to provide advanced paging capabilities for secure enclave page caching, paging processes (eg, secure enclave page cache memory content is encrypted and written back, new pages are loaded from memory (Decrypted and decoded, TLB entries are flushed and replaced, etc.) can be divided into multiple stages, where processor cores or logical processors (eg, logical processors 1420 and 1430) may be used during one or more stages. It will be recognized that there is only a brief interruption to Thus, the performance degradation due to the paging process may be reduced while ensuring the security of secure enclave data and without requiring undue complexity and design effort.In one embodiment, the EBLOCK instruction specifies the address of a shared page (eg, page 1442) as an operand. One or more execution units (e.g. execution unit 1426) may mark multiple entries corresponding to the enclave page cache mapping to the shared page address to allow any of multiple hardware threads, logical processors or processing cores to Block creation of new TLB translations (e.g., in TLB 1435) to access shared pages. In one embodiment, the ETRACK instruction specifies a secure enclave as an operand, and one or more execution units (eg, execution unit 1426) are currently accessing secure data in enclave page cache EPC 1460 corresponding to the secure enclave. Record hardware threads. For example, in one embodiment, the enclave may have two or more epoch counters, thereby recording the number of hardware threads currently accessing secure data in the current epoch of the secure enclave, and then The number may be copied to the previous epoch counter (eg, in response to an ETRACK instruction) to initialize a new epoch without a hardware thread as a new current epoch.The OS may then send an IPI to any hardware thread, logical processor or processing core currently accessing secure data in the enclave page cache corresponding to the secure enclave. In one embodiment, each hardware thread, logical processor or processing core (e.g., logical processors 1420 and 1430) currently accessing secure data corresponding to a secure enclave secure enclaves by an EENTER or ERESUME instruction specifying the secure enclave. The epoch number would have been associated with a hardware thread, logical processor or processing core at that time. When a hardware thread, logical processor or processing core approves an IPI and exits the secure enclave, those TLB translation (s) are flushed (e.g., from TLB 1425 and / or TLB 1435). 
Each time a hardware thread from the previous epoch exits the secure enclave (eg, by an EEXIT or AEX instruction), the number of hardware threads recorded in the previous epoch counter is reduced.When the number of hardware threads recorded reaches zero, one or more pages (e.g., page 1442) are evicted, data is encrypted, and they are stored (e.g., as encrypted page 1495) in memory or non-volatile storage Writing back to is safe for the OS. In one embodiment, the OS completes eviction using an EWRITE BACK or EWB instruction that specifies the address of the shared page (eg, page 1442) as an operand, encrypts secure data, and makes the page into memory or non-volatile storage. You may write it back. Because enclave protection of secure data may not trust the OS, one embodiment of the EWRITE BACK or EWB instruction may fail when the number of recorded hardware threads from the immediately preceding epoch does not reach zero. In another alternative embodiment, an EWRITE BACK or EWB instruction may wait for execution or result in an exception until the number of recorded hardware threads reaches zero. In one embodiment, the OS may then use the ELOAD instruction to read a new page (eg, page 1410) from memory or non-volatile storage, decrypt the data, and save the decrypted page to EPC 1460. Good. Thus, multiple stages of the paging process (eg, secure enclave page cache memory content is encrypted and written back, new pages are loaded and decrypted from memory, TLB entries are flushed and replaced, etc.) , Where processor cores or logic processors (eg, logical processors 1420 and 1430) are only briefly interrupted (eg, by IPI) between one or more stages.FIG. 15 shows an embodiment of an apparatus within a processor 1501 for using instructions to provide advanced paging capabilities for a secure enclave page cache. The apparatus includes a secure enclave (SE) unit 1502 and an enclave page cache EPC 1520. For some embodiments, the EPC 1520 may be part of a larger cache unit, such as, for example, Level 1 Cache L1 1540, or Level 2 Cache (not shown). In another embodiment, the EPC 1520 is a plurality of hardware threads, logic for storing secure data for the address of the shared page 1542 assigned to the secure enclave accessible by the hardware thread, logical processor or processing core. It may be a separate or distributed structure shared by the processor or processing core. The SE unit 1502 includes an encryption unit 1510, an integrity protection unit 1512, an access control unit 1514, a range register 1516, an enclave page cache mapping EPC 1518, and two or more epoch counter locations, ie, previous. It may include an epoch (previous epoch) PE 1517 and a current epoch CE 1519. Furthermore, the SE unit 1502 may include an enclave instruction 1503, which is not shown, and the EBRAVE instruction 1531, ETRACK instruction 1532, EWB instruction 1533, ELOAD instruction 1534, EEXIT instruction 1535, and EENTER instruction 1536. And other enclave instructions (eg, AEX instruction, ERESUME instruction, etc.).In addition, processor core 1501 includes TLB 1525, where translations for accessing EPC 1520 may be installed. Processor core 1501 further includes a decoding stage 1522, a reading stage 1524, one or more execution units (eg, execution unit 1526), and a writing stage 1528. The embodiment of processor core 1501 is further shown in another pipeline stage (eg, pipeline 400) to execute enclave instructions 1503 to provide advanced paging capabilities for secure enclave page cache EPC 1520. 
) May be included.In one embodiment, the EBLOCK instruction 1531 specifies the address of the shared page 1542 as an operand. One or more execution units (e.g. execution unit 1526) mark the entry corresponding to the enclave page cache mapping in EPCM 1518 to the address of the shared page 1542 so that the hardware thread, logical processor or processing core can Block the creation of a new TLB translation (eg, in TLB 1525 or any other TLB) to access shared pages. In one embodiment, the ETRACK instruction 1532 specifies a secure enclave as an operand, and one or more execution units (eg, execution unit 1526 or access control unit 1514) are secure in the enclave page cache EPC 1520 corresponding to the secure enclave. Record hardware threads currently accessing data. For example, in one embodiment, the enclave may have two or more epoch counters (eg, PE 1517 and CE 1519), thereby currently accessing secure data in the current epoch of secure enclaves (eg, CE 1519) Record the number of hardware threads, then copy the number to the previous epoch counter (eg PE1517) and initialize a new epoch with no hardware thread as a new current epoch (eg CE1519) May beThe OS may then send an IPI to any hardware thread, logical processor or processing core currently accessing secure data in the enclave page cache EPC 1520 corresponding to the secure enclave. Each hardware thread, logical processor or processing core currently accessing secure data corresponding to a secure enclave was that which entered the secure enclave by an EENTER (or ERESUME) instruction 1536 specifying the secure enclave, at that time the epoch The number would have been associated with a hardware thread, logical processor or processing core. When a hardware thread, logical processor or processing core approves an IPI and exits the secure enclave, those single or multiple TLB translations are flushed (e.g., from TLB 1525). Each time a hardware thread from the previous epoch (for example, corresponding to PE 1517) exits the secure enclave by EEXIT (or AEX) instruction 1535, the number of hardware threads recorded in the previous epoch counter (eg, PE 1517) Is reduced.When the number of hardware threads recorded (eg, in PE 1517) reaches zero, one or more pages (eg, shared page 1542) are retired and the data is encrypted and stored in memory or non-volatile storage Writing back is safe for the OS. In one embodiment, the OS uses EWB (or EWRITEBACK) instruction 1533 to specify the address of the shared page 1542 as an operand, completes the eviction, encrypts secure data, and writes the page 1542 back to non-volatile storage. It is also good. Because secure data enclave protection may not trust the OS, one embodiment of the EWB instruction 1533 may fail if the number of recorded hardware threads from the immediately preceding epoch (eg, PE 1517) does not reach zero. . In another alternative embodiment, EWB instruction 1533 may wait for execution until the number of hardware threads recorded (eg, in PE 1517) reaches zero, or EWB instruction 1533 may result in an exception.The management of permissions, changes in physical memory and / or mappings may still be managed by the OS, but as with secure enclaves, when the memory content is protected, the OS will actually protect the content of the enclave private memory It will be appreciated that no permission or trust to gain access is obtained. 
Guaranteeing the security and / or integrity of private memory content, and limiting the amount of physical memory (eg EPC 1520 or EPC 1460) to support larger protected enclave private memory space when the OS can not be trusted Managing the technical constraints of using does not require sophisticated hardware support and / or design effort, and provides instruction and processing logic to provide advanced paging capabilities for secure enclave page caches. It can be achieved in the stepwise manner used.FIG. 16 shows a flow diagram for one embodiment of a process 1601 for providing advanced paging capabilities for secure enclave page caching. Process 1601 and the other processes disclosed herein may be implemented by processing blocks that may include dedicated hardware or software or firmware opcodes that may be executed by a general purpose machine or a special purpose machine or a combination of both. It will be.At processing block 1610 of process 1601 a secure enclave is created to protect private data and / or instructions. At processing block 1620, EPC pages are assigned to secure enclaves. At processing block 1625, it is determined whether paging is required. If not required, EPC pages continue to be assigned to the secure enclave in processing block 1620, where secure data is shared by the secure enclave, accessible by multiple hardware threads executing within the secure enclave. It may be stored in the EPC line for the page address. In the other case, at processing block 1630, one or more EBLOCK instructions are executed, each EBLOCK instruction in one embodiment designating a shared page address as an operand. At processing block 1640, an ETRACK instruction is executed, which in one embodiment specifies a secure enclave. At processing block 1650, the IPIs are sent to each running logical processor in the secure enclave to leave them from the secure enclave. Approval of the IPI is confirmed at processing block 1660 and it is determined at processing block 1665 whether all IPIs have been approved. If not, processing continues at processing block 1660, but if all IPIs are approved, processing proceeds to processing block 1670. At processing block 1670, one or more EWB instructions are executed, each EWB instruction in one embodiment designating one of the blocked shared page addresses as an operand. At processing block 1680, one or more ELOAD instructions are executed, each ELOAD instruction in one embodiment specifying a new shared page address as an operand. Processing then repeats from processing block 1625.FIG. 17 shows a flow diagram for an alternative embodiment of a process 1701 for providing advanced paging capabilities for secure enclave page caching. In processing block 1710 of process 1701 an entry for a shared page is marked (eg, in response to an EBLOCK instruction specifying the shared page address as an operand) to block creation of a new translation in any TLB . At processing block 1720, the hardware thread, logical processor or processing core currently accessing secure data in the secure enclave (eg, in response to an ETRACK instruction specifying the secure enclave as an operand) is recorded. At processing block 1730, when any thread exits the secure enclave (eg, using an EEXIT or AEX instruction), the number of threads recorded is reduced. At processing block 1735, it is determined whether the number of threads recorded is currently zero. 
If not, processing continues at processing block 1730, but if the number of threads recorded is currently zero, processing proceeds to processing block 1740. At processing block 1740, secure data for the shared page is evicted, and at processing block 1750, secure data for the evicted shared page is encrypted (eg, in response to an EWRITE BACK or EWB instruction specifying the shared page address as an operand). Be done. The encrypted secure data for the retired shared page is then written back to memory or non-volatile storage at processing block 1760. At processing block 1770, new pages of secure enclaves are allocated free storage. At processing block 1780, secure data for the new page is decrypted (eg, in response to an ELOAD instruction specifying a new shared page address as an operand).FIG. 18A shows a flow diagram for another embodiment of a process 1801 for providing advanced paging capabilities for secure enclave page caching. At processing block 1810 of process 1801, multiple hardware threads are executed (eg, in a multi-threaded processor). At processing block 1820, secure data is stored in a cache for shared pages assigned to secure enclaves accessible by multiple threads. At processing block 1830 of process 1802, the EBLOCK instruction is decoded, and in one embodiment, the EBLOCK instruction designates the shared page address as an operand. At processing block 1840, an entry for the shared page is marked to block creation of a new translation in any TLB. At processing block 1850, the hardware thread, logical processor or processing core currently accessing secure data in the secure enclave is recorded. At processing block 1860 of process 1803, an ETRACK instruction is decoded, and in one embodiment, the ETRACK instruction specifies a secure enclave as an operand. At processing block 1870, when any thread exits the secure enclave (eg, using an EEXIT or AEX instruction), the number of threads recorded is reduced. At processing block 1880, it is determined whether the number of threads recorded is currently zero. If not, processing continues at processing block 1870, but if the number of threads recorded is currently zero, processing proceeds to processing block 1890. At processing block 1890, secure data for the shared page is paged out to memory or non-volatile storage (eg, in response to an EWRITE BACK or EWB instruction specifying the shared page address as an operand).By using enclave instructions to provide advanced paging capabilities for secure enclave page caching, paging processes (eg, secure enclave page cache memory content is encrypted and written back, new pages are loaded from memory Divided and decoded, TLB entries are flushed and replaced, etc.) can be divided into multiple stages, where the processor core or logical processor is interrupted for only a short time between one or more stages. Thus, the performance degradation due to the paging process may be reduced while ensuring the security of secure enclave data and without requiring undue complexity and design effort.FIG. 18B shows a flow diagram for another embodiment of a process 1804 for providing advanced paging capabilities for secure enclave page caching. At processing block 1810 of process 1804, multiple hardware threads are executed (eg, in a multi-threaded processor). At processing block 1820, secure data is stored in a cache for shared pages assigned to secure enclaves accessible by multiple threads. 
At processing block 1830 of process 1805, the EBLOCK instruction is decoded, and in one embodiment, the EBLOCK instruction specifies the shared page address as an operand. At processing block 1840, an entry for the shared page is marked to block creation of a new translation in any TLB. At processing block 1860 of process 1806, the ETRACK instruction is decoded, and in one embodiment, the ETRACK instruction specifies a secure enclave as an operand. At processing block 1850, the hardware thread, logical processor or processing core currently accessing secure data in the secure enclave is recorded. At processing block 1870, when any thread exits the secure enclave (eg, using an EEXIT or AEX instruction), the number of threads recorded is reduced. At processing block 1880, it is determined whether the number of threads recorded is currently zero. If not, processing continues at processing block 1870, but if the number of threads recorded is currently zero, processing proceeds to processing block 1890. At processing block 1890, secure data for the shared page is paged out to memory or non-volatile storage (eg, in response to an EWRITE BACK or EWB instruction specifying the shared page address as an operand).Thus, although permission management, physical memory and / or mapping changes may still be managed by the OS, the OS does not have the permission or trust to access the actual protected content of the enclave private memory. Ensuring the security and / or integrity of private memory content, as well as managing the technical constraints of using a limited amount of physical memory to support a larger protected enclave private memory space, is elaborate This can be accomplished in a step-by-step manner with instructions and processing logic to provide advanced paging capabilities for secure enclave page caching without requiring any hardware assistance and / or design effort. In some alternative embodiments of the process 1804 and other processes disclosed herein, the processing blocks shown as being performed in a particular order may be performed in another order, if possible. It will be appreciated that it may be performed simultaneously or in parallel with each other.Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the present invention are programmable systems that include at least one processor, a storage system (including volatile and non-volatile memory and / or storage components), at least one input device, and at least one output device. May be realized as a computer program or program code executed onProgram code may be applied to input instructions for performing the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purpose of this application, the processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. In addition, if desired, the program code may be implemented in assembly or machine language. In practice, the mechanisms described herein are not limited in scope to any particular programming language. 
In all cases, the language may be a compiled or interpreted language.One or more aspects of the at least one embodiment may be realized by means of expressive instructions stored on a machine readable medium representing various logic in the processor, the instructions being as read by the machine Cause the machine to create logic to perform the techniques described herein. Such expressions are known as "IP cores" and are stored on tangible machine-readable media and supplied to various customers or manufacturing facilities to be loaded into the manufacturing machine that actually creates the logic or processor. It is also good.Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangement of articles manufactured or formed by a machine or device, such as hard disks, as well as floppy disks, optical disks, compacts Disc-read-only-memory (CD-ROM), compact disc-rewritable's (CD-RW), and any other type of disc, including magneto-optical discs, semiconductor devices such as read-only-memory (ROM), Erasable programmable read only memory (EPROM), random access memory (RAM), eg dynamic random access memory (DRAM) and static random access memory (SRAM) Sshumemori and electrically erasable programmable read only memory (EEPROM), it may include a storage medium such as any other type of media suitable for storing a magnetic or optical cards, or electronic instructions.Thus, embodiments of the present invention further define features of the structures, circuits, devices, processors and / or systems described herein, such as instructions such as Hardware Description Language (HDL) or Includes non-transitory tangible machine-readable media, including design data. Such an embodiment may be referred to as a program product.In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter translates (translates), transforms, instructions into one or more other instructions processed by the core (eg, using static binary translation, dynamic binary translation including dynamic compilation) It may be emulated or otherwise converted. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or partially on processor and partially off processor.Thus, techniques are disclosed for executing one or more instructions in accordance with at least one embodiment. Although certain illustrative embodiments are described and illustrated in the accompanying drawings, such embodiments are merely illustrative and not intended to limit the broad invention, and the present invention may be embodied in a particular configuration as illustrated and described. It should be understood that the invention is not limited to this and the arrangement. This is because various other modifications may occur to one of ordinary skill in the art upon reviewing the present disclosure. As in the present technology, in the technical field where growth is fast and no further progress is easily foreseen, it is facilitated by allowing technical progress without departing from the principle of the present disclosure or the scope of the appended claims. As such, the arrangement and details of the disclosed embodiments may be readily modifiable. |
A system and method for controlling output buffer drive enable signals on a parallel terminated bus is described. The method includes transferring data between a first agent and a second agent, tracking outstanding data requests from the first agent with at least one synchronous counter, tracking outstanding data replies from the second agent with a source synchronous counter, and driving a signal on the parallel terminated bus when the synchronous and source synchronous counters match. |
What is claimed is: 1. A method comprising:transferring data between a first agent and a second agent over a parallel terminated bus; tracking outstanding data requests from the first agent to the second agent with at least one synchronous counter; tracking outstanding data replies from the second agent with a source synchronous counter; and driving a signal on the parallel terminated bus when the synchronous and source synchronous counters match. 2. The method of claim 1, further comprising sending cut-off signals after a preset number of clock cycles after each transaction is initiated by a core on the parallel terminated bus.3. The method of claim 1, further comprising performing a latch-back operation to prevent the parallel terminated bus from floating.4. The method of claim 3, wherein a latch-back operation occurs when a core recognizes that there will be no overlap between post-drive and pre-drive signals of two transaction groups.5. The method of claim 3, wherein the latch-back operation occurs when the spacing between transaction groups is greater than a predetermined number of cycles.6. The method of claim 5, wherein the predetermined number of cycles is a function of the number of pre-drive and post-drive cycles.7. The method of claim 3, wherein replies are no longer tracked when a latch back indication is received.8. The method of claim 1, further comprising capturing the last value present on the parallel terminated bus in case a latch-back operation is required.9. The method of claim 1, further comprising utilizing a knob to adjust a processor drive operation.10. The method of claim 9 wherein the knob is used to adjust at least one of a processor drive cut-off operation and a processor drive latch-back operation.11. The method of claim 1, further comprising performing a latch-back operation when a bus agent recognizes that there will be no overlap between post-drive and pre-drive signals of two transaction groups.12. The method of claim 1, further comprising performing a latch-back operation when the spacing between transaction groups is greater than four clock cycles.13. A method for operating a computer system comprising:transferring data between a processor and a cache over a parallel terminated bus; tracking data requests from the processor to the cache with at least one synchronous counter; tracking data replies from the cache with a source synchronous counter; and driving a signal on the parallel terminated bus when the synchronous and the source synchronous counters match. 14. The method of claim 13, further comprising sending cut-off signals from the processor after a preset number of clock cycles after each transaction is initiated.15. The method of claim 13, further comprising performing a latch-back operation to prevent the bus from floating.16. The method of claim 15, wherein the latch-back operation is performed when the processor recognizes that there will be no overlap between post-drive and pre-drive signals of two transaction groups.17. The method of claim 15, wherein the latch-back operation occurs when the spacing between transactions is greater than a predetermined number of clock cycles.18. The method of claim 17, wherein the predetermined number of clock cycles is a function of the number of pre-drive and post-drive cycles.19. The method of claim 15, wherein replies are no longer tracked when a latch-back indication is received.20. 
The method of claim 13, further comprising capturing the last value present on the bus in case a latch-back operation is required.21. The method of claim 13, further comprising utilizing a knob to adjust control values associated with system performance.22. The method of claim 21, wherein the knob comprises at least one of a cut-off knob and a latch-back knob.23. The method of claim 13, further comprising performing a latch-back operation when the processor recognizes that there will be no overlap between post-drive and pre-drive signals of two transaction groups.24. The method of claim 13, further comprising stopping the tracking of replies when a latch-back indication is received from the processor core.25. A parallel terminated system comprising:a first agent; a second agent; and a parallel terminated bus coupling the first agent to the second agent, wherein the first and second agents are capable of driving data, control, strobe and other signals onto the bus, wherein at least one of the agents drives at least the strobe signals continuously onto the bus to ensure that a signal will not float, and wherein at least one agent comprises: a latch-back controller; a drive enable circuit connected to the latch-back controller; and an input/output buffer circuit connected to the drive enable circuit and to the bus, for driving a signal onto the bus when required. 26. The system of claim 25, wherein the latch-back controller further comprises:at least one synchronous strobe counter for tracking the first agent transactions; first and second source synchronous counter circuits for tracking outstanding second agent replies; a comparator circuit connected to the source synchronous strobe counter and to the first and to the second synchronous counter circuits; and an And circuit connected to outputs of the comparator and of the first synchronous counter circuit, for generating a latch-back pulse. 27. The system of claim 25, wherein the drive enable circuit comprises an asynchronous reset-preset priority flip-flop.28. The system of claim 25, wherein the input/output buffer circuit comprises:a multiplexer circuit connected to the drive enable circuit; and a plurality of buffers for storing and releasing a value that appeared on the bus or another value depending upon instructions from a processor core. 29. Circuitry for a processor based system, comprising;a latch-back controller; a drive enable circuit connected to the latch-back controller; and an input/output buffer circuit connected to the drive enable circuit and to a parallel terminated bus, for driving a signal onto the bus when required to ensure that a signal will not float on the bus. 30. The system of claim 29, wherein the latch-back controller comprises:at least one synchronous strobe counter for tracking processor transactions; first and second source synchronous counter circuits for tracking outstanding cache replies; a comparator circuit connected to the source synchronous strobe counter and to the first and to the second synchronous counter circuits; and an And circuit connected to outputs of the comparator and of the first synchronous counter circuit, for generating a latch-back pulse. 31. The system of claim 29, wherein the drive enable circuit comprises an asynchronous reset-preset priority flip-flop.32. 
The system of claim 29, wherein the input/output buffer circuit comprises:a multiplexer circuit connected to the drive enable circuit; and a plurality of buffers for storing and releasing a value that appeared on the bus or another value depending upon instructions from a processor core. |
BACKGROUND OF THE INVENTIONThe invention pertains to an apparatus and technique for operating a computer system. In particular, apparatus and techniques for controlling output buffer drive enable signals on a parallel terminated bus are described.FIG. 1 is an example of a bi-directional parallel terminated bus system 10. Drivers 12 have an impedance Zo and are operable to drive a signal on a bus 14 to receivers 16. The bus 14 is terminated at each receiver 16, 18 through a resistor R1 between the bus and the source power supply Voltage Vs and through a resistor R2 between the bus and ground. The bus is therefore biased at a midpoint voltage when not driven by a driver 12. Such a configuration makes the rise and fall times of signals on the bus symmetrical, which is desirable in a source synchronous environment. The parallel terminated bus may be unidirectional, bi-directional or multidirectional.Any data exchange between drivers and receivers of two entities, such as between a processor and a memory device which may be on separate chips is typically accomplished in a synchronous manner. That is, the chips have internal clocks that are sufficiently in alignment with each other so that data may be acquired on clock signal transitions. In addition, data exchanges may be accomplished source-synchronously, which means that the exchanges are based on strobe signal transitions that have been derived from a clock signal and are synchronized to their corresponding data.A parallel termination protocol has been developed to ensure correct data signal operation for two or more bus agents across a large operating range. A parallel termination protocol may also be suitable for use with other entities that drive and receive data in a parallel environment. In an implementation, the parallel termination protocol requires that a signal must be driven at all times to prevent a signal from floating to an unspecified logic level. If certain signals such as strobe signals were permitted to float, then the system would become unreliable. Such an occurrence may cause a fatal functional error in the system due to data transmission errors. To avoid such occurrences, the parallel terminated protocol may specify that a bus agent designated as the default bus master will synchronously time the drive cut-off points to occur when another bus agent would drive a signal onto the bus, for example, to return data requested by the bus master. The parallel terminated protocol may also specify that the default bus master is to source-synchronously latch the value on the bus, turn On its drivers, and drive the latched value back onto the bus on the arrival of the last strobe signal for the reply sent by the cache.Although a parallel terminated protocol for high speed processor systems may readily be defined, a need exists for techniques and apparatus to implement the protocol over a wider range of operating frequencies with cleaner signal transitions.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a simplified diagram illustrating a parallel terminated bus system.FIG. 2 is a simplified block diagram of a parallel terminated bus system according to the invention.FIG. 3 is a timing diagram of a back-side bus (BSB) center-tapped terminated (CTT) protocol read cycle in a zero transmission line delay environment.FIG. 4A is a timing diagram to illustrate CTT cut-off and latch-back operation according to the invention.FIG. 4B is a block diagram of a system according to the invention for driving a signal onto the BSB.FIG. 
5 is a block diagram of a latch-back controller according to the invention.FIG. 6 is a block diagram of a drive enable circuit according to the invention.FIG. 7 is a block diagram of an input/output buffer circuit according to the invention.FIG. 8 is a schematic diagram of an implementation of a CTT controller.FIG. 9 is an implementation of a strobe pulse generation circuit.FIGS. 10 and 11 illustrate implementations and of a preset-reset priority flip-flop.FIGS. 12 and 13 are circuit diagrams of an implementation of an Input/Output buffer control circuit and a logic control circuit.FIG. 13 illustrates how the ratio logic controls the flow of data from "Core2Pad" and "SensedPad" to "IO2Pad".DETAILED DESCRIPTIONFIG. 2 is a simplified block diagram of a center tapped terminated system 20 including a first agent 22 connected to a second agent 26 via a parallel terminated bus 24. The first and second agents are capable of driving data, control, address, strobe and other signals onto the bus, and are configured to capture the various data signals. At least the strobe signals are continuously driven on the bus 24 by drivers (not shown) associated with either the first agent 22 or the second agent 26. In general, when the last strobe signal from the first agent is received, the second agent proceeds to source-synchronously turn On its driver. When data or strobes are expected from the second agent, the drivers of the first agent are to be synchronously turned Off.In an implementation, the first agent 22 may be a processor, the second agent 26 may be a cache, and the bus 24 may be a back-side cache bus (BSB). These components may be included on a single integrated circuit chip or may be separate components located on different chips. Both the processor and cache include drivers and receivers for driving signals onto, and receiving signals from, the BSB. It should be understood, however, that the novel processes and circuitry described below could be implemented in other interface bus configurations that do not employ a BSB. Furthermore, although the example implementation described below includes a center-tapped terminated (CTT) bus, any type of parallel termination bus circuitry could be used. In addition, unless otherwise noted, references to drivers may include both data drivers and strobe drivers.Receivers depend upon the BSB signal integrity to be such that a strobe signal transition will not be detected when one is not occurring. If a strobe signal is permitted to float on the BSB, then signal integrity may be compromised. A fatal system error may then occur that may require an undesirable re-boot of the processor. One circumstance that may cause a strobe signal to float is when there is a bus master changeover, defined as when the processor stops driving signals and the cache starts driving signals, or vice-versa. If a strobe signal, for example, is left to float to an undetermined value, then the logic or the circuit located after the receiver may begin malfunctioning. Even if a receiver is not expecting to acquire data, if a strobe signal or any other signal is permitted to float on the BSB then power will be wasted because of the large crowbar current in the receiver.FIG. 3 is an illustrative implementation of a timing diagram 30 of a BSB, CTT protocol read cycle in a 1:1 ratio of the processor core clock 31 to the BSB strobe signals in a zero transmission line delay environment. This environment allows signals to be described in an idealized manner where propagation delays are ignored. 
The 1:1 ratio of the core clock 31 to the data strobes has been chosen here for ease of understanding, and indicates that the switching speed is substantially simultaneous. It should be understood that other ratios could be used, and that in real world operation, the ratio could be different and the line delay may be significant.Referring to the example transactions of FIG. 3, the data signals 32 and strobe signals 34 are bi-directional signals between the processor and the cache. The CTT protocol for the cache regarding data signals 32 specifies a pre-drive window 36 and a post-drive window 38 of 2 clock cycles for the strobe signals 34. The pre-drive and post drive windows are designed to prevent a signal from floating on the BSB during a master changeover by ensuring drive overlap between the masters on the BSB even at the highest possible operating frequency and over an entire operational range of frequencies. As frequency increases, so does the transmission line delay in terms of clock cycles, which causes the drive overlap to diminish and may cause the drive overlap to disappear. If the drive overlap disappears then a strobe signal may float on the BSB. Another possible situation that may cause a strobe signal to float is when the cache responds back-to-back to a request for data with a spacing that results in a non-overlap condition of the post-drive signal from the first reply and the pre-drive signal of the second reply. This case should be recognized so that the processor will drive a signal on the BSB to prevent a strobe signal floating condition.A CTT protocol requires that both the bi-directional data signals 32 and strobe signals 34 be continuously driven on the BSB by the processor or the cache. On reception of the "last strobe" signal for a particular transaction, the processor should asynchronously (source-synchronous with the strobe) turn On its driver. The "last strobe" signal may be defined as the strobe signal for a transaction that is not closely followed by strobe signals of another transaction, which concept will be explained in more detail below with reference to FIG. 4. In FIG. 3, the last strobe signal is depicted at 37. When the processor next expects to receive data and/or strobe signals from the cache, then it should synchronously turn its drivers Off. Thus, the processor cut-off points should be synchronously timed to occur at the earliest time that the processor expects to receive data on the BSB. Although in theory this seems simple to accomplish, under actual system operating conditions it is difficult to establish if a particular strobe signal edge is the "last strobe" that requires generation of a latch-back signal. In the following discussion, the latch-back signal is defined as the signal resulting from a latch-back process which occurs when a receiving agent captures the value on the bus from the transmitting agent, for use in driving it back on the bus if required. The described technique can also be used to prevent signals from floating on a unidirectional bus or signal line.Referring again to FIG. 3, when the system is first turned On, the processor chip will drive strobe signals 34 and data signal 32 on a bi-directional bus. The cache will drive strobe signals 40 on a unidirectional bus. The processor knows when to capture data 41 to 44 from the cache based on the cache strobe signals 40. Thus, at power-up the processor drivers and cache drivers send off strobe signals so that there are no concerns about floating signals. 
But, when a changeover of control is to occur, then care must be taken in most cases to insure that a signal does not float on a bus. However, there are cases when no special handling is required, such as when the last of the strobe signals 40 arrives, because the cache will drive that signal low and prevent any system failures. In other cases where a strobe floats, then one of the agents, such as a cache, may be able to reset itself before any further requests are made, which would correct any problems caused by floating signals.FIG. 4A is a timing diagram 50 illustrating CTT protocol cut-off and latch-back operation signals for a processor and a cache when operating at two extremes: the processor reading data at a high frequency (in this example 733 Megahertz (MHz)) and at a low frequency (in this example 1 Hertz (Hz)). In the case of high frequency operation, data signals 52 designated as A1-A4 and B1-B4 are followed after a delay 54 by data signals 56 designated as C1-C4. Included with the data signals are counterpart strobe signals 53 and 55, which may be differential signals (e.g. one is active high and the other is active when low) to increase system reliability and performance. For low frequency operation, data signals 62 designated as A1-A4 and B1-B4 are followed after a delay 64 by data signals 66 designated C1-C4. Again included are counterpart strobe signals 63 and 65. The interval 57 is the additional transmission line delay in terms of clock cycles when operating at the higher frequency (which in this example is about 4 clock cycles) as calculated by an equation: transmission line delay divided by the core clock period.The clock signal 70 indicates that in low frequency operation it took 6 cycles from the time a cache read transaction was requested to when the first data block A1 of data signals 62 arrives at the processor assuming the transaction began at clock cycle one. Thus, the cache latency for the data signals 62 is 6 cycles. For high frequency operation, referring to the data signals 52, the cache latency is 10 cycles. Therefore, for different operating frequencies the cache latency will be different and reception of data will occur during different clock cycles. For example, comparing back-to-back read transactions at the frequencies, the last data chunk B4 of data signals 62 and the data chunk A4 of the data signals 52 could be received roughly at the same time (during clock cycle fourteen). If the data chunks C1 to C4 did not exist, then in the case of low frequency operation the processor should perform a latch-back operation at about clock cycle fourteen, while in the case of high frequency operation the processor should perform a latch-back operation at about clock cycle 18.The spacing between transactions also affects whether or not the processor should perform a latch-back operation. For example, FIG. 4A shows that a time spacing 54 or 64 between data chunks that is less than four (4) clock cycles typically allows enough time for the cache pre-drive and post-drive strobe signals to overlap, assuming a two clock cycle of pre-drive and a two clock cycle of post-drive, and thus the latch-back should not be done. Since the post-drive signal after data chunk B4 is two cycles and the pre-drive signal before data chunk C1 is two cycles, there will always be an overlap signal or a known value on the BSB. 
However, the two groups may be handled by different cache banks, and if the first cache bank responds faster than typical or expected (due to manufacturing deviations, or run time effects resulting from voltage deviations, etc.) while the second cache bank responds slower than expected, then there will be no overlap and the processor has to perform a latch-back operation to prevent a strobe signal from floating on the BSB.The driver cut-off and latch-back circuitry must also be able to operate from a very low frequency (<≈>0 Hz) up to the fastest operating frequency of the processor. This frequency independent requirement is required to allow smooth manipulation of the integrated circuit in the test environment and to ensure that components will function in production. This requirement is met by ensuring that the processor driver cut-off is initiated at the same clock edge (synchronously) as the cache response. Since this response depends on cache latency that may not be the same on different system configurations, some flexibility may be introduced by including a knob called "cut-off knob". A knob may be defined as an adjustment mechanism for setting control values in a semiconductor chip. A knob may be hardware or software based, and is used to change the behavior of the processor integrated circuit chip. The cut-off knob may be tied to a cache latency knob operable to adjust for fast or slow cache response time. The cut-off knob is preferably software based to facilitate adjustment of the processor once the cache latency is known for a particular system. Systems may have different knob settings to ensure frequency independent, fully-configurable operation of the cut-off and latch-back operations. Multiple knobs may be needed to control cut-off points for different signals.FIG. 4B is a simplified block diagram of a system 90 for driving a signal onto the BSB when required. A latch-back controller circuit 100 includes a BSB cut-off signal input 110 and a main reset input 125, and operates to generate a latch-back pulse on line 146 to a drive enable circuit 150. Signal stretch circuitry 92 operates to provide a stretched BSB cutoff signal input on line 154 to the drive enable circuit 150 (which will be explained below with reference to FIG. 6), and a drive enable signal is generated on line 166 for an input/output buffer circuit 200. The input/output circuit 200 generates a signal having a particular value on input/output PAD 202 for driving onto the BSB, and includes a CORE2PAD input line 218. The CORE2PAD is a line that connects a processor core to the bus line. Implementations of the latch-back controller circuit, drive enable circuit and input/output buffer circuit are described below with regard to FIGS. 5 to 7.FIG. 5 is a block diagram of an implementation of a latch-back controller circuit 100 which may be implemented as a part of a processor. The latch-back controller circuit operates to count the requests that have been issued and to count the replies received, and when these counts are equal then it issues a latch-back instruction. The latch-back controller includes a synchronous BSB cut-off counter circuit 102, and two source synchronous strobe counter circuits 120 and 130. 
The BSB transaction counter circuit 102 keeps track of issued processor transactions, and the source synchronous counter circuits 120 and 130 keep track of cache-replies that have been received for those transactions.In the present implementation, the BSB transaction counter circuit 102 is a two-bit counter that includes a first core clocked flip-flop 104 and a second core clocked flip-flop 106, each having data, reset and enable inputs. A BSB cut-off signal is fed on line 110 to both flip-flops, and the circuit 102 operates to count the number of cut-off signals and output the count on line 112 to a comparator circuit 114. Cut-off signals are sent by the core some number of clocks after each and every transaction is initiated on the bus. Also, a latch-back indication is sent every time the core recognizes that the separation between two transaction groups is large enough for a latch-back to take place. The core may be defined as the actual logic circuitry of the semiconductor chip processor.Referring again to FIG. 5, the two-bit counter circuit 120 includes two asynchronous flip-flops 122 and 124 running source-synchronously on the strobe. It should be understood that implementations including other than two-bit counters and more or less than two flip-flops could be utilized. After a main reset signal is received on line 125, the circuit 120 counts each strobe toggle, and the count is input to the two-bit counter circuit 130. The circuit 130 includes two asynchronous reset flip-flops 132, 134 which generate a count of the last strobe on line 138 which is input to the comparator 114. The output of the comparator 114 is fed on line 140 to And circuit 145, which also is connected to line 144. Referring to both FIGS. 4A and 5, if the count of the cut-off signals (issued transactions) on line 112 equals the last strobe count (count of replies received) on line 138, then the signal on line 140 allows an edge signal 144 to be propagated on line 146. The last strobe signal on line 141 is computed on the edge prior to the last edge of a full set of strobes, and this computation is done to ensure speed of latch-back pulse generation. The signal on line 140 is stable because the circuit 102 is locked or frozen when a latch-back indication is received from the core. The edge signal 144 is therefore generated at the last strobe edge of the cache reply by a pulse generator circuit 142 which uses the strobe as an input 143. The output 146 is zero (ground) most of the time, except when the latch-back is to be done. An active signal on line 140 indicates that the number of transactions issued and replies received is equal. When that occurs, then a latch-back is done by the processor on the BSB. The count gets reset again during the reply following the transaction that caused a latch-back to occur. This is done so that it is feasible to do a latch-back for the following transaction as well.Multiple cut-off counter circuits (102) may be needed if transactions can be issued before the latch back for outstanding transactions occurs. Thus, outstanding cache transactions are tracked as data is being transferred with at least one of the counters. For example, if two counter circuits are used, the counters would alternate every time the core sends a synchronous latch-back indication 170 (FIG.4A), the old counter indication 112 freezes its value, while the new counter indication 113 resets itself and enables itself to count cut-off signals. 
When data is transferred from the cache to the processor over the BSB, the transaction is tracked with the source synchronous counter circuit 120. When the synchronous counter and the source synchronous counters match, a signal is driven on the BSB to ensure that no signals are permitted to float. Thus, as transactions return data, latch-back is only performed when the synchronous and source synchronous counters match, and the latch-back controller issues a latch-back pulse (L-B Pulse) on the last edge of an incoming strobe signal.FIG. 6 is a block diagram of a drive enable circuit 150. An asynchronous reset-preset flip-flop 156 operates to produce a zero or low output when the reset signal is high, and a one or high output when the preset signal is high. The reset-preset flip-flop 156 also operates as a priority flip-flop to generate a zero or low output if the reset signal and the preset signal are both simultaneously high. A drive enable output on line 166 (see also FIG. 4A) determines whether or not the processor is to drive a signal on the BSB. The drive enable circuit 150 receives the latch-back pulse on line 146 and a synchronous cut-off indication signal on line 154 for turning Off the strobe and/or the data drive enable signal. The same BSB cut-off signal that is input to the BSB strobe counter circuit 102 of FIG. 5 on line 110 is input on line 154 and each cycle is stretched to a 3 cycle duration to ensure that the BSB is cut-off even after a substantially simultaneous preset indication occurs at the same time as a reset indication. The asynchronous preset-reset latch 156 is connected to an OR circuit 158, which also is connected to the latch-back pulse through OR circuit 160. The inverse of the stretched and latched BSB cut-off signal is input on line 162 to AND circuit 164 along with the output 165 of OR circuit 158.The complexity of the drive enable circuit 150 of FIG. 6 is required because there are cases when it appears that a latch-back should be done but a cut-off indication arrives in time to prevent it. In addition, the drive enable circuit ensures that the BSB will not float because a cache reply included an insufficient drive overlap. The latter case could occur, for example, if first and second cache chips each reply to two consecutive requests (bank switch) of data and the strobe signals do not overlap, then a latch-back pulse is generated and should result in the drive enable being turned on, at least temporarily, to prevent a signal from floating on the BSB. The cut-off pulse should have priority over the latch-back pulse so that no signal is driven onto the BSB a certain number of clocks following an issued transaction. In the bank switch case, the cut-off for the second transaction ensures the processor turns off before the second reply is received. Similarly, if a latch-back indication and a cut-off indication occur simultaneously, then the cut-off pulse should have priority. The drive enable circuit 150 ensures that cut-off indications are prioritized over latch-back indications, allowing for the processor to drive a signal even for very short intervals when the cache pre-drive signals and post-drive signals do not overlap. Therefore, the drive enable circuit handles cases that may occur during system operation wherein apparently contradictory signals are generated.FIG. 7 is a block diagram of an input/output buffer circuit 200 illustrating how the latch-back pulse and drive enable signal are used to ensure correct functioning of the BSB. 
When the cache stops driving and the processor should start driving in the absence of further transactions, the value on the bus at the time (which the cache was driving) should be used. The circuit 200 operates to capture that value for such use.Referring to FIG. 7, the PAD 202 indicates the actual wire connecting the processor to the cache. Thus, the value of the last signal from the cache will appear at this point and will be placed in a buffer 204, which is connected to latches 206, 208. The latches 206 and 208 are clocked by the latch-back signal on line 146 output from the latch-back controller circuit 100 of FIG. 5. Thus, when a latch-back pulse signal is issued, then the value of the signal at 202 is latched into IO2 PAD 212. The buffer 214 will-then drive the signal value onto 202 when a drive enable signal on line 230 is On (which is generated from a drive enable signal on line 166, the output of drive enable circuit 150 of FIG. 6). A multiplexer circuit 216 operates to quickly drive a signal onto line 230 indicating a drive enable to turn On buffer 214 when required. The inputs on lines 217 and 219 to the multiplexer circuit are pre-generated based on a multiplicity of possible system conditions that could occur to guarantee that the drive signal will be present to turn On the buffer 214 only when required. The pre-generated input signals on lines 217 and 219 result from transforming logic circuitry corresponding to all of the possible input variables and output permutations so that the drive enable signal on line 166 is a trial (multiplexer control) signal, and the drive enable signal does not have to propagate through a large cone of logic. This functionality may require an increase in hardware to model each of the possible signal conditions, but it is worth this cost in order to provide the speed needed to ensure the fastest attainable BSB operation while ensuring correct operation under a plurality of conditions. The processor therefore will drive a signal on the BSB via PAD 202 of the same value as the signal the cache had been driving when required.In addition to operating in the manner described above, FIG. 7 may operate to drive out a Core2Pad signal present on line 218 onto the PAD 202. Line 218 originates from the chip core for data writes, and is connected to latches 220 and 222. The latch 220 is clocked by the core clock, and the latch 222 is clocked by the output of an AND circuit 224. The AND circuit has a Capture Enable input 226 and a clock input 228. The Capture Enable input 226 is controlled by the processor and operates to send the value of the Core2Pad signal on line 218 to IO2PAD 212 only when the drive enable signal on line 226 is present which only occurs when the processor intends to do a data write, thus preventing any contention to write a value onto IO2 PAD. Thus, the captured last value is only driven out onto the BSB when needed. In summary, when a latch-back occurs, the current value on the BSB is sampled in the IO2Pad node 212 and it is driven out. When the BSB wants to drive strobes and/or data out, it does so through the Core2Pad 218 to IO2Pad 212 path. A carefully controlled capture Enable signal ensures that no contention occurs in the IO2PAD bus node.FIG. 8 is a CTT controller schematic diagram of an implementation of latch-back controller circuitry and drive enable generation circuitry described above with reference to FIGS. 5 and 6. FIG. 9 is an implementation of a strobe pulse generation circuit. FIGS. 
10 and 11 illustrate implementations and of a preset-reset priority flip-flop. FIGS. 12 and 13 are circuit diagrams of an implementation of an Input/Output buffer control circuit and a logic control circuit. FIG. 13 illustrates how the ratio logic controls the flow of data from "Core2Pad" and "SensedPad" to "IO2Pad".The described techniques turn On and Off the processor drive enables in such a way as to ensure that the strobe signals and/or data signals on the BSB are never left to float. The techniques and circuit implementations according to the invention also significantly reduce the occurrence of inter-symbol interference (ISI) in the bus that connects together various entities that drive and receive signals.It is to be understood that while certain implementations of the invention have been described, other aspects, advantages, and modifications are within the scope of the following claims. |
Embodiments include apparatuses, methods, and systems including a selector. A selection signal may be provided to the selector to select a first synchronization signal as a control signal when the first synchronization signal is available, otherwise a second first synchronization signal as the control signal. The first or the second synchronization signal may synchronize a first or second displaycontent received by a first or second display device with a first or a second display refresh rate, respectively. The control signal may be provided to a controller to control the second display content received by the second display device. Other embodiments may also be described and claimed. |
1.A device for computing, comprising:Selector;a first link coupled to the selector for receiving a first synchronization signal from the device or a first display device of a computing device that is in charge of the device, wherein the first synchronization signal is for The first display content received by the first display device is synchronized with the first display refresh rate of the first display device;a second link coupled to the selector for receiving a second synchronization signal from the device or a second display device of the computing device, wherein the second synchronization signal is for using the second display The second display content received by the device is synchronized with the second display refresh rate of the second display device;a third link coupled to the selector for providing a control signal to a controller of the apparatus or the computing device to control a second display content received by the second display device;a fourth link coupled to the selector for providing a selection signal to the selector, the selection signal selecting the first synchronization signal as the control when the first synchronization signal is available A signal is provided to the controller, otherwise the second synchronization signal is selected as the control signal to be provided to the controller.2.The device of claim 1, wherein the first display device is a primary display device of the device or the computing device, and the second display device is a slave display of the device or the computing device device.3.The apparatus of claim 1, wherein the second display content received by the second display device is synchronized by a second synchronization signal from the second display device when the first synchronization signal is not available, Or synchronized by the first display device when the first synchronization signal is available.4.The apparatus of claim 1, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device or a second synchronization from the second display device The signal is a tear effect removal timing signal of the second display device.5.The apparatus of claim 1, wherein the first display refresh rate of the first display device or the second display refresh rate of the second display device is frame by frame.6.The apparatus of claim 1, wherein the first display refresh rate of the first display device or the second display refresh rate of the second display device is 60 Hz, 120 Hz, or 240 Hz.7.The device of claim 1, wherein the first display device or the second display device comprises a display, and whereinThe display is a light emitting diode (LED) display, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), a digital light processing (DLP) display, a plasma display, an electroluminescence panel A selected one of an organic light emitting diode (OLED) display or an electronic paper.8.The apparatus of claim 1, wherein the first display content received by the first display device or the second display content received by the second display device is based on displaying a serial interface (MIPI) from a mobile industry processor interface Received by the selected protocol in the -DSI) protocol, High Definition Multimedia Interface (HDMI) protocol, Display Port (DP) protocol, Miracast protocol, or Wireless Display (WiDi) protocol.9.The device of claim 1 wherein the first display device is used 
as a display device or input device of the device or the computing device.10.The device of any of claims 1-9, wherein the device further comprises:First display device; andA second display device, wherein the device is a tablet, a mobile device, a smart phone, a smart television (TV), a touch screen display, or a head mounted display (HMD).11.The device of any of claims 1-9, wherein in addition to the device, the computing device further comprises:The first display device; andThe second display device, wherein the computing device is a tablet computer, a mobile device, a smart phone, a smart television (TV), a touch screen display, or a head mounted display (HMD).12.The apparatus of any of claims 1-9, wherein the selector is a multiplexer and the selection signal is to the first display device for determining to the first A logic power signal that displays the power of the device.13.The apparatus of any of claims 1-9, further comprising:And a detecting circuit configured to: generate the selection signal by detecting whether a first synchronization signal from the first display device is available.14.The apparatus of claim 13 wherein said detection circuit comprises a capacitor-resistor constant monostable timer having a duration that is longer than a display refresh time determined by a display refresh rate of said first display device.15.A device for computing, comprising:a communication interface for receiving a control signal, wherein the control signal is the first synchronization signal when a first synchronization signal from the device or a first display device of a computing device that is in charge of the device is available a second synchronization signal from the device or the second display device of the computing device, wherein the first synchronization signal is for using the first display content received by the first display device a first display refresh rate of the display device is synchronized, and the second synchronization signal is for synchronizing the second display content received by the second display device with the second display refresh rate of the second display device;And a controller coupled to the communication interface, configured to: determine to transmit the second display content to the second display device based on the control signal.16.The apparatus of claim 15 wherein said controller is further configured to:Determining to transmit the first display content to the first display device based on the control signal when the first synchronization signal is available.17.The device of claim 15, wherein the first display device is a primary display device and the second display device is a secondary display device.18.The apparatus of claim 15, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device or a second synchronization from the second display device The signal is a tear effect removal timing signal of the second display device.19.The apparatus of any of claims 15-18, wherein the first display device or the second display device comprises a display, and whereinThe display is a light emitting diode (LED) display, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), a digital light processing (DLP) display, a plasma display, an electroluminescence panel A selected one of an organic light emitting diode (OLED) display or an electronic paper.20.The apparatus according to any one of claims 15 to 18, 
wherein the first display content received by the first display device or the second display content received by the second display device is based on an interface from a mobile industry processor Received by the selected protocol in the Serial Interface (MIPI-DSI) protocol, High Definition Multimedia Interface (HDMI) protocol, Display Port (DP) protocol, Miracast protocol, or Wireless Display (WiDi) protocol.21.A method for delivering display content, including:Receiving a control signal, wherein the control signal is equal to the first synchronization signal when the first synchronization signal from the first display device is available, and is otherwise equal to the second synchronization signal from the second display device, wherein The first synchronization signal is used to synchronize the first display content received by the first display device with the first display refresh rate of the first display device, and the second synchronization signal is used to receive the second display device The second display content is synchronized with the second display refresh rate of the second display device;Transmitting the second display content to the second display device based on the control signal.22.The method of claim 21 further comprising:When the first synchronization signal is available, the first display content is transmitted to the first display device based on the control signal.23.The method of claim 21 wherein the first display device is a primary display device and the second display device is a secondary display device.24.A method according to any one of claims 21 to 23, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device, or from the second The second synchronization signal of the display device is a tear effect removal timing signal of the second display device.25.The method according to any one of claims 21 to 23, wherein the first display content received by the first display device or the second display content received by the second display device is based on a slave processor interface Received by the selected protocol in the Serial Interface (MIPI-DSI) protocol, High Definition Multimedia Interface (HDMI) protocol, Display Port (DP) protocol, Miracast protocol, or Wireless Display (WiDi) protocol. |
Synchronization of display devices in a system including multiple display devicesTechnical fieldEmbodiments of the present invention generally relate to the field of communications and computing technology, and more particularly to systems that include multiple display devices.Background techniqueThe background description provided herein is for the purpose of the present disclosure. The materials described in this section are not prior art to the claims of the present application, and are not admitted to be prior art.Using multiple display devices for a computing system (eg, a fused mobility system) (which may allow multiple display devices to act as one screen) may have advantages such as increasing productivity or improving the user experience. The display content received by the display device can be synchronized with the display refresh rate of the display device. When multiple display devices are available in a computing device, the display device may perform different functions in different situations, for example, as a display device at one time, or as an input device at another time to provide more to the computing device flexibility. For one display device, a synchronization problem may occur when another display device may perform different functions.DRAWINGSThe embodiments will be readily understood by the following detailed description in conjunction with the drawings. For ease of description, like reference numerals indicate similar structural elements. Embodiments are shown in the various figures of the drawings by way of example and not limitation.1 illustrates an example computing device including a plurality of display devices, wherein a determination to a controller to control display content received by a second display device can be determined based on availability of a synchronization signal of the first display device, in accordance with various embodiments. Control signal.2 illustrates an example computing device including two display devices, wherein the display to the controller to control display content received by the second display device can be determined based on the availability of the synchronization signal of the first display device, in accordance with various embodiments. Control signal.3 illustrates another example computing device including two display devices, wherein the controller can be determined to control the reception by the second display device based on the availability of the synchronization signal of the first display device, in accordance with various embodiments. A control signal that displays content.4 illustrates an example process for managing a computing device including a plurality of display devices, wherein the controller can be determined to control by the availability of a synchronization signal for the first display device, in accordance with various embodiments. Second, a control signal for displaying the display content received by the device.FIG. 5 illustrates an example device suitable for implementing aspects of the present disclosure in accordance with various embodiments.FIG. 6 illustrates a storage medium having instructions for implementing the methods described with reference to FIGS. 1-5 in accordance with various embodiments.detailed descriptionThe computing device can include multiple display devices to increase productivity or improve the user experience. 
When two or more display devices are active, one display device may become the primary display device of all other display devices, and another display device may become the secondary display device of the primary display device. The primary display device can be a timing source, and the secondary display device can be a timing sink such that the two display devices can synchronize their display refresh rates to simultaneously view the corresponding display content using the two display devices. Therefore, when the first display device can be used as the main display device, the second display content received by the second display device can be synchronized by the first display device. However, in order to provide more flexibility to the computing device, the first display device may perform different functions in different situations, for example, as a display device at one time, or as an input device at another time. Therefore, when the first display device may not be used as the display device, the second display content received by the second display device may be synchronized by the synchronization signal from the second display device when the first synchronization signal is unavailable.In an embodiment, the means for calculating may comprise a selector. The first link, the second link, the third link, and the fourth link may be coupled to a selector. The first link can receive the first synchronization signal from the device or the first display device of the computing device that is in charge of the device. The second link can receive a second synchronization signal from the second display device of the device or computing device. The first synchronization signal may synchronize the first display content received by the first display device with the first display refresh rate of the first display device, and the second synchronization signal may display the second display content and the second display received by the second display device The second display refresh rate of the device is synchronized. The third link can provide a control signal to the controller of the device or computing device to control the second display content received by the second display device. The fourth link may provide a selection signal to the selector to select the first synchronization signal as the control signal to be provided to the controller when the first synchronization signal is available, and otherwise select the second synchronization signal as the control signal to be provided to the controller .In an embodiment, the means for calculating may include a communication interface and a controller coupled to the communication interface. The communication interface can receive a control signal, wherein the control signal can be the first synchronization signal when the first synchronization signal from the device or the first display device of the computing device that is in charge of the device is available, otherwise from the device or calculation A second synchronization signal of the second display device of the device. The first synchronization signal may synchronize the first display content received by the first display device with the first display refresh rate of the first display device, and the second synchronization signal may display the second display content received by the second display device with the second display device The second display refresh rate is synchronized. 
The controller may determine to transmit the second display content to the second display device based on the control signal.In an embodiment, the method performed by the computing device can include receiving a control signal, wherein the control signal is equal to the first synchronization signal from the first display device when the first synchronization signal is available, otherwise equal to the second display device Second synchronization signal. The first synchronization signal may synchronize the first display content received by the first display device with the first display refresh rate of the first display device, and the second synchronization signal may display the second display content received by the second display device with the second display device The second display refresh rate is synchronized. The method can also include transmitting the second display content to the second display device based on the control signal.BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated in the claims It is understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the disclosure. Therefore, the following detailed description is not to be considered in aThe operations of the various methods may be described as a plurality of discrete acts or operations in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that the operations must be dependent on the order. In particular, these operations may not be performed in the order presented. The described operations may be performed in a different order than the described embodiments. In additional embodiments, various additional operations may be performed, and/or the operations described may be omitted, divided, or combined.For the purposes of the present disclosure, the phrases "A or B" and "A and/or B" mean (A), (B) or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or ( A, B and C).The description may use the phrase "in an embodiment" or "in an embodiment", which may each refer to one or more of the same or different embodiments. Furthermore, the terms "including", "comprising", "having", etc., are used in connection with the embodiments of the present disclosure.As used hereinafter (including the claims), the terms "module" or "routine" may refer to, or include a part of, an application-specific integrated circuit (ASIC), an electronic circuit, one or more software, or A processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) of firmware programs, combinatorial logic, and/or other suitable components that provide the described functionality.In the event that the disclosure describes "a" or "first" element or its equivalent, such disclosure includes one or more of such elements, neither or a plurality of such elements are required or excluded. Furthermore, ordinal indicators (e.g., first, second, or third) of the elements used for the identification are used to distinguish such elements, and do not indicate or imply a required or limited number of these elements, and do not indicate the particulars of these elements. Location or order, unless otherwise specified.The terms "coupled to" and "coupled to" and the like may be used herein. "Coupled" may mean one or more of the following. 
"Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are in indirect contact with each other, but still cooperate or interact with each other, and may indicate that one or more other elements are coupled or connected between the elements that are believed to be coupled to each other. By way of example and not limitation, "coupled" may mean that two or more elements or devices are coupled through an electrical connection on a printed circuit board (eg, a motherboard). By way of example and not limitation, "coupled" may mean that two or more elements/devices cooperate and/or interact through one or more network links (eg, wired and/or wireless networks). By way of example and not limitation, computing devices may include two or more computing devices that are "coupled" on the mainboard or "coupled" through one or more network links.As used herein, the term "circuitry" refers to the following hardware components as part of or include them: for example, electronic circuits, logic circuits, processors (shared, dedicated, or grouped) configured to provide the described functionality. And/or memory (shared, dedicated or group), application specific integrated circuit (ASIC), field programmable device (FPD) (eg, field programmable gate array (FPGA), programmable logic device (PLD), Complex PLD (CPLD), high-capacity PLD (HCPLD), structured ASIC or programmable system-on-chip (SoC), digital signal processor (DSP), etc. In some embodiments, the circuitry can execute one or more software or firmware programs to provide at least some of the functions described.As used herein, the term "processor circuit" may refer to, or be included in, a circuit that is capable of performing a series of arithmetic or logical operations, sequentially and automatically; recording, storing, and/or transmitting digital data. The term "processor circuit" may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single core processor, a dual core processor, a triple core processor, quad core processing. And/or any other device capable of executing or operating computer-executable instructions (eg, program code, software modules, and/or function processes).As used herein, the term "interface circuit" may refer to a circuit that provides, or is part of, a circuit for exchanging information between two or more components or devices. The term "interface circuit" may refer to one or more hardware interfaces (eg, bus, input/output (I/O) interface, peripheral component interface, network interface card, etc.).As used herein, the term "computer device" may describe the ability to perform a series of arithmetic or logical operations, sequentially and automatically, equipped to record/store data on a machine readable medium, and to transmit data and from a communication network. Any physical hardware device that receives data from one or more other devices. Computer devices may be considered synonymous with computers, computing platforms, computing devices, etc., and may sometimes be referred to hereinafter as they are. The term "computer system" can include any type of interconnected electronic device, computer device, or component thereof. Additionally, the terms "computer system" and/or "system" may refer to various components of a computer that are communicatively coupled to each other. 
Furthermore, the terms "computer system" and/or "system" may refer to a plurality of computer devices and/or multiple computing systems that are communicatively coupled to one another and configured to share computing and/or networking resources. Examples of "computer devices," "computer systems," and the like can include cellular or smart phones, feature phones, tablet personal computers, wearable computing devices, autonomous sensors, laptop computers, desktop personal computers, video game consoles, digital Media players, handheld communication devices, personal data assistants, e-book readers, augmented reality devices, server computer devices (eg, stand-alone, rack-mount, blade, etc.), cloud computing services/systems, network components, car entertainment System (IVI), in-vehicle entertainment (ICE) equipment, instrument panel (IC), head-up display (HUD) equipment, on-board diagnostic (OBD) equipment, dashboard mobile equipment (DME), mobile data terminal (MDT), electronic engine management System (EEMS), Electronic/Engine Control Unit (ECU), Vehicle-mounted Computer Equipment (VECD), Automated or Semi-Automated Vehicle (ADV) system, Car Navigation System, Electronic/Engine Control Module (ECM), Embedded System, Microcontrollers, Control Modules, Transmitter Management Systems (EMS), Networked or "Smart" Appliances, Machine Type Communication (MTC) Equipment , Machine to Machine (M2M), Internet of Things (IoT) devices and/or any other similar electronic device. Furthermore, the term "on-board computer device" may refer to any computer device and/or computer system that is physically mounted on a vehicle, built into the vehicle, or embedded in the vehicle.As used herein, the term "network element" may be considered synonymous with and/or referred to as: networked computer, networking hardware, network equipment, router, switch, hub, bridge, radio network control , radio access network equipment, gateways, servers, and/or any other similar device. The term "network element" can describe a physical computing device of a wired or wireless communication network and is configured to host a virtual machine. Moreover, the term "network element" can be described as a device that provides radio baseband functionality for data and/or voice connectivity between a network and one or more users. The term "network element" may be considered synonymous with "base station" and/or referred to as a "base station." As used herein, the term "base station" may be considered synonymous with and/or referred to as: Node B, Enhanced or Evolved Node B (eNB), Next Generation Node B (gNB), base transceiver A machine (BTS), an access point (AP), a roadside unit (RSU), etc., and can be described as a device that provides radio baseband functionality for data and/or voice connectivity between a network and one or more users. As used herein, the terms "vehicle to vehicle" and "V2V" may refer to any communication involving a vehicle as a source or destination of a message. Additionally, the terms "vehicle to vehicle" and "V2V" as used herein may also include or be equivalent to vehicle to infrastructure (V2I) communication, vehicle to network (V2N) communication, vehicle to pedestrian (V2P) communication, or V2X communication.As used herein, the term "channel" may refer to any transmission medium that is tangible or intangible for communicating data or data streams. 
The term "channel" may be synonymous with and/or equivalent to: "communication channel", "data communication channel", "transport channel", "data transmission channel", "access channel", "data connection" Incoming channel, "link", "data link", "carrier", "radio frequency carrier" and/or any other similar term referring to a path or medium through which data is passed. Additionally, the term "link" may refer to a connection between two devices for transmitting and receiving information over a Radio Access Technology (RAT).FIG. 1 illustrates an example computing device 100 including a plurality of display devices, such as display device 140, display device 150, and display device 160, in which may be determined based on the availability of synchronization signal 143 of display device 140, in accordance with various embodiments. A control signal 154 to the display 117 of the controller 117 for controlling the display content 157 received by the display device 150. For the sake of clarity, features of computing device 100, display device 140, display device 150, display device 160, control signal 154, controller 117, display content 157, and synchronization signal 143 may be described below as being used to understand computing devices, Examples of multiple display devices, control signals, controllers, display content, and synchronization signals. It should be understood that the components included in computing device 100, display device 140, display device 150, display device 160, control signal 154, controller 117, display content 157, and synchronization signal 143 may be more or less. Moreover, it should be understood that one or more devices and components within computing device 100, display device 140, display device 150, display device 160, control signal 154, controller 117, display content 157, and synchronization signal 143 may include from the following description Additional and/or varying features, and may include any device and component that one of skill in the art would consider and/or refer to as computing devices, multiple display devices, control signals, controllers, display content, and synchronization signals.In an embodiment, computing device 100 may include a plurality of display devices, such as display device 140, display device 150, display device 160, and device 110 having controller 117. These three display devices, such as display device 140, display device 150, display device 160, may be provided by way of example only and not limitation. In some embodiments, there may be only two display devices, such as display device 140 and display device 150, or as shown in FIGS. 2 and 3. In some other embodiments, there may be more than three display devices. Display device 140 may be the primary display device of computing device 100, while display device 150 or display device 160 may be a secondary display device of computing device 100. A display device, such as display device 140, can be used as a display device at one time and can be used as an input device for computing device 100 at another time.In an embodiment, computing device 100 may further include a selector 120 for providing control signal 154 to controller 117 for controlling display content 157 received by display device 150, and a selector 130 for controlling signal 164 The controller 117 is provided to control the display content 167 received by the display device 160.In an embodiment, display device 140 may display display content 147 on display 149 at display refresh rate 148. 
The display content 147 may be received via a channel 141 coupled to the device 110. Moreover, the display device 140 can receive a logic power signal 142 from the device 110 for determining the power supplied to the display device 140, and can provide the synchronization signal 143 for synchronizing the display content 147 received by the display device 140 with the display refresh rate 148. The synchronization signal 143 may be a tearing-effect removal timing signal for the display device 140. The synchronization signal 143 may be available when the display device 140 can function as a display device, and may be unavailable when the display device 140 is not available as a display device (e.g., when serving as an input device, or when turned off without power). Further, when the synchronization signal 143 is available, the display device 140 may provide a synchronization signal 145 to the display device 150 to synchronize the display content 157, and provide a synchronization signal 146 to the display device 160 to synchronize the display content 167, and the display device 140 may serve as the primary display device. In an embodiment, the display device 150 may display the display content 157 on a display 159 at a display refresh rate 158. The display content 157 can be received via a channel 151 coupled to the device 110. Moreover, the display device 150 can receive a logic power signal 152 from the device 110 for determining the power supplied to the display device 150, and can provide a synchronization signal 153 for synchronizing the display content 157 received by the display device 150 with the display refresh rate 158. The synchronization signal 153 may be a tearing-effect removal timing signal for the display device 150. The display content 157 received by the display device 150 may be synchronized by the synchronization signal 153 from the display device 150 when the synchronization signal 143 is unavailable, or may be synchronized by the display device 140, e.g., by the synchronization signal 145, when the synchronization signal 143 is available. The synchronization signal 143 may be available when the display device 140 can function as a display device, and may be unavailable when the display device 140 is not available as a display device (e.g., when serving as an input device). In an embodiment, the display device 160 may display the display content 167 on a display 169 at a display refresh rate 168. The display content 167 may be received via a channel 161 coupled to the device 110. Additionally, the display device 160 may receive a logic power signal 162 from the device 110 for determining the power supplied to the display device 160, and may provide a synchronization signal 163 for synchronizing the display content 167 received by the display device 160 with the display refresh rate 168. The synchronization signal 163 may be a tearing-effect removal timing signal for the display device 160. The display content 167 received by the display device 160 may be synchronized by the synchronization signal 163 from the display device 160 when the synchronization signal 143 cannot be obtained from the display device 140, or may be synchronized by the display device 140, e.g., by the synchronization signal 146, when the synchronization signal 143 is available.
The synchronization signal 143 may be available when the display device 140 can function as a display device, and may be unavailable when the display device 140 is not available as a display device (e.g., when serving as an input device). The device 110 may include a communication interface 114 coupled to the display device 140, a communication interface 115 coupled to the display device 150, a communication interface 116 coupled to the display device 160, and the controller 117 coupled to the communication interface 114, the communication interface 115, and the communication interface 116. The communication interface 114, the communication interface 115, and the communication interface 116 may perform the same or similar functions for the different display devices. In more detail, the communication interface 114 can include the channel 141 to couple the display device 140 for: transmitting the display content 147, providing the logic power signal 142 that determines the power supplied to the display device 140, and receiving a control signal 144, which can be coupled to the synchronization signal 143 of the display device 140. The control signal 144 can be used to synchronize the display content 147 received by the display device 140 with the display refresh rate 148 of the display device 140. In an embodiment, the communication interface 115 can include the channel 151 to couple the display device 150 for: transmitting the display content 157, providing the logic power signal 152 that determines the power supplied to the display device 150, and receiving the control signal 154, which can be determined by the selector 120. The control signal 154 can be used to synchronize the display content 157 received by the display device 150 with the display refresh rate 158 of the display device 150. The control signal 154 may be equal to the synchronization signal 143 when the synchronization signal 143 is available, and otherwise equal to the synchronization signal 153. The controller 117 can determine to transmit the display content 157 to the display device 150 based on the control signal 154. When the synchronization signal 143 is available, the controller 117 may determine to transmit the display content 147 to the display device 140 based on the control signal 154. In an embodiment, the communication interface 116 may include the channel 161 to couple the display device 160 for: transmitting the display content 167, providing the logic power signal 162 that determines the power supplied to the display device 160, and receiving the control signal 164, which can be determined by the selector 130. The control signal 164 can be used to synchronize the display content 167 received by the display device 160 with the display refresh rate 168 of the display device 160. The control signal 164 may be equal to the synchronization signal 143 when the synchronization signal 143 is available, and otherwise equal to the synchronization signal 163. The controller 117 can determine to transmit the display content 167 to the display device 160 based on the control signal 164. In an embodiment, there may be a first link 121, a second link 123, a third link 127, and a fourth link 125 coupled to the selector 120. The first link 121 can be coupled to the selector 120 to receive the synchronization signal 143 from the display device 140. The second link 123 can be coupled to the selector 120 to receive the synchronization signal 153 from the display device 150. The third link 127 can couple the selector 120 to the device 110 to provide the control signal 154 to the controller 117 of the device 110. The fourth link 125 can be coupled to the selector 120 to provide the selector 120 with a selection signal for selecting the control signal 154.
When the synchronization signal 143 is available, the selection signal on the fourth link 125 of the selector 120 may select the synchronization signal 143 as the control signal 154; otherwise, the selection signal on the fourth link 125 of the selector 120 may select the synchronization signal 153 as the control signal 154. In an embodiment, there may be a first link 131, a second link 133, a third link 137, and a fourth link 135 coupled to the selector 130. The first link 131 can be coupled to the selector 130 to receive the synchronization signal 143 from the display device 140. The second link 133 can be coupled to the selector 130 to receive the synchronization signal 163 from the display device 160. The third link 137 can couple the selector 130 to the device 110 for providing the control signal 164 to the controller 117 of the device 110. The fourth link 135 can be coupled to the selector 130 to provide the selector 130 with a selection signal for selecting one of the signals on the first link 131 and the second link 133 to be output as the control signal 164. When the synchronization signal 143 is available, the selection signal on the fourth link 135 of the selector 130 may select the synchronization signal 143 as the control signal 164; otherwise, the selection signal on the fourth link 135 of the selector 130 may select the synchronization signal 163 as the control signal 164. In an embodiment, a link (e.g., the link 121, the link 123, the link 125, the link 127, the link 131, the link 133, the link 135, or the link 137) may be an actual physical link. A link can be a wired cable or a wireless link. For example, the link 121 can be a wired cable while the link 123 can be a wireless link. Different links can use the same or different communication technologies. For example, the link 121 can use a wireless cellular technology, the link 123 can use a different wireless technology (e.g., Bluetooth), and the link 125 can be a wired cable. In an embodiment, the display refresh rate 148, the display refresh rate 158, or the display refresh rate 168 may be provided on a frame-by-frame basis. The display refresh rate 148, the display refresh rate 158, or the display refresh rate 168 may be, e.g., 60 Hz, 120 Hz, or 240 Hz. The display 149, the display 159, or the display 169 may be selected from a light-emitting diode (LED) display, a cathode-ray tube (CRT) display, a liquid-crystal display (LCD), a thin-film-transistor liquid-crystal display (TFT-LCD), a digital light processing (DLP) display, a plasma display, an electroluminescent panel, an organic light-emitting diode (OLED) display, or electronic paper. The display content 147, the display content 157, or the display content 167 may be received by the display device 140, the display device 150, or the display device 160 according to a protocol selected from: the Mobile Industry Processor Interface Display Serial Interface (MIPI-DSI) protocol, the High-Definition Multimedia Interface (HDMI) protocol, the DisplayPort (DP) protocol, the Miracast protocol, or the Wireless Display (WiDi) protocol.
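To make the selection behavior of the selectors 120 and 130 concrete, the following is a minimal C sketch under stated assumptions; it is illustrative only and not the disclosure's implementation. The selector is modeled as a pure function, the four links are modeled as fields of a struct, and all type and function names are hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of a two-input selector (e.g., selector 120):
 * the first link carries the primary display's sync signal (e.g., 143),
 * the second link carries the secondary display's sync signal (e.g., 153),
 * and the fourth link carries the selection signal indicating whether
 * the primary sync signal is available. */
typedef struct {
    bool primary_sync;      /* first link: sync from the primary display   */
    bool secondary_sync;    /* second link: sync from the secondary display */
    bool primary_available; /* fourth link: selection signal                */
} selector_inputs;

/* Output on the third link: the control signal forwarded to the controller. */
static bool select_control_signal(const selector_inputs *in)
{
    /* When the primary sync signal is available, forward it; otherwise
     * fall back to the secondary display's own sync signal. */
    return in->primary_available ? in->primary_sync : in->secondary_sync;
}

int main(void)
{
    selector_inputs in = { .primary_sync = true,
                           .secondary_sync = false,
                           .primary_available = false };
    /* Primary display is off or repurposed as an input device, so the
     * secondary display's sync signal becomes the control signal. */
    printf("control signal = %d\n", select_control_signal(&in));
    return 0;
}

The same two-to-one selection also describes the selectors 220 and 320 of FIGS. 2 and 3 below; only the source of the selection signal differs (a logic power signal in FIG. 2, the output of a detection circuit in FIG. 3).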
Moreover, the computing device 100 can be a tablet, a mobile device, a smart phone, a smart television (TV), a wearable device, a touch-screen display or head-mounted display (HMD), a laptop, a game controller, a set-top box, an infotainment console, an Internet of Things (IoT) device, or the like. FIG. 2 illustrates an example computing device 200 including two display devices, e.g., a display device 240 and a display device 250, in which a control signal 254, provided to a controller 217 for controlling display content 257 received by the display device 250, may be determined based on the availability of a synchronization signal 243 of the display device 240, in accordance with various embodiments. The computing device 200, the display device 240, the display device 250, the control signal 254, the controller 217, the display content 257, and the synchronization signal 243 may be examples of the computing device 100, the display device 140, the display device 150, the control signal 154, the controller 117, the display content 157, and the synchronization signal 143, respectively, as shown in FIG. 1. In an embodiment, the computing device 200 can include the display device 240 and the display device 250, a device 210 with the controller 217, and a selector 220 for providing the control signal 254 to the controller 217 to control the display content 257 received by the display device 250. In an embodiment, the display device 240 may display display content 247 on a display 249 at a display refresh rate 248. The display content 247 can be received via a channel 241 coupled to the device 210. Moreover, the display device 240 can receive a logic power signal 242 from the device 210 for determining the power supplied to the display device 240, and can provide the synchronization signal 243 for synchronizing the display content 247 received by the display device 240 with the display refresh rate 248. The synchronization signal 243 may be a tearing-effect removal timing signal for the display device 240. Moreover, when the synchronization signal 243 is available, the display device 240 can provide a synchronization signal 245 to the display device 250 to synchronize the display content 257. In an embodiment, the display device 250 may display the display content 257 on a display 259 at a display refresh rate 258. The display content 257 can be received via a channel 251 coupled to the device 210. Moreover, the display device 250 can receive a logic power signal 252 from the device 210 for determining the power supplied to the display device 250, and can provide a synchronization signal 253 for synchronizing the display content 257 received by the display device 250 with the display refresh rate 258. The synchronization signal 253 can be a tearing-effect removal timing signal for the display device 250. The display content 257 received by the display device 250 may be synchronized by the synchronization signal 253 from the display device 250 when the synchronization signal 243 is unavailable, or may be synchronized by the display device 240, e.g., by the synchronization signal 245, when the synchronization signal 243 is available. In an embodiment, the device 210 may include a communication interface 214 coupled to the display device 240, a communication interface 215 coupled to the display device 250, and the controller 217 coupled to the communication interface 214 and the communication interface 215. In more detail, the communication interface 214 can receive a control signal 244, which can be coupled to the synchronization signal 243 from the display device 240.
The control signal 244 can be used to synchronize the display content 247 received by the display device 240 with the display refresh rate 248 of the display device 240. In an embodiment, the communication interface 215 can include the channel 251 to couple the display device 250 for: transmitting the display content 257, providing the logic power signal 252 that determines the power supplied to the display device 250, and receiving the control signal 254, which can be determined by the selector 220. In an embodiment, the selector 220 may be a multiplexer or any other circuit that can select one of two inputs. There may be a first link 221, a second link 223, a third link 227, and a fourth link 225 coupled to the selector 220. The first link 221 can be coupled to the selector 220 to receive the synchronization signal 243 from the display device 240. The second link 223 can be coupled to the selector 220 to receive the synchronization signal 253 from the display device 250. The third link 227 can couple the selector 220 to the device 210 for providing the control signal 254 to the controller 217 of the device 210. The fourth link 225 can be coupled to the selector 220 to provide the selector 220 with a selection signal for selecting the control signal 254. In an embodiment, the selection signal provided on the fourth link 225 may be the logic power signal 242 for the display device 240, used for determining the power supplied to the display device 240. When the logic power signal 242 is low, which may indicate that no power is being supplied to the display device 240, the selector 220 may select the synchronization signal 253 on the second link 223 to be output as the control signal 254. On the other hand, when the logic power signal 242 is high, which may indicate that power is being supplied to the display device 240 and the display device 240 can be used as a display device, the selector 220 can select the synchronization signal 243 on the first link 221 to be output as the control signal 254. FIG. 3 illustrates another example computing device 300 including two display devices, e.g., a display device 340 and a display device 350, in which a control signal 354, provided to a controller 317 for controlling display content 357 received by the display device 350, may be determined based on the availability of a synchronization signal 343 of the display device 340, in accordance with various embodiments. The computing device 300, the display device 340, the display device 350, the control signal 354, the controller 317, the display content 357, and the synchronization signal 343 may be examples of the computing device 100, the display device 140, the display device 150, the control signal 154, the controller 117, the display content 157, and the synchronization signal 143, respectively, as shown in FIG. 1. In an embodiment, the computing device 300 can include the display device 340 and the display device 350, a device 310 with the controller 317, and a selector 320 for providing the control signal 354 to the controller 317 to control the display content 357 received by the display device 350. A detection circuit 319 can provide a selection signal 318 to a link 325 of the selector 320 to select the control signal 354. The detection circuit 319 can generate the selection signal 318 by detecting whether the synchronization signal 343 from the display device 340 is available. If the synchronization signal 343 from the display device 340 is available, the selection signal 318 may have a first logic value.
Otherwise, when the synchronization signal 343 from the display device 340 is unavailable, the selection signal 318 may have a second logic value that is different from the first logic value. In an embodiment, the detection circuit 319 can include a monostable timer, with a capacitor-resistor time constant, that lasts longer than the display refresh time determined by the display refresh rate 348 of the display device 340. In an embodiment, the display device 340 can display display content 347 on a display 349 at the display refresh rate 348. The display content 347 can be received via a channel 341 coupled to the device 310. Moreover, the display device 340 can receive a logic power signal 342 from the device 310 for determining the power supplied to the display device 340, and can provide the synchronization signal 343 for synchronizing the display content 347 received by the display device 340 with the display refresh rate 348. The synchronization signal 343 can be a tearing-effect removal timing signal for the display device 340. Moreover, when the synchronization signal 343 is available, the display device 340 can provide a synchronization signal 345 to the display device 350 to synchronize the display content 357. In an embodiment, the display device 350 can display the display content 357 on a display 359 at a display refresh rate 358. The display content 357 can be received via a channel 351 coupled to the device 310. Moreover, the display device 350 can receive a logic power signal 352 from the device 310 for determining the power supplied to the display device 350, and can provide a synchronization signal 353 for synchronizing the display content 357 received by the display device 350 with the display refresh rate 358. The synchronization signal 353 can be a tearing-effect removal timing signal for the display device 350. The display content 357 received by the display device 350 may be synchronized by the synchronization signal 353 from the display device 350 when the synchronization signal 343 is unavailable, or may be synchronized by the display device 340, e.g., by the synchronization signal 345, when the synchronization signal 343 is available. In an embodiment, the device 310 may include a communication interface 314 coupled to the display device 340, a communication interface 315 coupled to the display device 350, and the controller 317 coupled to the communication interface 314 and the communication interface 315. In more detail, the communication interface 314 can receive a control signal 344, which can be coupled to the synchronization signal 343 from the display device 340. The control signal 344 can be used to synchronize the display content 347 received by the display device 340 with the display refresh rate 348 of the display device 340. In an embodiment, the communication interface 315 can include the channel 351 to couple the display device 350 for: transmitting the display content 357, providing the logic power signal 352 for determining the power supplied to the display device 350, and receiving the control signal 354, which can be determined by the selector 320. In an embodiment, the selector 320 may be a multiplexer or any other circuit that can select one of two inputs. There may be a first link 321, a second link 323, a third link 327, and a fourth link 325 coupled to the selector 320. The first link 321 can be coupled to the selector 320 to receive the synchronization signal 343 from the display device 340. The second link 323 can be coupled to the selector 320 to receive the synchronization signal 353 from the display device 350. The third link 327 can couple the selector 320 to the device 310 for providing the control signal 354 to the controller 317 of the device 310.
The fourth link 325 can be coupled to the selector 320 to provide the selector 320 with a selection signal for selecting one of the signals on the first link 321 and the second link 323 to be output as the control signal 354. In an embodiment, the selection signal provided on the fourth link 325 may be the selection signal 318 generated by the detection circuit 319. FIG. 4 illustrates an example process 400 for managing a computing device including a plurality of display devices, wherein a control signal, provided to a controller for controlling display content received by a second display device, can be determined based on the availability of a synchronization signal of a first display device, in accordance with various embodiments. In an embodiment, the process 400 may be performed by the controller 117, the controller 217, or the controller 317, as shown in FIGS. 1, 2, and 3. The process 400 can begin with an interaction 401. During the interaction 401, the controller may receive a control signal, wherein the control signal may be equal to a first synchronization signal from a first display device when the first synchronization signal is available, and otherwise equal to a second synchronization signal from a second display device. The first synchronization signal may synchronize first display content received by the first display device with a first display refresh rate of the first display device, and the second synchronization signal may synchronize second display content received by the second display device with a second display refresh rate of the second display device. For example, during the interaction 401, the controller 117 can receive the control signal 154. The control signal 154 may be equal to the synchronization signal 143 when the synchronization signal 143 from the display device 140 is available; otherwise, the control signal 154 may be equal to the synchronization signal 153 from the display device 150. During an interaction 403, the controller may transmit the second display content to the second display device based on the control signal. For example, during the interaction 403, the controller 117 can transmit the display content 157 to the display device 150 based on the control signal 154. During an interaction 405, when the first synchronization signal is available, the controller may transmit the first display content to the first display device based on the control signal. For example, during the interaction 405, when the synchronization signal 143 is available, the controller 117 can transmit the display content 147 to the display device 140 based on the control signal 154 (which can be equal to the synchronization signal 143). In some embodiments, the various interactions (e.g., the interaction 401, the interaction 403, or the interaction 405) may be performed in parallel, or in an order different from that presented; similarly, other interactions can be performed in parallel or in a different order. FIG. 5 illustrates an example device 500 suitable for practicing aspects of the present disclosure, in accordance with various embodiments. The device 500 can be used to implement the functions of the computing device 100, the computing device 200, the computing device 300, or the process 400. As shown, the device 500 can include one or more processors 502, each having one or more processing cores and, optionally, a hardware accelerator 503 (which may be an ASIC or an FPGA). In alternate embodiments, the hardware accelerator 503 may be part of the processor 502, or they may be integrated together on an SoC.
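Tying together the detection circuit of FIG. 3 and the process 400 of FIG. 4, the sketch below gives one possible software reading; it is a hedged illustration, not the disclosure's implementation. The timeout check stands in for the capacitor-resistor monostable of the detection circuit 319, the frame-transmission hooks stand in for the channels to the display devices, and every type and function name is hypothetical.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Software analog of detection circuit 319: the primary sync signal is
 * considered available while pulses keep arriving within one refresh
 * period, mirroring a retriggerable monostable whose time constant
 * exceeds the refresh time (e.g., > 1/60 s for a 60 Hz display). */
typedef struct {
    uint64_t last_pulse_us; /* time of the most recent sync pulse       */
    uint64_t timeout_us;    /* longer than one display refresh period   */
} sync_detector;

static bool sync_available(const sync_detector *d, uint64_t now_us)
{
    return (now_us - d->last_pulse_us) <= d->timeout_us;
}

/* Hypothetical frame-transmission hooks standing in for the channels
 * coupling the controller to the display devices. */
static void send_frame_to_secondary(void) { puts("frame -> secondary"); }
static void send_frame_to_primary(void)   { puts("frame -> primary"); }

/* One pass of process 400: on each edge of the control signal, transmit
 * the second display content (interaction 403) and, when the primary
 * sync signal is available, the first display content (interaction 405). */
static void process_400_step(bool control_edge, const sync_detector *d,
                             uint64_t now_us)
{
    if (!control_edge)           /* interaction 401: await the signal   */
        return;
    send_frame_to_secondary();   /* interaction 403                     */
    if (sync_available(d, now_us))
        send_frame_to_primary(); /* interaction 405                     */
}

int main(void)
{
    /* 60 Hz refresh is roughly 16667 us per frame; the timeout is
     * chosen slightly longer, as the monostable description suggests. */
    sync_detector d = { .last_pulse_us = 0, .timeout_us = 20000 };
    process_400_step(true, &d, 10000); /* primary still available       */
    process_400_step(true, &d, 50000); /* pulse missed: secondary only  */
    return 0;
}

A real controller would be paced by hardware edges of the selected control signal rather than polled booleans; the sketch only fixes the decision logic of interactions 401-405.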
Additionally, the device 500 can include a main memory device 504 (which can be any of a number of known persistent storage media) and data storage circuitry 508. Additionally, the device 500 can include an I/O interface circuit 518, having a transmitter 523 and a receiver 517, coupled to one or more sensors 514, a display device 513, a display device 515, and an input device 521. Additionally, the device 500 can include communication circuitry 505 that includes a transceiver (Tx) 511 and a network interface controller (NIC) 512. These elements can be coupled to each other via a system bus 516, which can represent one or more buses. In the case of multiple buses, the buses can be bridged by one or more bus bridges (not shown). Additionally, the device 500 can include a selector 506 and a detection circuit 507. In an embodiment, the selector 506 may be similar to the selector 120 or the selector 130 shown in FIG. 1, the selector 220 shown in FIG. 2, or the selector 320 shown in FIG. 3, and the detection circuit 507 can be similar to the detection circuit 319 shown in FIG. 3. In addition, the display device 513 or the display device 515 can be similar to the display device 140, the display device 150, or the display device 160 shown in FIG. 1, the display device 240 or the display device 250 shown in FIG. 2, or the display device 340 or the display device 350 shown in FIG. 3. The processor 502 can perform functions similar to those of the controller 117 shown in FIG. 1, the controller 217 shown in FIG. 2, or the controller 317 shown in FIG. 3. In an embodiment, the processor 502 (also referred to as "processor circuit 502") may be one or more processing elements configured to perform basic arithmetic, logical, and input/output operations by executing instructions. The processor circuit 502 may be implemented as a standalone system/device/package, or as part of an existing system/device/package. The processor circuit 502 may be one or more microprocessors, one or more single-core processors, one or more multi-core processors, one or more multithreaded processors, one or more GPUs, one or more ultra-low-voltage processors, one or more embedded processors, one or more DSPs, one or more FPDs (hardware accelerators) (e.g., FPGAs, structured ASICs, programmable SoCs (PSoCs), etc.), and/or other processors or processing/controlling circuits. The processor circuit 502 may be part of an SoC in which the processor circuit 502 and the other components discussed herein are formed into a single IC or a single package. As examples, the processor circuit 502 may include one or more Intel processors; Advanced Micro Devices (AMD) accelerated processing units (APUs) or processors; Apple A-series, S-series, W-series, or other processors; Qualcomm processors; Samsung processors; etc. In an embodiment, the I/O interface circuitry 518 can include a sensor hub, which can act as a coprocessor by processing data obtained from the one or more sensors 514. The sensor hub can include circuitry configured to integrate the data obtained from each of the one or more sensors 514 by performing arithmetic, logical, and input/output operations.
In an embodiment, the sensor hub may be capable of time-stamping the obtained sensor data; providing the sensor data to the processor circuit 502 in response to queries for such data; buffering the sensor data; continuously streaming the sensor data to the processor circuit 502, including independent streams for each of the one or more sensors 514; reporting the sensor data based on predefined thresholds or conditions/triggers; and/or performing other similar data-processing functions. In an embodiment, the memory 504 (also referred to as "memory circuit 504" or the like) may be circuitry configured to store data or logic for operating the computer device 500. The memory circuit 504 may include a number of memory devices that may be used to provide a given amount of system memory. As examples, the memory circuit 504 can be any suitable type, number, and/or combination of volatile memory devices (e.g., random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), etc.) and/or non-volatile memory devices (e.g., read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, antifuses, etc.), and may be configured in any suitable implementation as is known. In various implementations, individual memory devices may be formed of any number of different package types, such as single die packages (SDPs), dual die packages (DDPs) or quad die packages (Q17Ps), dual in-line memory modules (DIMMs) (e.g., microDIMMs or MiniDIMMs), and/or any other similar memory devices. To provide persistent storage of information, such as data, applications, operating systems, and so forth, the memory circuit 504 may include one or more mass storage devices, such as a solid-state disk drive (SSDD); flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives; on-die memory or registers associated with the processor circuit 502 (e.g., in low-power implementations); a micro hard disk drive (HDD); three-dimensional cross-point (3D XPOINT) memory; etc. Where FPDs are used, the processor circuit 502 and the memory circuit 504 (and/or the data storage circuit 508) may include logic blocks or logic fabric, memory cells, input/output (I/O) blocks, and other interconnected resources that may be programmed to perform the various functions of the example embodiments discussed herein. The memory cells may be used to store data in lookup tables (LUTs) that are used by the processor circuit 502 to implement various logic functions. The memory cells may include any combination of various levels of memory/storage, including, but not limited to, EPROM, EEPROM, flash memory, SRAM, antifuses, and the like. In an embodiment, the data storage circuitry 508 (also referred to as "storage circuitry 508", etc.), with a shared or respective controller, may provide persistent storage of information, operating systems, and the like. The data storage circuitry 508 may be implemented as a solid-state drive (SSD); a solid-state disk drive (SSDD); a serial AT attachment (SATA) storage device (e.g., a SATA SSD); a flash drive; flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives; a 3D XPOINT storage device; on-die memory or registers associated with the processor circuit 502; a hard disk drive (HDD); a micro HDD; a resistance change memory; a phase change memory; a holographic memory; or a chemical memory; etc.
As shown, the data storage circuitry 508 is included in the computer device 500; however, in other embodiments, the data storage circuitry 508 may be implemented as one or more devices separate from the other components of the computer device 500. In some embodiments, the data storage circuitry 508 may include an operating system (OS) (not shown), which may be a general-purpose operating system or an operating system specifically written for and tailored to the computer device 500. The OS may include one or more drivers, libraries, and/or application programming interfaces (APIs), which provide program code and/or software components, and/or control system configurations, to control and/or obtain/process data from the one or more sensors 514. The components of the computer device 500 may communicate with one another via the bus 516. The bus 516 may include any number of technologies, such as a Local Interconnect Network (LIN); an Industry Standard Architecture (ISA); an Extended ISA (EISA); PCI; extended PCI (PCIx); PCIe; an Inter-Integrated Circuit (I2C) bus; a Serial Peripheral Interface (SPI) bus; a Common Application Programming Interface (CAPI); a point-to-point interface; a power bus; a proprietary bus, for example, the Ultra Path Interface (UPI), the Accelerator Link (IAL), or other proprietary buses used in SoC-based interfaces; or any number of other technologies. In some embodiments, the bus 516 may be a controller area network (CAN) bus system, a Time-Triggered Protocol (TTP) system, or a FlexRay system, which may allow various devices (e.g., the one or more sensors 514, etc.) to communicate with one another using messages or frames. The communication circuitry 505 may include circuitry for communicating with a wireless network or a wired network. For example, the communication circuitry 505 may include the transceiver (Tx) 511 and the network interface controller (NIC) 512. The communication circuitry 505 may include one or more processors (e.g., baseband processors, modems, etc.) that are dedicated to a particular wireless communication protocol. The NIC 512 may be included to provide a wired communication link to a network and/or other devices. The wired communication may provide an Ethernet connection, Ethernet-over-USB, and the like, or may be based on other types of networks, such as DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET. An additional NIC 512 may be included to allow connection to a second network (not shown) or other devices, e.g., a first NIC 512 providing communications to a network over Ethernet, and a second NIC 512 providing communications to other devices over another type of network, e.g., a personal area network (PAN) including a personal computer (PC) device. In some embodiments, the various components of the device 500 (e.g., the one or more sensors 514, etc.) may be coupled to the processor 502 via the NIC 512 as discussed above, rather than via the I/O interface circuitry 518 as discussed below. The Tx 511 may include one or more radios for wirelessly communicating with a network and/or other devices. The Tx 511 may include hardware devices that enable communication with wired networks and/or other devices using modulated electromagnetic radiation through a solid or non-solid medium. Such hardware devices may include switches, filters, amplifiers, antenna elements, and the like to facilitate over-the-air (OTA) communications by generating or otherwise producing radio waves to transmit data to one or more other devices, and converting received signals into usable information, e.g., digital data, which may be provided to one or more other components of the computer device 500.
In some embodiments, the various components of the device 500, such as the one or more sensors 514, etc., may be connected to the device 500 via the Tx 511 as discussed above, rather than via the I/O interface circuitry 518 as discussed below. In one example, the one or more sensors 514 may be coupled with the device 500 via a short-range communication protocol. The Tx 511 may include one or more radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), Long Term Evolution-Advanced Pro (LTE-A Pro), and fifth generation (5G) New Radio (NR). It can be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any cellular wide area radio communication technology, which may include, e.g., a 5G communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, or an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology. Other Third Generation Partnership Project (3GPP) radio communication technologies that may be used include UMTS (Universal Mobile Telecommunications System), FOMA (Freedom of Multimedia Access), 3GPP LTE (Long Term Evolution), 3GPP LTE Advanced (Long Term Evolution Advanced), 3GPP LTE Advanced Pro (Long Term Evolution Advanced Pro), CDMA2000 (Code Division Multiple Access 2000), CDPD (Cellular Digital Packet Data), Mobitex, 3G (Third Generation), CSD (Circuit Switched Data), HSCSD (High-Speed Circuit-Switched Data), UMTS (3G) (Universal Mobile Telecommunications System (Third Generation)), W-CDMA (UMTS) (Wideband Code Division Multiple Access (Universal Mobile Telecommunications System)), HSPA (High Speed Packet Access), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), HSPA+ (High Speed Packet Access Plus), UMTS-TDD (Universal Mobile Telecommunications System - Time Division Duplex), TD-CDMA (Time Division - Code Division Multiple Access), TD-SCDMA (Time Division - Synchronous Code Division Multiple Access), 3GPP Rel. 8 (Pre-4G) (3rd Generation Partnership Project Release 8 (Pre-4th Generation)), 3GPP Rel. 9 (3rd Generation Partnership Project Release 9), 3GPP Rel. 10 (3rd Generation Partnership Project Release 10), 3GPP Rel. 11 (3rd Generation Partnership Project Release 11), 3GPP Rel. 12 (3rd Generation Partnership Project Release 12), 3GPP Rel. 13 (3rd Generation Partnership Project Release 13), 3GPP Rel. 14 (3rd Generation Partnership Project Release 14), 3GPP LTE Extra, LTE Licensed-Assisted Access (LAA), UTRA (UMTS Terrestrial Radio Access), E-UTRA (Evolved UMTS Terrestrial Radio Access), LTE Advanced (4G) (Long Term Evolution Advanced (4th Generation)), cdmaOne (2G), CDMA2000 (3G) (Code Division Multiple Access 2000 (Third Generation)), EV-DO (Evolution-Data Optimized or Evolution-Data Only), AMPS (1G) (Advanced Mobile Phone System (1st Generation)), TACS/ETACS (Total Access Communication System/Extended Total Access Communication System), D-AMPS (2G) (Digital AMPS (2nd Generation)), PTT (Push-to-Talk), MTS (Mobile Telephone System), IMTS (Improved Mobile Telephone System), AMTS (Advanced Mobile Telephone System), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile Telephony System D), Autotel/PALM (Public Automated Land Mobile), ARP (Finnish for Autoradiopuhelin, "car radio telephone"), NMT (Nordic Mobile Telephony), Hicap (high-capacity version of NTT (Nippon Telegraph and Telephone)), CDPD (Cellular Digital Packet Data), Mobitex, DataTAC, iDEN (Integrated Digital Enhanced Network), PDC (Personal Digital Cellular), CSD (Circuit Switched Data), PHS (Personal Handy-phone System), WiDEN (Wideband Integrated Digital Enhanced Network), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network, or GAN standard), the Wireless Gigabit Alliance (WiGig) standard, general mmWave standards (wireless systems operating at 10-90 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay), and so forth. In addition to the standards listed above, any number of satellite uplink technologies may be used for the uplink transceiver, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union) or the ETSI (European Telecommunications Standards Institute). The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated. Implementations, components, and details of the above protocols may be those known in the art, and are omitted herein for the sake of brevity. The input/output (I/O) interface circuitry 518 may include circuitry, such as an external expansion bus (e.g., Universal Serial Bus (USB), FireWire, Thunderbolt, PCI/PCIe/PCIx, etc.), used to connect the computer device 500 with external components/devices (e.g., the one or more sensors 514, etc.). The I/O interface circuitry 518 may include any suitable interface controllers and connectors to interconnect one or more of the processor circuit 502, the memory circuit 504, the data storage circuit 508, the communication circuit 505, and the other components of the computer device 500. The interface controllers may include, but are not limited to, memory controllers, storage controllers (e.g., redundant array of independent disks (RAID) controllers, baseboard management controllers (BMCs), input/output controllers, host controllers, etc.). The connectors may include, for example, buses (e.g., the bus 516), ports, slots, jumpers, interconnects, sockets, modular connectors, and the like. The I/O interface circuitry 518 may couple the device 500 with the one or more sensors 514, etc., via a wired connection, such as using USB, FireWire, Thunderbolt, RCA, a video graphics array (VGA), a digital visual interface (DVI) and/or mini-DVI, a high-definition multimedia interface (HDMI), S-Video, and the like. The one or more sensors 514 may be any device configured to detect events or environmental changes, convert the detected events into electrical signals and/or digital data, and transmit/send the signals/data to the computer device 500. Some of the one or more sensors 514 may be sensors used for providing computer-generated sensory inputs. Some of the one or more sensors 514 may be sensors used for motion and/or object detection. Examples of such one or more sensors 514 may include, inter alia, charge-coupled devices (CCD), complementary metal-oxide-semiconductor (CMOS) active pixel sensors (APS), lensless image-capture devices/cameras, thermographic (infrared) cameras, Light Detection and Ranging (LIDAR) systems, and the like. In some implementations, the one or more sensors 514 may include a lensless image-capture mechanism comprising an array of aperture elements, wherein light passing through the array of aperture elements defines the pixels of an image.
In an embodiment, the motion-detecting ones of the one or more sensors 514 may be coupled with or associated with light-generating devices, for example, one or more infrared projectors to project a grid of infrared light onto a scene, where an infrared camera may record the reflected infrared light to compute depth information. Some of the one or more sensors 514 may be used for position and/or orientation detection, ambient/environmental condition detection, and the like. Examples of such one or more sensors 514 may include, inter alia, microelectromechanical systems (MEMS) with piezoelectric, piezoresistive, and/or capacitive components, which may be used to determine environmental conditions or location information related to the computer device 500. In embodiments, the MEMS may include 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers. In some embodiments, the one or more sensors 514 may also include one or more gravimeters, altimeters, barometers, proximity sensors (e.g., infrared radiation detectors and the like), depth sensors, ambient light sensors, thermal sensors (thermometers), ultrasonic transceivers, and the like. Each of these elements, e.g., the one or more processors 502, the hardware accelerator 503, the memory 504, the data storage circuitry 508, the input/output interface circuitry 518, the one or more sensors 514, the communication circuitry 505 including the Tx 511 and the NIC 512, and the system bus 516, may perform its conventional functions known in the art. In addition, they may be employed to store and host the execution of programming instructions implementing the operations associated with an operating system and/or one or more applications, e.g., neural networks of artificial intelligence applications (for example, via the data storage circuitry 508, the main memory device 504, and the processor 502). The operating system and/or the applications may be implemented in assembler instructions supported by the processor 502, or in high-level languages, e.g., C, that can be compiled into such instructions. Operations associated with the device 500 that are not implemented in software may be implemented in hardware, e.g., via the hardware accelerator 503. The number, capability, and/or capacity of these elements 502-523 may vary, depending on the number of other devices the device 500 is configured to support. Otherwise, the constitutions of the elements 502-523 are known, and accordingly will not be further described. As will be appreciated by one skilled in the art, the present disclosure may be embodied as methods or computer program products. Accordingly, the present disclosure, in addition to being embodied in hardware as earlier described, may take the form of an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects, which may all generally be referred to as a "circuit", "module", or "system". Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium. FIG. 6 illustrates an example computer-readable non-transitory storage medium that may be suitable for use to store instructions that, in response to execution of the instructions by a device or computing device, cause the device or computing device to practice selected aspects of the present disclosure. As shown, the non-transitory computer-readable storage medium 602 may include a number of programming instructions 604.
The programming instructions 604 may be configured to enable a device (e.g., the device 500), and in particular the processor 502, in response to execution of the programming instructions, to perform the various operations associated with, for example, providing a control signal to a controller to control display content received by a second display device, wherein the control signal may be determined based on the availability of a synchronization signal of a first display device, as shown in FIGS. 1-5. In alternate embodiments, the programming instructions 604 may be disposed on multiple computer-readable non-transitory storage media 602 instead. In alternate embodiments, the programming instructions 604 may be disposed on computer-readable transitory storage media 602, such as signals. Any combination of one or more computer-usable or computer-readable media may be utilized. The computer-usable or computer-readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission medium (e.g., those supporting the Internet or an intranet), or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, and the like. Computer program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages, such as Java, Smalltalk, C++, and the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
In the latter scenario, the remote computer can be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (e.g., through the Internet using an Internet service provider). The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions can also be stored in a computer readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instruction means which implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The computer program instructions can also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block of the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. As used herein, a "computer-implemented method" may refer to any method executed by one or more processors, or by a computer system having one or more processors, such as a smart phone (which may include one or more processors), tablet, laptop, set-top box, game console, and so forth.
Embodiments can be implemented as a computer process, a computing system, or as an article of manufacture such as a computer program product of computer readable media. The computer program product can be a computer storage medium readable by a computer system and encoding computer program instructions for executing a computer process. The structures, materials, acts, and equivalents of all elements or steps in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements. The description of the present disclosure has been presented for purposes of illustration and description. Numerous modifications and changes will be apparent to those skilled in the art without departing from the scope of the disclosure. The embodiments were chosen and described in order to best explain the principles of the disclosure. Accordingly, various example embodiments of the present disclosure have been described, including but not limited to: Example 1 can include an apparatus for computing, comprising: a selector; a first link coupled to the selector for receiving a first synchronization signal from a first display device of the apparatus or of a computing device hosting the apparatus, wherein the first synchronization signal is used to synchronize first display content received by the first display device with a first display refresh rate of the first display device; a second link coupled to the selector for receiving a second synchronization signal from a second display device of the apparatus or the computing device, wherein the second synchronization signal is used to synchronize second display content received by the second display device with a second display refresh rate of the second display device; a third link coupled to the selector for providing a control signal to a controller of the apparatus or the computing device to control the second display content received by the second display device; and a fourth link coupled to the selector for providing a selection signal to the selector, the selection signal selecting the first synchronization signal as the control signal to be provided to the controller when the first synchronization signal is available, and otherwise selecting the second synchronization signal as the control signal to be provided to the controller. Example 2 can include the apparatus of example 1 and/or some other examples herein, wherein the first display device is a primary display device of the apparatus or the computing device, and the second display device is a secondary display device of the apparatus or the computing device. Example 3 can include the apparatus of example 1 and/or some other examples herein, wherein the second display content received by the second display device is synchronized by the second synchronization signal from the second display device when the first synchronization signal is not available, or is synchronized by the first display device when the first synchronization signal is available. Example 4 can include the apparatus of example 1 and/or some other examples herein, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device, or the second synchronization signal from the second display device is a tear effect removal timing signal of the second display device.
Example 5 can include the apparatus of example 1 and/or some other examples herein, wherein the first display refresh rate of the first display device or the second display refresh rate of the second display device may vary frame by frame. Example 6 can include the apparatus of example 1 and/or some other examples herein, wherein the first display refresh rate of the first display device or the second display refresh rate of the second display device is 60 Hz, 120 Hz, or 240 Hz. Example 7 can include the apparatus of example 1 and/or some other examples herein, wherein the first display device or the second display device comprises a display, and wherein the display is one of a light emitting diode (LED) display, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), a digital light processing (DLP) display, a plasma display, an electroluminescent panel, an organic light emitting diode (OLED) display, or electronic paper. Example 8 can include the apparatus of example 1 and/or some other examples herein, wherein the first display content received by the first display device or the second display content received by the second display device is based on a protocol selected from the Mobile Industry Processor Interface Display Serial Interface (MIPI-DSI) protocol, the High Definition Multimedia Interface (HDMI) protocol, the DisplayPort (DP) protocol, the Miracast protocol, or the Wireless Display (WiDi) protocol. Example 9 can include the apparatus of example 1 and/or some other examples herein, wherein the first display device functions as a display device or an input device of the apparatus or the computing device. Example 10 can include the apparatus of example 1 and/or some other examples herein, wherein the apparatus comprises: the first display device; and the second display device, wherein the apparatus is a tablet, a mobile device, a smart phone, a smart television (TV), a touch screen display, or a head mounted display (HMD). Example 11 can include the apparatus of example 1 and/or some other examples herein, wherein the computing device further comprises: the first display device; and the second display device, wherein the computing device is a tablet, mobile device, smart phone, smart television (TV), touch screen display, or head mounted display (HMD). Example 12 can include the apparatus of example 1 and/or some other examples herein, wherein the selector is a multiplexer, and the selection signal is a logic power signal for determining power to the first display device. Example 13 can include the apparatus of example 1 and/or some other examples herein, further comprising: detection circuitry for generating the selection signal by detecting whether the first synchronization signal from the first display device is available. Example 14 can include the apparatus of example 13 and/or some other examples herein, wherein the detection circuitry includes a capacitor-resistor time-constant monostable timer having a duration that is longer than the display refresh time corresponding to the display refresh rate of the first display device. Example 15 can include an apparatus for computing, comprising: a communication interface for receiving a control signal, wherein the control signal is a first synchronization signal from a first display device of the apparatus or of a computing device hosting the apparatus when the first synchronization signal is available, and is otherwise a second synchronization signal from a second display device of the apparatus or the computing device,
wherein the first synchronization signal is used to synchronize first display content received by the first display device with a first display refresh rate of the first display device, and the second synchronization signal is used to synchronize second display content received by the second display device with a second display refresh rate of the second display device; and a controller coupled to the communication interface for determining, based on the control signal, to send the second display content to the second display device. Example 16 can include the apparatus of example 15 and/or some other examples herein, wherein the controller is further configured to: determine, based on the control signal, to send the first display content to the first display device when the first synchronization signal is available. Example 17 can include the apparatus of example 15 and/or some other examples herein, wherein the first display device is a primary display device and the second display device is a secondary display device. Example 18 can include the apparatus of example 15 and/or some other examples herein, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device, or the second synchronization signal from the second display device is a tear effect removal timing signal of the second display device. Example 19 can include the apparatus of example 15 and/or some other examples herein, wherein the first display device or the second display device comprises a display, and wherein the display is one of a light emitting diode (LED) display, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), a digital light processing (DLP) display, a plasma display, an electroluminescent panel, an organic light emitting diode (OLED) display, or electronic paper. Example 20 can include the apparatus of example 15 and/or some other examples herein, wherein the first display content received by the first display device or the second display content received by the second display device is based on a protocol selected from the Mobile Industry Processor Interface Display Serial Interface (MIPI-DSI) protocol, the High Definition Multimedia Interface (HDMI) protocol, the DisplayPort (DP) protocol, the Miracast protocol, or the Wireless Display (WiDi) protocol. Example 21 can include a method for communicating display content, comprising: receiving a control signal, wherein the control signal is equal to a first synchronization signal from a first display device when the first synchronization signal is available, and is otherwise equal to a second synchronization signal from a second display device, wherein the first synchronization signal is used to synchronize first display content received by the first display device with a first display refresh rate of the first display device, and the second synchronization signal is used to synchronize second display content received by the second display device with a second display refresh rate of the second display device; and sending the second display content to the second display device based on the control signal. Example 22 can include the method of example 21 and/or some other examples herein, further comprising: sending the first display content to the first display device based on the control signal when the first synchronization signal is available.
Example 23 can include the method of example 21 and/or some other examples herein, wherein the first display device is a primary display device and the second display device is a secondary display device. Example 24 can include the method of example 21 and/or some other examples herein, wherein the first synchronization signal from the first display device is a tear effect removal timing signal of the first display device, or the second synchronization signal from the second display device is a tear effect removal timing signal of the second display device. Example 25 can include the method of example 21 and/or some other examples herein, wherein the first display content received by the first display device or the second display content received by the second display device is based on a protocol selected from the Mobile Industry Processor Interface Display Serial Interface (MIPI-DSI) protocol, the High Definition Multimedia Interface (HDMI) protocol, the DisplayPort (DP) protocol, the Miracast protocol, or the Wireless Display (WiDi) protocol. Although certain embodiments are shown and described herein for purposes of illustration, the disclosure is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is apparent that the embodiments described herein are limited only by the claims. |
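The selection behavior recited in Examples 1, 13, 14, and 21 above can be summarized in a short sketch. The following Python model is illustrative only; the function names and the timeout-based availability check (standing in for the capacitor-resistor monostable timer of Example 14) are assumptions of this sketch, not part of the disclosure.

import time

def first_sync_available(last_edge_time, refresh_period_s, margin=1.5):
    # Software stand-in for the monostable-timer detection circuitry of
    # Example 14: the first sync signal is treated as unavailable if no
    # edge has arrived within ~1.5 refresh periods (the timer duration
    # exceeds one display refresh time).
    return (time.monotonic() - last_edge_time) < margin * refresh_period_s

def select_control_signal(first_sync, second_sync, available):
    # Multiplexer behavior of Example 1: forward the first display
    # device's sync signal as the control signal when it is available,
    # and otherwise forward the second display device's sync signal.
    return first_sync if available else second_sync

For a 60 Hz panel (Example 6), refresh_period_s would be 1/60 s, so the detection window in this sketch is roughly 25 ms.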
A method of forming a multi-layered interconnect structure is provided. A first conductive pattern is formed over an insulation layer. A first dielectric material is deposited over the first conductive pattern, and plugs are formed in the first dielectric material. A second conductive pattern is formed over the first dielectric material and plugs so as to form the multi-layered interconnect structure in part. Then, the first dielectric material is stripped away to leave the multi-layered interconnect structure exposed to air. A thin layer of second dielectric material is deposited so as to coat at least a portion of the interconnect structure. Next, a thin layer of metal is deposited so as to coat the at least a portion of the interconnect structure coated with the thin layer of second dielectric material. A third dielectric material is deposited over the interconnect structure to replace the stripped away first dielectric material. |
What is claimed is: 1. A semiconductor device comprising: a substrate; an insulating layer formed on the substrate; and an interconnect structure including: a first conductive pattern formed on the insulating layer, the conductive pattern including at least two conductive lines adjacent one another; a first dielectric material selected from at least one of: polyimides, Teflon, and aerogels and formed over the conductive pattern, the first dielectric material filling a space between the at least two conductive lines; at least one plug formed in the first dielectric material; a second conductive pattern formed over the first dielectric material and the at least one plug; wherein at least a portion of the first and the second conductive patterns associated with the interconnect structure is coaxial in nature; and a second dielectric material formed over the interconnect structure, the second dielectric material having a dielectric constant less than about 3.0. 2. The semiconductor device of claim 1, wherein the at least coaxial portion of the interconnect structure includes a central conductor, an outer conductor and an insulating material interposed between the central conductor and outer conductor. 3. The semiconductor device of claim 2, the insulating material circumferentially surrounding the central conductor. 4. The semiconductor device of claim 2, the outer conductor circumferentially surrounding the insulating material. 5. The semiconductor device of claim 1, the first dielectric material having a dielectric constant less than about 2.5. 6. The semiconductor device of claim 1, the first dielectric material having a dielectric constant less than about 2.0. 7. The semiconductor device of claim 1, the second dielectric material including at least one of SiO2, Si3N4, and silicon oxynitride. 8. The semiconductor device of claim 1, the outer metal conductor including tantalum nitride. 9. A semiconductor device comprising: a substrate; an insulating layer formed on the substrate; a first conductor line formed on at least a portion of the insulating layer and having a top surface facing away from the insulating layer; a conductive plug having a lower end connected with a first portion of the top surface of the first conductor line, an upper end facing away from the first conductor line, and at least one side between the upper and lower ends; a second conductor line spaced from the first conductor line and having a bottom surface, a top surface and a side surface, a first portion of the bottom surface of the second conductor line being connected to the upper end of the plug; an insulator coating formed on a second portion of the top surface of the first conductor line, the at least one side of the plug, and the remainder surface of the second conductor line; a conductive coating formed on the insulator coating, wherein at least a portion of the conductive coating circumferentially surrounds at least a portion of the second conductor line; and a dielectric material selected from at least one of: polyimides, Teflon, and aerogels and formed on the conductive coating, wherein the dielectric material has a dielectric constant less than about 3.0. 10. The semiconductor device of claim 9, wherein the insulator coating is formed on at least a second portion of the bottom surface of the second conductor line. 11. The semiconductor device of claim 10, further comprising a dielectric material formed on the conductive coating. 12.
The semiconductor device of claim 11, wherein the conductive coating and the second conductor line are generally coaxial. 13. The semiconductor device of claim 9, wherein the conductive coating and the second conductor line are generally coaxial. 14. The semiconductor device of claim 9, wherein the insulator coating comprises a dielectric material. 15. A semiconductor device comprising: a substrate; a first conductor line having a top surface facing away from the substrate; a conductive plug having a lower end connected with a first portion of the top surface of the first conductor line, an upper end facing away from the first conductor line, and at least one side between the upper and lower ends; a second conductor line spaced from the first conductor line and having a bottom surface thereof connected to the upper end of the plug, and a top surface and a side surface; an insulator coating formed on a second portion of the top surface of the first conductor line, the at least one side of the plug, and the remainder surface of the second conductor line; a conductive coating formed on the insulator coating, wherein at least a portion of the conductive coating circumferentially surrounds at least a portion of the second conductor line; and a dielectric material selected from at least one of: polyimides, Teflon, and aerogels and formed on the conductive coating, wherein the dielectric material has a dielectric constant less than about 3.0. |
TECHNICAL FIELD The present invention generally relates to a multi-layered coaxial interconnect structure and method for making the same. BACKGROUND OF THE INVENTION There is an increasing demand for miniaturization in the integrated circuits industry. This demand has led to a continual reduction in the separation between conductive lines (e.g., metal lines) in order to reduce integrated circuit size and/or increase density. The reduced spacing between the conductive lines has the undesirable effect of increasing the capacitance of whatever material lies between the conductive lines. This phenomenon is known as capacitive crosstalk. In the past, overall integrated circuit (IC) performance depended primarily on device properties; however, this is no longer the case. Parasitic resistance, capacitance and inductance associated with interconnections and contacts of an IC are becoming increasingly significant factors in IC performance. In current IC technology, the speed limiting factor is no longer device delay, but the resistive-capacitive (RC) delays associated with the conductive interconnections (e.g., metal lines) of the IC. Conventional ICs typically employ an interconnect structure wherein a first conductive line is adjacent a second conductive line. If the crosstalk or capacitance between the first conductive line and the second conductive line is high, then the voltage on the first conductive line alters or affects the voltage on the second conductive line. This alteration in voltage may result in the IC being inoperable as a result of misinterpreting logic zeros, logic ones and voltage levels, and consequently incorrectly processing binary and/or analog information. In order to reduce capacitive coupling and therefore reduce capacitive crosstalk, low dielectric constant (low-K) materials have been developed to replace conventional dielectric/insulation materials that lie between conductive lines in order to insulate one conductive line from the other. Conventional insulation materials such as silicon dioxide exhibit a dielectric constant of about 4.0. Newer materials with lower dielectric constants have been developed. For example, polyimides generally exhibit a dielectric constant of about 2.4 to about 3.0; Teflon exhibits a dielectric constant of about 1.6 to 2.2; and aerogels typically exhibit a dielectric constant of about 2. However, the use of many low-K dielectric/insulation materials is not practicable because equipment is not available to properly process the new dielectric/insulation materials in various ICs. Furthermore, the chemical or physical properties of many low-K dielectric/insulation materials are usually difficult to make compatible with or integrate into conventional IC processing. For example, as multiple layers of interconnects are formed, many low dielectric constant materials used to insulate conductive lines exhibit cracking. FIGS. 1 and 2 illustrate the relationship between closely spaced conductive lines and capacitive coupling. Conductive lines 30 are adjacent each other and provide necessary electrical connections between devices of an integrated circuit (not shown). Although only three conductive lines 30 are shown for ease of understanding, it is to be appreciated that many thousands or even millions more such conductive lines may exist in the integrated circuit.
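To put these dielectric constants in perspective, a simple parallel-plate approximation (an illustrative assumption of this sketch; the disclosure does not model the capacitance this way) shows how the line-to-line coupling capacitance scales with the dielectric constant K and the line separation s:

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coupling_capacitance(k, spacing_m, line_height_m, length_m):
    # Parallel-plate estimate of the capacitance between two adjacent
    # lines: C = K * eps0 * (facing area) / separation. Real structures
    # add fringing fields, so this is a lower-bound illustration only.
    return k * EPS0 * line_height_m * length_m / spacing_m

# Hypothetical geometry: 0.5 um tall lines, 100 um run, 0.2 um apart.
c_sio2 = coupling_capacitance(4.0, 0.2e-6, 0.5e-6, 100e-6)  # silicon dioxide
c_aero = coupling_capacitance(2.0, 0.2e-6, 0.5e-6, 100e-6)  # aerogel

Halving the spacing doubles the estimated capacitance, while replacing silicon dioxide (K of about 4.0) with an aerogel (K of about 2) halves it, which is the motivation for the low-K materials discussed above.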
As noted above, the increasing demand for miniaturization in the integrated circuits industry has led to a continual reduction in the separation between the conductive lines 30 in order to reduce integrated circuit size. However, the reduced spacing between the conductive lines 30 has the undesirable effect of increasing the capacitance of whatever material lies between the conductive lines 30, resulting in capacitive crosstalk between adjacent conductive lines. A quantity known as pitch (pitch = w + s) is often employed to characterize capacitive crosstalk for adjacent conductive lines used in the integrated circuit industry, where "w" is the cross-sectional width of a conductive line and "s" is the distance of separation between adjacent conductive lines. FIG. 2 graphically illustrates the capacitance between the conductive lines 30 as a function of physical separation. A reduction in pitch is an ongoing activity in the integrated circuit industry in order to optimize substrate surface area utilization in integrated circuits. The capacitance between the conductive lines 30, labeled CCL in FIG. 2, is shown to increase exponentially as pitch is reduced, that is, as the conductive lines 30 are brought closer together. The increase in capacitive coupling resulting from the conductive lines 30 being brought closer together contributes to capacitive crosstalk between the adjacent conductive lines 30. Since market forces are driving integrated circuitry towards bringing the conductive lines 30 closer together in order to maximize substrate surface utilization, insulation having a low dielectric constant is required between the conductive lines 30 in order to isolate the conductive lines 30 from one another, lower the capacitive coupling between them, and in turn reduce capacitive crosstalk. Conventional semiconductor devices, such as those fabricated as described above, do not provide sufficient insulation between the conductive lines 30 to overcome capacitive crosstalk between closely spaced conductive lines, particularly at higher frequencies approaching the gigahertz range. In view of the above, it would be desirable to have a semiconductor fabrication method which provides an insulation material between conductive lines having a dielectric constant suitable for attaining higher IC speed and meeting increasing substrate surface utilization requirements. Furthermore, it would be desirable for such a method to also provide for formation of a coaxial interconnect structure so as to further enhance IC functionality. SUMMARY OF THE INVENTION The present invention provides a multi-layered interconnect structure which employs a dielectric material having a dielectric constant suitable for overcoming capacitive crosstalk between conductive lines. Furthermore, at least some of the conductive lines of the multi-layered interconnect structure are coaxial in nature, wherein the coaxial conductive lines include a central conductive portion which is surrounded by a thin dielectric material, the thin dielectric material in turn being surrounded by a metal conductor. Thus, a coaxial conductive line of the present invention provides a metal conductor circumferentially surrounding a signal-carrying central conductor with an insulating material interposed therebetween. The central conductor is thus substantially shielded from passing noise and induced electromagnetic fields resulting from changing signals therein, as well as being substantially shielded from externally generated noise and electromagnetic fields.
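As general background (standard coaxial transmission-line theory; this relation is not stated in the disclosure itself), the per-unit-length capacitance of such a coaxial line is

C' = \frac{2\pi \varepsilon_0 K}{\ln(b/a)},

where a is the outer radius of the central conductor, b is the inner radius of the surrounding metal conductor, and K is the dielectric constant of the interposed insulating material. Because the grounded outer conductor sits at a fixed potential, fields from neighboring lines terminate on it rather than on the central conductor, which is the shielding effect just described.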
In making the multi-layered interconnect structure of the present invention, metal lines are formed on a substrate. Then a first dielectric material (e.g., SiO2) is deposited on the metal lines, and vias are formed in the first dielectric material thereafter. Then plugs are formed in the vias, and subsequent interconnect layers are formed over this first interconnect layer accordingly. Once the base multi-level interconnect structure is formed, the first dielectric material is stripped, leaving the multi-level interconnect structure (e.g., conductive lines and plugs) exposed to air. Then a first deposition step is performed to form a thin coat of second dielectric material on the multi-level interconnect structure. A second deposition step is performed thereafter to form a thin coat of metal over the coat of dielectric material so as to make those portions of the multi-layered interconnect structure exposed to the atomic layer deposition (ALD) steps coaxial in nature. Next, a third dielectric material (having a dielectric constant suitable for mitigating capacitive crosstalk between adjacent conductive lines) is deposited on the interconnect structure to replace the first dielectric material which was stripped away. The resulting multi-layered interconnect structure may exhibit superior performance as compared to those fabricated in accordance with conventional techniques. In particular, the present invention provides for employing a dielectric material of desirably low dielectric constant which could not otherwise be employed in conventional IC fabrication processes without exhibiting cracking. Furthermore, the present invention provides for at least some of the conductive lines and plugs of the multi-level interconnect structure to be coaxial in nature, which affords an increased scope of functionality as compared to conventionally fabricated multi-layered interconnect structures. In accordance with one particular aspect of the invention, a method of forming a multi-layered interconnect structure includes the steps of: forming a first conductive pattern over an insulation layer; depositing a first dielectric material over the first conductive pattern; and forming plugs in the first dielectric material. The method further includes the steps of forming a second conductive pattern over the first dielectric material and plugs so as to form a multi-layered interconnect structure; stripping the first dielectric material; and depositing a second dielectric material over the interconnect structure to replace the stripped away first dielectric material. Another aspect of the present invention relates to a method of forming an interconnect structure having at least a coaxial portion, including the steps of: forming a first conductive pattern over an insulation layer; depositing a first dielectric material over the first conductive pattern; and forming plugs in the first dielectric material.
The method also includes the steps of: forming a second conductive pattern over the first dielectric material and plugs so as to form the interconnect structure; stripping the first dielectric material; depositing a thin layer of second dielectric material so as to coat at least a portion of the interconnect structure; and depositing a thin layer of metal so as to coat the at least a portion of the interconnect structure coated with the thin layer of second dielectric material. Yet another aspect of the present invention relates to a semiconductor device including: a substrate; and an insulating layer formed on the substrate. The semiconductor device further includes an interconnect structure which comprises: a first conductive pattern formed on the insulating layer, the conductive pattern including at least two conductive lines adjacent one another; a first dielectric material formed over the conductive pattern, the first dielectric material filling a space between the at least two conductive lines; at least one plug formed in the first dielectric material; and a second conductive pattern formed over the first dielectric material and the at least one plug. At least a portion of the interconnect structure is coaxial in nature. Still another aspect of the present invention relates to a method of forming an interconnect structure having at least a coaxial portion, including the steps of: forming a first conductive pattern over an insulation layer; depositing a first dielectric material over the first conductive pattern; and forming plugs in the first dielectric material. The method also includes the steps of forming a second conductive pattern over the first dielectric material and plugs so as to substantially form the interconnect structure; stripping the first dielectric material; depositing a thin layer of second dielectric material so as to coat at least a portion of the interconnect structure; depositing a thin layer of metal so as to coat the at least a portion of the interconnect structure coated with the thin layer of second dielectric material; and depositing a third dielectric material over the interconnect structure to replace the stripped away first dielectric material. To the accomplishment of the foregoing and related ends, the invention, then, comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic cross-sectional illustration of a portion of a prior art semiconductor device including a conductive pattern; FIG. 2 is a graphical illustration of a relationship between conductive line pitch and capacitive coupling; FIG. 3a is a partial schematic cross-sectional illustration of a multi-layered interconnect structure in accordance with the present invention; FIG. 3b is a schematic cross-sectional illustration of a first layer of conductive lines being formed on a semiconductor substrate in accordance with the present invention; FIG. 3c is a schematic cross-sectional illustration of the first layer of conductive lines of FIG. 
3b from a perpendicular view in accordance with the present invention; FIG. 4 is a schematic cross-sectional illustration of dielectric material being deposited over the conductive lines of FIG. 3 in accordance with the present invention; FIG. 5 is a schematic cross-sectional illustration of vias being formed in the dielectric material of FIG. 3 in accordance with the present invention; FIG. 6 is a schematic cross-sectional illustration of the vias of FIG. 5 being filled to form plugs in accordance with the present invention; FIG. 7 is a schematic cross-sectional illustration of a second layer of conductive lines being formed over the plugs and dielectric layer of FIG. 6 in accordance with the present invention; FIG. 8 is a schematic cross-sectional illustration of dielectric material being formed over the second layer of conductive lines of FIG. 7 in accordance with the present invention; FIG. 9 is a schematic cross-sectional illustration of the interconnect structure of FIG. 7 after the dielectric material has been stripped away in accordance with the present invention; FIG. 10 is a schematic partial perspective illustration of the interconnect structure of FIG. 9 in accordance with the present invention; FIG. 11 is a schematic perspective illustration of the interconnect structure of FIG. 10 undergoing a first atomic layer deposition step in accordance with the present invention; FIG. 12 is a schematic perspective illustration of the interconnect structure of FIG. 11 undergoing a second atomic layer deposition step in accordance with the present invention; FIG. 13 is a schematic cross-sectional illustration of a coaxial interconnect line in accordance with the present invention; FIG. 14 is a schematic perspective illustration of the interconnect structure of FIG. 12 having a dielectric material of suitably low dielectric constant being deposited thereon; and FIG. 15 is a schematic perspective illustration of the interconnect structure of FIG. 14 after the dielectric material having suitably low dielectric constant has been deposited over the interconnect structure in accordance with the present invention. DETAILED DESCRIPTION OF THE INVENTION The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. The method of the present invention will be described with reference to the formation of a multi-layered interconnect structure having at least a coaxial portion, and utilizing a dielectric material having a dielectric constant suitable for mitigating capacitive coupling between closely spaced conductive lines of the multi-layered interconnect structure. The following detailed description is of the best modes presently contemplated by the inventors for practicing the invention. It should be understood that the description of these preferred embodiments is merely illustrative and that it should not be taken in a limiting sense. Referring initially to FIG. 3a, a partial cross-sectional illustration is shown of a multi-layered interconnect structure 40 in accordance with the present invention. The multi-layered interconnect structure 40 includes a first conductive line 42 formed on an insulated substrate 44. The first conductive line 42 is part of a first conductive pattern 43 (FIG. 3c) which lies atop the insulated substrate 44.
The structure 40 includes a plug 46 for providing electrical connection between the first conductive line 42 and a second conductive line 48 (which is part of a second conductive pattern 45 (FIG. 8)). At least a portion of the multi-layered interconnect structure 40 is coaxial in nature. As shown, the second conductive line 48 and the plug 46 are circumferentially surrounded by a thin insulating dielectric material 50. Circumferentially surrounding the thin insulating dielectric material 50 is a thin metal conductor 52, which may be grounded. Since the first conductive line 42 lies on top of the insulated substrate 44, the side of the first conductive line 42 facing the insulated substrate 44 will not be covered with the dielectric material 50 or the metal conductor 52. The coaxial portion of the structure 40 will enhance the functionality of the resulting IC employing the multi-layered interconnect structure 40. The multi-layered interconnect structure 40 is covered with a dielectric material 54 suitable for facilitating the mitigation of capacitive crosstalk between adjacent conductive lines of the structure 40. As will be discussed in greater detail below, the present invention employs a robust first dielectric material (e.g., SiO2, Si3N4) during initial formation of the multi-layered interconnect structure 40 of the IC, and after the base multi-layered interconnect structure 40 is substantially complete, deposition steps are performed on the multi-layered interconnect structure 40 to form at least some coaxial interconnect lines and/or plugs using the dielectric material 50 (second dielectric material). Thereafter, the dielectric material 54 (third dielectric material, e.g., polyimides, Teflon, aerogels) having a dielectric constant suitable for mitigating capacitive crosstalk between adjacent conductive lines is deposited over the multi-layered interconnect structure 40. As noted above, newer dielectric materials having low dielectric constant, such as Teflon, are not amenable to being used during fabrication of the multi-layered interconnect structure because the newer dielectric materials are relatively weak and tend to crack during the fabrication steps. However, because the newer dielectric material is applied after the multi-layered interconnect structure 40 is substantially complete, it may be employed without being exposed to the harsh fabrication steps that would otherwise cause cracking. Referring now to FIG. 3b, an insulating layer is formed on the substrate 44 via a suitable deposition technique such as, for example, chemical vapor deposition (CVD) or a spinning technique. Both the insulating layer and substrate are illustrated in common for ease of understanding and are referenced by number 44. A conductive pattern 43 (e.g., including conductive lines 43a, 43b and 43c) is formed over the insulating/substrate layer 44. Preferably, a metalization pattern is formed by depositing a metalization layer and patterning it employing suitable photolithographic and etching techniques (e.g., anisotropic etching such as reactive ion etching). The conductive pattern 43 may be deposited by any of a variety of suitable deposition techniques, such as CVD processes including low pressure chemical vapor deposition (LPCVD) and plasma enhanced chemical vapor deposition (PECVD), melting or sputtering.
The conductive pattern 43 formed in the claimed invention may comprise any suitable conductive material employable for forming conductive patterns in the semiconductor industry. Preferably, the conductive material includes a member selected from the group consisting of refractory materials, such as titanium and titanium alloys, tungsten and tungsten alloys, aluminum and aluminum alloys, copper and copper alloys, and polycrystalline silicon. The insulating material 44 employed in the present invention may comprise any suitable insulating material employable in the semiconductor industry for forming insulating layers. Preferably, the insulating material 44 comprises a member selected from the group consisting of nitrides, oxides, oxy-nitrides, polyimides and polymeric materials. Turning now to FIG. 3c, a schematic cross section illustration is shown of a first layer (e.g., pattern) of the conductive lines 43 of FIG. 3b from a view perpendicular to the direction the conductive lines 43a, 43b and 43c are running. FIG. 4 is a schematic cross-sectional view of a first dielectric material 60 (e.g., SiO2, Si3N4) being deposited over the conductive lines 43. The first dielectric material 60 in the exemplary embodiment is preferably silicon dioxide (SiO2); however, it will be appreciated that any suitable dielectric material may be employed to carry out the present invention and falls within the scope of the claims. Any suitable technique for depositing the dielectric material 60 may be employed, such as PECVD, or high density plasma chemical vapor deposition (HDPCVD) techniques such as electron cyclotron resonance (ECR), inductor coupled plasma (ICP), transformer coupled plasma (TCP) and helicon plasma. The dielectric material 60 is deposited over the conductive pattern 43 so as to form a seal of dielectric material over the conductive lines 43a, 43b and 43c and the spaces between the conductive lines 43a, 43b and 43c. Turning now to FIGS. 5 and 6, vias 70 are formed within the dielectric material 60 and the vias 70 are filled with a suitable material (e.g., tungsten, copper) to form plugs which provide conductive pathways through the dielectric layer 60 to connect interconnects of different conductor layers. Although the present invention is described with respect to forming only two conductive layers for ease of understanding, it is to be appreciated that many more conductive layers separated by the dielectric material 60 may be formed, and such structures are intended to fall within the scope of the hereto appended claims. While different conductive materials are suitable to fill the vias 70, in this example tungsten forms conductive material 72. The tungsten filled vias 70 are referred to as tungsten plugs 74. Copper, aluminum or an aluminum alloy are exemplary of other plug conductors. The plugs 74 may comprise any other suitable conductive material, which is chemical-vapor deposited with a flow rate sufficient to fill the vias 70, which may have an aspect ratio of less than, for example, 4:1. The plug material 72 is removed from the upper surfaces of dielectric 60 using, for example, sacrificial etchback or chemical mechanical polishing (CMP). In an alternative embodiment, the conductive pattern 43 and/or plugs 74 may include copper (Cu). Since Cu easily diffuses into dielectric materials such as SiO2, a damascene process may be employed to create a barrier layer (e.g., Ta2N) between the Cu and the dielectric so as to mitigate diffusion of the Cu into the dielectric 60.
Damascene techniques are known in the art, and therefore further discussion related thereto is omitted for sake of brevity. It is to be appreciated that the damascene technique may be performed to generate a barrier layer between the dielectric and any other suitable metal (e.g., tungsten) employed in the formation of the conductive pattern 43 and/or plugs 74. FIG. 7 illustrates the second conductive layer 45 (including conductive lines 82, 84 and 86) being formed over the dielectric material 60 and the plugs 74. Thus, the plugs 74 provide for electrically connecting respective lines of the second conductive layer 45 to respective lines of the first conductive layer 43. The second conductive layer 45 is formed in a manner substantially similar to the manner of forming the first conductive layer 43, and therefore further discussion related thereto is omitted for sake of brevity. The first conductive layer 43, the plugs 74 and the second conductive layer 45 collectively form the multi-layered interconnect structure 40. Of course, the multi-layered interconnect structure 40 may include additional conductive layers and layers of plugs. FIG. 8 is a cross-sectional illustration of the multi-layered interconnect structure 40 after additional first dielectric material 60 is deposited thereon to cover the second conductive layer 45 and fill spaces 92 between lines of the second conductive layer 45. Turning now to FIG. 9, the first dielectric material 60 is shown stripped away to leave the multi-layered interconnect structure 40 exposed to air. In the preferred embodiment, hydrofluoric acid (HF) is employed to strip the first dielectric 60 via an HF dip; however, any suitable technique (e.g., wet etch) for stripping the first dielectric material may be employed. The HF dip allows stripping of the first dielectric material 60 (e.g., SiO2) without stripping the barrier layer (not shown) that may be coating the conductive patterns 43 and 45 and/or plugs 74. Although air is an ideal insulator (K ≈ 1), the multi-layered interconnect structure 40 is substantially weak and maintaining integrity thereof without supportive materials would be difficult. Thus, as will be discussed in greater detail below, after the base interconnect structure 40 is complete, the third dielectric material 54 (see FIG. 14) is deposited over the interconnect structure to replace the stripped away first dielectric material 60. The third dielectric material 54 provides support to the interconnect structure 40 and facilitates mitigating capacitive crosstalk between adjacent conductive lines of the interconnect structure 40. FIG. 10 is a schematic partial perspective view of the multi-layered interconnect structure 40 shown in FIG. 9. FIG. 11 is a schematic illustration of the multi-layered interconnect structure 40 undergoing a first atomic layer deposition (ALD) step 110 wherein the thin layer of second dielectric material 50 (e.g., SiO2, silicon nitrides (SixNz), and silicon oxynitrides (SixOyNz), where "x", "y" and "z" are integers) is formed on at least a portion of the multi-layered interconnect structure 40. This thin layer of second dielectric material 50 will be an insulating material which is interposed between a central conductor and an outer conductor of a coaxial interconnect in accordance with the present invention. FIG.
12 illustrates the multi-layered interconnect structure 40 undergoing a second ALD step 124 wherein the thin layer of metal 52 (e.g., Ta2N) is formed over the portions of the multi-layered interconnect structure 40 selected to be coated with the second dielectric material 50 applied in the first ALD step 110. The thin layer of metal 52 will be the outer conductor of the coaxial interconnect. Thus, the first and second ALD steps provide for forming coaxial interconnects (e.g., conductive lines or plugs which are coaxial in nature). Although atomic layer deposition is a preferred technique for depositing the thin dielectric material layer and thin metal layer, it is to be appreciated that other techniques (e.g., vacuum evaporation, chemical vapor deposition, electrochemical deposition) may be employed. Any suitable deposition technique or other means for carrying out the present invention may be employed and is intended to fall within the scope of the present claims. ALD and the other aforementioned techniques are well known in the art, and thus, based on the description herein, one skilled in the art could readily carry out suitable deposition techniques to generate the coaxial portions of the present invention. Referring now to FIG. 13, a schematic cross-sectional illustration of a coaxial interconnect 134 is shown. The coaxial interconnect includes a central conductor 136, the outer metal conductive material 52 and the dielectric material 50 interposed between the central conductor 136 and the outer conductive material 52. The coaxial interconnect 134 provides for the signal-carrying central conductor 136 to be circumferentially surrounded by the outer metal conductor 52, which may be grounded, with the insulating dielectric material 50 interposed therebetween. The central conductor 136 is thus substantially shielded from passing noise and induced electromagnetic fields resulting from changing signals therein, as well as being substantially shielded from externally generated noise and electromagnetic fields. Turning now to FIG. 14, the interconnect structure 40 is shown substantially complete; however, the interconnect structure 40 is exposed to air. Thus, the interconnect structure 40 is in a form which may not be structurally sound enough to withstand prolonged exposure to vibrations, movement, etc. Therefore, a step 150 of depositing the third dielectric material 54 over the interconnect structure 40 is performed. The third dielectric material 54 is selected to have a dielectric constant suitable to facilitate mitigating capacitive crosstalk between conductive lines of the interconnect structure 40. Accordingly, the third dielectric material 54 preferably has a dielectric constant less than 3.0. Any suitable material (e.g., polyimides, Teflon, aerogels) may be employed as the third dielectric material 54 and is intended to fall within the scope of the hereto appended claims. The third dielectric material 54 is deposited over the interconnect structure 40 to replace the stripped away first dielectric material 60. Any suitable technique for depositing the third dielectric material 54 may be employed. For example, any of the following deposition techniques may be employed: PECVD, or high density plasma chemical vapor deposition (HDPCVD) techniques such as electron cyclotron resonance (ECR), inductor coupled plasma (ICP), transformer coupled plasma (TCP) and helicon plasma.
The third dielectric material 54 is deposited over the interconnect structure 40 so as to form a seal of dielectric material over conductive lines of the various conductive patterns and fill spaces between the conductive lines and the conductive patterns. Thus, the third dielectric material 54 provides for structurally supporting the interconnect structure 40 and also provides for facilitating the mitigation of capacitive crosstalk between adjacent conductive lines of the interconnect structure 40. FIG. 15 is a schematic partial perspective illustration of the interconnect structure 40 complete in relevant part. What has been described above are preferred embodiments of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. |
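As an illustrative recap only (the step labels below are this sketch's own wording, not terminology from the disclosure), the forming sequence described above can be written out to show why the fragile low-K material never sees the patterning steps that crack it in conventional flows:

# Ordered strip-and-replace process flow sketched from the description
# above. Reference numerals follow the figures; the list itself is an
# illustrative summary, not an exhaustive recipe.
PROCESS_FLOW = [
    "form first conductive pattern 43 on insulating layer 44",
    "deposit first dielectric 60 (e.g., SiO2)",
    "etch vias 70; fill plugs 74; remove excess by etchback or CMP",
    "form second conductive pattern 45",
    "deposit additional first dielectric 60",
    "strip first dielectric 60 (e.g., HF dip)",
    "first ALD step: thin second dielectric 50 over structure 40",
    "second ALD step: thin outer metal 52 (coaxial portions)",
    "deposit low-K third dielectric 54 (polyimide, Teflon, aerogel)",
]

# The low-K deposition is the final step, after every patterning and
# etch step, so the low-K film is never exposed to them.
assert PROCESS_FLOW[-1].startswith("deposit low-K")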
A high-voltage transistor structure is provided that includes a self-aligned isolation feature between the gate and drain. Normally, the isolation feature is not self-aligned. The self-aligned isolation process can be integrated into standard CMOS process technology. In one example embodiment, the drain of the transistor structure is positioned one pitch away from the active gate, with an intervening dummy gate structure formed between the drain and active gate structure. The dummy gate structure is sacrificial in nature and can be utilized to create a self-aligned isolation recess, wherein the gate spacer effectively provides a template for etching the isolation recess. This self-aligned isolation forming process eliminates a number of the variation and dimensional constraints attendant to non-aligned isolation forming techniques, which in turn allows for a smaller footprint and tighter alignment so as to reduce device variation. The structure and forming techniques are compatible with both planar and non-planar transistor architectures. |
An integrated circuit structure, comprising:a fin comprising silicon;a trench isolation region (209) having a first side, a second side, a bottom, an upper portion and a lower portion, the upper portion above the fin and the lower portion in the fin, the lower portion defining a first fin portion and a second fin portion;a first gate spacer (203) along at least part of the first side of the trench isolation region, the first gate spacer having a bottom above the bottom of the trench isolation region;a second gate spacer (203) along at least part of the second side of the trench isolation region, the second gate spacer having a bottom above the bottom of the trench isolation region;a gate electrode (207) over the first fin portion, the gate electrode (207) having a first side and a second side;a third gate spacer (203) along the first side of the gate electrode (207); anda fourth gate spacer (203) along the second side of the gate electrode (207).The integrated circuit structure of claim 1, wherein the gate electrode (207) has an upper surface co-planar with an upper surface of the isolation structure (209).The integrated circuit structure of claim 2, further comprising a gate dielectric (204) between the gate electrode (207) and the fin.The integrated circuit structure of claim 3, wherein the gate dielectric includes a U-shaped gate dielectric layer.The integrated circuit structure of claim 1, wherein the isolation structure extends into an N-well in the fin. |
BACKGROUND High-voltage transistors are a foundational element of numerous applications. For instance, such transistors are frequently used in constructing circuitry such as input/output (IO) circuitry, electrostatic discharge protection circuitry, clamps, and other off-chip interfaces of system-on-chip (SoC) configurations. An example high-voltage transistor device is the vertical drain metal oxide semiconductor (VDMOS) transistor. In a VDMOS transistor, the drain is separated from the gate through the use of shallow-trench isolation. As high voltage is applied to the drain, the voltage is reduced before reaching the intrinsic gate, enabling higher voltage operation. There are a number of non-trivial challenges with this integration scheme. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates a cross-sectional view of a standard high-voltage transistor device annotated with a number of critical dimensions, which may be helpful to understand various embodiments of the present disclosure. Figure 2 illustrates a cross-sectional view of a high-voltage transistor device configured in accordance with an embodiment of the present disclosure. Figures 3a-g collectively illustrate a method for forming a high-voltage transistor device, in accordance with an embodiment of the present disclosure. Figures 3f-g' collectively illustrate a method for forming a high-voltage transistor device, in accordance with another embodiment of the present disclosure. Figure 4 illustrates a computing system implemented with one or more integrated circuit structures configured in accordance with an embodiment of the present disclosure. As will be appreciated, the figures are not necessarily drawn to scale or intended to limit the disclosure to the specific configurations shown. For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of a structure may have less than perfect straight lines and right angles, and some features may have surface topology or otherwise be non-smooth, given real world limitations of the processing equipment and techniques used. In short, the figures are provided merely to show example structures. DETAILED DESCRIPTION A high-voltage transistor structure is provided that includes a self-aligned isolation feature between the gate and drain. Normally, the isolation feature is not self-aligned. The self-aligned isolation process can be integrated into standard complementary metal oxide semiconductor (CMOS) process technology. In one example embodiment, the drain of the transistor structure is positioned one pitch away from the active gate, with an intervening dummy gate structure formed between the drain and active gate structure. In other embodiments, the drain of the transistor structure is positioned multiple pitches away from the active gate, with the intervening dummy gate structure formed therebetween. The dummy gate structure is sacrificial in nature and can be utilized to create a self-aligned isolation recess, wherein the gate spacer effectively provides a template for etching the isolation recess. This self-aligned forming process eliminates a number of the variation and dimensional constraints attendant to non-aligned isolation forming techniques, which in turn allows for a smaller footprint and tighter alignment so as to reduce device variation.
The structure and forming techniques are compatible with both planar and non-planar transistor architectures.
General Overview
As transistor technologies continue to scale and operate at lower core voltages, it is increasingly difficult to monolithically integrate high-voltage devices alongside standard (low-voltage) logic devices. For instance, core logic transistor dimensions reduce at a typical rate of 0.7x per node, while analog interfaces including high-voltage transistors do not scale as aggressively. In short, a large disparity exists between standard logic transistors and high-voltage transistors, which creates process complexity due to the different critical dimensions and densities of the two transistor types. This complexity gives rise to a number of non-trivial challenges with high-voltage transistor integration schemes. For example, the formation of a typical vertical drain n-type metal oxide semiconductor (VDNMOS) transistor relies on multiple process features that are not conducive to aggressive dimensional scaling. A standard VDNMOS transistor forming method uses a shallow-trench isolation (STI) between the drain and the gate to reduce the field under the active gate/drain junction. This is achieved by first patterning the STI into the bare silicon substrate, and then using lithographic re-alignment of the n-type well (N-well) and gate to that STI. Due to process variances, these dimensions are forced to be large relative to certain minimum critical dimensions of the technology, as will be appreciated in light of this disclosure and further discussed with reference to Figure 1. Figure 1 illustrates a cross-sectional view of a standard high-voltage VDNMOS transistor, annotated with a number of critical dimensions that may be helpful to understand various embodiments of the present disclosure. As can be seen, an N-well 111 is formed on a p-type substrate 101, and a P-well 117 is provided by virtue of the p-type substrate 101. A shallow trench isolation (STI) region 109 is formed within the N-well 111 to provide isolation between the drain and gate regions. The N-well 111 may be formed in the substrate 101, for example, through ion implantation and/or diffusion of dopant(s) having the N-type conductivity, which is opposite that of the substrate 101 and P-well 117. The STI region 109 may be formed in the N-well 111, for example, through chemical etching and filling therein with an insulation material, such as oxide or nitride or other suitable insulation material. A gate structure is formed on an upper portion of the N-well 111 and the p-type substrate 101. The gate structure includes gate electrode 107, gate dielectric 104, and gate spacer 103. Diffusion region 115 is formed in the p-type substrate 101 near one edge of the gate electrode 107 to serve as the source region. A similar N+ diffusion region is provided within the N-well 111 for the drain and continues to the edge of the N-well 111 near the other edge of the gate electrode 107. Such diffusion regions may be, for example, heavily doped with N+ dopant(s) to improve contact resistance between the metal contact layer (105a and 105b) and the underlying semiconductor material of the diffusion (source/drain) regions. The diffusion doping scheme can vary, as will be appreciated. An insulation layer (not shown, such as silicon dioxide, silicon nitride, or other suitable insulator material) can then be grown or otherwise deposited over the entire surface of the substrate 101.
The source and drain contacts 105a/105b can then be formed using a contact trench etch process to expose the underlying diffusion area, followed by a deposition of contact metal. In any case, and as previously explained, due to process variances, the various structure dimensions are forced to be large relative to certain minimum critical dimensions of the structure. Specifically, and as can be further seen with respect to Figure 1, critical dimensions CD1 through CD5 must provide enough margin to encompass the process variation associated with each of the following: the width of the STI 109 (CD1); the alignment or overlap of the STI 109 with the gate (CD2); the alignment or overlap of the N-well 111 and the STI 109 (CD3); the alignment or overlap of the N-well 111 and the source diffusion 115 (CD4), which also corresponds to the channel length (the channel is between the N+ diffusion region 115 and the N-well 111); and the alignment or overlap of the N-well 111 and the gate 104/107 (CD5). Furthermore, due to the non-self-aligned nature of the standard formation process, the width of STI 109 (CD1) is restricted to typically 4-5x the minimum gate critical dimension (CD6) of the given technology node. As will be appreciated in light of this disclosure, this imposes limitations on the scalability of the transistor device. Thus, and in accordance with an embodiment of the present disclosure, a high-voltage transistor structure is provided that includes a self-aligned isolation feature between the gate and drain, which can be integrated into, for example, standard CMOS process technology. For instance, the forming process of the isolation feature can be more tightly controlled. For example, in one embodiment, the drain of the transistor structure is positioned one pitch away from the active gate, with an intervening dummy gate structure formed between the drain and active gate structure. The dummy gate structure is sacrificial in nature and can be utilized to create a self-aligned isolation recess, thereby eliminating or otherwise mitigating a number of the variation and dimensional constraints attendant to STI 109 and other non-aligned isolation forming techniques. As will be appreciated in light of this disclosure, the transistor structure and forming techniques are compatible with both planar and non-planar (e.g., FinFET including double-gate and tri-gate, nanowire, ribbon, gate-all-around) transistor architectures. Due to the self-aligned isolation or STI feature, the capability to aggressively scale the high-voltage transistor structure is much greater than that of standard high-voltage transistor structures, which rely on a variable alignment of the isolation (STI 109) to the gate, as previously discussed with reference to Figure 1. In some embodiments, the resulting transistor structure is free of the constraints solely attributable to critical dimensions CD2 and CD3 of a standard VDMOS structure. For instance, and according to one example embodiment, by self-aligning the gate-to-drain isolation recess, the forming process eliminates the process-induced footprint constraints associated with the patterning of the width of the STI 109 (CD1), as well as the alignment or overlap of the STI 109 with the gate (CD2) and the alignment or overlap of the N-well 111 and the STI 109 (CD3).
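By way of a rough, purely illustrative calculation, the following sketch shows how the eliminated margins translate into footprint savings. The specific dimensions used below are hypothetical placeholders (they are not process data from this disclosure); only the 4-5x CD1 rule and the greater-than-four-pitch versus less-than-two-pitch comparison, given later in this disclosure, come from the text itself.

# Illustrative footprint comparison: non-self-aligned STI versus
# self-aligned isolation. All numeric values are hypothetical.

GATE_PITCH_NM = 60.0     # hypothetical contacted gate pitch
MIN_GATE_CD_NM = 20.0    # hypothetical minimum gate critical dimension (CD6)

# Standard flow: STI width (CD1) is restricted to roughly 4-5x the minimum
# gate CD, plus margins for STI-to-gate overlap (CD2) and N-well-to-STI
# overlap (CD3).
cd1_standard = 4.5 * MIN_GATE_CD_NM   # mid-range of the 4-5x rule
cd2_margin = 10.0                     # hypothetical alignment margin
cd3_margin = 10.0                     # hypothetical alignment margin
standard_budget = cd1_standard + cd2_margin + cd3_margin

# Self-aligned flow: CD1 tracks the spacer-to-spacer opening of the
# sacrificial gate, and the CD2/CD3 margins are eliminated.
self_aligned_budget = MIN_GATE_CD_NM

print(f"standard isolation budget:     {standard_budget:.0f} nm")
print(f"self-aligned isolation budget: {self_aligned_budget:.0f} nm")

# Device-level comparison, per the pitch counts given in this disclosure:
print(f"non-self-aligned footprint > {4 * GATE_PITCH_NM:.0f} nm (4 pitches)")
print(f"self-aligned footprint     < {2 * GATE_PITCH_NM:.0f} nm (2 pitches)")

Even with generous placeholder margins, the point of the comparison survives: removing CD2 and CD3 from the budget, and letting the spacer template set CD1, is what permits the device footprint to shrink below two gate pitches.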
By extending the drain one pitch away from the active gate according to some embodiments, the spacer of the dummy gate can be utilized to create a self-aligned isolation recess, thereby eliminating some of the variation and dimensional control limitations of a non-self-aligned structure. The resulting smaller footprint and tighter alignment reduce device variation and provide a compelling alternative for IO transistor utilization and other such high-voltage transistor applications. Any number of structures and processes supporting system-on-chip off-chip interfaces and high-voltage interfaces may benefit from an embodiment of the present disclosure. A transistor structure formed in accordance with an embodiment of the present disclosure may be detected, for example, using transmission electron microscopy (TEM) and scanning electron microscopy (SEM), or other standard imaging technology, to show a self-aligned isolation within the transistor. For example, the gate-to-drain isolation recess may be aligned with a sacrificial gate structure, such that the sidewalls of the gate-to-drain isolation effectively align with the inner sidewalls of the gate spacer of that sacrificial gate structure, when the structure is viewed in cross-section. The isolation may continue up through the space between the gate spacer of that sacrificial gate structure, or alternatively may be under a more fully formed gate structure (configured with gate electrode and gate dielectric between the gate spacer), albeit a non-functional gate structure, given the underlying isolation rather than a channel region. Thus, a transistor structure is provided that enables integration of high-voltage compliant transistors into an aggressively scaled, low-voltage process. As advanced technologies scale, it is increasingly difficult to incorporate traditional wide-gate, large-footprint devices alongside tight-pitch digital transistors. This disclosure enables the footprint and gate length of high-voltage transistors and logic transistors to converge more closely, according to some embodiments.
Architecture and Methodology
Figure 2 illustrates a cross-sectional view of a high-voltage transistor device configured in accordance with an embodiment of the present disclosure. Note that this example embodiment has a particular polarity scheme (N+ diffusion regions along with an N-well sitting in a p-type substrate). Other embodiments may have other polarity schemes, and the present disclosure is not intended to be limited to any particular one. For instance, another example embodiment might have P+ diffusion regions along with a P-well sitting in an n-type substrate. To this end, numerous transistor configurations having any number of polarity schemes, with or without wells, may benefit from the self-aligned isolation techniques provided herein. As can be seen in this example embodiment, an N-well 211 is formed on a p-type substrate 201, and a P-well 217 is provided by virtue of the p-type substrate 201. A self-aligned trench isolation (STI) region 209 is formed within the N-well 211 to provide isolation between the drain and gate regions. The N-well 211 may be formed in the semiconductor substrate 201, for example, through ion implantation and/or diffusion of dopant(s) having the N-type conductivity, which is opposite that of the substrate 201 and P-well 217. A first, active gate structure is formed on an upper portion of the N-well 211 and the p-type substrate 201.
A second, effectively sacrificial gate structure is also provided next to the active gate structure, and is used in forming the STI 209. Diffusion region 215 is formed in the p-type substrate 201 near one edge of the gate electrode 207 to serve as the source region. The gate structure includes gate electrode 207, gate dielectric 204, and gate spacer 203, and may be formed by various gate-first methods, or by gate-last, so-called replacement metal gate (RMG) methods (e.g., where an initially provisioned dummy gate electrode of polysilicon or other dummy material is later removed along with any dummy gate dielectric material and replaced with the desired metal gate electrode and gate dielectric materials). The final active gate structure can be implemented with standard materials. For instance, the gate spacer 203 can be silicon oxide or silicon nitride or any other suitable spacer material. The gate dielectric 204 may be formed, for example, from materials such as silicon dioxide or high-k dielectric materials. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. Further note that, in some embodiments, the gate dielectric 204 is provided on the sidewalls of the gate spacer 203 as well as on the underlying substrate 201, as further shown with dashed lines in Figure 2. Such a U-shaped gate dielectric 204 may be used, for instance, in non-planar transistor configurations, such as FinFET configurations. The gate electrode 207 material may be, for example, aluminum, tungsten, titanium, tantalum, nickel, platinum, highly doped silicon, a silicide of one of these (e.g., titanium silicide, nickel silicide, etc.), or a combination of such material layers. As previously noted, the gate structure may be formed on a planar channel region or a non-planar channel region. For instance, in some embodiments, the gate structure is formed on a fin-shaped semiconductor body that extends from the substrate 201 and provides multiple gates (e.g., double-gate, tri-gate, and gate-all-around channel configurations). In such cases, note that the cross-section of Figure 2 is taken parallel to the fin and through the fin. Alternatively, Figure 2 can also be used to show the cross-section of a planar device. Any number of planar and non-planar configurations can be implemented using the self-aligned isolation techniques provided herein. The sacrificial gate structure may be implemented in the same fashion as the active gate, to keep from deviating from a consistent process. In any case, this neighboring sacrificial gate structure can be used in forming the STI 209. In particular, note that the STI 209 has a cross-sectional width that substantially corresponds to the cross-sectional width between the spacers 203 of the sacrificial gate, this distance being designated as CD1 in Figure 2. Further note that the edges of the STI 209 substantially align to the inside edges of the spacer 203, allowing for some deviation attributable to real-world limitations associated with etching down into the N-well 211.
The STI region 209 may be formed in the N-well 211, for example, through chemical etching of the sacrificial gate electrode material and the underlying N-well 211 semiconductor material, followed by filling the resulting STI trench with an insulation material. The gate spacer 203 of the sacrificial gate structure acts as a template for the STI trench etch and deposition process, and may be implemented with a material to which the STI trench etch process is selective, such that the etch chemistry employed removes any sacrificial gate material (e.g., polysilicon and silicon dioxide) and the underlying semiconductor material of N-well 211 but not the gate spacer 203 material. In some embodiments, masking may be used in conjunction with the gate spacer 203 when forming the STI 209, if a selective etch chemistry is not available, so as to protect exposed materials other than gate spacer 203. Once the STI 209 trench is formed, the STI 209 material can then be deposited therein, such as silicon dioxide, silicon nitride, or other suitable insulator. In some embodiments, the STI 209 material is a high-k dielectric, such as that which may be used for the gate dielectric 204. In the embodiment shown, note that the STI 209 fills the entire trench. In other embodiments, the STI 209 may only fill the part of the trench within the N-well 211, such that the upper part of the trench can be populated with normal gate materials (for the purpose of having a consistent process, even though that particular gate will not have a functional channel). The source and drain diffusion regions (215 and 211, respectively) may be, for example, heavily doped with N+ dopant(s) to improve contact resistance between the metal contact layer (205a and 205b) and the underlying semiconductor material of the diffusion (source/drain) regions. However, the level of doping may vary from one embodiment to the next, as will be appreciated. In some cases, for instance, the N-well 211 is configured as a low-doped drain. Note that the semiconductor material in the diffusion regions 211 and 215 may be native to the substrate 201 (e.g., silicon substrate and diffusion areas, or III-V material substrate and diffusion regions), or alternatively may be a replacement material that is provided by, for example, a recess and deposition process (e.g., silicon substrate and silicon germanium diffusion regions, or gallium arsenide substrate and indium arsenide diffusion regions). Further note that the diffusion regions may be planar or non-planar, just as with the gate and channel region. Also, and as previously explained, other embodiments may have other diffusion polarities, depending on the intended application or transistor functionality, and the example polarity scheme shown is not intended to limit the present disclosure. An insulation layer (not shown, such as silicon dioxide, silicon nitride, or other suitable insulator material) can then be grown or otherwise deposited over the entire surface of the substrate 201, followed by planarization. The source contact 205a and drain contact 205b can then be formed using a contact trench etch process to expose the underlying diffusion areas, followed by a deposition of the contact metal structure, which may include one or more contact layers (e.g., in addition to a standard metal plug layer, the contact may optionally include one or more of a liner layer, barrier layer, resistance-reducing metal layer, capping layer, or any other contact structure layer).
Many configurations are possible, as will be appreciated in light of this disclosure. Due to the self-aligned STI 209 feature, the capability to aggressively scale the high-voltage transistor structure is much greater than that of standard high-voltage transistor structures, which rely on a variable alignment of a non-self-aligned isolation to the gate, as discussed with reference to Figure 1 and STI 109. With reference to the embodiment shown in Figure 2, note that the width of STI 209 (CD1) can be much smaller by virtue of the self-aligned gate-to-drain isolation recess 209. Further note that the constraints associated with the alignment or overlap of the STI 209 with the gate (CD2) and the alignment or overlap of the N-well 211 and the STI 209 (CD3) are eliminated. Thus, by extending the drain to one pitch away from the active gate according to some embodiments, the spacer 203 of the sacrificial gate can be utilized to create a self-aligned isolation recess, thereby eliminating some of the variation and dimensional control limitations of a non-self-aligned structure, and allowing a smaller footprint. Note that the footprint of a transistor device having a non-self-aligned STI is greater than four polysilicon (dummy gate electrode) pitches, while the footprint of a transistor device having a self-aligned STI according to one example embodiment is less than two polysilicon (dummy gate electrode) pitches, assuming the drain of the transistor is positioned one pitch away from the active gate, with the intervening sacrificial gate structure formed between the drain and active gate structure. In still other embodiments, the drain of the transistor may be positioned two or even three pitches away from the active gate, and the footprint may still be smaller than that of a transistor device having a non-self-aligned STI. Figures 3a-g collectively illustrate a method for forming a high-voltage transistor device, in accordance with an embodiment of the present disclosure. Figures 3f' and 3g' illustrate an alternative embodiment, and will be explained in turn. As can be seen, various structures resulting from the example process flows are depicted in cross-section. The transistor being formed might be, for example, a high-voltage VDNMOS transistor configuration or any other high-voltage transistor device requiring an isolation between the gate and a given diffusion region. Other embodiments may include, for instance, other NMOS and/or PMOS power transistor configurations (e.g., lateral double-diffused MOS, or so-called lateral DMOS). As will be appreciated, the methodology is fully compatible with standard advanced CMOS processing techniques, although any semiconductor processing techniques suitable for power transistor fabrication can be used. Also note that the methodology may further include other intermediate stages and processing steps not shown but that can be carried out using standard techniques or as otherwise normally done. Numerous power transistor fabrication schemes will be apparent in light of this disclosure depending on factors such as desired polarity and target footprint, and the present disclosure is not intended to be limited to any specific ones; rather, any number of such fabrication schemes can be configured with a self-aligned isolation between the gate and drain (or source, as the case may be) as variously provided herein. Figure 3a illustrates a standard well formation, where an N-well 211 is formed inside of a P-well 217, in accordance with one example embodiment of the present disclosure.
The P-well is provided by virtue of a p-type substrate 201. As previously explained, the N-well 211 may be formed in the substrate 201, for example, through ion implantation and/or diffusion of dopant(s) having the N-type conductivity, which is opposite that of the substrate 201 and P-well 217. In one example embodiment, the N-well 211 acts as the low-doped drain, and provides conduction from the drain to the active gate, although any number of doping schemes suitable for a given power transistor application can be used. The substrate 201 may be, for example, a bulk substrate or a semiconductor-on-insulator (SOI) substrate or a multilayer substrate. In any case, the substrate can be doped accordingly to provide the P-well 217, followed by a further doping to provide the N-well 211, as normally done. A common configuration would be, for example, a bulk silicon substrate doped with boron to provide the P-well 217 and phosphorus to provide the N-well 211 and N+ diffusion area 215. Figure 3b shows the resulting structure after the dummy gate structures and N+ diffusion region 215 are formed on the substrate 201, in accordance with one example embodiment of the present disclosure. Two gate structures are shown for purposes of discussion, but any number may be provisioned. As can be seen, each gate structure includes a dummy gate 206 and gate spacer 203 and an optional dummy gate dielectric 202, where the dummy gate 206 (and dummy gate dielectric 202, if present) are re-aligned to the edge of the N-well 211. In this particular embodiment, the N-well is under the gate spacer 203 and a portion of the gate dielectric 202. In a similar fashion, the N+ diffusion region 215 is also under the gate spacer 203 and a portion of the gate dielectric 202. The channel of the device is under the dummy gate dielectric 202 and between the N+ diffusion 215 and the N-well 211. Recall that the channel length is CD4, as shown in Figure 1. In still other embodiments, the channel may be longer, such that at least one of the N-well 211 and the N+ diffusion region 215 is only under the gate spacer 203 and not the gate dielectric 202. In a more general sense, the channel length and doping levels of the N-well 211 and the N+ diffusion region 215 (or the P-well and the P+ diffusion region, as the case may be) can be set depending on desired performance, as normally done. In some such example embodiments, the gate structures can be formed using standard CMOS dummy gate / spacer formation, where a dummy gate dielectric layer of silicon dioxide or polysilicon is deposited, followed by a layer of polysilicon to form the dummy gate layer (note that both dummy gate dielectric 202 and dummy gate 206 can be polysilicon). This gate material layer is then masked and etched to form individual dummy gate stacks, each including an optional dummy gate dielectric 202 and a dummy gate 206. A spacer layer of, for instance, silicon nitride can then be deposited thereon, followed by a planarization process to provide the gate spacer 203 on the sides of each gate stack. Figure 3c shows the resulting structure after the interlayer dielectric (ILD) 219 has been provided and planarized, in accordance with one example embodiment of the present disclosure. The ILD 219 can be any suitable insulator material, such as silicon dioxide, or high-k dielectric materials as previously explained.
In some embodiments, the ILD 219 is provisioned by way of standard epitaxial growth, although any suitable deposition process can be used. Figure 3d shows the resulting structure after hardmask 221 has been provided and patterned as shown to expose the second dummy gate, and after that dummy gate has been etched away to provide trench 208, in accordance with an example embodiment of the present disclosure. The etch can be carried out using any suitable dry and/or wet etch, depending on the material systems in place, as will be appreciated. The hardmask 221 can be, for example, an oxide, nitride, or oxide/nitride bi-layer stack. A selective etch is then performed to remove the dummy gate 206 (and the dummy gate dielectric 202 material, if present) and also an underlying portion of the substrate within the N-well 211. As can be seen, the selectivity of this etch enables the ILD 219 and spacer 203 films to remain substantially unetched, according to one such embodiment. In one example configuration having a silicon substrate 201, silicon dioxide ILD 219, polysilicon gate 206, and a silicon nitride gate spacer 203, this etch is carried out using an etchant such as tetramethyl ammonium hydroxide (TMAH) such that only the polysilicon gate 206 material and silicon substrate 201 material are removed (along with any dummy gate dielectric 202, if present). In other embodiments, the mask 221 may be configured to cover the entire structure except for the sacrificial gate material to be removed by the etch process. Note that the mask need not cover the spacer 203 material, which is also impervious to the etch chemistry that forms trench 208 (no mask alignment is needed in such embodiments). The etch process need only be selective to the mask 221 and spacer 203 in such cases. Numerous suitable patterned mask and wet and/or dry etch schemes can be used to facilitate the removal of the dummy gate. As will be further appreciated, the depth of the trench 208 can vary from one embodiment to the next, but in some cases the trench 208 extends through at least 50% of the N-well 211 height. Example trench depths into the N-well 211 (or P-well, as the case may be) may range, for instance, from 25% to 85% of the overall well height, so long as the self-aligned isolation 209 provides suitable isolation between the corresponding diffusion and channel region but does not completely isolate those two regions from one another. In a more general sense, the trench 208 depth can be set as typically done, or to otherwise ensure proper isolation between the channel and the given diffusion area. Figure 3e shows the resulting structure after a second deposition of insulator material has been provided into trench 208 to provide the self-aligned isolation 209, in accordance with an example embodiment of the present disclosure. Note that the deposition may create an overburden of material at the trench 208 opening, which can be resolved with subsequent planarization. This self-aligned insulator material may be, for example, an oxide or nitride, and may be the same material as ILD 219 in some cases (e.g., silicon dioxide, silicon nitride, or high-k dielectric, depending on the desired degree of isolation and the voltages of the target application).
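As a minimal sketch of the selectivity and depth bookkeeping just described, consider the following. The material sets mirror the TMAH example above, but the grouping into "etched" versus "protected" sets, and the specific depth values, are illustrative assumptions rather than process data; only the 25%-85% range comes from this disclosure.

# Minimal sketch of the self-aligned trench etch described above.

ETCHED_BY_TMAH = {"polysilicon", "silicon"}         # dummy gate 206, N-well 211 material
PROTECTED = {"silicon nitride", "silicon dioxide"}  # spacer 203, ILD 219 in this example

def survives_etch(material: str) -> bool:
    """Return True if the material acts as a template (is not removed)."""
    return material in PROTECTED and material not in ETCHED_BY_TMAH

def trench_depth_ok(trench_depth_nm: float, well_height_nm: float) -> bool:
    """Check the example constraint that the trench extends 25%-85% into the well."""
    fraction = trench_depth_nm / well_height_nm
    return 0.25 <= fraction <= 0.85

# The spacer survives and therefore self-aligns the trench:
assert survives_etch("silicon nitride")
assert not survives_etch("polysilicon")

# Hypothetical numbers: a 50 nm-deep trench into an 80 nm-tall N-well.
assert trench_depth_ok(50.0, 80.0)      # 62.5% of well height: within range
assert not trench_depth_ok(10.0, 80.0)  # too shallow to provide isolation

The design point the sketch encodes is simply that the spacer material must sit outside the etch set, so that the opening between the spacers, not a lithographic alignment, defines the trench.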
Note that the length CD1 of the self-aligned isolation 209 substantially tracks with the spacing between the gate spacers 203 of the dummy gate structure, and is hence self-aligned within the N-well 211. Figure 3f shows the resulting structure after the hardmask 221 has been removed and the standard RMG process has been carried out to replace the other depicted dummy gate structure (on the left side) with an active gate structure that includes a gate dielectric 204 and gate electrode 207, in accordance with an embodiment of the present disclosure. As previously explained, the gate dielectric 204 may be, for example, silicon dioxide or high-k dielectric materials, and the gate electrode may be, for example, aluminum, tungsten, titanium, tantalum, nickel, platinum, highly doped silicon, a silicide of one of these (e.g., titanium silicide, nickel silicide, etc.), or a combination of such material layers. Further recall that the gate dielectric 204 may be provided on the sidewalls of the gate spacer 203 in some embodiments, as generally shown in Figure 3f with dashed lines. Figure 3g shows the resulting structure after standard contact patterning is carried out to provide source contact 205a and drain contact 205b, in accordance with an embodiment of the present disclosure. Contact materials similar to those of the gate electrode 207 can be used to implement the source and drain contacts 205a-b. Any number of suitable contact configurations can be used. Figure 3f' shows an alternative embodiment where, rather than completely filling trench 208 with the insulator material of isolation 209, the isolation 209 only partially fills the trench, and a second gate structure is further provisioned in the top portion of trench 208, including a gate electrode 207 and a gate dielectric 204 (which may also be U-shaped in some cases, as shown with dashed lines). Such a process allows each gate structure to be treated and processed the same at the RMG phase of the process, after the insulator material of isolation 209 has been provisioned. In this example case, the self-aligned isolation 209 is within the underlying semiconductor material of the N-well 211 and under the gate electrode and the gate dielectric. Note that in such embodiments, the deposition of insulator material when forming the self-aligned isolation 209 may only partially fill the trench between the gate spacers 203. After the isolation 209 is formed, a high-k gate dielectric and gate electrode can be provided per RMG processing, yielding the structure in Figure 3f'. As previously explained, this additional gate structure is not functional. Figure 3g' shows the resulting structure after standard contact patterning is carried out to provide source contact 205a and drain contact 205b, in accordance with one such example embodiment of the present disclosure.
Example System
Figure 4 illustrates a computing system implemented with one or more integrated circuit structures configured in accordance with an embodiment of the present disclosure. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including but not limited to a processor 1004 and at least one communication chip 1006 (two are shown in this example), each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein.
As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board or a daughterboard mounted on a main board or the only board of system 1000, etc. Depending on its applications, computing system 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 may include one or more integrated circuit structures configured with high-voltage transistors as provided herein. In some embodiments, multiple functions can be integrated into one or more chips (e.g., for instance, note that the communication chip 1006 can be part of or otherwise integrated into the processor 1004).The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some such example embodiments of the present disclosure, the integrated circuit die of the processor 1004 may include one or more high-voltage transistors including a self-aligned isolation as provided herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.The communication chip 1006 may also include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip 1006 includes one or more high-voltage transistors as provided herein. 
As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein. In various implementations, the computing system 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the system 1000 may be any other electronic device that processes data or employs high-voltage or power transistors configured with self-aligned isolation as described herein. As will be appreciated in light of this disclosure, various embodiments of the present disclosure can be used to improve performance on products fabricated at any process node (e.g., in the micron range, or sub-micron and beyond) by allowing for the use of high-voltage transistors.
Further Example Embodiments
The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent. Example 1 is a high-voltage transistor device, comprising: a first gate structure over a channel region, the first gate structure having a gate spacer; a source region to one side of the first gate structure; a drain region to another side of the first gate structure; and an isolation aligned to a gate spacer of a second gate structure above the isolation, the isolation comprising insulation material and located between the channel region and one of the source or drain regions. Example 2 includes the subject matter of Example 1, further including: a semiconductor substrate having a first polarity, the substrate further configured with the channel region; a well in the substrate and having a second polarity opposite the first polarity, the well associated with one of the source and drain regions; and a diffusion area in the substrate, such that the channel region is between the diffusion area and the well, the diffusion area associated with the other one of the source and drain regions. Example 3 includes the subject matter of Example 2, wherein the substrate is a p-type substrate and the well is an N-well, and the diffusion area is an N+ diffusion area. Example 4 includes the subject matter of Example 2, wherein the substrate is an n-type substrate and the well is a P-well, and the diffusion area is a P+ diffusion area. Example 5 includes the subject matter of any of Examples 2 through 4, wherein the well is under one side of the gate structure and the diffusion area is under an opposing side of the gate structure. Example 6 includes the subject matter of any of the previous Examples, further including: a first metal contact over the diffusion area; and a second metal contact over the well, such that the isolation is between the second metal contact and the channel region. Example 7 includes the subject matter of any of the previous Examples, wherein the isolation passes through the gate spacer of the second gate structure and continues into underlying semiconductor material. Example 8
includes the subject matter of Example 7, wherein the underlying semiconductor material comprises an N-well. Example 9 includes the subject matter of any of the previous Examples, wherein each of the first and second gate structures further comprises at least one of: a gate electrode; and a gate dielectric layer under the gate electrode; wherein the isolation is within underlying semiconductor material and under at least one of the gate electrode and the gate dielectric of the second gate structure. Example 10 includes the subject matter of any of the previous Examples, wherein the device is a planar transistor and the channel region comprises a portion of a planar substrate. Example 11 includes the subject matter of any of Examples 1 through 9, wherein the device is a non-planar transistor and the channel region comprises a fin extending from an underlying substrate. Example 12 includes the subject matter of any of Examples 1 through 9, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor wires. Example 13 includes the subject matter of any of Examples 1 through 9, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor ribbons. Example 14 includes the subject matter of any of the previous Examples, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is native to the underlying substrate of semiconductor material. Example 15 includes the subject matter of any of Examples 1 through 13, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is different from the underlying substrate of semiconductor material. Example 16 includes the subject matter of any of the previous Examples, further including an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is native to the underlying substrate of semiconductor material. Example 17 includes the subject matter of any of Examples 1 through 15, further including an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is different from the underlying substrate of semiconductor material. Example 18 is a semiconductor transistor device, comprising: a semiconductor substrate having a first polarity, the substrate further configured with a channel region; a well in the substrate and having a second polarity opposite the first polarity; a diffusion area formed in the substrate, such that the channel region is between the diffusion area and the well; a first gate structure including a gate electrode above the channel region, and a gate dielectric layer between the gate electrode and the channel region, and a gate spacer; a second gate structure including a gate spacer; an isolation aligned with the gate spacer of the second gate structure and extending into the well below the second gate structure, the isolation comprising insulation material; a first metal contact over the diffusion area; and a second metal contact over the well, such that the isolation is between the second metal contact and the channel region. Example 19 includes the subject matter of Example 18, wherein the substrate is a p-type substrate and the well is an N-well, and the diffusion area is an N+ diffusion area. Example 20 includes the subject matter of Example 18, wherein the substrate is an n-type substrate and the well is a P-well, and the
diffusion area is a P+ diffusion area. Example 21 includes the subject matter of any of Examples 18 through 20, wherein the well is under one side of the gate structure and the diffusion area is under an opposing side of the gate structure. Example 22 includes the subject matter of any of Examples 18 through 21, wherein the isolation passes through the gate spacer of the second gate structure and continues into semiconductor material of the well. Example 23 includes the subject matter of Example 22, wherein the well is an N-well. Example 24 includes the subject matter of any of Examples 18 through 23, wherein the second gate structure further comprises at least one of: a gate electrode; and a gate dielectric layer under the gate electrode; wherein the isolation is within underlying semiconductor material of the well and under at least one of the gate electrode and the gate dielectric of the second gate structure. Example 25 includes the subject matter of any of Examples 18 through 24, wherein the device is a planar transistor and the channel region comprises a portion of a planar substrate. Example 26 includes the subject matter of any of Examples 18 through 24, wherein the device is a non-planar transistor and the channel region comprises a fin extending from an underlying substrate. Example 27 includes the subject matter of any of Examples 18 through 24, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor wires. Example 28 includes the subject matter of any of Examples 18 through 24, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor ribbons. Example 29 includes the subject matter of any of Examples 18 through 28, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is native to the underlying substrate of semiconductor material. Example 30 includes the subject matter of any of Examples 18 through 28, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is different from the underlying substrate of semiconductor material. Example 31 includes the subject matter of any of Examples 18 through 30, further including an underlying substrate of semiconductor material, wherein semiconductor material of the diffusion region and the well is native to the underlying substrate of semiconductor material. Example 32 includes the subject matter of any of Examples 18 through 30, further including an underlying substrate of semiconductor material, wherein semiconductor material of the diffusion region and the well is different from the underlying substrate of semiconductor material. Example 33 is a method for forming a high-voltage transistor device, the method comprising: providing a first gate structure over a channel region, the first gate structure having a gate spacer; providing a source region to one side of the first gate structure; providing a drain region to another side of the first gate structure; and providing an isolation aligned to a gate spacer of a second gate structure above the isolation, the isolation comprising insulation material and located between the channel region and one of the source or drain regions. Example 34 includes the subject matter of Example 33, and further includes: providing a semiconductor substrate having a first polarity, the substrate further configured with the channel region; providing a well in the substrate, the well having a
second polarity opposite the first polarity, the well associated with one of the source and drain regions; providing a diffusion area in the substrate, such that the channel region is between the diffusion area and the well, the diffusion area associated with the other one of the source and drain regions; providing a first metal contact over the diffusion area; and providing a second metal contact over the well, such that the isolation is between the second metal contact and the channel region. Example 35 includes the subject matter of Example 34, wherein the substrate is a p-type substrate and the well is an N-well, and the diffusion area is an N+ diffusion area. Example 36 includes the subject matter of Example 34, wherein the substrate is an n-type substrate and the well is a P-well, and the diffusion area is a P+ diffusion area. Example 37 includes the subject matter of any of Examples 34 through 36, wherein the well is under one side of the gate structure and the diffusion area is under an opposing side of the gate structure. Example 38 includes the subject matter of any of Examples 33 through 37, wherein providing the isolation aligned to the gate spacer comprises: removing at least one of dummy gate electrode material and dummy gate dielectric material from the second gate structure, thereby providing a trench aligned with the gate spacer of the second gate structure, the trench extending into underlying semiconductor material; and depositing the insulation material into the trench to provide the isolation self-aligned to the gate spacer. Example 39 includes the subject matter of Example 38, wherein the isolation passes through the gate spacer of the second gate structure and continues into the underlying semiconductor material. Example 40 includes the subject matter of Example 39, wherein the underlying semiconductor material comprises an N-well. Example 41 includes the subject matter of any of Examples 33 through 40, wherein providing each of the first and second gate structures further comprises at least one of: providing a gate dielectric layer; and providing a gate electrode over the gate dielectric layer; wherein the isolation is within underlying semiconductor material and under at least one of the gate electrode and the gate dielectric of the second gate structure. Example 42 includes the subject matter of any of Examples 33 through 41, wherein the device is a planar transistor and the channel region comprises a portion of a planar substrate. Example 43 includes the subject matter of any of Examples 33 through 41, wherein the device is a non-planar transistor and the channel region comprises a fin extending from an underlying substrate. Example 44 includes the subject matter of any of Examples 33 through 41, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor wires. Example 45 includes the subject matter of any of Examples 33 through 41, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor ribbons. Example 46 includes the subject matter of any of Examples 33 through 45, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is native to the underlying substrate of semiconductor material. Example 47 includes the subject matter of any of Examples 33 through 45, further including an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is different from the underlying
substrate of semiconductor material. Example 48 includes the subject matter of any of Examples 33 through 47, further including an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is native to the underlying substrate of semiconductor material. Example 49 includes the subject matter of any of Examples 33 through 47, further including an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is different from the underlying substrate of semiconductor material. The foregoing description of example embodiments of the present disclosure has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. The following section of the description consists of numbered paragraphs simply providing statements of the invention already described herein. The numbered paragraphs in this section are not claims. The claims are set forth below in the later section headed "claims". 1. A high-voltage transistor device, comprising: a first gate structure over a channel region, the first gate structure having a gate spacer; a source region to one side of the first gate structure; a drain region to another side of the first gate structure; and an isolation aligned to a gate spacer of a second gate structure above the isolation, the isolation comprising insulation material and located between the channel region and one of the source or drain regions. 2. The device of clause 1, further comprising: a semiconductor substrate having a first polarity, the substrate further configured with the channel region; a well in the substrate and having a second polarity opposite the first polarity, the well associated with one of the source and drain regions; and a diffusion area in the substrate, such that the channel region is between the diffusion area and the well, the diffusion area associated with the other one of the source and drain regions. 3. The device of clause 1, further comprising: a first metal contact over the diffusion area; and a second metal contact over the well, such that the isolation is between the second metal contact and the channel region. 4. The device of clause 1, wherein the isolation passes through the gate spacer of the second gate structure and continues into underlying semiconductor material. 5. The device of clause 1, wherein each of the first and second gate structures further comprises at least one of: a gate electrode; and a gate dielectric layer under the gate electrode; wherein the isolation is within underlying semiconductor material and under at least one of the gate electrode and the gate dielectric of the second gate structure. 6. The device of any of clauses 1 through 5, wherein the device is a non-planar transistor and the channel region comprises a fin extending from an underlying substrate. 7. The device of any of clauses 1 through 5, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor wires. 8.
The device of any of clauses 1 through 5, further comprising an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is native to the underlying substrate of semiconductor material.9. The device of any of clauses 1 through 5, further comprising an underlying substrate of semiconductor material, wherein semiconductor material of the channel region is different from the underlying substrate of semiconductor material.10. The device of any of clauses 1 through 5, further comprising an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is native to the underlying substrate of semiconductor material.11. The device of any of clauses 1 through 5, further comprising an underlying substrate of semiconductor material, wherein semiconductor material of the source and drain regions is different from the underlying substrate of semiconductor material.12. A semiconductor transistor device, comprising: a semiconductor substrate having a first polarity, the substrate further configured with a channel region; a well in the substrate and having a second polarity opposite the first polarity; a diffusion area formed in the substrate, such that the channel region is between the diffusion area and the well; a first gate structure including a gate electrode above the channel region, and a gate dielectric layer between the gate electrode and the channel region, and a gate spacer; a second gate structure including a gate spacer; an isolation aligned with the gate spacer of the second gate structure and extending into the well below the second gate structure, the isolation comprising insulation material; a first metal contact over the diffusion area; and a second metal contact over the well, such that the isolation is between the second metal contact and the channel region.13. The device of clause 12, wherein the substrate is a p-type substrate and the well is an N-well, and the diffusion area is an N+ diffusion area.14. The device of clause 12, wherein the well is under one side of the gate structure and the diffusion area is under an opposing side of the gate structure.15. The device of clause 12, wherein the isolation passes through the gate spacer of the second gate structure and continues into semiconductor material of the well.16. The device of clause 12, wherein the second gate structure further comprises at least one of: a gate electrode; and a gate dielectric layer under the gate electrode; wherein the isolation is within underlying semiconductor material of the well and under at least one of the gate electrode and the gate dielectric of the second gate structure.17. A method for forming a high-voltage transistor device, the method comprising: providing a first gate structure over a channel region, the first gate structure having a gate spacer; providing a source region to one side of the first gate structure; providing a drain region to another side of the first gate structure; and providing an isolation aligned to a gate spacer of a second gate structure above the isolation, the isolation comprising insulation material and located between the channel region and one of the source or drain regions.18. 
The method of clause 17, further comprising: providing a semiconductor substrate having a first polarity, the substrate further configured with the channel region; providing a well in the substrate, the well having a second polarity opposite the first polarity, the well associated with one of the source and drain regions; providing a diffusion area in the substrate, such that the channel region is between the diffusion area and the well, the diffusion area associated with the other one of the source and drain regions; providing a first metal contact over the diffusion area; and providing a second metal contact over the well, such that the isolation is between the second metal contact and the channel region. 19. The method of clause 18, wherein the substrate is a p-type substrate and the well is an N-well, and the diffusion area is an N+ diffusion area. 20. The method of clause 18, wherein the well is under one side of the gate structure and the diffusion area is under an opposing side of the gate structure. 21. The method of clause 18, wherein providing the isolation aligned to the gate spacer comprises: removing at least one of dummy gate electrode material and dummy gate dielectric material from the second gate structure, thereby providing a trench aligned with the gate spacer of the second gate structure, the trench extending into underlying semiconductor material; and depositing the insulation material into the trench to provide the isolation self-aligned to the gate spacer. 22. The method of clause 21, wherein the isolation passes through the gate spacer of the second gate structure and continues into the underlying semiconductor material. 23. The method of clause 17, wherein providing each of the first and second gate structures further comprises at least one of: providing a gate dielectric layer; and providing a gate electrode over the gate dielectric layer; wherein the isolation is within underlying semiconductor material and under at least one of the gate electrode and the gate dielectric of the second gate structure. 24. The method of any of clauses 17 through 23, wherein the device is a non-planar transistor and the channel region comprises a fin extending from an underlying substrate. 25. The method of any of clauses 17 through 23, wherein the device is a non-planar transistor and the channel region comprises one or more semiconductor wires.
The invention provides, in one aspect, a method of fabricating a semiconductor device (100). The method includes forming a carbide layer (170) over a gate electrode (150) and depositing a pre-metal dielectric layer (175) over the carbide layer. The method provides a significant reduction in NBTI (negative bias temperature instability) drift. |
CLAIMS What is claimed is: 1. A method of fabricating a semiconductor device, comprising: forming a hydrogen enriched carbide layer over a gate electrode; and depositing a pre-metal dielectric layer over the carbide layer. 2. The method recited in Claim 1, wherein the hydrogen enriched carbide layer contains at least about 32 atom percent of hydrogen. 3. The method recited in Claim 1, further including subjecting the hydrogen enriched carbide layer to an anneal, wherein the anneal is a thermal anneal conducted at a temperature that is greater than a deposition temperature of the hydrogen enriched carbide layer. 4. The method recited in Claim 3, wherein the anneal is conducted subsequent to depositing the pre-metal dielectric layer. 5. The method recited in any of Claims 1 - 4, wherein forming the hydrogen enriched carbide layer includes using a gas mixture comprising carbon, silicon, and nitrogen. 6. The method recited in Claim 5, wherein the gas mixture comprises trimethyl silane or methyl silane, and ammonia. 7. The method recited in Claim 1, wherein the hydrogen enriched carbide layer has a general formula of SiCxNyHz, wherein a value of x ranges from about 10% to about 25%, a value of y ranges from about 0% to about 20%, and a value of z ranges from about 10% to about 25%. 8. A semiconductor device, comprising: a semiconductor substrate; a gate electrode located over the semiconductor substrate; a carbide layer located over the gate electrode; and a pre-metal dielectric layer located over the carbide layer. 9. The semiconductor device recited in Claim 8, wherein the carbide layer contains an atom percent of hydrogen ranging from about 10% to about 25%. 10. The semiconductor device recited in Claim 8, wherein the carbide layer comprises silicon carbide nitride or silicon carbide and has a general formula of SiCxNyHz, wherein a value of x ranges from about 10% to about 25%, a value of y ranges from about 0% to about 20%, and a value of z ranges from about 10% to about 25%. 11. The semiconductor device recited in Claim 10, wherein the carbide layer is silicon carbide nitride (SiCNH) or silicon carbide (SiCH). |
SEMICONDUCTOR DEVICE FABRICATED USING A CARBON-CONTAINING FILM AS A CONTACT ETCH STOP LAYER

The invention is directed in general to a semiconductor device, and more specifically, to a semiconductor device fabricated using a carbon-containing film as a contact etch stop layer.

BACKGROUND

High performance integrated circuits (ICs) have gained wide acceptance and utility in present day computing and telecommunications applications. Consumer demand has resulted in increasing functionality and speed of ICs, which is made possible by the constant shrinking of transistor feature sizes. These smaller transistors offer performance benefits, such as faster speed of operation and lower power, as well as lower cost. However, smaller features result in physical effects that must be compensated for in the processing of the IC device, and these compensating processes can introduce reliability concerns.

The reduction of transistor size demands reduction of transistor feature dimensions, such as the gate oxide thickness, Tox, and gate and channel length. Reduction of Tox is necessary to raise the capacitance of the gate as the transistor threshold voltage, Vt, is reduced as the transistor is scaled down. However, the channel length of state-of-the-art metal oxide semiconductor field effect transistors (MOSFETs) has been reduced to dimensions at which short channel effects have an increasing effect on transistor performance. This effect leads to a higher transistor Vt than would otherwise be necessary from scaling alone, and requires an increasing gate electric field strength, Eox, with each transistor technology generation.

Higher Eox results in greater stress on the gate dielectric and on the interface between the gate dielectric and the channel. The quality of this interface is critical to the reliability of the transistor, as changes at the interface can cause undesirable changes in the transistor performance characteristics, such as increased Vt and off current, and decreased saturated drain current and transconductance. These effects occur primarily on p-MOSFETs (equivalently known as p-channel MOSFETs), and are known as Negative Bias Temperature Instability, or NBTI. NBTI is produced by thermal or voltage stress, but their combination is particularly effective in producing the effect. The activation temperature can be as low as 100°C, and the minimum necessary gate field strength is below 6 MV/cm. These are conditions routinely experienced by MOSFET transistors in current generation integrated circuits. The changes in transistor performance can significantly degrade circuit performance by causing changes in circuit timing, resulting in increased error rates or even device failure. The root cause of NBTI is the formation of trapped charge at the interface between the gate oxide and the channel, which results from the removal of hydrogen at the interface between the channel and the gate dielectric. Hydrogen may be incorporated in the interface fortuitously as a result of hydrogen containing processes during fabrication, and is intentionally introduced at the end of the fabrication process with a forming gas anneal to passivate dangling bonds at the gate oxide-channel interface.
These dangling bonds are a consequence of the lattice mismatch between crystalline silicon in the channel and amorphous silicon dioxide in the gate dielectric, and will result in trapped charge at the interface unless suitably passivated.

Several techniques to reduce NBTI are known, including fluorine implantation of the channel and modification of the nitrogen content of nitrided gate oxide. Fluorine implantation, while effective at stabilizing the interface, introduces other detrimental effects, such as enhanced boron diffusion in the gate oxide and higher junction leakage. Reducing the nitrogen content of the gate also improves NBTI, but this must be weighed against the benefits of nitriding the gate, such as increased dielectric constant and reduction of boron diffusion through the gate dielectric.

Another method that is presently of intense focus within the semiconductor manufacturing industry involves the use of silicon nitride to incorporate hydrogen at the gate oxide interface. While silicon nitride does accomplish this purpose, the amount of free hydrogen that eventually gets incorporated is not sufficient to provide further reduction in NBTI drift.

Accordingly, what is needed in the art is a method of fabricating a semiconductor device that addresses these deficiencies.

SUMMARY

To overcome the deficiencies in the prior art, the invention, in one embodiment, provides a method of fabricating a semiconductor device. This embodiment comprises forming a hydrogen enriched carbide layer over a gate electrode and depositing a pre-metal dielectric layer over the carbide layer.

In another embodiment, the invention provides a semiconductor device that includes a semiconductor substrate, a gate electrode located over the semiconductor substrate, a carbide layer located over the gate electrode, and a pre-metal dielectric layer located over the carbide layer.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a sectional view of one embodiment of a semiconductor device provided by the invention; FIGS. 2-5 illustrate various stages of manufacture of the semiconductor device; and FIG. 6 illustrates a sectional view of an integrated circuit (IC) incorporating the semiconductor device.

DETAILED DESCRIPTION OF THE EMBODIMENTS

FIG. 1 is one embodiment of a semiconductor device 100 of the invention at one stage of manufacture. In this embodiment, the semiconductor device 100 comprises a semiconductor substrate 110. Located over the substrate 110 is an active region 115. Wells 120 and 125 are located in the active region 115. Isolation structures 130 are also located in the active region 115.

In the illustrated embodiment, the semiconductor device 100 includes transistors 135, 140. The transistors 135, 140 may each comprise source/drains 155 and spacers 160. The semiconductor device 100 may further include silicide contacts 165.

The embodiment illustrated in FIG. 1 further includes a hydrogen enriched carbide layer 170 that is located over the gate electrodes 150. It has been found that the carbide layer 170 improves the NBTI of the semiconductor device 100 over those provided by conventional processes. In one aspect, the carbide layer 170 provides advantages over conventional layers, such as silicon nitride, by providing a smaller percentage of end of life (EOL) shift in the drive current of the semiconductor device 100. A reduction in EOL shift extends the device's useful life.
Advantageously, the semiconductor device 100 experiences a reduction in the EOL shift in the drive current that is greater than about 5% when compared to a device that uses conventional materials instead of the carbide layer 170. A pre-metal dielectric (PMD) layer 175 is located over the carbide layer 170, and interconnects 180 and metal lines 185 are located within and over the PMD layer 175. Here the PMD layer 175 is the first dielectric layer in which interconnects 180 are formed and over which metal lines 185 are located.

FIG. 2 shows an embodiment of a semiconductor device 200 of the invention in an early stage of manufacture. In this embodiment, the semiconductor device 200 includes the same features as discussed above, which are numbered similarly. The substrate 210 may be a conventional semiconductor material, such as doped silicon, silicon germanium, gallium arsenide, or silicon-on-insulator (SOI) substrates. An active layer 215 is located over the substrate. The active layer 215 may be a portion of the substrate 210 that is doped to function as an active layer for the device 200, or it may be a conventionally doped epitaxial layer. Wells 220, 225 are located within the active layer 215, and they may be conventionally doped with the same type of dopant, or they may be complementary doped wells, as indicated. Isolation structures 230, such as isolation trenches, electrically isolate wells 220, 225 from each other. Conventional processes and materials may be used to construct these isolation structures 230.

The semiconductor device 200 may also include transistors 235, 240. The transistors 235, 240 may be configured as PMOS or NMOS, or they may be arranged in a complementary configuration, as shown. In certain embodiments, the transistors 235, 240 may comprise conventionally formed components, such as source/drains 255 and gate dielectric layers 245 over which are located gate electrodes 250. The gate electrodes 250 may also include conventionally formed spacers 260 that are located adjacent the gate electrodes 250. In some embodiments, the gate electrodes 250 may be doped polysilicon, silicided polysilicon, metal, or a combination of any of these. The source/drains 255 may include extension regions, such as lightly doped drains (LDDs) that extend under the spacers 260, but in other embodiments, the extension regions may not be present. The spacers 260 may comprise a single layer or multiple layers, as shown, and may be constructed with conventional materials, such as oxides, nitrides, or combinations thereof. At this stage of manufacture, silicide contacts 265, which may be fabricated using conventional processes and materials, have also been formed.

FIG. 3 illustrates the semiconductor device 200 following the deposition of a hydrogen enriched carbide layer 310 over the gate electrodes 250. The carbide layer 310 may be comprised of materials such as silicon carbide nitride (SiCNH) or silicon carbide (SiCH). In one aspect, the carbide layer 310 has a general formula of SiCxNyHz, wherein a value of x ranges from about 10% to about 25% and a value of y ranges from about 0% to about 20%. The range of z depends on the deposition conditions, examples of which are discussed below. The carbide layer 310 may be formed by using a gas mixture comprising carbon, silicon, and nitrogen, and deposition processes such as plasma enhanced chemical vapor deposition (PECVD), atomic layer deposition (ALD), or spin-on processes may be used.
The gas mixture may vary, and non-limiting examples of these gases include trimethyl silane or methyl silane, and ammonia. In one aspect, the total flow rate of the gas mixture ranges from about 1200 sccm to about 3500 sccm; in a more specific example, the flow rate of the hydrocarbon silane gas may range from about 200 sccm to about 500 sccm, while the flow rate of ammonia may range from about 0 sccm to about 1000 sccm, and the flow rate of a carrier gas, such as helium, may range from about 1000 sccm to about 2000 sccm. Deposition temperatures may range from about 200°C to about 400°C. As initially deposited, the carbide layer 310 may contain varying amounts of hydrogen. For example, in one embodiment, the hydrogen enriched carbide layer 310 has the general formula of SiCxNyHz, in which, in one example, the carbide layer contains an atomic amount of hydrogen wherein z ranges from about 30 atom percent to about 40 atom percent or higher. In a more specific embodiment, the atom percent of hydrogen may range from about 32 atom percent to about 38 atom percent. It should be understood that the above-discussed deposition parameters may be varied to achieve various hydrogen concentrations, such as those stated above. The illustrated embodiment shows the carbide layer 310 located directly on the gate electrode 250. However, other embodiments include those where the carbide layer 310 is located over the gate electrodes 250 such that there may be intervening layers located between the gate electrodes 250 and the carbide layer 310, but prior to the first metal level. Moreover, it should be noted that the carbide layer 310 may also function as a contact etch stop layer for the gate electrodes 250 and as a PMD liner. Also, an added advantage is provided in that the contact etch selectivity of the carbide layer 310 to silicon dioxide can be slightly better than that of silicon nitride.

After its deposition, the hydrogen enriched carbide layer 310 is subjected to an anneal 410, as shown in FIG. 4. In one embodiment, the anneal 410 may be conducted at this point, or in another, the anneal 410 may be conducted following the deposition of a subsequent layer, such as a pre-metal dielectric layer. The type of anneal 410 that is conducted may vary depending on the embodiment. For example, in one embodiment, the anneal 410 may be a thermal anneal. In such embodiments, the anneal 410 may be conducted at a temperature that is greater than the deposition temperature of the carbide layer. In a more specific embodiment, the anneal 410 can be conducted at a temperature that ranges from about 200°C to about 450°C. In such instances, the deposition temperature of the carbide layer may be less than about 400°C. These temperatures are illustrative only and other temperatures are within the scope of the invention. However, in those embodiments where silicided contacts are included, it is recommended that the temperatures used to conduct the anneal 410 be below the temperature that would cause the silicide to punch through the source/drain junction and cause leakage in the device.

In other embodiments, the anneal 410 may be conducted with ultraviolet radiation, an electron beam, or a laser. These alternative processes can be particularly useful when lower anneal temperatures are required due to the materials present in the device.
The laser can be pulsed within a few milliseconds to achieve a very high temperature sufficient to anneal the carbide layer 310, while the high temperature is brief enough that further diffusion of the metals in the silicide contacts 265 does not occur. Following the anneal, the hydrogen content of the carbide layer 310 may decrease. For example, the atom percent of hydrogen in the carbide layer 310 may range from about 10% to about 25%.

FIG. 5 shows the semiconductor device 200 following the deposition of a PMD layer 510 over the carbide layer 310. Depending on the manufacturer, what constitutes a PMD layer 510 may vary. What a PMD layer 510 means with respect to the invention is: any layer in which contact plugs are formed to contact the transistors 235 and 240 and on which a first interconnect metal layer is deposited. As mentioned above, in some instances the PMD layer 510 may be subjected to an anneal, and in such instances the above-mentioned anneal 410 would occur at this point in the manufacturing process. The PMD layer 510 may be deposited using conventional materials and processes. Following the deposition and anneal, if applicable, of the PMD layer 510, conventional processes may be used to complete the semiconductor device 200 to form an operative integrated circuit (IC).

FIG. 6 is an IC 600 that incorporates the completed semiconductor device 100 of FIG. 1. The IC 600 may be configured into a wide variety of devices, such as CMOS devices, BiCMOS devices, Bipolar devices, as well as capacitors or other types of devices. The IC 600 may further include passive devices, such as inductors or resistors, or it may also include optical devices or optoelectronic devices. The IC 600 includes the various components as discussed above, and conventional interconnect structures 610 and metal lines 615 electrically connect the components of the semiconductor device 100 to form an operative IC. The interconnect structures 610 and metal lines 615 may be formed in conventional dielectric layers 620 that are located over the semiconductor device 100. The number of dielectric layers 620 and metal lines 615 will vary with design. Those skilled in the art are familiar with the processes and materials that could be used to incorporate the semiconductor device 100 and arrive at the IC 600.

Those skilled in the art to which the invention relates will appreciate that other and further additions, deletions, substitutions, and modifications may be made to the described example embodiments, without departing from the invention. |
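As a brief consistency check of the SiCxNyHz stoichiometry described above (the specific values below are illustrative choices within the stated ranges, not values taken from the source), the atom-percent fractions must sum to 100%, so the silicon content is simply the balance:

    % Illustrative atom-percent balance for SiC_xN_yH_z (assumed values within the stated ranges)
    \[ \mathrm{Si} = 100\% - x - y - z \]
    \[ \text{as deposited:}\quad x = 20\%,\; y = 10\%,\; z = 35\% \;\Rightarrow\; \mathrm{Si} = 35\% \]
    \[ \text{after anneal:}\quad x = 20\%,\; y = 10\%,\; z = 20\% \;\Rightarrow\; \mathrm{Si} = 50\% \]

The drop in z from the 30-40 atom percent as-deposited range to the 10-25 atom percent post-anneal range reflects the hydrogen released during the anneal 410, which is the hydrogen available to passivate dangling bonds at the gate oxide-channel interface. (After the anneal the remaining fractions would renormalize slightly; the fixed x and y above are kept only to make the arithmetic easy to follow.)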
A USB hub (102) includes a plurality of downstream ports (110a..d); at least one dual mode port (110d), the dual mode port configured to be switchable from a downstream port to an upstream port; and host detection circuitry (718) for detecting whether, when operating as an upstream port, a host is connected. |
1. A USB hub, comprising: a plurality of downstream ports; at least one dual mode port configured to be switchable from a downstream port to an upstream port; and a host detection circuit for detecting whether a host is connected when operating as an upstream port. 2. The USB hub of claim 1, wherein said circuit is operable when said dual mode port has entered a suspend state. 3. The USB hub of claim 1, the at least one dual mode port further comprising a DP connection line and a DM connection line; wherein the DM connection line is coupled to the host detection circuit; and wherein, when operating as an upstream port, the connection of the host can be determined by the host detection circuit, the host detection circuit causing a predetermined logic value to be identified at the DM line when the host is disconnected. 4. The USB hub of claim 3, wherein said host detection circuit comprises a current source. 5. The USB hub of claim 4, wherein said current source is a 10 uA current source coupled between said DM connection line and ground. 6. The USB hub of claim 3, wherein said host detection circuit comprises a pull-up resistor coupled to said DM connection line. 7. The USB hub of claim 1, further comprising a controller, wherein said controller is configured to switch a mode on said dual mode port such that said dual mode port switches back to functioning as a downstream port when it is determined that said host has been disconnected. 8. A method, comprising: in a USB hub including a dual mode port, switching the USB port function of the dual mode port from a first mode as a downstream port to a second mode as an upstream port; and, after entering a suspend state, periodically sampling the logic level of the DM connection line provided by a host detection circuit to determine whether the host is still connected. 9. The method of claim 8, wherein said host detection circuit comprises a current source coupled between said DM connection line and ground. 10. The method of claim 9, wherein said current source comprises a 10 uA current source. 11. The method of claim 8, wherein said host detection circuit comprises a pull-up resistor coupled to said DM connection line. 12. The method of claim 8, further comprising a pull-up resistor coupled to the DP connection line, wherein the host is detected as connected when the DP connection line and the DM connection line have the same predetermined logic level. 13. The method of claim 8, comprising switching the mode on the dual mode port such that the dual mode port switches back to operating as a downstream port when it is determined that the host has been disconnected. 14. A USB hub, comprising: a controller; a plurality of downstream ports; at least one dual mode port configured to be switchable from the downstream port to the upstream port by the controller; and a host detection circuit for providing a flag to the controller indicating whether a host is connected when operating as an upstream port. 15. The USB hub of claim 14, wherein said circuit is operable when said dual mode port has entered a suspend state. 16. The USB hub of claim 15, the at least one dual mode port further comprising a DP connection line and a DM connection line; wherein the DM connection line is coupled to the host detection circuit; and wherein, when operating as an upstream port, the connection of the host can be determined by the host detection circuit, the host detection circuit causing a predetermined logic value to be identified at the DM connection line when the host is disconnected. 17. The USB hub of claim 14, wherein said host detection circuit comprises a
current source. 18. The USB hub of claim 17, wherein said current source is a 10 uA current source coupled between said DM connection line and ground. 19. The USB hub of claim 16, wherein said host detection circuit comprises a pull-up resistor coupled to said DM connection line. 20. The USB hub of claim 15, wherein said controller is configured to switch a mode on said dual mode port such that said dual mode port switches back to operating as a downstream port when it is determined that said host has been disconnected. |
System and method for disconnection detection on dual mode ports of a USB hub

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Serial No. 61/985,758, filed on April 29, 2014, the entire contents of which are hereby incorporated herein by reference.

TECHNICAL FIELD

This invention relates to Universal Serial Bus (USB) technology and, in particular, to USB 2.0 and USB 3.0 hub devices.

BACKGROUND

The Universal Serial Bus (USB) 1.0 specification was originally developed in the 1990s to provide buses and interfaces to standardize communication between computers and peripheral devices such as keyboards, printers, cursor pointing devices, external drives, and the like. Since then, USB has progressed to versions 2.0 and 3.0 and has become ubiquitous in computers and portable devices such as smart phones, tablet computers, and MP3 players.

Generally, in USB communication, one device acts as a "host" and another device acts as a "device." The host powers the bus, issues commands, and generally maintains control over the connection. The device does not initiate any activity for controlling the bus. For example, a personal computer acts as a host to a USB portable disk device.

The OTG (On-the-Go) specification allows a single host and a single device to swap roles. For example, some tablet computers may operate in a device role as a mass storage device when coupled to a personal computer host, but may function as a host when coupled to a peripheral device such as a keyboard.

A USB hub expands a single USB port to several USB ports, allowing more devices to be connected. A personal computer or an automotive entertainment system (for example) may contain multiple external USB ports, but with one internal hub instead of a dedicated USB controller for each port. However, as can be appreciated, difficulties can arise when using a USB hub with an On-the-Go device.

The hubs with flexible connectivity (USB 2.0 and USB 3.0 hubs) produced by the assignee of this application are unique in the industry in that the upstream (host) side port can be swapped with one of the downstream (device) side ports. In effect, a dual role (host/device) device can take over the hub from a downstream port. The details of this flexibly-connected hub can be found in U.S. Patent No. 7,480,753, the entire disclosure of which is hereby incorporated by reference in its entirety herein.

A USB host typically provides a signal called VBUS to announce its presence to the hub. VBUS is also a power signal that can be used to power the hub. If a dual role device is plugged into the device side port, it always needs to receive power, even when it is acting as the host. Since the device that is now acting as the host does not provide VBUS, the hub cannot know whether the USB 2.0 host has been disconnected. Lack of activity is not a sufficient indication: when there is no activity, the hub simply enters a suspend state to conserve power.

That is, when the hub is providing VBUS to the host, VBUS cannot be used to determine whether the host has entered the suspend state or has been disconnected: the VBUS that would disappear upon host disconnection in normal USB operation is, here, always present.
With VBUS always present, the hub must be able to distinguish a USB suspend condition from a disconnection of the host. Therefore, there is a need for a method of detecting disconnection of a host on a dual-role USB port of a USB hub.

SUMMARY

According to various embodiments, to detect a disconnect once the hub has entered suspend, the internal firmware uses USB battery-charging contact detection to detect that the host has disappeared. The hub is therefore able to provide VBUS and still detect when the USB host is disconnected.

A USB hub according to an embodiment comprises: a plurality of downstream ports; at least one dual mode port configured to be switchable from a downstream port to an upstream port; and a host detection circuit for detecting whether a host is connected when operating as an upstream port.

In some embodiments, the circuit is operable when the dual mode port has entered a suspend state. In some embodiments, the at least one dual mode port further includes a DP connection line and a DM connection line. The DM connection line can be coupled to the host detection circuit, and when operating as an upstream port, the connection of the host can be determined by the host detection circuit, the host detection circuit causing a predetermined logic value to be identified at the DM line when the host is disconnected.

In some embodiments, the host detection circuit includes a current source. In some embodiments, the current source is a 10 uA current source coupled between the DM connection line and ground. In some embodiments, the host detection circuit includes a pull-up resistor coupled to the DM connection line.

In some embodiments, the hub includes a controller configured to switch modes on the dual mode port such that, when it is determined that the host has been disconnected, the dual mode port switches back to operating as a downstream port.

A method according to an embodiment comprises: switching the USB port function of a dual mode port of a USB hub from a first mode as a downstream port to a second mode as an upstream port; and, after entering a suspend state, periodically sampling the logic level of the DM connection line provided by a host detection circuit to determine whether the host is still connected.

In some embodiments, the host detection circuit includes a current source coupled between the DM connection line and ground. In some embodiments, the current source comprises a 10 uA current source. In some embodiments, the host detection circuit includes a pull-up resistor coupled to the DM connection line. In some embodiments, a pull-up resistor is coupled to the DP connection line, and the host is detected as connected when it is determined that the DP connection line and the DM connection line have the same predetermined logic level. In some embodiments, the dual mode port switches back to operating as a downstream port when it is determined that the host has been disconnected.

A USB hub according to an embodiment comprises: a controller; a plurality of downstream ports; at least one dual mode port configured to be switchable from the downstream port to the upstream port by the controller; and a host detection circuit for providing a flag to the controller indicating whether a host is connected when operating as an upstream port.

In some embodiments, the circuit is operable when the dual mode port has entered a suspend state.
In some embodiments, the at least one dual mode port further includes a DP connection line and a DM connection line coupled to the host detection circuit. When operating as an upstream port, the connection of the host can be determined by the host detection circuit, which causes a predetermined logic value to be identified at the DM connection line when the host is disconnected.

In some embodiments, the host detection circuit includes a current source. In some embodiments, the current source is a 10 uA current source coupled between the DM connection line and ground. In some embodiments, the host detection circuit includes a pull-up resistor coupled to the DM connection line. In some embodiments, the controller is configured to switch modes on the dual mode port such that the dual mode port switches back to operating as a downstream port when it is determined that the host has been disconnected.

These and other aspects of the present invention will be better understood by consideration of the following description and the accompanying drawings. It should be understood, however, that the description and specific embodiments are given by way of illustration. Many alternatives, modifications, additions and/or rearrangements may be made without departing from the spirit of the invention, and the invention includes all such alternatives, modifications, additions and/or rearrangements.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention. It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. A more complete understanding of the present invention and its advantages may be acquired by referring to the following description taken in conjunction with the accompanying drawings:

FIG. 1 is a diagram of an example system in accordance with an embodiment.
FIG. 2 is a diagram of an example port configuration, in accordance with an embodiment.
FIG. 3 is a diagram illustrating an example analog port interface, in accordance with an embodiment.
FIGS. 4A and 4B illustrate the operation of the port of FIG. 3.
FIGS. 5A and 5B illustrate the operation of another embodiment of the port of FIG. 3.
FIG. 6 is a flow chart illustrating the operation of an embodiment.
FIGS. 7A through 7C illustrate the operation of a system using host detection according to an embodiment.

DETAILED DESCRIPTION

The invention and its various features and advantageous details are explained more fully by reference to the exemplary embodiments illustrated in the accompanying drawings. It should be understood, however, that the specific embodiments and specific examples are given by way of illustration only. Descriptions of well-known programming techniques, computer software, hardware, operating platforms, and protocols may be omitted so as not to unnecessarily obscure the details of the present invention. Various alternatives, modifications, additions and/or rearrangements within the spirit and/or scope of the basic inventive concept will be apparent to those skilled in the art.

Referring now to the drawings, and specifically to FIG. 1, a diagram illustrating an example USB hub environment in accordance with an embodiment is shown. As shown, system 100 includes a hub 102, one or more control processors 105, a USB host 112, and a USB device 114.

In the illustrated embodiment, the USB hub 102 has three downstream ports 110a, 110b, 110c, a swappable downstream port 110d, and an alternate upstream port 108. Switching hub 104 provides switching between ports 108 and 110a through 110d.
Control processor 105 operates to receive and process signals on the USB ports and to manage the switching of the ports from upstream to downstream functionality, as will be described in more detail below.

The USB hub 102 is typically operated such that all ports 110a through 110d operate as downstream ports and port 108 operates as an upstream port. However, in certain circumstances, the USB hub 102 can swap the functions of ports 108 and 110d. Thus, port 108 will operate as a downstream port and port 110d will operate as an upstream port. Additionally, USB hub 102 can include dual mode detection circuitry 106, as will be explained in greater detail below. The USB hub 102 can be embodied as a USB 46x4 hub available from the assignee of the present application.

An example swap port connection in accordance with an embodiment is shown in FIG. 2. As shown, system 200 includes a USB hub 202 and a USB host 204. The USB hub 202 includes a swap port 209 having a host detect or dual mode detection analog front end (AFE) circuit 210. For the sake of brevity, the remaining ports and components of the hub are not shown. Similarly, USB host 204 includes a USB AFE port circuit 212 and a host transceiver 214.

USB host 204 and USB hub 202 are coupled via USB bus 206. This USB bus 206 typically includes four lines: Vbus 216, D- (also known as DM) 218, D+ (also known as DP) 220, and Ground 222. Vbus 216 and Ground 222 provide power, while DP 220 and DM 218 carry data. In addition, some USB connectors (specifically, micro-AB connectors) use an additional ID line 223.

As will be explained in more detail below, hub swap port 209 is configured to detect the presence of a host via the DM 218 and DP 220 lines.

An example circuit for doing so is shown more particularly with respect to FIG. 3. The hub dual mode swap port 302 on the hub and the USB host port 304 on the USB host (which previously functioned as a device) are shown.

The USB host port 304 includes a connection circuit 310 connected to DP and a connection circuit 312 connected to DM. Circuit 310 is coupled to DP using a 15 kΩ pull-down resistor 316a. Additionally, a driver 314c, including a voltage source, a 45 Ω resistor, and a switch, is coupled to DP. Finally, DP is coupled to comparator 318a, the other input of which is coupled to a 1.6 V source. Similarly, circuit 312 is coupled to DM using a 15 kΩ pull-down resistor 316b. Additionally, a driver 314d, including a voltage source, a 45 Ω resistor, and a switch, is coupled to DM. Finally, DM is coupled to comparator 318b, the other input of which is coupled to a 1.6 V source. DP and DM operate as a differential data pair, with ones and zeros transmitted by alternately using drivers 314c, 314d and the corresponding drivers 314a, 314b on the hub swap port 302.

The hub swap port 302 includes a DP circuit 306 and a DM circuit 308. In the illustrated embodiment, DP circuit 306 includes a driver 314a that includes a voltage source, a 45 Ω resistor, and a switch. The DP circuit 306 further includes a 1.5 kΩ pull-up resistor coupled to 3.3 V and a comparator 320, one input of which is coupled to a reference voltage and the other input of which is coupled to the DP line.

The DM circuit 308 similarly includes a driver 314b, including a voltage source, a 45 Ω resistor, and a switch, and a comparator 322, one input of which is coupled to a reference voltage and the other input of which is coupled to the DM line.
Additionally, in the illustrated embodiment, a 10 μA current source 326 coupled to the DM line is provided. Note that the values of the resistors and other components can be changed as necessary; the figures are exemplary only.

In operation, as will be described in more detail below, if the host 204 stops transmitting start-of-frame (SOF) packets (not shown), the hub 202 will enter a suspend state within 3 ms. Once this happens, the hub's control processor (FIG. 1) periodically samples the state of the DP/DM pins to determine the presence of the host.

If the host is present, the host's 15 kΩ pull-down resistor 316b on the DM line will cause the hub to sample the DM pin as a logic 0. If the host is disconnected, the 10 μA current source 326 will cause the hub to sample the DM pin as a logic 1. If it is determined that the host has been disconnected, the hub will disable the swap state and revert to the default upstream port to pass control back to the original host.

This is shown more particularly in Figures 4A and 4B. In FIG. 4A, as shown, if a host is connected, the 1.5 kΩ pull-up resistor 324 on the hub and the 15 kΩ pull-down resistor 316a on the host form the circuit on the DP line. Similarly, current source 326 and pull-down resistor 316b define the circuit on the DM line. However, in FIG. 4B, if the host is disconnected, pull-up resistor 324 pulls the DP line to a one. Similarly, current source 326 causes the DM line to be pulled high.

In some embodiments, current source 326 can be replaced with a sufficiently large pull-up resistor (e.g., 125 kΩ) coupled to the DM line. This is shown in Figures 5A and 5B. In FIG. 5A, pull-up resistor 502 and pull-down resistor 316b define the circuit on the DM line when the host is connected. If the host is disconnected, the pull-up resistor 502 pulls up the DM line.

Note that although this circuit is similar to a battery charger detection circuit, such a circuit is generally not provided on a USB hub. Furthermore, in conventional battery charger detection circuits, the 10 μA source or 125 kΩ pull-up resistor is typically applied to the DP line instead of the DM line.

The operation of an embodiment is shown more particularly with respect to the flow chart of FIG. 6. Initially, in step 602, the upstream host and the downstream device on the dual mode port switch functions. This may be in response to, for example, a setup packet sent by the USB device, indicating to the hub that it desires to function as a host.

Next, in step 604, the hub controller will monitor the line for start-of-frame packets. If, as determined in step 606, the start-of-frame packets have stopped, then in step 608 the hub will begin monitoring the status of the DM line. If the DM line is high, as determined at 610, then in step 612 it is determined that the host is disconnected. In this case, in step 614, the default host/device configuration is restored. If the DM line is not high, then in step 616 the hub will enter a suspend state. If, as determined in step 618, it then detects a start-of-frame or other command, it will exit the suspend state and the process returns to step 604.
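The sampling decision in steps 608 through 614 can be checked numerically: with the host attached, the hub's 10 μA current source flows through the host's 15 kΩ pull-down, so the DM node sits at roughly 10 μA x 15 kΩ = 0.15 V, which the comparator reads as logic 0; with the host removed, the current source has no DC path to ground and pulls DM toward the rail, which reads as logic 1. The following minimal firmware sketch is the editor's illustration of the FIG. 6 flow, not code from the source; the helper functions (sof_detected, dm_line_high, and so on) are hypothetical placeholders for the hub's actual register accesses.

    #include <stdbool.h>

    /* Hypothetical hardware-access helpers; the names are illustrative only. */
    extern bool sof_detected(void);          /* true while start-of-frame packets arrive  */
    extern bool dm_line_high(void);          /* true when comparator 322 reads logic 1    */
    extern void enter_suspend(void);         /* step 616: suspend to conserve power       */
    extern void restore_default_roles(void); /* step 614: undo the swap, default upstream */

    /* One polling pass of the FIG. 6 disconnect-detection flow (steps 604-618). */
    void poll_host_presence(void)
    {
        if (sof_detected()) {
            return;                          /* host active: keep the swapped roles (604/606) */
        }

        /* SOF traffic has stopped; sample the DM line before deciding (step 608). */
        if (dm_line_high()) {
            /* Logic 1: the 10 uA source found no 15 kOhm pull-down, so the
             * host has been disconnected (steps 610-612). */
            restore_default_roles();         /* step 614 */
        } else {
            /* Logic 0: host still attached but idle; suspend, and resume
             * polling when traffic wakes the hub (steps 616-618). */
            enter_suspend();
        }
    }

In a real hub this routine would be driven by a periodic timer while the swapped port is suspended, which matches the claim language of "periodically sampling the logic level of the DM connection line."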
Embodiments as claimed may be particularly suited for operation in conjunction with automotive entertainment and information ("infotainment") systems. Such a system is shown with respect to Figures 7A through 7C. As shown, system 700 includes an infotainment unit 702 coupled to hub system 704 via USB cable 706 and micro-AB (uAB) port 708.

Hub system 704 can include a switching hub 716, one or more onboard USB devices 710, a power switch 712, and an A port 714. Switching hub 716 can include host detection circuitry 718 that operates as discussed above. In the illustrated embodiment, the switching hub 716 is connected to the uAB port via a Flex line 720 and to its Vbus pin via Flex_Vbus 722. In the illustrated embodiment, Vbus is also coupled to the VBUS_DET input of switching hub 716.

On the A side, switching hub 716 is coupled to the DP and DM pins of the A port via a swap (SWAP) line 728. The Vbus from the external device is provided to the power switch 712 at 726. Finally, switching hub 716 controls power switch 712 via control line 724. The power switch 712 remains on while devices are connected.

The operation of environment 700 in device mode is illustrated in FIG. 7B. Broadly speaking, in operation, hub system 704 will enable battery charging on downstream A port 714, while the other downstream ports are enumerated (e.g., the downstream port coupled to USB device 710). When a smartphone is connected to A port 714, it detects the Vbus voltage and the battery charging handshake and begins charging. The infotainment unit 702 enumerates and controls the smartphone 732a.

More specifically, as shown in FIG. 7B, at 724a, switching hub 716 pulls the power control line high to enable battery charging. In response, at 726a, power switch 712 provides power on Vbus. The infotainment unit 702 drives VBUS_DET high at 722a to signal that it is ready. Via the host port connection at 720a, the infotainment unit 702 detects switching hub 716 and enumerates the devices coupled thereto. Next, the switching hub 716 will detect the smartphone 732a connected to USB A port 714 via USB cable 730, and the smartphone will be enumerated as a device.

In some embodiments, then, if the user needs to use the interface provided by smartphone 732b, or if smartphone 732b provides more performance, the infotainment unit 702 can forward control of switching hub 716 to smartphone 732b. This is shown more specifically in FIG. 7C.

As shown, in response to a setup command received from smartphone 732b, infotainment unit 702 receives an indication that a role change is desired. Next, the infotainment unit 702 transmits a direction change command to the smartphone 732b. The smartphone, now acting as host, then enumerates the peripheral devices and the infotainment unit.

As mentioned above, during the direction change, the USB device 710 still operates as a downstream device and the power switch remains on. If the smartphone is then removed, the disconnection of the host is detected as explained above. |
Pillars having a directed compliance geometry are arranged to couple a semiconductor die to a substrate. The direction of maximum compliance of each pillar may be aligned with the direction of maximum stress caused by unequal thermal expansion and contraction of the semiconductor die and substrate. Pillars may be designed and constructed with various shapes having particular compliance characteristics and particular directions of maximum compliance. The shape and orientation of the pillars may be selected as a function of their location on a die to accommodate the direction and magnitude of stress at their location. A method includes fabricating pillars with particular shapes by patterning to increase the surface area of materials upon which the pillar is plated or deposited. |
CLAIMS What is claimed is: 1. An apparatus, comprising: a semiconductor die; at least one conductive pad disposed on a surface of the semiconductor die; at least one pillar coupled to the at least one conductive pad, the at least one pillar having a non-uniform compliance geometry defining a compliant direction for each of the at least one pillars. 2. The apparatus of claim 1, in which the compliant direction is aligned with a direction of pillar strain as a function of a pillar location on a substrate. 3. The apparatus of claim 2, in which the direction of pillar strain is a direction of maximum thermal expansion. 4. The apparatus of claim 1, further comprising: a plurality of pillars distributed at different locations on the surface of the semiconductor die and arranged with different compliant direction orientations of the pillars corresponding to the different locations. 5. The apparatus of claim 4, further comprising: a substrate coupled by the plurality of pillars to the semiconductor die, the substrate including a plurality of solder joints coupled to a corresponding plurality of the pillars, in which the different compliant direction orientations of the pillars are normal to a direction of maximum expansion of the substrate relative to the semiconductor die for the corresponding different location. 6. The apparatus of claim 1, integrated into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. 7. An apparatus, comprising: a semiconductor die; a plurality of conductive pads disposed on a surface of the semiconductor die; a first pillar having a first pillar geometry coupled to one of the conductive pads at a first location on the semiconductor die; and a second pillar having a second pillar geometry different from the first pillar geometry, the second pillar coupled to a different one of the conductive pads at a second location on the semiconductor die. 8. The apparatus of claim 7, in which: the first pillar geometry corresponds to a first thermal stress at the first location on the semiconductor die; and the second geometry corresponds to a second thermal stress at the second location on the semiconductor die. 9. The apparatus of claim 8, further comprising: a substrate coupled through the first pillar and the second pillar to the semiconductor die. 10. The apparatus of claim 7, integrated into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. 11. A method for packaging a semiconductor die, comprising: fabricating a plurality of conductive pads on a surface of the semiconductor die; depositing a first pillar having a first pillar geometry on one of the conductive pads at a first location on the semiconductor die; and depositing a second pillar having a second pillar geometry different from the first pillar geometry on a different one of the conductive pads at a second location on the semiconductor die. 12. The method of claim 11 , further comprising:coupling a substrate through the first pillar and the second pillar to the semiconductor die. 13. 
The method of claim 11, in which depositing the first pillar comprises: forming a first pattern of predetermined surface area distributions on the conductive pad at the first location; and depositing the first pillar onto the first pattern to generate the first pillar geometry including predetermined pillar heights corresponding to the predetermined surface area distributions of the first pattern. 14. The method of claim 11, further comprising integrating the semiconductor die into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. 15. An apparatus for packaging a semiconductor die, comprising: means for fabricating a plurality of conductive pads on a surface of the semiconductor die; means for depositing a first pillar having a first pillar geometry on one of the conductive pads at a first location on the semiconductor die; and means for depositing a second pillar having a second pillar geometry different from the first pillar geometry on a different one of the conductive pads at a second location on the semiconductor die. 16. The apparatus of claim 15, further comprising: means for coupling a substrate through the first pillar and the second pillar to the semiconductor die. 17. The apparatus of claim 15, in which the means for depositing the first pillar comprises: means for forming a first pattern of predetermined surface area distributions on the conductive pad at the first location; and means for depositing the first pillar onto the first pattern to generate the first pillar geometry including predetermined pillar heights corresponding to the predetermined surface area distributions of the first pattern. 18. The apparatus of claim 15, integrated into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. 19. A method for packaging a semiconductor die, comprising steps of: fabricating a plurality of conductive pads on a surface of the semiconductor die; depositing a first pillar having a first pillar geometry on one of the conductive pads at a first location on the semiconductor die; and depositing a second pillar having a second pillar geometry different from the first pillar geometry on a different one of the conductive pads at a second location on the semiconductor die. 20. The method of claim 19, further comprising integrating the semiconductor die into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. |
INTERCONNECT PILLARS WITH DIRECTED COMPLIANCE GEOMETRY

Field of the Disclosure

[0001] The present disclosure is in the field of semiconductor packaging and more particularly in the field of copper pillar interconnects in semiconductor packaging.

Background

[0002] Integrated circuit (IC) fabrication processes have produced ICs with reduced node spacing in the range of 48 nm and 28 nm nodes, for example. Material having an extremely low dielectric constant (ELK) has been used to accommodate the reduced node spacing and to enhance electrical performance of the ICs that are produced with such small node spacing. The ELK material may include relatively porous material which may be susceptible to cracking in response to certain stresses. [0003] Interconnect pillars constructed from more rigid conductive materials such as copper have been used along with solder in certain solder bump connections between a semiconductor die and a substrate. In electronic packaging, for example, a flip chip can include a pillar that extends from a contact on a die or wafer to a solder connection on a substrate. The solder connection can be a solder on pad (SOP) connection, for example. [0004] The use of pillars provides an improvement over earlier semiconductor interconnect techniques by allowing a very high density of interconnects. The metallurgical properties of the pillars compared to earlier solder structures allow the smaller pitch connections to maintain an appropriate standoff distance between a semiconductor die and a substrate to which it is connected. The use of copper pillars also reduces electromigration (EM) in the interconnects. However, the use of copper pillars can make a backend silicon structure more susceptible to cracking during package assembly. [0005] During the processing of flip chips, the substrate and semiconductor die are subject to substantial heating and cooling. The semiconductor die may be constructed from a material such as silicon, which has a coefficient of thermal expansion (CTE) of about 2.6 x 10^-6/°C, and the substrate may have a CTE in the range of about 15 x 10^-6/°C to about 17 x 10^-6/°C. The CTE mismatch between the substrate and die causes the substrate to expand and contract more than the die during a heating and cooling cycle. In packages that include interconnects with copper pillars rather than traditional solder bumps to accommodate finer bump pitch, the copper of the pillars may not be able to deform enough to take up the stress caused by the thermal expansion mismatch between the die and substrate. The higher Young's modulus of the copper pillar causes more of the stress to be "transferred" to the sensitive ELK layers of the die. This increases the chance for ELK layer cracking for flip chip type interconnects. Such cracking due to high stress in the Extremely Low Dielectric Constant (ELK) layers is a common failure of semiconductor packages.

Summary

[0006] For a more complete understanding of the present disclosure, reference is now made to the following detailed description and the accompanying drawings. In an exemplary embodiment, a pillar for a flip chip interconnect is provided. The pillar includes an electrically conductive material such as copper, gold or silver. [0007] An apparatus according to an aspect of the present disclosure includes a semiconductor die and at least one conductive pad disposed on a surface of the semiconductor die. At least one pillar is coupled to the conductive pad.
The pillar(s) have a non-uniform compliance geometry defining a compliant direction for each pillar. [0008] According to another aspect of the disclosure, an apparatus includes a semiconductor die and a number of conductive pads disposed on a surface of the semiconductor die. A first pillar that has a first pillar geometry is coupled to one of the conductive pads at a first location on the semiconductor die. A second pillar has a second pillar geometry that is different from the first pillar geometry. The second pillar is coupled to a different one of the conductive pads at a second location on the semiconductor die. [0009] Another aspect of the present disclosure provides a method for packaging a semiconductor die. The method includes fabricating a number of conductive pads on a surface of the semiconductor die. A first pillar that has a first pillar geometry is deposited on one of the conductive pads at a first location on the semiconductor die. A second pillar that has a second pillar geometry different from the first pillar geometry is deposited on a different one of the conductive pads at a second location on the semiconductor die. [0010] Yet another aspect of the present disclosure provides an apparatus for packaging a semiconductor die. The apparatus includes means for fabricating conductive pads on a surface of the semiconductor die and means for depositing a first pillar that has a first pillar geometry on one of the conductive pads at a first location on the semiconductor die. The apparatus also includes means for depositing a second pillar that has a second pillar geometry different from the first pillar geometry on a different one of the conductive pads at a second location on the semiconductor die. [0011] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

Brief Description of the Drawings

[0012] The accompanying drawings are presented to aid in the description of embodiments. The drawings are provided solely for illustration of the embodiments and not limitation thereof. [0013] FIGURE 1 is a diagram illustrating a conventional flip chip package prior to attachment. [0014] FIGURE 2 is a schematic view of the conventional flip chip package of FIGURE 1 after attachment. [0015] FIGURES 3A and 3B are schematic diagrams of a pillar having particular directions of maximum compliance according to aspects of the present disclosure.
[0016] FIGURE 4 is diagram illustrating different stress magnitudes at different locations on a semiconductor die. [0017] FIGURE 5 is a diagram illustrating placement and orientation of pillars having a direction of maximum compliance aligned with local maximum stress according to an aspect of the present disclosure. [0018] FIGURES 6A - 6F are schematic diagrams illustrating processes for forming shaped pillars on a semiconductor die according to aspects of the present disclosure. [0019] FIGURE 7 is a process flow diagram showing coupling of a semiconductor die to a substrate according to aspects of the present disclosure. [0020] FIGURE 8 is a block diagram showing an exemplary wireless communication system in which a semiconductor die coupled to a substrate with shaped pillars may be advantageously employed. Detailed Description [0021] Referring to FIGURE 1, a conventional flip chip package design is described. The flip chip package 100 includes a die or wafer 102 from which a pillar104 extends. The flip chip package 100 is complete when the die or wafer 102 is coupled to a substrate 106. A solder bump 108 is disposed on the substrate 106 for coupling to the pillar 104. In FIGURE 2, for example, the solder material 108 couples to the pillar 104 and forms a conductive connection 200. While there is only one conductive connection 200 shown in FIGURE 2, there can be multiple conductive connections between pillars 104 and solder bumps 108. [0022] A conventional pillar has a symmetrical geometry and does include any particular directionality. For example, conventional pillars used in semiconductor packaging are substantially cylindrical and form an electrically conductive interconnect with the substrate through a solder on pad (SOP) connection. To reduce the susceptibility of the ELK layer to cracking and increase the robustness of the ELK layers, aspects of the present disclosure provide a directionally oriented pillar design on a semiconductor die or wafer. This reduces the stress on the ELK layers. [0023] Referring to FIGURE 3A, a directionally oriented pillar 300 is described according to one aspect of the disclosure. In this embodiment, shown in a top view, the pillar 300 includes a generally rectangular cross section. Because of the rectangular geometry, the pillar 300 is more compliant to stresses in the direction of the shorter side of the rectangle, shown as the "Y" direction, than it is in the direction of the longer side of the rectangle, shown as the "X" direction. The directions of maximum compliance 302, 304 of the pillar 300 are therefore normal to the "X" direction. [0024] Although the rectangular pillar 300 is compliant in two directions 302, 304, along the same line alternative aspects of the present disclosure provide pillars of different shapes which may have a single direction of maximum compliance or multiple different directions of maximum compliance. For example, FIGURE 3B shows a V shaped pillar 306 having two compliance directions 308 and 310 oriented along different lines. In another aspect, different heights of material within a pillar can be designed to affect the compliance geometry, as shown in FIGURES 6A-F. [0025] The mismatched thermal expansion of a die and substrate causes more relative displacement or the die and substrate in some areas of the die and less relative displacement in other areas of the die. 
For example, if the die is centered relative tothe substrate, the central portion of the die may be subject to little or no displacement relative to the central portion of the substrate. In contrast the edges of a die may be subject to significant displacement relative to the edge portions of a substrate. These different relative displacements cause the stress on a pillar to vary as a function of location on the die. FIGURE 4, shows a top view of a die 400 in which stress on a pillar due to mismatched thermal expansion is shown by the length of arrows. [0026] FIGURE 5 shows an aspect of the present disclosure in which pillars 502 are shaped to have a direction of maximum compliance. The pillars 502 are oriented on the die 500 as a function of their location on the die 500 so that their direction of maximum compliance 504 for each pillar 502 corresponds to direction of the maximum stress due to CTE mismatch of the die 500 and substrate (not shown). [0027] In addition to orienting similarly shaped pillars as a function of their position on a die, aspects of the present disclosure also may include using differently shaped pillars as a function of their position on a die. For example, a pillar that is located near the center of a die and thereby subject to very little stress due to CTE mismatch may have a circular cross section having no particular direction of maximum compliance. On the same die, a pillar that is located near the edge may have a rectangular cross section to absorb large stresses due to CTE mismatch. [0028] Another aspect of the present disclosure provides a method for depositing material to form pillars of various shapes on a semiconductor die. Referring to FIGURE 6A, the method includes depositing or etching a pattern 602 in a material 604 such as a passivation material on an area of the die 600 where the pillar is to be provided. The pattern 602 includes an increased surface area at locations where an increased pillar height is desired and a lower surface area where a lower pillar height is desired. [0029] The exemplary pattern 602 shown as a cross section in FIGURE 6A includes a series of rings 603, 603', 603" in which different locations in the pattern provide different surface area according to the pattern density of the rings. The surface area of the pattern 602 is relatively greater toward an inner ring 603 where the rings are spaced more closely than they are spaced near the outer ring 603 ' ' . According to aspects of the disclosure, the greater surface area of the pattern 602toward the inner ring 603 increases the pillar height in the center of the pattern 602, and the lower surface area of the pattern 602 toward the outer ring 603 ' ' results in decreased pillar height toward the outer edge of the pattern 602. [0030] Referring to FIGURE 6B a first under bump metallization layer (UBM-1) 606 is deposited over the passivation layer 604, using a physical vapor deposition (PVD) process, for example. Referring to FIGURE 6C, a photo resist pattern 608 is then applied around the pillar area. In FIGURE 6D, high accelerator plating is performed to form the pillar 610 by plating to higher thickness where there is a higher surface area of the UBM-1 layer. In FIGURE 6E, solder 612 may be applied to the pillar 610. In FIGURE 6F, the photo resist pattern is stripped and the UBM material adjacent to the pillar is etched away from the die 600. [0031] FIGURE 7 is a process flow diagram for coupling a semiconductor die to a substrate according to aspects of the present disclosure. 
In block 702, conductive pads are fabricated at different locations on a surface of the semiconductor die. In block 704, a first pillar with a first pillar geometry is deposited on one of the conductive pads. In block 706, a second pillar with a different geometry from the first pillar is deposited onto a different one of the conductor pads. In block 708, a substrate is coupled through the pillars to the semiconductor die. [0032] FIGURE 8 shows an exemplary wireless communication system 800 in which an embodiment of an electronic package with an improved flip chip interconnect may be advantageously employed. For purposes of illustration, FIGURE 8 shows three remote units 820, 830, and 850 and two base stations 840. It should be recognized that typical wireless communication systems may have many more remote units and base stations. Any of remote units 820, 830, and 850, as well as the base stations 840, may include an electronic package with an improved flip chip interconnect such as disclosed herein. FIGURE 8 shows forward link signals 880 from the base stations 840 and the remote units 820, 830, and 850 and reverse link signals 890 from the remote units 820, 830, and 850 to base stations 840. [0033] In FIGURE 8, remote unit 820 is shown as a mobile telephone, remote unit 830 is shown as a portable computer, and remote unit 850 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote unitsmay be cell phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although FIGURE 8 illustrates certain exemplary remote units that may include an electronic package with an improved flip chip interconnect as disclosed herein, the package is not limited to these exemplary illustrated units. Embodiments may be suitably employed in any electronic device in which an electronic package with an improved flip chip interconnect is desired. [0034] Although certain aspects of the present disclosure are described in terms of a copper pillar, it should be understood that other materials such as nickel, gold and silver may also be used to form pillars according various aspects of the disclosure. [0035] Although the term "pillar" is used throughout the present disclosure to describe a particular structure for coupling a semiconductor die to a substrate, it should be understood that various other terms such as "post" and "bump," for example, are commonly used for the same general type of structure. Although the term "interconnect" is used throughout the present disclosure, it should be understood that various other terms such as "connection" and "joint," for example, to describe the same type of structure. [0036] While exemplary embodiments incorporating the principles of the present disclosure have been disclosed hereinabove, the present disclosure is not limited to the disclosed embodiments. Instead, this application is intended to cover any variations, uses, or adaptations of the disclosure using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this disclosure pertains and which fall within the limits of the appended claims. |
A link controller, method, and data processing platform are provided with dual-protocol capability. The link controller includes a physical layer circuit for providing a data lane over a communication link, a first data link layer controller which operates according to a first protocol, and a second data link layer controller which operates according to a second protocol. A multiplexer/demultiplexer selectively connects both data link layer controllers to the physical layer circuit. A link training and status state machine (LTSSM) selectively controls the physical layer circuit to transmit and receive first training ordered sets over the data lane, and inside the training ordered sets, transmit and receive alternative protocol negotiation information over the data lane. In response to receiving the alternative protocol negotiation information, the LTSSM causes the multiplexer/demultiplexer to selectively connect the physical layer circuit to the second data link layer controller. |
1. A data processing platform, which includes:central processing unit;a dual protocol link controller, the dual protocol link controller is coupled to the central processing unit and includes:Physical layer circuitry coupled to a Peripheral Component Interconnect Express (PCIe) communications link;a first data link layer controller adapted to operate in accordance with a first Peripheral Component Interconnect Express (PCIe) protocol;a second data link layer controller adapted to operate in accordance with a second non-PCIe protocol;A multiplexer/demultiplexer coupled to the first data link layer controller, the second data link layer controller, and the physical layer circuit;A link training and state machine (LTSSM) adapted to control the physical layer circuitry to: (a) transmit and receive a training ordered set over the PCIe communications link; (b) within the training ordered set , transmitting and receiving alternative protocol negotiation information over the PCIe communication link; and (c) in response to receiving the alternative protocol negotiation information, causing the multiplexer/demultiplexer to convert the physical layer circuitry connected to the second data link layer controller; andA memory module including a memory, a media controller coupled to the memory, and an interface controller coupled to the media controller and the PCIe communication link, the interface controller including a second LTSSM , the second LTSSM is operable to transmit and receive a training ordered set through the PCIe communication link and within the training ordered set, transmit and receive alternative protocol negotiation information through the PCIe communication link; wherein The second LTSSM is one of the following: part of a second dual-protocol link controller, or part of a single-protocol link controller operating with the Gen-Z protocol.2. The data processing platform of claim 1, wherein the second LTSSM is part of a second dual protocol link controller.3. The data processing platform of claim 1, wherein the second LTSSM is part of a single protocol link controller operating with the Gen-Z protocol. |
Alternative protocol optionsBackground techniqueSystem interconnect bus standards provide communication between different elements on a chip, or in a system with multi-chip modules, circuit boards, server nodes, or in some cases entire server racks or networked systems. For example, the popular Peripheral Component Interconnect Express (PCIe or PCI Express) is a high-speed serial expansion bus that provides interconnection between components on the motherboard and connects to expansion cards. For multi-processor systems, and especially systems where multiple processors on different chips are interconnected and share memory, improved system interconnect standards are needed.The interconnection of multiprocessor computing resources and associated memories presents several challenges. Typically, as the number of interconnected processors and accelerators increases, the memory capacity requirements also increase. Additionally, new interconnect standards may be incompatible with older standards, such as PCIe, and therefore render obsolete various system components and expansion devices that employ older standards.Description of the drawingsFigure 1 shows in block diagram form a data processing platform with PCIe memory modules according to the prior art.Figure 2 illustrates, in block diagram form, a data processing platform in accordance with some embodiments.Figure 3 illustrates, in block diagram form, another data processing platform in accordance with some embodiments.FIG. 4 illustrates, in flowchart form, a state diagram for operating a prior art link training and status state machine (LTSSM).Figure 5 illustrates, in flowchart form, an example process for selecting an alternative protocol using enhanced LTSSM, in accordance with some embodiments.Figure 6 illustrates an unmodified ordered training set in symbolic sequence diagram form according to some embodiments.Figure 7 illustrates a modified ordered training set in symbolic sequence diagram form according to some embodiments.In the following description, the same reference numbers are used in different drawings to indicate similar or identical items. Unless otherwise stated, the term "coupled" and its associated verb forms include both direct connections and indirect electrical connections by methods known in the art, and unless otherwise stated, any description of a direct connection means that the use of the appropriate Alternative implementation in the form of indirect electrical connection.Specific implementation planThe link controller includes physical layer circuitry, first and second data link layer controllers, multiplexer/demultiplexer, and link training and state machine (LTSSM). The link controller is connected to the communication link and provides a data channel through the communication link. The first data link layer controller operates according to a first protocol, and the second data link layer controller operates according to a second protocol. The multiplexer/demultiplexer is coupled to the first data link layer controller, the second data link layer controller and the physical layer circuitry. The LTSSM selectively controls the physical layer circuitry to transmit and receive the first training ordered set through the data channel, and to transmit and receive alternative protocol negotiation information through the data channel within the training ordered set. LTSSM also controls the physical layer to transmit and receive data rate information and link width information through the data channel. 
In response to receiving the alternative protocol negotiation information, the LTSSM causes the multiplexer/demultiplexer to selectively connect the physical layer circuitry to the second data link layer controller.One method includes transmitting and receiving a first training ordered set using a link controller circuit coupled to a PCIe communications link to establish a bit lock and symbol lock for a Peripheral Component Interconnect Express (PCIe) communications link. Modified training ordered sets are transmitted and received using a link controller circuit connected to the PCIe communication link. Transmit and receive alternative protocol negotiation information over the data channel within the modified training ordered set. Link controller circuitry is also used to transmit data rate information and link width information. In response to receiving no alternative protocol negotiation information, the method causes the multiplexer/demultiplexer to selectively connect the physical layer circuitry to the first data link layer controller for the first protocol. In response to receiving the alternative protocol negotiation information, the method causes the multiplexer/demultiplexer to selectively connect the physical layer circuit to the second data link layer controller for the second protocol. Then operate the PCIe communication link.The data processing platform includes a central processing unit and a dual-protocol link controller connected to the central processing unit. A dual-protocol link controller includes physical layer circuitry connected to a Peripheral Component Interconnect Express (PCIe) communications link, a first data link layer controller operating in accordance with a first protocol, a second data link layer controller operating in accordance with a second protocol a data link layer controller, and a multiplexer/demultiplexer coupled to the first data link layer controller, the second data link layer controller, and the physical layer circuitry. The Link Training and State Machine (LTSSM) controls the physical layer circuitry to: (a) transmit and receive a training ordered set over the PCIe communications link; (b) within the training ordered set, transmit and receive surrogates over the PCIe communications link protocol negotiation information; and (c) in response to receiving the alternative protocol negotiation information, causing the multiplexer/demultiplexer to connect the physical layer circuit to the second data link layer controller.1 illustrates in block diagram form a data processing platform 100 having a PCIe memory module 120 in accordance with the prior art. Data processing platform 100 includes a processor 110 having a memory controller 112 and a PCIe port 114 connected to a PCIe bus 150 . Expansion memory for data processing platform 100 is provided by PCIe memory module 120 connected to PCIe bus 150 . PCIe memory module 120 includes a memory controller 122 in communication with PCIe bus 150, and storage class memory (SCM) 124, which includes a plurality of memory chips that provide persistent memory storage.Figure 2 illustrates, in block diagram form, a data processing platform 200 in accordance with some embodiments. The processor 210 communicates with the memory module 230 using the Gen-Z protocol, which is a data access technology used to enhance memory solutions for existing and emerging memory technologies. 
The Gen-Z protocol is found in the Gen-Z Core Specification 1.0 published by the Gen-Z Consortium Corporation and in later versions of the standard. Gen-Z provides an abstract device interface that supports a variety of memory types, including multiple byte-addressable persistent storage class memory technologies. Gen-Z provides a platform for fabric-attached storage, from point-to-point connections scaling to local storage scaled via local high-speed buses and switch buses, to rack-scale solutions. To support a variety of current and future memory subsystems, Gen-Z provides a common interface between the processor and its memory subsystem. Using this interface, components communicate with application-specific semantic overrides using memory semantic requests to derive meaning and drive type-specific actions. Normally, the host processor 210 communicates with the memory module 230 over the PCIe bus 220, but is able to recognize that a Gen-Z device is connected and configure the dual-protocol link controller 209 to communicate using the Gen-Z protocol as an alternative protocol.Host processor 210 includes four processor cores 202 interconnected by an on-chip interconnect network 204 . This number of processor cores 202 is only an example, and processor cores for various data processing platforms will typically include more processor cores, such as 32 or 64 cores all connected to an on-chip interconnect network. As shown, on-chip interconnect network 204 connects each processor core to the PCIe input of dual-protocol link controller 209 for PCIe traffic, and to Gen-Z memory controller 212 for access to memory modules 230 memory access. In this embodiment, dual-protocol link controller 209 includes a Gen-Z/PCIe external port, which includes PCIe hardware enhanced to include Gen-Z alternative protocol capabilities. This capability is provided through virtual Gen-Z ports 208, Gen-Z transaction layer controller 211, Gen-Z data link layer controller 213, and PCIe physical layer circuitry 216. Dual protocol link controller 209 provides Gen-Z protocol interconnection to memory modules 230 overlaid on PCIe physical links on PCIe bus 220 .Gen-Z memory controller 212 typically includes processor memory management logic and may include other logic circuitry such as request queues or memory directories. Gen-Z memory controller 212 sends and receives memory requests and responses over a connection to Gen-Z protocol layer 206, which prepares and formats messages according to the Gen-Z protocol. Gen-Z protocol layer 206 is connected to Gen-Z port 208 , which is connected to Gen-Z transaction layer controller 211 of dual protocol link controller 209 .Dual protocol link controller 209 includes Gen-Z transaction layer controller 211 connected to Gen-Z port 208 for transmitting memory access requests in the upstream direction through Gen-Z port 208 . Gen-Z transaction layer controller 211 is connected to Gen-Z data link layer controller 213 for providing and receiving Gen-Z data packets in the downstream direction. Gen-Z data link layer controller 213 typically manages the Gen-Z communication link through PCIe bus 220, performing link setup, sequencing packets, and controlling the flow of data through the link.The multiplexer/demultiplexer 215 selectively connects the PCIe physical layer circuit 216 to the Gen-Z data link layer controller 213 or the PCIe data link layer controller 214, thereby allowing the PCIe physical layer circuit to pass through 216 Complete Gen-Z link or PCIe link. 
PCIe physical layer circuitry 216 is connected to multiplexer/demultiplexer 215 and operates to create signals for transmission over PCIe bus 220 through the unidirectional transmit port labeled "TX" and It is the one-way receiving port of "RX" to receive the signal. The operation of the multiplexer/demultiplexer 215 is controlled by settings provided during initialization of the dual-protocol link controller 209 through the link training and state machine (LTSSM) 217, as described further below.On-chip interconnect 204 includes another path for processor 202 to communicate through dual protocol link controller 209 using the PCIe protocol through a connection to PCIe transaction layer controller 212 . This path is provided for normal PCIe traffic, allowing PCIe-enabled devices to be connected to the PCIe bus 220 as an alternative to or in addition to the memory module 230 operating with the Gen-Z protocol. PCIe devices may be connected to PCIe lanes of PCIe bus 220 that are different from those used by memory module 230 . PCIe transaction layer controller 212 is connected to PCIe data link layer controller 214, which is selectively connected to PCIe physical layer circuitry 216 through multiplexer/demultiplexer 215, As described further below. PCIe transaction layer controller 212 and PCIe data link layer controller 214 operate as known in the art.The blocks of dual protocol link controller 209 may be implemented with various combinations of hardware, firmware, and software. In this embodiment, dual protocol link controller 209 is implemented entirely in hardware. In another exemplary implementation, the PCIe physical layer circuit 216 is implemented in hardware, the PCIe transaction layer controller 212 is implemented in software, and the PCIe data link layer controller 214 is implemented partially in hardware and partially in software. The Gen-Z protocol layer 206 is implemented in software, the Gen-Z transaction layer controller 211 is partially implemented in hardware and partially in software, and the Gen-Z data link layer controller 213 is implemented in hardware.Memory module 230 may be an expansion card type module with PCIe connectors, or may take the form of other expansion modules and/or be built into the motherboard hosting host processor 210 . Memory module 230 includes memory 234 having one or more memory chips connected to interface controller 231 via a high speed local bus. Interface controller 231 includes media controller 232, Gen-Z protocol layer 206, virtual Gen-Z port 208, and link controller 233. The media controller typically performs memory access requests to memory 234 . Gen-Z protocol layer 206 is connected to media controller 232 and prepares and formats messages according to the Gen-Z protocol. Gen-Z protocol layer 206 is connected to virtual Gen-Z port 208 in the downstream direction. Virtual Gen-Z port 208 serves as a logical port for Gen-Z communications from media controller 232 and is connected to Gen-Z transaction layer controller 211 of link controller 233 .Link controller 233 includes Gen-Z transaction layer controller 211 , Gen-Z data link layer controller 213 , PCIe physical layer circuitry 216 , and LTSSM 217 , which operate similarly to those elements in link controller 209 . However, in link controller 233, no PCIe transaction layer, data link layer, or multiplexers are employed, allowing link controller 233 to communicate only with the Gen-Z protocol. 
The PCIe physical layer circuit 216 of the link controller 233 is connected to the transmission medium of the PCIe bus 220 and transmits and receives Gen-Z protocol communications through the PCIe bus 220. Multiple channels or a single channel may be used in a connection running through multiple lanes of PCIe bus 320. The LTSSM 217 of the link controller 233 performs the functions of the PCIe LTSSM and negotiates the use of the Gen-Z protocol, as described below.Memory module 230 may be used in a memory-centric architecture or a traditional processor-centric architecture, as each architecture is supported by Gen-Z. In this example, memory 234 is storage class memory (SCM) and is non-volatile memory (NVM). However, these examples are not limiting, and many types of memory modules may employ the techniques described herein. For example, RAM memory or memory with mixed NVM and RAM may be used, such as high-capacity flash memory or 3D cross-point memory with RAM buffers.The media controller 232 may be integrated with some or all of the port circuitry of the dual protocol link controller 209 on the interface controller chip (231). The two LTSSMs 217 negotiate with each other during link initialization to notify the host processor 210 of the presence of a Gen-Z device on the PCIe bus 220 and negotiate a connection protocol between the host processor 210 and the memory module 230 . Preferably, this negotiation occurs in addition to the LTSSM training process that is part of the PCIe link controller, as described further below.Figure 3 illustrates the data processing platform 300 in block diagram form. Typically, host processor 310 is connected to memory module 330 via PCIe bus 320, recognizes that a Gen-Z device is connected, and configures host processor 310 and memory module 330's dual protocol link controller 309 accordingly. Host processor 310 is the same as host processor 210 of FIG. 2, with reference numbers for corresponding elements beginning with a "3" instead of a "2".Memory module 330 may be an expansion card type module with PCIe connectors, or may take the form of other expansion modules and/or be built into the motherboard hosting host processor 310 . The memory module 330 includes a memory 334 having one or more memory chips and an interface controller 331. The interface controller 331 includes a media controller 332 and a dual-protocol link controller 309 connected to the transmission medium of the PCIe bus 320 . Multiple channels or a single channel may be used in a connection running through multiple lanes of PCIe bus 320.Media controller 332 and its associated Gen-Z protocol layer 306 operate to implement and respond to memory requests formatted in the memory semantics provided by the Gen-Z protocol. Memory module 330 may be used in a memory-centric architecture or a traditional processor-centric architecture, as each architecture is supported by Gen-Z. In this example, memory 334 is a storage class non-volatile memory similar to memory module 230 .The media controller 332 may be integrated with some or all of the port circuitry of the dual protocol link controller 309 on the interface controller chip (331). The dual-protocol link controller 309 is the same as the dual-protocol link controller 309 of the host processor 310, having elements 311, 313, 315, 316, 317, 312, and 314, except that the processor 310 can The complete PCIe root complex is included in the link controller 309. 
The two LTSSMs 317 negotiate with each other during link initialization to notify the host processor 310 of the presence of a Gen-Z device on the PCIe bus 320 and negotiate a connection protocol between the host processor 310 and the memory module 330, as further described below with respect to FIG. 5 described. Dual-protocol link controller 309 can generally be configured through register settings to negotiate the use of the Gen-Z protocol or the PCIe protocol. Preferably, this negotiation occurs as a supplement to the LTSSM training process that is part of the PCIe link controller.4 illustrates, in flowchart form, a state diagram 400 for operating a prior art PCIe LTSSM. As described in the PCIe standard, LTSSM typically provides physical layer control procedures that configure and initialize each link for operation. The LTSSM performs the following functions: configures and initializes the PCIe link, supports packet transmission, recovers from link errors, and restarts the PCIe port from a low-power state. When configuring and initializing a PCIe link, the LTSSM first enters a detection state in which it detects the presence of a channel on the lane, typically in response to the physical layer circuitry being initialized or commanded by the link layer (as shown). link partner. LTSSM goes from the detection state to the polling state in which it is established as the link partners exchange predetermined ordered sets of symbols (referred to as training set 1, "TS1" and training set 2, "TS2") Bit and sign lock and channel polarity. These ordered sets contain bit patterns that allow the transmitter and receiver to measure and adjust the performance of the transmitter and receiver over the specific transmission medium of each channel.The LTSSM then enters the configuration state where the TS1 and TS2 ordered sets are exchanged again and parameters such as data rate, lane ordering and link width are established. The LTSSM then enters L0, which is the normal operating state in which data is transmitted on the link. Various errors during the configuration process may cause LTSSM to enter a recovery state. The LTSSM may also enter a power idle or standby state (L0s), a lower power standby/hibernation state (L1), a low power sleep state (L2), or a link down state (L3).Figure 5 illustrates, in flowchart form, an example process 500 for selecting an alternative protocol using enhanced LTSSM 317, in accordance with some embodiments. Generally, process 500 is performed by enhanced LTSSM 317 controlling PCIe physical layer circuitry 316 (FIG. 3) at each end of a lane of PCIe bus 320. Process 500 begins with process block 502, where enhanced LTSSM 317 is typically initiated when the data processing platform is powered on or reset according to any suitable procedure (eg, cold reset or warm reset). The enhanced LTSSM 317 may also restart in response to a command from the host processor, such as a command to leave link standby. At block 504, the enhanced LTSSM 317 completes the detection state to detect the presence of a physical layer circuit transmitter or receiver at the opposite end of the channel. Then at block 506, the enhanced LTSSM 317 checks for alternative protocols enabled on the attached device. Setup is typically initialized using boot ROM, pins set to specific values, or some other suitable technique to set specified values in Gen-Z device registers. The Gen-Z device then checks the register to see that an alternative protocol, such as the Gen-Z Transaction Layer Protocol, is enabled. 
The check is performed on the Gen-Z device side of the link to determine the preferred protocol to communicate with, and can also be performed independently on the host processor side to determine whether alternative protocols are supported or allowed. At the host processor, settings can be stored in registers that are examined by the PCIe root complex.If the alternative protocol is not enabled, the process 500 enters the normal PCIe LTSSM process at block 508 , in which it completes the polling state at block 516 , completes the configuration state at block 518 , and completes at block 520 After link configuration, exit to L0 operating state at block 522. Block 518 may configure the multiplexer/demultiplexer 315 of the two I/O port controllers 309 at either end of the link to connect the PCIe data link layer controller 314 to the PCIe physical layer circuit 316 , or such a connection may have been set to a default state. If the PCIe protocol is not enabled by default, block 518 may also include transmitting PCIe protocol negotiation information identifying the PCIe protocol in the same manner as the alternative protocol negotiation information was exchanged at block 510 .If the alternative protocol is enabled, with reference to block 508, the process 500 proceeds to block 510 where it negotiates the use of the alternative protocol by transmitting modified TS1 and TS2 ordered sets. The ordered set is modified to insert information into the TS1 or TS2 set at the Gen-Z device end of the link indicating support for the alternative protocol. The enhanced LTSSM 317 transmits and receives alternative protocol negotiation information over the data channel within the modified TS1 and TS2 ordered sets. The host processor 300 end of the link similarly acknowledges acceptance of the alternative protocol by inserting acknowledgment information into the TS1 or TS2 ordered set transmitted back to the Gen-Z device 330.At block 512, the process 500 configures the multiplexer/demultiplexer 315 of the two I/O port controllers 309 at either end of the link to connect the Gen-Z data link layer controllers to PCIe physical layer circuit 316. Typically, a Gen-Z data link layer controller is used if the Gen-Z protocol is supported on both ends of the link, host processor 300 and Gen-Z device 330 . If either end of the link supports only the PCIe protocol, the PCIe data link layer controller 314 is used. At block 514, configuration of the channel is completed by negotiating link speed, link width, and other relevant parameters.This approach enables the use of PCIe or Gen-Z communications in a manner that is transparent to the application layer of the system. It also allows both protocols to use the same physical transmission medium, namely the lanes of the PCIe bus 320 (usually 16 or 32 lanes). Because alternative protocol negotiation is done on a channel-by-channel basis, many channels can be used for the Gen-Z protocol (for example, memory modules), while other channels are used for the PCIe protocol (for example, for peripherals). This article's technique also allows for backward compatibility since older PCIe installations won't interfere with Gen-Z-specific hardware. Additionally, using these techniques within a data structure allows a processing element multiple paths to its chosen port and choice of its chosen protocol.Figure 6 illustrates an unmodified ordered training set 602 in symbolic sequence diagram form, in accordance with some embodiments. 
The unmodified training set typically consists of two sets of 16 symbols each, which are used by the LTSSM to establish alignment and other link parameters during the polling and configuration states of the LTSSM.Figure 7 illustrates a modified ordered training set 702 in symbolic sequence diagram form, in accordance with some embodiments. Modified ordered training set 702 may be a modified version of one or both of the TS1 or TS2 ordered training sets used by LTSSM. Modified data 704 includes alternative link negotiation parameters that identify the protocol to be employed, such as the Gen-Z protocol. Modified data includes at least one bit changed from the original TS1 or TS2 ordered set. The enhanced LTSSM 317 checks whether there is changed data at the modified bit position to determine whether a modified ordered training set has been received.In various embodiments, the techniques herein may be used with any suitable product (eg, server, data processing computer, database host) that employs memory modules or other peripherals that benefit from high-speed communication links. Furthermore, the described technology is widely applicable to the use of data processors implemented in GPU and CPU architectures or ASIC architectures as well as programmable logic architectures.Although specific embodiments have been described, various modifications to these embodiments will be apparent to those skilled in the art. For example, multiple alternative protocols may be enabled by the link controller and negotiated as described herein.Therefore, it is intended that the appended claims cover all modifications of the disclosed embodiments that fall within the scope of the disclosed embodiments. |
In a lateral BJT formed using a BiCMOS process, the collector-to-emitter breakdown voltage (BVCEO) and BJT's gain, are improved by forming a graded collector contact region (320) with lower doping levels toward the base contact (340). |
CLAIMSWhat is claimed is:1. A lateral bipolar junction transistor (BJT), comprising:a collector, a base with a base contact, and an emitter, wherein the collector includes a graded collector contact extending toward the base contact, the graded collector contact having a doping concentration that drops from substantially Iel6/cm3 down to substantially Iel5/cm3 over a distance of about 2.5μιη as it gets closer to the base contact.2. The lateral BJT of claim 1, wherein the collector contact includes a deep well (DWELL).3. The lateral BJT of claim 2, wherein the collector contact also includes a collector contact moat in the form of a shallow well (SWELL).4. The lateral BJT of claim 1, wherein the lateral BJT is part of a BCD process in which the emitter is defined by a source-drain region, and the base is defined by an epitaxial region (Epi), with a source-drain region (SD) forming a contact to the base.5. The lateral BJT of claim 4, wherein the base further includes a shallow well (SW) in which the source-drain region is formed.6. The lateral BJT of claim 3, wherein the DWELL includes a DNWELL in the case of an NPN BJT, or a DPWELL in the case of a PNP BJT.7. The lateral BJT of claim 6, wherein the DWELL is configured to extend toward the base contact, with lower doping level closer to the base contact.8. The lateral BJT of claim 7, wherein the SWELL and DWELL are configured so that the SWELL is at least partially surrounded by the DWELL.9. The lateral BJT of claim 8, wherein the doping level of the DWELL is lower than that of the SWELL.10. A lateral bipolar junction transistor (BJT), comprising:a collector, a base with a base contact, and an emitter, wherein the collector includes a graded collector contact defined by a deep well (DWELL).11. The lateral BJT of claim 10, wherein the graded collector contact extends toward the base contact and has a doping concentration of substantially Iel6/cm3 dropping down to substantially Iel5/cm3 over a distance of about 2.5μιη as it gets closer to the base contact.12. A method of improving the characteristics of a lateral BJT having an emitter, base and collector, comprising:providing a graded collector contact.13. The method of claim 10, wherein the graded collector contact includes a graded deep well (DWELL).14. The method of claim 11, wherein the graded DWELL is achieved by high-energy phosphorous implant followed by an anneal cycle.15. The method of claim 12, wherein the phosphorus is implanted at approximately lMeV.16. The method of claim 12, wherein the anneal cycle is for approximately 75 minutes at substantially 1150 C.17. The method of claim 12, wherein the graded DWELL is configured to extend toward a base contact of the BJT and have a lower doping level closer toward the base contact.18. The method of claim 15, further comprising providing the collector contact with a shallow well (SWELL) moat of the same doping type as the DWELL.19. The method of claim 16, wherein the SWELL is partially surrounded by the DWELL and makes contact with a source-drain region (SD) defining a collector surface contact of same doping type as the DWELL and SWELL.20. The method of claim 17, wherein the SWELL is formed by ion implantation after the DWELL is formed. |
IMPROVING LATERAL BJT CHARACTERISTICS IN BCD TECHNOLOGY[0001] The relates generally to fabrication of semiconductor devices, and more particularly to BiCMOS devices and improving lateral BJT characteristics.BACKGROUND[0002] Integrated circuits having bipolar and MOS transistors formed on the same semiconductor substrate have many applications in the electronics industry and are therefore in great demand. They combine the high power and fast switching speeds of bipolar devices with the high density and low power consumption of MOS transistors.[0003] When forming devices using a bipolar complementary metal oxide semiconductor (BiCMOS) manufacturing process, care is taken to minimize a number of masks employed in the process, in order to lower the manufacturing costs. Therefore, as often as is practicable, efforts are made to integrate the use of regions typically used for CMOS/DMOS devices as regions in a bipolar device, and vice versa. In BCD (bipolar-CMOS-DMOS) technology, bipolar devices are therefore usually "mask free" because they do not use dedicated masks for the base, emitter and collector, but instead use existing process layers. Such integration helps to minimize manufacturing costs, but the integration causes performance tradeoffs in some cases.[0004] For example, FIG. 1 (prior art) illustrates an NPN type bipolar transistor 10 fabricated using a BiCMOS type fabrication process. The transistor 10 has an n-buried layer (NBL) 12 that is formed in a lightly doped P-type substrate 14. A P-type epitaxial (Pepi) layer 16 is then grown over the NBL 12 and the substrate 14. A deep N+ring 18 is formed by performing either an N-type implant or N-type thermal deposition in the epitaxial layer 16. The deep N+ring 18 extends down to the NBL 12 to couple with the NBL 12 and define a collector region. The deep N+ring 18 also defines therein an isolated base region 22 including the Pepi. The N+region 18 is usually configured as a ring to provide isolation and serve as a plug extending down to the NBL region 12 for purposes of making contact thereto. A P-type source/drain implant is then performed to define a base contact region 24, and an N-type source/drain implant is performed to form an emitter region 26, wherein the base contact region is formed concurrently with the formation of PMOS source/drain regions elsewhere, and the emitter region is formed concurrently with NMOS source/drain regions elsewhere, respectively.[0005] The NPN bipolar transistor 10 of FIG. 1 may be employed in various types of applications. In some applications, the collector-to-emitter breakdown voltage (BVCEO) of the transistor 10 may be an issue.[0006] Another consideration in bipolar transistor is its gain, which is sometimes referred to as the transistor β or HFE. When using the BiCMOS process described above, the spacing between the N-type source/drain region 26 (which forms the emitter) and the deep N+ring 18 (which forms the collector) of the lateral NPN bipolar transistor is relatively large, which contributes to poor bipolar transistor gain.[0007] FIG. 2 (prior art) shows another conventional BiCMOS structure that defines a medium voltage NPN device. The emitter of the NPN bipolar transistor is defined by an n-type source-drain region (NSD) region 210. The base is formed by the p-epitaxial region (Pepi) 212 and a p-buried layer (PBLMV) 214. An n-buried layer (NBL) 216, with its DEEPN 218 formed in a deep trench region providing contact to the NBL 216, defines the collector of a vertical NPN transistor. 
The shallow n-well (SNW) 222, with its n-type source-drain (NSD) contact region 224, defines the collector of a lateral NPN transistor. Current flows from emitter to collector in vertical (NSD-PBLMV-NBL) and lateral (NSD-Pepi-SNW) directions, but lateral current prevails for typical device dimensions.[0008] BVCEO of this device is limited by Pepi-SNW or Pepi-DEEPN junction breakdown and is often not high enough for device operation.SUMMARY[0009] This disclosure seeks to improve lateral BJT characteristics in a BCD process by using a graded collector contact. For purposes of this disclosure, the term graded refers to the grading of the doping profile.[0010] In described examples of a lateral bipolar junction transistor (BJT), the collector includes a graded collector contact. The graded collector contact includes a deep well (DWELL). The DWELL may be provided with a graded profile by subjecting it to very high thermo-cycle. The collector may also include a collector contact moat, which may include a shallow well (SWELL). The lateral BJT may be part of a BCD process, wherein an emitter of the BJT is defined by a source-drain region (SD), and a base is defined by an epitaxial region (epi) with a second source-drain region (SD) of opposite polarity to the SD of the emitter, forming a contact to the base. The base may further include a shallow well (SW) in which the base contact SD is formed. The DWELL may be configured to extend toward the base contact SD, with lower doping level closer to the base contact SD. The doping level of the DWELL may be lower than that of the SWELL Both DWELL and SWELL may be formed by ion implantation. The SWELL is typically formed after the DWELL and, accordingly, does not see high thermo-cycle. Typically, the SWELL and DWELL may be configured, so that the SWELL is at least partially surrounded by the DWELL.[0011] Further, according to described examples, a method of improving lateral BJT characteristics includes providing a graded collector contact. The graded collector contact may be defined by a deep well (DWELL). The graded DWELL may be achieved by high-energy, such as approximately lMeV phosphorous implant followed by a long anneal (e.g., 75 minutes at 1150 C). A lower doped portion of the graded collector contact may extend toward a base contact of the BJT. The method may include providing the collector contact with a shallow well (SWELL) moat of same doping type as the DWELL. The SWELL may be formed in the DWELL. The SWELL may make contact with a source-drain region (SD) that defines a collector surface contact of same doping type as the DWELL and SWELL. The SWELL may also make contact with a DEEP region (formed in a deep trench region that serves as contact for a buried layer). Both the DEEP region and the buried layer have the same doping type as the DWELL and SWELL.BRIEF DESCRIPTION OF THE DRAWINGS[0012] FIG. 1 (prior art) is a sectional side view through a conventional BiCMOS structure.[0013] FIG. 2 (prior art) is a sectional side view through another conventional BiCMOS structure.[0014] FIG. 3 is sectional view through a BiCMOS structure of example embodiments.[0015] FIG. 4 is a top view of the BiCMOS structure of FIG. 3.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0016] In example embodiments, a CMOS/DMOS manufacturing process allows for optimization of bipolar transistor parameters, including parameters related to horizontal bipolar transistors, without significantly increasing the number of steps and/or masks required in the process.[0017] FIGS. 
3 and 4 show an example embodiment of a BiCMOS structure defining a vertical and a lateral NPN bipolar junction transistor (BJT). In other examples, the structure can also be implemented to define a lateral PNP, by using the opposite polarities of the various doped regions.[0018] As shown in the sectional side view of FIG. 3, the structure is formed on a p-substrate (PSub) 300. The emitter of both the vertical and lateral NPN bipolar transistor is defined by an n-type source-drain region (NSD) region 310. The base is formed by the p-epitaxial region (Pepi) 312 and a p-buried layer (PBLMV) 314. In the case of the lateral BJT, contact to the Pepi 312 defining the base is achieved by the p-type source-drain (PSD) region 340 via the shallow p-well (SPW) 342. An n-buried layer (NBL) 316 defines the collector of a vertical NPN transistor. A DEEPN region 318, formed in a Deep Trench, provides contact to the NBL 316.[0019] In this embodiment, the lateral NPN BJT collector is defined by a graded deep n-type well (DNWELL) 320 and a shallow n-type well (SNWELL) 322. The SNWELL forms a collector contact moat and makes contact with an n-type source-drain (NSD) region 324. By subjecting the DWELL (in this case DNWELL 320) to very high thermo-cycle (e.g., 75 minutes at 1150 degrees C), it is provided with a graded profile. The DNWELL may be configured to extend toward the PSD 340 defining the base contact, with lower doping level closer to the PSD 340. In this example, the doping level of the DWELL is chosen to be lower than that of the SNWELL. Both the DNWELL and SNWELL are formed by ion implantation. The SNWELL is formed after the DNWELL. Accordingly, unlike the DNWELL, the SNWELL does not see high thermo-cycle, but is annealed at typical lower temperatures and shorter times, such as 30 minutes at 900 degrees C. This embodiment shows the DNWELL 320 having a vertical dimension that allows it to extend into the PBLMV 314, whereas the SNWELL 322 does not extend deeper than the Pepi 312, but these dimensions may vary. The important aspect is the doping profile in a lateral direction, and ensuring that the DNWELL 320 has a lower doping profile than the SNWELL 322 and extends further laterally toward the PSD 340 than the SNWELL 322.[0020] As in the conventional structure discussed above with respect to FIG. 2, current flows from emitter to collector in vertical (NSD-PBLMV-NBL) and lateral (NSD-Pepi-SNW) directions. However, in the structure of example embodiments, the profile of the lateral collector contact is graded in a lateral direction, with doping levels getting lower toward the PSD 340 and the NSD emitter contact 310. This has the effect of reducing the electric field at the Pepi junction 330. Also, because the lateral base width is reduced by the addition of the DNWELL 320, the gain β or HFEis increased. The VA is also increased due to lower collector-base junction capacitance. Thus, by adding a collector contact with a graded profile (DNWELL in this example) to the collector contact moat, it creates a graded collector contact, which increases the device operating temperature and improves the β * VA product.Table 1. 20V Vertical NPN for MV flow[0021] Table 1 shows the significant increase in the gain β (HFE) at different current densities, and the collector-to-emitter breakdown voltage (BVCEO) for an NPN device with a graded collector contact in accordance with example embodiments, in comparison to a conventional device. Low Jc = le-7 Α/μιη2; medium Jc = le-6 Α/μιη2; high Jc = le-5 Α/μιη2. 
If a curve is plotted of output voltage Vce versus collector current Ic for some forward bias of the emitter and two reverse voltages on the collector, Va is the intercept on the Vce axis extrapolated to lc=0[0022] In this example embodiment, the DNWELL is formed by using high-energy (approximately lMeV) phosphorous implant and subsequently a long anneal cycle (approximately 75 minutes at 1150 degrees C). First, the DNWELL is implanted, and the SNWELL is implanted afterward. In this embodiment, the maximum DNWELL concentration is ~lel6 /cm3, going down to lel5/cm3over a distance of about 2.5μιη, while the maximum SNWELL concentration is ~2el7 /cm3. The maximum doping concentration of the PWELL and SWELL will depend on the voltage rating of the BJT.[0023] A top view of the structure of FIG. 3 is shown in FIG. 4, which shows the ring-like configuration of the collector structure (DNWELL 320, SNWELL 322, NSD 224) and the DEEPN 318.[0024] The graded collector contact can be implemented in different ways to the deep well described in the above embodiment. |
A multi-chip integrated circuit (IC) package is provided which is configured to protect against failure due to warpage. The IC package may comprise a substrate, a level-one IC die and a plurality of level-two IC dies. The level-one IC die having a surface that is electrically coupled to the substrate. The plurality of level-two IC dies is stacked above the level-one IC die. The plurality of level-two IC dies may each have an active surface that is electrically coupled to the substrate. The plurality of level-two IC dies may be arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. Relative to a single die configuration, the level-two IC dies are separated thereby inhibiting cracking, peeling and/or other potential failures due to warpage of the IC package. |
1. A multi-chip integrated circuit (IC) package (500, 900, 1500, 1700), comprising:
a substrate (514, 1114, 1714);
a level-one IC die (502, 1002, 1602, 1702) having a surface (506, 1106, 1708) that is electrically coupled to the substrate by a first plurality of electrical conductors (520, 1030, 1734); and
a plurality of level-two IC dies (504a, 504b, 904a, 904b, 904c, 904d, 1504a, 1504b, 1504c, 1504d, 1704a, 1704b) stacked above the level-one IC die, the plurality of level-two IC dies each having an active surface (510a, 510b, 910a, 910b, 910c, 910d, 1710a, 1710b) that is electrically coupled via a second plurality of electrical conductors (516a, 516b, 518a, 518b, 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1732a, 1732b) to the substrate, wherein the second plurality of electrical conductors are disposed on at least one active surface perimeter overhang region (517, 519, 1017, 1019) of each of the plurality of level-two IC dies, wherein the plurality of level-two IC dies are arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane;
characterized in that the electrical conductors of the first plurality of electrical conductors are smaller than the electrical conductors of the second plurality of electrical conductors.
2. The IC package of claim 1, wherein the level-one IC die and the plurality of level-two IC dies are coupled, by the first plurality of electrical conductors and the second plurality of electrical conductors respectively, to a surface of the substrate in a single plane.
3. The IC package of claim 1, wherein the plurality of electrical conductors are at least one of soldering bumps, soldering balls, pillars, pins, stud bumps, and/or stacks of stud bumps.
4. The IC package of claim 1, wherein the plurality of level-two IC dies includes two, three or four level-two IC dies, wherein one of the plurality of level-two IC dies has a length and/or a width that is different from another level-two IC die.
5. The IC package of claim 1, comprising two (2) level-two IC dies (504a, 504b, 1704a, 1704b) substantially identical in size.
6. The IC package of claim 1, comprising two (2) level-two IC dies (504a, 504b, 1704a, 1704b), which each includes three sides having an active surface perimeter overhang region (517, 519), wherein the active surface perimeter overhang regions of the two level-two IC dies include the second plurality of electrical conductors.
7. The IC package of claim 6, wherein each of the two (2) level-two IC dies includes at least one side, a portion of which is positioned directly above a back side surface (508, 1709) of the level-one IC die and lacks the plurality of electrical conductors.
8. The IC package of claim 1, comprising four (4) level-two IC dies (904a, 904b, 904c, 904d, 1504a, 1504b, 1504c, 1504d), which each includes two sides having an active surface perimeter overhang region (1017, 1019), wherein the active surface perimeter overhang regions of the four level-two IC dies include the second plurality of electrical conductors.
9. The IC package of claim 8, wherein each of the four (4) level-two IC dies includes at least two sides, a portion of each of which is positioned directly above a back side surface (1108) of the level-one IC die and lacks the plurality of electrical conductors.
10. The IC package of claim 1, wherein the plurality of level-two IC dies includes a first level-two IC die, a second level-two IC die, a third level-two IC die and a fourth level-two IC die, wherein the first level-two IC die is adjacent the second level-two IC die in a first direction and adjacent the third level-two IC die (904c) in a second direction perpendicular to the first direction, and the fourth level-two IC die (904d) is adjacent the third level-two IC die (904c) in the first direction and adjacent the second level-two IC die (904b) in the second direction, wherein a first spacing exists between the first level-two IC die and the second level-two IC die and between the third level-two IC die and the fourth level-two IC die, wherein a second spacing exists between the first level-two IC die and the third level-two IC die and between the second level-two IC die and the fourth level-two IC die, and wherein the first spacing is between 0.1% and 1%, between 1% and 5%, between 5% and 10%, or between 10% and 20% of the width of the first level-two IC die or the second level-two IC die in the first direction, and the second spacing is between 0.1% and 1%, between 1% and 5%, between 5% and 10%, or between 10% and 20% of the length of the first level-two IC die or the second level-two IC die in the second direction.
11. The IC package of claim 1, wherein the plurality of level-two IC dies includes a first level-two IC die, a second level-two IC die, a third level-two IC die and a fourth level-two IC die, wherein the first level-two IC die is adjacent the second level-two IC die in a first direction and adjacent the third level-two IC die (904c) in a second direction perpendicular to the first direction, and the fourth level-two IC die (904d) is adjacent the third level-two IC die (904c) in the first direction and adjacent the second level-two IC die (904b) in the second direction, wherein a first spacing exists between the first level-two IC die and the second level-two IC die and between the third level-two IC die and the fourth level-two IC die, wherein a second spacing exists between the first level-two IC die and the third level-two IC die and between the second level-two IC die and the fourth level-two IC die, characterized in that the first spacing is between 5% and 20% of the width of the first level-two IC die or the second level-two IC die in the first direction, and in that the second spacing is between 5% and 20% of the length of the first level-two IC die or the second level-two IC die in the second direction.
12. The IC package of claim 1, wherein the plurality of level-two IC dies are formed by breaking up a single IC die into multiple dies.
13. The IC package of any preceding claim, wherein the level-one IC die and the plurality of level-two IC dies are electrically coupled to each other by at least one of electrical interconnections in the substrate and/or through silicon vias.
14. The IC package of any preceding claim, wherein the IC package is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
15. A method for manufacturing a multi-chip integrated circuit (IC) package (500, 900, 1500, 1700), the method comprising:
providing a substrate (514, 1114, 1714);
electrically coupling a surface (506, 1106, 1708) of a level-one IC die (502, 1002, 1602, 1702) to the substrate using a first plurality of electrical conductors (520, 1030, 1734);
stacking a plurality of level-two IC dies (504a, 504b, 904a, 904b, 904c, 904d, 1504a, 1504b, 1504c, 1504d, 1704a, 1704b) above the level-one IC die, the plurality of level-two IC dies each having an active surface (510a, 510b, 910a, 910b, 910c, 910d, 1710a, 1710b) that is electrically coupled via a second plurality of electrical conductors (516a, 516b, 518a, 518b, 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1732a, 1732b) to the substrate, wherein the second plurality of electrical conductors are disposed on at least one active surface perimeter overhang region (517, 519, 1017, 1019) of each of the plurality of level-two IC dies; and
arranging the plurality of level-two IC dies side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane;
characterized in that the electrical conductors of the first plurality of electrical conductors are smaller than the electrical conductors of the second plurality of electrical conductors. |
BACKGROUND
Field
Various features relate to integrated circuits (ICs), and more particularly to multi-chip ICs and methods for making the same.
Background
The ever-increasing demand for smaller, lighter, and faster portable electronic devices, such as mobile phones and laptop computers, has forced the electronics industry to create circuit components that feature greater capacity and performance, but smaller dimensions. For example, portable devices may now contain IC packages having two or more semiconductor dies stacked vertically and encased within the same molding compound of the IC package. Such multi-chip IC packages are commonly referred to as "system-in-packages" (SIP) and "chip stack multi-chip modules" (MCM).
FIG. 1 illustrates a schematic, cross-sectional side view of an SIP 100 found in the prior art. The SIP 100 includes two IC dies 102, 104 that are stacked on top of each other. The top IC die 102 may be, for example, a memory circuit, and the bottom IC die 104 may be, for example, a processing circuit. The length and/or width of the top die 102 is larger than the length and/or width of the bottom die 104, and generally, the top die 102 may have a surface area that is greater than that of the bottom die 104. The two dies 102, 104 are stacked on top of each other and encased within a single molding compound 106. The active surface 110 of the top die 102 is electrically coupled to a laminate substrate 108 via a plurality of soldering bumps 112a and conductive pillars 112b. The active surface 114 of the bottom die 104 is electrically coupled to the substrate 108 via another plurality of soldering bumps 116. In this fashion, both dies 102, 104 are electrically coupled to the substrate 108 in a flip-chip fashion, and communicate with each other through electrical connections (not shown) within the laminate substrate 108. The package 100 may be mounted onto a motherboard (e.g., a PCB) through a ball grid array or pin grid array structure (not shown).
FIG. 2 illustrates a schematic, top view of the SIP package 100 with the molding compound 106 removed, thereby exposing the top IC die 102 underneath. The top die 102 has a length lA and a width wA. FIG. 3 illustrates a schematic, bottom view of the SIP package 100. The substrate 108 and molding compound 106 have been omitted for clarity, thereby exposing the top die 102 having the soldering bumps 112a and the bottom die 104 having the soldering bumps 116.
The top IC die 102 will have limited speed, performance, reliability, and/or throughput due to its relatively larger size (e.g., larger surface area and/or greater dimensions along its length and/or width) compared to the bottom IC die 104. For example, the top die 102 may suffer from crosstalk and electromagnetic interference (EMI) effects among the various IC components located on its active surface 110. These undesirable effects limit the clock speed at which the top die 102, for example volatile dynamic random access memory (DRAM), can reliably operate due to clock signal jitter.
Moreover, the larger, top die 102 is more prone to failure from open solder joints due to warpage effects. FIG. 4 illustrates a schematic, cross-sectional side view of the SIP 100 (the bottom die 104 and associated soldering bumps 116 have been omitted for clarity) where the substrate 108 has undergone significant concave warpage.
According to the illustrated example, although some of the soldering bumps 402 near the corners 403 of the top die 102 remain in electrical contact with the substrate 108, other soldering bumps 404 near the center edge 405 of the top die 102 have separated away from the substrate 108 and are no longer in electrical contact with the substrate 108. Thus, warpage of the substrate 108 may lead to IC package 100 failure because critical connections between the top die 102 and the substrate 108 may become open/disconnected.
Therefore, there is a need for advanced multi-chip IC package designs that improve circuit speed and performance, and also protect against IC package failure due to warpage.
SUMMARY
A multi-chip integrated circuit (IC) package is provided that is configured to resist failure due to warpage. The IC package may include a substrate, a level-one IC die, and a plurality of level-two IC dies. The level-one IC die may have a surface that is electrically coupled to the substrate. The plurality of level-two IC dies may be stacked above the level-one IC die, with the plurality of level-two IC dies each having an active surface that is electrically coupled to the substrate. The plurality of level-two IC dies may be arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. A plurality of electrical conductors may electrically couple the plurality of level-two IC dies to the substrate, where the plurality of electrical conductors may be disposed on at least one active surface perimeter overhang region of each of the plurality of level-two IC dies. According to various examples, the plurality of electrical conductors may be at least one of soldering bumps, soldering balls, pillars, pins, stud bumps, and/or stacks of stud bumps. The level-one IC die and the plurality of level-two IC dies may be electrically coupled to each other by at least one of electrical interconnections in the substrate and/or through silicon vias. At least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies may allow the two (2) level-two IC dies to bend or rotate with respect to one another and remain electrically coupled to the substrate in response to warpage of the substrate. At least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies may cause a first corner or a first side of a first level-two IC die to move below a second corner of the first level-two IC die in response to concave substrate warpage, and may further cause the first corner or the first side of the first level-two IC die to move above the second corner of the first level-two IC die in response to convex substrate warpage. The IC package may be incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
In one implementation, the plurality of level-two IC dies may comprise two (2) level-two IC dies. In one example, the two (2) level-two IC dies may have at least one of a length and/or a width that is different from one another. In another example, the two (2) level-two IC dies may be substantially identical in size.
According to one aspect, each of the two (2) level-two IC dies may include three sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the two (2) level-two IC dies to the substrate. Each of the two (2) level-two IC dies may include at least one side, a portion of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors.
In another implementation, the plurality of level-two IC dies comprises four (4) level-two IC dies. In one example, each of the four (4) level-two IC dies may include two sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the four (4) level-two IC dies to the substrate. Each of the four (4) level-two IC dies may include at least two sides, a portion of each of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors.
The IC package may also include a plurality of level-three IC dies stacked above the level-two IC dies. The plurality of level-three IC dies may each have an active surface that is electrically coupled to the substrate. The plurality of level-three IC dies may be arranged side by side such that the active surfaces of the plurality of level-three IC dies are positioned substantially in another same plane.
A method for manufacturing a multi-chip integrated circuit (IC) package is also provided. A substrate is provided or formed, and a surface of a level-one IC die is electrically coupled to the substrate. A plurality of level-two IC dies is stacked above the level-one IC die, with the plurality of level-two IC dies each having an active surface that is electrically coupled to the substrate. The plurality of level-two IC dies may be arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. The plurality of level-two IC dies is electrically coupled to the substrate with a plurality of electrical conductors, the plurality of electrical conductors disposed on at least one active surface perimeter overhang region of each of the plurality of level-two IC dies.
The plurality of level-two IC dies may comprise two (2) level-two IC dies. In one example, each of the two (2) level-two IC dies may include three sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the two (2) level-two IC dies to the substrate. Each of the two (2) level-two IC dies may include at least one side, a portion of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors.
In another example, the plurality of level-two IC dies may comprise four (4) level-two IC dies. Each of the four (4) level-two IC dies may include two sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the four (4) level-two IC dies to the substrate.
Each of the four (4) level-two IC dies may include at least two sides, a portion of each of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors.
The method may further include (a) stacking a plurality of level-three IC dies above the level-two IC dies, the plurality of level-three IC dies each having an active surface that is electrically coupled to the substrate, and/or (b) arranging the plurality of level-three IC dies side by side such that the active surfaces of the plurality of level-three IC dies are positioned substantially in another same plane.
The method may further include: (a) providing at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies that allows the two (2) level-two IC dies to bend or rotate with respect to one another and remain electrically coupled to the substrate in response to warpage of the substrate, and/or (b) providing at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies that causes a first corner or a first side of a first level-two IC die to move below a second corner of the first level-two IC die in response to concave substrate warpage, and that further causes the first corner or the first side of the first level-two IC die to move above the second corner of the first level-two IC die in response to convex substrate warpage.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a schematic, cross-sectional side view of a system-in-package (SIP) found in the prior art.
FIG. 2 illustrates a schematic, top view of the SIP package with the molding compound removed, thereby exposing the top IC die underneath.
FIG. 3 illustrates a schematic, bottom view of the SIP package.
FIG. 4 illustrates a schematic, cross-sectional side view of the SIP where the substrate has undergone significant concave warpage.
FIG. 5 illustrates a schematic, cross-sectional side view of a stacked multi-chip IC package according to one aspect of the disclosure.
FIG. 6 illustrates a schematic, top view of the IC package according to one aspect.
FIG. 7 illustrates a schematic, bottom view of the IC package according to one aspect.
FIG. 8 illustrates a schematic, bottom view of one of the level-two IC dies according to one aspect.
FIG. 9 illustrates a schematic, top view of an IC package according to one aspect.
FIG. 10 illustrates a schematic, bottom view of the IC package according to one aspect.
FIGS. 11 ― 13 illustrate schematic, cross-sectional side views of the stacked multi-chip IC package according to one aspect of the disclosure.
FIG. 14 illustrates a schematic, bottom view of one of the level-two IC dies according to one aspect.
FIG. 15 illustrates a schematic, top view of an IC package according to one aspect.
FIG. 16 illustrates a schematic, bottom view of the IC package according to one aspect.
FIG. 17 illustrates a schematic, bottom view of a three level, stacked, multi-chip IC package according to one aspect.
FIGS. 18 and 19 illustrate schematic, cross-sectional side views of the three level IC package according to one aspect.
FIGS. 20 and 21 respectively illustrate schematic, top and bottom views of the stacked multi-chip IC package.
FIGS. 22 and 23 illustrate schematic, cross-sectional side views of the stacked multi-chip IC package after the substrate has undergone warpage according to one aspect.
FIG. 24 illustrates a flowchart for a method of manufacturing a multi-chip IC package according to one aspect of the disclosure.
FIG. 25 illustrates various electronic devices that may be integrated with any of the aforementioned IC packages.
DETAILED DESCRIPTION
In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. As used herein, the term "electrically coupled" refers to the direct or indirect coupling between two objects that allows for the flow of electrical current to take place between the two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered electrically coupled to one another, even if they do not directly physically touch each other, provided object B is a conductor that allows for the flow of electrical current to take place from object A to object C and/or from object C to object A.
The term "horizontal" is defined as a plane substantially parallel to the conventional plane and/or surface of an IC package substrate to which IC dies are coupled, regardless of the orientation of the package substrate. The term "vertical" refers to a direction substantially perpendicular to the horizontal plane as defined above. Prepositions, such as "above," "below," "upper," "higher," "lower," "over," "under," "underneath," and "on," when used with respect to the IC packages described herein, are defined with respect to the horizontal plane regardless of the absolute orientation of the package substrate. Thus, if a first IC die is positioned above a second IC die, then the second IC die is physically closer to the aforementioned package substrate surface than the first IC die. Prepositions, such as "next to," "side by side," and "adjacent to," when used with respect to IC packages described herein, are defined with respect to the vertical direction regardless of the absolute orientation of the package substrate. Thus, if a first and a second IC die are positioned side by side, then both IC dies may be the same distance away from the aforementioned package substrate surface, but are located at different distances from a vertical plane that is perpendicular to the aforementioned package substrate surface.
Note that while various examples herein may describe IC dies in a flip chip configuration, the IC features, configurations, and/or arrangements noted may also be implemented with IC dies in wire bonded configurations.
Overview
A multi-chip integrated circuit (IC) package is provided which is configured to protect against failure due to warpage. The IC package may comprise a substrate, a level-one IC die and a plurality of level-two IC dies. The level-one IC die has a surface that is electrically coupled to the substrate.
The plurality of level-two IC dies is stacked above the level-one IC die. The plurality of level-two IC dies may each have an active surface that is electrically coupled to the substrate. The plurality of level-two IC dies may be arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. Relative to a single die configuration, the level-two IC dies are separated, thereby inhibiting cracking, peeling and/or other potential failures due to warpage of the IC package.
Two Level Multi-chip Package
FIG. 5 illustrates a schematic, cross-sectional side view of a stacked multi-chip IC package 500 according to one aspect of the disclosure. The two level IC package 500 comprises a level-one IC die 502 (also referred to herein as the "bottom IC die") and two (2) level-two IC dies 504a, 504b, all of which may be made of semiconductor materials, such as, but not limited to, silicon and/or germanium. The IC dies 502, 504a, 504b may be any type of IC, such as, but not limited to, processing circuits, memory circuits, or a combination thereof. In one aspect, the level-one IC die 502 is an IC that is substantially a processing circuit, and the level-two dies 504a, 504b are memory circuits, such as double data rate type three (DDR3) synchronous dynamic random access memory (SDRAM) circuits. Of course, in other aspects, the dies 502, 504a, 504b may be other types of processing and/or memory circuits.
The level-one IC die 502 has an active surface side 506 (e.g., front side surface) that includes a plurality of integrated circuit components (e.g., transistors, capacitors, inductors, resistors, etc.). Similarly, the level-two IC dies 504a, 504b each have an active surface side 510a, 510b (e.g., front side surface) that includes a plurality of integrated circuit components (e.g., transistors, capacitors, inductors, resistors, etc.). The dies 502, 504a, 504b may each have a back side surface 508, 512a, 512b as well. The active surface 510a of the first level-two IC die 504a may be electrically coupled to a package substrate 514 (e.g., laminate substrate, metal based substrate, such as a copper based substrate, etc.) that it faces via a plurality of electrical conductors 516a, 516b. Similarly, the active surface 510b of the second level-two IC die 504b may be electrically coupled to the substrate 514 that it faces via another plurality of electrical conductors 518a, 518b. Specifically, the electrical conductors 516a, 516b, 518a, 518b are disposed on active surface perimeter overhang regions 517, 519 of the dies 504a, 504b. It will be understood that, in an alternative embodiment, any or all of the electrical conductors 516a, 516b, 518a, 518b may be first disposed on the package substrate 514 and then attached to the active surface perimeter overhang regions 517, 519 of the dies 504a, 504b. The active surface perimeter overhang regions 517, 519 define active surface 510a, 510b areas near the perimeter of the dies 504a, 504b that extend past the side edges 521, 523 of the level-one IC die 502, and thus create overhangs.
The active surface 506 of the level-one IC die 502 may also be electrically coupled to the substrate 514 that it faces via a plurality of smaller electrical conductors 520. In the illustrated example, the electrical conductors 516a, 516b, 518a, 518b, 520 are soldering balls, and thus the IC dies 502, 504a, 504b may be electrically coupled to the substrate 514 in a ball grid array (BGA) flip chip fashion.
However, the electrical conductors 516a, 516b, 518a, 518b, 520 are not limited to soldering balls, and may be any metal, metal alloy, or conductive element that is capable of readily transmitting an electrical signal. For example, the electrical conductors 516a, 516b, 518a, 518b, 520 may be, but are not limited to, soldering bumps, pillars, pins, stud bumps, and/or stacks of stud bumps. In one aspect, the IC dies 502, 504a, 504b may electrically communicate with one another by transmitting and receiving electrical signals via interconnections within the multi-layer package substrate 514. In another aspect, the level-one IC die 502 may be electrically coupled to the level-two IC dies 504a, 504b using through silicon vias (TSV). For example, the level-one IC die 502 may have both a front side (the active surface 506) and a back side 508. The front side of the level-one IC die 502 faces the smaller electrical conductors 520, and the back side of the level-one IC die faces the level-two IC dies 504a, 504b. Thus, TSV elements (not shown) may pass through the back side surface 508 of the level-one IC die 502 and electrically couple with the active surfaces 510a, 510b of the level-two IC dies 504a, 504b. Consequently, the stacked IC dies may electrically communicate with each other through the substrate or through TSVs.
Moreover, the active surface 506 of the level-one IC die 502 may be physically secured to the substrate 514 with die attach and/or underfill adhesive 522. According to one aspect, an adhesive material 524 may be used to secure the level-one IC die 502 to the level-two IC dies 504a, 504b. Finally, an epoxy and/or resin molding compound 526 encases the dies 502, 504a, 504b, the electrical conductors 516a, 516b, 518a, 518b, 520, the underfill 522, and other components to form the package 500. The molding compound 526 may also partially cover the package substrate 514.
In this fashion, the level-two IC dies 504a, 504b are positioned substantially side by side in the same planar region (e.g., in the X-Y plane as shown in FIG. 6) and are each positioned above the level-one IC die 502. For example, the IC dies 504a, 504b may be positioned side by side such that their active surfaces 510a, 510b are substantially in the same plane. As will be discussed in greater detail below, having two or more IC dies 504a, 504b that are each smaller (e.g., have less surface area and/or less length and/or width) than a single, large top IC die 102 (See FIG. 1) having the same number of active components offers distinct advantages.
FIG. 6 illustrates a schematic, top view of the IC package 500 according to one aspect. A portion of the molding compound 526 has been removed to illustrate the level-two IC dies 504a, 504b and the adhesive material 524 underneath. As shown in FIG. 6, the level-two IC dies 504a, 504b are positioned side by side in the X-Y plane. The first level-two IC die 504a has a length lB and a width wB1, and the second level-two IC die 504b has a length lB and a width wB2. According to one aspect, the widths wB1 and wB2 are each less than the width wA of the IC package 100 (See FIG. 1) having a single, large top IC die 102. In one aspect, wB1 and wB2 are each less than half of the width wA.
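By way of a purely illustrative sketch (the dimensions, the spacing value, and the helper function below are assumptions made for this example, not values taken from this disclosure), the following shows the simple arithmetic behind the wB1, wB2 < wA/2 relationship: once any die-to-die spacing is reserved, two equal dies cut from a parent die of width wA are each necessarily narrower than half of wA.

```python
# Illustrative sketch of the width arithmetic for splitting one large
# top die into two side-by-side level-two dies. All dimensions are
# assumed example values (mm), not figures from this disclosure.

def split_die_widths(w_a: float, spacing: float) -> tuple[float, float]:
    """Widths of two equal dies cut from a die of width w_a,
    leaving a gap of `spacing` between them."""
    w_child = (w_a - spacing) / 2.0
    return w_child, w_child

w_a = 12.0   # assumed width wA of the single large die (mm)
s = 0.3      # assumed die-to-die spacing (mm)
w_b1, w_b2 = split_die_widths(w_a, s)

# Each child width is less than half the parent width, matching the
# wB1, wB2 < wA/2 relationship described above.
assert w_b1 < w_a / 2 and w_b2 < w_a / 2
print(f"wB1 = wB2 = {w_b1:.2f} mm (wA/2 = {w_a / 2:.2f} mm)")
```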
FIG. 7 illustrates a schematic, bottom view of the IC package 500 according to one aspect. The molding compound 526, underfill 522, and substrate 514 have been omitted for clarity. As illustrated in FIG. 7, the plurality of electrical conductors 516a, 516b and 518a, 518b that electrically couple the level-two IC dies 504a, 504b to the substrate 514 (not shown in FIG. 7), respectively, may be arranged around perimeter regions of the level-two IC dies 504a, 504b. For example, the level-two die 504a may have a plurality of inner perimeter region electrical conductors 516b that electrically couple the level-two die 504a to the substrate 514. The level-two die 504a may also have a plurality of outer perimeter region electrical conductors 516a that likewise electrically couple the level-two die 504a to the substrate 514. The inner perimeter region electrical conductors 516b are closer to the center region c of the package 500 than the outer perimeter region electrical conductors 516a. Similarly, the level-two die 504b may have a plurality of inner perimeter region electrical conductors 518b that electrically couple the level-two die 504b to the substrate 514. The level-two die 504b may also have a plurality of outer perimeter region electrical conductors 518a that likewise electrically couple the level-two die 504b to the substrate 514. The inner perimeter region electrical conductors 518b are closer to the center region c of the package 500 than the outer perimeter region electrical conductors 518a. Although the illustrated example shows only two (e.g., inner and outer) perimeter regions of electrical conductors 516a, 516b, 518a, 518b, each level-two IC die 504a, 504b may be electrically coupled to the substrate 514 with any number of perimeter region electrical conductors, such as three or more.
FIG. 8 illustrates a schematic, bottom view of one of the level-two IC dies 504a according to one aspect. The die 504a includes four (4) sides 802, 804, 806, 808. The first side 802 has a first active surface perimeter overhang region 810 associated with it that is near the first side 802 of the die 504a. Similarly, the second side 804 has a second active surface perimeter overhang region 812 associated with it that is near the second side 804 of the die 504a. The third side 806 also has a third active surface perimeter overhang region 814 associated with it that is near the third side 806 of the die 504a. Each of the active surface perimeter overhang regions 810, 812, 814 has a plurality of electrical conductors 516a, 516b disposed thereon that electrically couple the die 504a to the substrate 514. By contrast, the fourth side 808 includes a portion 816 that is positioned directly above the back side surface 508 of the level-one IC die 502 and lacks the electrical conductors 516a, 516b. This allows space in the vertical direction (i.e., the Z direction) to accommodate the level-one IC die 502 underneath the level-two IC die 504a (See FIG. 5). The other level-two IC die 504b may have a structure similar to the die 504a just described.
FIG. 9 illustrates a schematic, top view of an IC package 900 according to one aspect. A portion of the molding compound 926 has been removed to illustrate four (4) level-two IC dies 904a, 904b, 904c, 904d and the adhesive material 924 underneath. As shown in FIG. 9, the level-two IC dies 904a, 904b, 904c, 904d are positioned side by side in the X-Y plane, and each has a back side surface 912a, 912b, 912c, 912d. For example, the dies 904a, 904b, 904c, 904d may be positioned side by side such that their active surfaces 910a, 910b, 910c, 910d (See FIGS. 11 ― 13) are substantially in the same plane.
Referring to FIG. 9, the first level-two IC die 904a has a length lC1 and a width wC1, the second level-two IC die 904b has a length lC1 and a width wC2, the third level-two IC die 904c has a length lC2 and a width wC1, and the fourth level-two IC die 904d has a length lC2 and a width wC2. According to one aspect, the lengths lC1 and lC2 are each less than the length lA, and the widths wC1 and wC2 are each less than the width wA, of the IC package 100 (See FIG. 1) having a single, large top IC die 102. In one aspect, wC1 and wC2 are each less than half of the width wA. In another aspect, lC1 and lC2 are each less than half of the length lA. According to one aspect, wC1 is equal to wC2 and lC1 is equal to lC2.
FIG. 10 illustrates a schematic, bottom view of the IC package 900 according to one aspect. Various components of the package 900 have been omitted for clarity. As illustrated in FIG. 10, a plurality of electrical conductors 1016a, 1016b that electrically couple the level-two IC die 904a to the substrate (not shown in FIG. 10) may be arranged around the perimeter region of the die 904a. For example, the level-two die 904a may have a plurality of inner perimeter region electrical conductors 1016b that electrically couple the level-two die 904a to the substrate. The level-two die 904a may also have a plurality of outer perimeter region electrical conductors 1016a that likewise electrically couple the level-two die 904a to the substrate. The inner perimeter region electrical conductors 1016b are closer to the center region c of the package 900 than the outer perimeter region electrical conductors 1016a. Similarly, a plurality of electrical conductors 1018a, 1018b, 1020a, 1020b, 1022a, 1022b that electrically couple the level-two IC dies 904b, 904c, 904d to the substrate (not shown in FIG. 10) may be arranged around the perimeter regions of the dies 904b, 904c, 904d. For example, the second level-two die 904b may have a plurality of inner perimeter region electrical conductors 1018b that electrically couple the second level-two die 904b to the substrate. The second level-two die 904b may also have a plurality of outer perimeter region electrical conductors 1018a that likewise electrically couple the second level-two die 904b to the substrate. The inner perimeter region electrical conductors 1018b are closer to the center region c of the package 900 than the outer perimeter region electrical conductors 1018a. As illustrated, the third and fourth dies 904c, 904d may have electrical conductors 1020a, 1020b, 1022a, 1022b that are similarly arranged. Although the illustrated example shows only two (e.g., inner and outer) perimeter regions of electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, each level-two IC die 904a, 904b, 904c, 904d may be electrically coupled to the substrate with any number of perimeter region electrical conductors, such as three or more.
FIGS. 11 ― 13 illustrate schematic, cross-sectional side views of the stacked multi-chip IC package 900 according to one aspect of the disclosure. The IC package 900 comprises the level-one IC die 1002 and the four (4) level-two IC dies 904a, 904b, 904c, 904d, which are made from semiconductor materials, such as, but not limited to, silicon and/or germanium. The IC dies 1002, 904a, 904b, 904c, 904d may be any type of IC, such as, but not limited to, processing circuits, memory circuits, or a combination thereof.
In one aspect, the level-one IC die 1002 is an IC that is substantially a processing circuit, and the level-two dies 904a, 904b, 904c, 904d are memory circuits, such as DDR3 DRAM circuits. Of course, in other aspects, the dies 1002, 904a, 904b, 904c, 904d may be other types of processing and/or memory circuits.
The level-one IC die 1002 has an active surface side 1106 (e.g., front side surface) that includes a plurality of integrated circuit components (e.g., transistors, capacitors, inductors, resistors, etc.). Similarly, the level-two IC dies 904a, 904b, 904c, 904d each have an active surface side 910a, 910b, 910c, 910d (e.g., front side surface) that includes a plurality of integrated circuit components (e.g., transistors, capacitors, inductors, resistors, etc.). The dies 1002, 904a, 904b, 904c, 904d may each have a back side surface 1108, 912a, 912b, 912c, 912d as well. The active surface 910a of the first level-two IC die 904a may be electrically coupled to a package substrate 1114 (e.g., laminate substrate, metal based substrate, such as a copper based substrate, etc.) that it faces via a plurality of electrical conductors 1016a, 1016b (See FIG. 11). Similarly, the active surface 910b of the second level-two IC die 904b may be electrically coupled to the substrate 1114 that it faces via another plurality of electrical conductors 1018a, 1018b. The active surface 910c of the third level-two IC die 904c may be electrically coupled to the substrate 1114 that it faces via yet another plurality of electrical conductors 1020a, 1020b (See FIG. 12). The active surface 910d of the fourth level-two IC die 904d may be electrically coupled to the substrate 1114 that it faces via another plurality of electrical conductors 1022a, 1022b (See FIG. 13). Specifically, the electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b are disposed on active surface perimeter overhang regions 1117, 1119, 1221, 1323 of the dies 904a, 904b, 904c, 904d. The active surface perimeter overhang regions 1117, 1119, 1221, 1323 define active surface 910a, 910b, 910c, 910d areas near the perimeter of the dies 904a, 904b, 904c, 904d that extend past the side edges 1125, 1127, 1229, 1331 of the level-one IC die 1002, and thus create overhangs.
The active surface 1106 of the level-one IC die 1002 may be electrically coupled to the substrate 1114 that it faces via a plurality of smaller electrical conductors 1030. In one aspect, the IC dies 1002, 904a, 904b, 904c, 904d may electrically communicate with one another by transmitting and receiving electrical signals via interconnections within the multi-layer package substrate 1114. In another aspect, the level-one IC die 1002 may be electrically coupled to the level-two IC dies 904a, 904b, 904c, 904d using through silicon vias (TSV). Thus, TSV elements (not shown) may pass through the back side surface 1108 of the level-one IC die 1002 and electrically couple with the active surfaces 910a, 910b, 910c, 910d of the level-two IC dies 904a, 904b, 904c, 904d.
Moreover, the active surface 1106 of the level-one IC die 1002 may be physically secured to the substrate 1114 with die attach and/or underfill adhesive 1122. According to one aspect, an adhesive material 924 may be used to secure the level-one IC die 1002 to the level-two IC dies 904a, 904b, 904c, 904d.
Finally, an epoxy and/or resin molding compound 926 encases the dies 1002, 904a, 904b, 904c, 904d, the electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1030, the underfill 1122, and other components to form the package 900. The molding compound 926 may also partially cover the package substrate 1114.
FIG. 14 illustrates a schematic, bottom view of one of the level-two IC dies 904a according to one aspect. The die 904a includes four (4) sides 1402, 1404, 1406, 1408. The first side 1402 has a first active surface perimeter overhang region 1410 associated with it that is near the first side 1402 of the die 904a. Similarly, the second side 1404 has a second active surface perimeter overhang region 1412 associated with it that is near the second side 1404 of the die 904a. Each of the active surface perimeter overhang regions 1410, 1412 has a plurality of electrical conductors 1016a, 1016b disposed thereon that electrically couple the die 904a to the substrate 1114. By contrast, the third side 1406 and the fourth side 1408 include portions 1414, 1416 that are positioned directly above the back side surface 1108 of the level-one IC die 1002 and lack the electrical conductors 1016a, 1016b. This allows space in the vertical direction (i.e., the Z direction) to accommodate the level-one IC die 1002 underneath the level-two IC die 904a (See FIG. 11). The other level-two IC dies 904b, 904c, 904d may have structures similar to the die 904a just described.
In this fashion, the level-two IC dies 904a, 904b, 904c, 904d are positioned substantially side by side in the same planar region (e.g., in the X-Y plane as shown in FIGS. 9 and 10) and are each positioned above the level-one IC die 1002. As will be discussed in greater detail below, having four or more IC dies 904a, 904b, 904c, 904d that are each smaller (e.g., have less surface area and/or less length and/or width) than a single large top IC die 102 (See FIG. 1) having the same number of active components offers distinct advantages.
FIG. 15 illustrates a schematic, top view of an IC package 1500 according to one aspect. A portion of the molding compound 1526 has been removed to illustrate four (4) level-two IC dies 1504a, 1504b, 1504c, 1504d and the adhesive material 1524 underneath. As shown in FIG. 15, the level-two IC dies 1504a, 1504b, 1504c, 1504d are positioned side by side in the X-Y plane, and each has a back side surface 1512a, 1512b, 1512c, 1512d. The first level-two IC die 1504a has a length lD1 and a width wD1, the second level-two IC die 1504b has a length lD1 and a width wD2, the third level-two IC die 1504c has a length lD2 and a width wD1, and the fourth level-two IC die 1504d has a length lD2 and a width wD2. Notably, unlike the level-two IC dies 904a, 904b, 904c, 904d of FIG. 9, the level-two IC dies 1504a, 1504b, 1504c, 1504d in FIG. 15 have dimensions and surface areas that differ from one another. For example, according to one aspect, wD1 is less than wD2 and lD2 is less than lD1. In this fashion, the level-two IC dies 1504a, 1504b, 1504c, 1504d may comprise ICs that are each of a different size. According to one aspect, the package 1500 may include two level-two IC dies 1504a, 1504c that are positioned substantially diagonally from one another, but not include the other level-two IC dies 1504b, 1504d.
According to another aspect, the package 1500 may include two level-two IC dies 1504b, 1504d that are positioned substantially diagonally from one another, but not include the other level-two IC dies 1504a, 1504c. According to another aspect, the package 1500 may include three level-two IC dies 1504a, 1504b, 1504c, but not include the other level-two IC die 1504d.
FIG. 16 illustrates a schematic, bottom view of the IC package 1500 according to one aspect. Various components of the package 1500 have been omitted for clarity. As illustrated in FIG. 16, the IC package 1500 also comprises a level-one IC die 1602 that is positioned underneath the level-two IC dies 1504a, 1504b, 1504c, 1504d. The adhesive material 1524 (See FIG. 15) helps the level-one IC die 1602 adhere to the level-two IC dies 1504a, 1504b, 1504c, 1504d. The IC dies 1602, 1504a, 1504b, 1504c, 1504d may also include a plurality of electrical conductors similar to the ones described above with respect to the IC package 900.
Three Level Multi-chip Package
FIG. 17 illustrates a schematic, bottom view of a three level, stacked, multi-chip IC package 1700 according to one aspect. Various components of the package 1700, such as a molding compound that encapsulates the package 1700, have been omitted for clarity. As illustrated in FIG. 17, the IC package 1700 comprises a level-one IC die 1702, a first level-two IC die 1704a, a second level-two IC die 1704b, a first level-three IC die 1706a, a second level-three IC die 1706b, a third level-three IC die 1706c, and a fourth level-three IC die 1706d. The level-one IC die 1702 is positioned underneath the level-two IC dies 1704a, 1704b, and the level-two IC dies 1704a, 1704b are positioned underneath the level-three IC dies 1706a, 1706b, 1706c, 1706d. The level-two IC dies 1704a, 1704b are also situated such that they are side by side in the same plane parallel to the X-Y plane orientation shown in FIG. 17. Similarly, the level-three IC dies 1706a, 1706b, 1706c, 1706d are also situated such that they are side by side in the same plane parallel to the X-Y plane.
The IC dies 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d may also include a plurality of electrical conductors similar to the ones described above with respect to the IC packages 500 and 900. For example, the level-one IC die 1702 may include a plurality of electrical conductors 1734 that electrically couple the level-one IC die 1702 to the package substrate (not shown in FIG. 17). The first level-two IC die 1704a may include a plurality of electrical conductors 1732a, 1732b that electrically couple the level-two IC die 1704a to the package substrate. Specifically, the first level-two IC die 1704a may have a plurality of inner perimeter region electrical conductors 1732b and a plurality of outer perimeter region electrical conductors 1732a. The inner perimeter region electrical conductors 1732b are closer to the center region c of the package 1700 than the outer perimeter region electrical conductors 1732a. The second level-two IC die 1704b may also have an electrical conductor arrangement similar to the first level-two IC die 1704a. The first level-three IC die 1706a may include a plurality of electrical conductors 1730a, 1730b that electrically couple the level-three IC die 1706a to the package substrate. Specifically, the first level-three IC die 1706a may have a plurality of inner perimeter region electrical conductors 1730b and a plurality of outer perimeter region electrical conductors 1730a.
The inner perimeter region electrical conductors 1730b are closer to the center region c of the package 1700 than the outer perimeter region electrical conductors 1730a. The second, third, and fourth level-three IC dies 1706b, 1706c, 1706d may also have electrical conductor arrangements similar to the first level-three IC die 1706a.
FIGS. 18 and 19 illustrate schematic, cross-sectional side views of the three level IC package 1700 according to one aspect. As discussed above with respect to FIG. 17, the IC package 1700 includes the level-one IC die 1702, the first level-two IC die 1704a, the second level-two IC die 1704b, the first level-three IC die 1706a, the second level-three IC die 1706b, the third level-three IC die 1706c, and the fourth level-three IC die 1706d. The level-two IC dies 1704a, 1704b are arranged side by side in the same plane, as are the level-three IC dies 1706a, 1706b, 1706c, 1706d. For example, the level-three IC dies 1706a, 1706b, 1706c, 1706d may be arranged side by side such that their active surfaces 1712a, 1712b, 1712c, 1712d are substantially in the same plane. In this fashion, the IC package 1700 includes three distinct stacked levels/layers of IC dies. The IC dies 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d may be any type of IC, such as, but not limited to, processing circuits, memory circuits, or a combination thereof. In one aspect, the level-one IC die 1702 is a processing circuit, and the level-two and level-three IC dies 1704a, 1704b, 1706a, 1706b, 1706c, 1706d are memory circuits, such as DDR3 DRAM circuits. Of course, in other aspects, the dies 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d may be other types of processing and/or memory circuits.
The level-one IC die 1702 has an active surface side 1708 (e.g., front side surface) that includes a plurality of integrated circuit components (e.g., transistors, capacitors, inductors, resistors, etc.). Similarly, the level-two IC dies 1704a, 1704b and the level-three IC dies 1706a, 1706b, 1706c, 1706d each have an active surface side 1710a, 1710b and 1712a, 1712b, 1712c, 1712d, respectively, that faces a package substrate 1714 and includes a plurality of integrated circuit components. The active surface 1712a of the first level-three IC die 1706a may be electrically coupled to the package substrate 1714 (e.g., laminate substrate, metal based substrate, such as a copper based substrate, etc.) via a plurality of electrical conductors 1730a, 1730b. Similarly, the active surfaces 1712b, 1712c, 1712d of the second, third, and fourth level-three IC dies 1706b, 1706c, 1706d may also be electrically coupled to the substrate 1714 through other electrical conductors. The active surface 1710a of the level-two IC die 1704a may be electrically coupled to the package substrate 1714 via a plurality of electrical conductors 1732a, 1732b. Similarly, the active surface 1710b of the second level-two IC die 1704b may also be electrically coupled to the substrate 1714 through electrical conductors. The active surface 1708 of the level-one IC die 1702 may be electrically coupled to the package substrate 1714 via a plurality of electrical conductors 1734. In this way, the IC dies 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d may be electrically coupled to the substrate 1714 in a flip chip fashion, and may electrically communicate with one another by transmitting and receiving electrical signals via interconnections within the multi-layer package substrate 1714.
In one aspect, the level-one IC die 1702 and the level-two IC dies 1704a, 1704b may be electrically coupled with each other and with the level-three IC dies 1706a, 1706b, 1706c, 1706d using through silicon vias (TSV). Thus, TSV elements (not shown) may pass through the back side surface 1709 of the level-one IC die 1702 and electrically couple with the active surfaces 1710a, 1710b of the level-two IC dies 1704a, 1704b. Other TSV elements (not shown) may also pass through the back side surfaces 1711a, 1711b of the level-two IC dies 1704a, 1704b and electrically couple with the active surfaces 1712a, 1712b, 1712c, 1712d of the level-three IC dies 1706a, 1706b, 1706c, 1706d.
The active surface 1708 of the level-one IC die 1702 may be physically secured to the substrate 1714 with die attach and/or underfill adhesive 1716. According to one aspect, an adhesive material 1718 may be used to secure the level-one IC die 1702 to the level-two IC dies 1704a, 1704b, and more adhesive material 1720 may be used to secure the level-two IC dies 1704a, 1704b to the level-three IC dies 1706a, 1706b, 1706c, 1706d. Finally, an epoxy and/or resin molding compound 1722 encases the dies 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d and the electrical conductors 1730a, 1730b, 1732a, 1732b, 1734, the underfill 1716, and other components to form the package 1700. The molding compound 1722 may also partially cover the package substrate 1714. In some implementations, the level-two IC dies 1704a, 1704b and the level-three IC dies 1706a, 1706b, 1706c, 1706d may also be secured using underfill.
Similar to the level-two IC die 904a, the level-three IC die 1706a includes four (4) sides. The first and second sides have active surface perimeter overhang regions that each have a plurality of electrical conductors disposed thereon that electrically couple the die 1706a to the substrate 1714. By contrast, the third and fourth sides include portions that are positioned directly above the back side surface 1711a of the level-two IC die 1704a and lack electrical conductors. This allows space in the vertical direction (i.e., the Z direction) to accommodate the level-two IC die 1704a underneath the level-three IC die 1706a (See FIGS. 17 ― 18). The other level-three IC dies 1706b, 1706c, 1706d may have structures similar to the die 1706a just described, so as to accommodate one or more of the level-two IC dies 1704a, 1704b. The level-two IC dies 1704a, 1704b may have structures similar to the level-two dies 504a, 504b described above with respect to FIG. 8.
Notably, the level-two IC dies 1704a, 1704b are each smaller (i.e., have less surface area) than a single, large level-two IC die containing all of the IC components (e.g., transistors, resistors, capacitors, inductors, etc.) of the level-two IC dies 1704a, 1704b. Similarly, dividing the IC components onto four level-three IC dies 1706a, 1706b, 1706c, 1706d rather than placing them all on a single, large level-three IC die has distinct advantages.
In the illustrated examples, the electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1030, 1730a, 1730b, 1732a, 1732b, 1734 are soldering balls, and thus the IC dies 904a, 904b, 904c, 904d, 1002, 1504a, 1504b, 1504c, 1504d, 1602, 1702, 1704a, 1704b, 1706a, 1706b, 1706c, 1706d may be electrically coupled to their respective substrates 1114, 1714 in a ball grid array (BGA) flip chip fashion.
However, the electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1030, 1730a, 1730b, 1732a, 1732b, 1734 are not limited to soldering balls, and may be any metal, metal alloy, or conductive element that is capable of readily transmitting an electrical signal. For example, the electrical conductors 1016a, 1016b, 1018a, 1018b, 1020a, 1020b, 1022a, 1022b, 1030, 1730a, 1730b, 1732a, 1732b, 1734 may be, but are not limited to, soldering bumps, pillars, pins, stud bumps, and/or stacks of stud bumps.
Breaking up a single, large upper level IC die (for example, the IC die 102 in FIG. 1) into multiple dies in order to divide the IC components (transistors, resistors, diodes, capacitors, inductors, etc.) onto the active surfaces of multiple level-two IC dies 504a, 504b, 904a, 904b, 904c, 904d, 1504a, 1504b, 1504c, 1504d, 1704a, 1704b, and multiple level-three IC dies 1706a, 1706b, 1706c, 1706d, offers distinct performance advantages. For example, IC component cross-talk plays a dominant role in IC performance: more than 50% of the IC clock and/or data jitter comes from cross-talk. Reducing jitter allows the IC dies to be run at higher clock speeds, thereby improving performance of the IC dies and IC package. Dividing the IC components onto multiple level-two and/or level-three IC dies reduces cross-talk, jitter, and clock skew because the IC components are better electrically isolated from one another, since they are placed on different dies.
Note that in existing prior art package-on-package (PoP) configurations with multiple ranks, the ranks that belong to a same channel share the DRAM package routing and are connected to different DRAM dies using bonding wires. Also, the space between neighboring bytes is relatively small (usually the minimum spacing) since all bytes for all of the different ranks have to be routed on an identical DRAM package. In such prior art configurations, the electrical and/or EMI coupling among ranks is very strong. By contrast, the configurations described herein break the DRAM package into multiple packages and route the DRAM packages for different ranks independently. Also, within an individual rank, there may be more space to isolate the routing for each byte, so these configurations may have less electrical and/or EMI coupling and better jitter performance.
Another limiting factor of IC die and package performance is electromagnetic interference (EMI) effects. Improving IC component isolation by dividing the IC components onto multiple level-two and/or level-three IC dies reduces EMI effects, which further boosts IC die and package performance (e.g., the clock speed of the IC dies and package may be increased). For EMI effects, the multiple package configurations may provide better EMI performance due to the physical isolation among the different ranks. The resulting IC component isolation described herein may reduce cross-talk and EMI effects by more than 50%, which may result in an IC die and package clock speed increase of more than 30%.
Moreover, in cases where the level-two and/or level-three IC dies are memory circuits (e.g., DRAM, DDR3 RAM, etc.), IC routing may be more independent among different memory channels and different memory ranks. This helps alleviate loading due to fan-out of the clock signal, which in turn may increase IC die and package performance.
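As a rough, illustrative calculation only (the clock period, the jitter budget, the cross-talk share, and the simple linear timing model below are all assumptions made for this sketch, not figures from this disclosure), the following shows how reducing the cross-talk portion of a jitter budget frees timing margin that can be traded for a faster clock:

```python
# Illustrative timing-margin arithmetic. All numbers and the linear
# jitter model are assumptions for this sketch, not measured data.

period_ns = 1.25          # assumed clock period (800 MHz)
jitter_ns = 0.40          # assumed total jitter budget per cycle
crosstalk_share = 0.55    # assumed: cross-talk contributes >50% of jitter
crosstalk_reduction = 0.5 # assumed: die splitting halves cross-talk jitter

# Jitter remaining after the cross-talk contribution is reduced.
new_jitter_ns = jitter_ns * (1.0 - crosstalk_share * crosstalk_reduction)

# Keep the same usable eye opening (period minus jitter) and shrink
# the period by the jitter that was saved.
new_period_ns = (period_ns - jitter_ns) + new_jitter_ns

speedup = period_ns / new_period_ns - 1.0
print(f"illustrative clock-speed gain: {speedup:.1%}")  # ~9.6% here
```

Under these assumed numbers the gain is on the order of 10%; gains approaching the 30% figure cited above would correspond to designs in which jitter consumes a substantially larger share of the clock cycle.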
Response to Warpage
FIGS. 20 and 21 respectively illustrate schematic, top and bottom views of the stacked multi-chip IC package 900. As described above, the package 900 comprises the level-one IC die 1002 and four (4) level-two IC dies 904a, 904b, 904c, 904d. FIG. 20 also shows a close-up portion of the package 900 that illustrates the spacings s1 and s2 between the level-two IC dies 904a, 904b, 904c, 904d. Specifically, a spacing s1 exists between: the first level-two IC die 904a and the second level-two IC die 904b; and the third level-two IC die 904c and the fourth level-two IC die 904d. Another spacing s2 exists between: the first level-two IC die 904a and the third level-two IC die 904c; and the second level-two IC die 904b and the fourth level-two IC die 904d.
Referring to FIGS. 9 and 20, in one aspect, the amount of spacing s1 may be between 0.1% and 1% of the width wC1 or wC2. According to another aspect, the amount of spacing s1 may be between 1% and 5% of the width wC1 or wC2. In another aspect, the amount of spacing s1 may be between 5% and 10% of the width wC1 or wC2. In yet another aspect, the amount of spacing s1 may be between 10% and 20% of the width wC1 or wC2. Similarly, in one aspect, the amount of spacing s2 may be between 0.1% and 1% of the length lC1 or lC2. According to another aspect, the amount of spacing s2 may be between 1% and 5% of the length lC1 or lC2. In another aspect, the amount of spacing s2 may be between 5% and 10% of the length lC1 or lC2. In yet another aspect, the amount of spacing s2 may be between 10% and 20% of the length lC1 or lC2.
FIGS. 22 and 23 illustrate schematic, cross-sectional side views of the stacked multi-chip IC package 900 after the substrate 1114 has undergone warpage according to one aspect. The IC package 900, featuring a plurality of smaller level-two IC dies 904a, 904b, 904c, 904d, is more resistant to failure due to warpage than prior art designs that feature a single, large upper level IC die 102 (See FIGS. 1 and 4). While one or more soldering bumps 404 of the single IC die 102 may lose electrical contact with their respective substrate 108, the spacings s1 and s2 between the level-two IC dies 904a, 904b, 904c, 904d (shown in FIGS. 22 and 23) allow the dies 904a, 904b, 904c, 904d to bend and/or rotate with respect to one another so that the electrical conductors 1016a, 1018a, 1020a, 1022a do not lose electrical contact with the substrate 1114. Specifically, the spacing s1 allows a first corner 2102 and a first side 2104 of the first level-two IC die 904a to move lower (i.e., dip under, with respect to the Z vertical direction) than a second corner 2106 in response to substrate 1114 warpage. Similarly, the spacing s2 allows a third corner 2108 and a second side 2110 of the first level-two IC die 904a to move lower (i.e., dip under, with respect to the Z vertical direction) than the second corner 2106 in response to the warping of the substrate 1114. The other IC dies 904b, 904c, 904d may also react to the concave warpage in the same way as the first level-two IC die 904a just described.
Although the illustrated examples of FIGS. 22 and 23 show the resistance and response of the level-two IC dies 904a, 904b, 904c, 904d to a concave substrate warpage, the same principles apply to make said dies 904a, 904b, 904c, 904d resistant to convex warpage.
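The sketch below is a simplified, illustrative model only: the die dimensions, the gap value, and the idealization of the warped substrate as a shallow circular arc are all assumptions, not values from this disclosure. It first converts the percentage ranges for s1 and s2 above into absolute spacings, and then compares the vertical deviation that the substrate curvature produces across one small level-two die with the deviation across a monolithic die of roughly twice the width, which is the geometric reason the smaller, separated dies keep their conductors in contact:

```python
import math

# Spacing ranges from the percentages above (illustrative dims in mm).
w_c1 = 8.0   # assumed level-two die width in the first direction
l_c1 = 10.0  # assumed level-two die length in the second direction
for lo, hi in [(0.001, 0.01), (0.01, 0.05), (0.05, 0.10), (0.10, 0.20)]:
    print(f"s1 in [{lo * w_c1:.3f}, {hi * w_c1:.3f}] mm, "
          f"s2 in [{lo * l_c1:.3f}, {hi * l_c1:.3f}] mm")

# Corner drop under concave warpage, idealizing the substrate as a
# circular arc of radius R (assumed value).
def sagitta(chord_mm: float, radius_mm: float) -> float:
    """Vertical deviation of a circular arc across a chord."""
    return radius_mm - math.sqrt(radius_mm**2 - (chord_mm / 2.0) ** 2)

R = 2000.0                                # assumed curvature radius (mm)
drop_small = sagitta(w_c1, R)             # across one small die
drop_large = sagitta(2 * w_c1 + 0.3, R)   # across a monolithic die + gap

print(f"deviation across small die:      {drop_small * 1000:.1f} um")
print(f"deviation across monolithic die: {drop_large * 1000:.1f} um")
# The monolithic die must bridge roughly four times the deviation
# (sagitta grows with the square of the chord), which is how center-edge
# joints such as the soldering bumps 404 of FIG. 4 come to open.
```

The convex case is simply the mirror image of this model, with the arc bowing away from the dies rather than toward them.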
FIGS. 22 and 23 illustrate schematic, cross-sectional side views of the stacked multi-chip IC package 900 after the substrate 1114 has undergone warpage according to one aspect. The IC package 900, featuring a plurality of smaller level-two IC dies 904a, 904b, 904c, 904d, is more resistant to failure due to warpage than prior art designs that feature a single, large upper level IC die 102 (see FIGS. 1 and 4). While one or more soldering bumps 404 of the single IC die 102 may lose electrical contact with their respective substrate 108, the spacings s1 and s2 between the level-two IC dies 904a, 904b, 904c, 904d (shown in FIGS. 22 and 23) allow the dies 904a, 904b, 904c, 904d to bend and/or rotate with respect to one another so that the electrical conductors 1016a, 1018a, 1020a, 1022a do not lose electrical contact with the substrate 1114. Specifically, the spacing s1 allows a first corner 2102 and a first side 2104 of the first level-two IC die 904a to move lower (i.e., dip under, with respect to the Z vertical direction) than a second corner 2106 in response to substrate 1114 warpage. Similarly, the spacing s2 allows a third corner 2108 and a second side 2110 of the first level-two IC die 904a to move lower (i.e., dip under, with respect to the Z vertical direction) than the second corner 2106 in response to the warping of the substrate 1114. The other IC dies 904b, 904c, 904d may react to the concave warpage in the same way as the first level-two IC die 904a just described. Although the illustrated examples of FIGS. 22 and 23 show the resistance and response of the level-two IC dies 904a, 904b, 904c, 904d to a concave substrate warpage, the same principles apply to make said dies 904a, 904b, 904c, 904d resistant to convex warpage. For example, in such a case the spacing s1 may allow the first corner 2102 and the first side 2104 of the first level-two IC die 904a to move higher (i.e., rise above, with respect to the Z vertical direction) than the second corner 2106 in response to convex substrate 1114 warpage. Similarly, the spacing s2 may allow the third corner 2108 and the second side 2110 of the first level-two IC die 904a to move higher (i.e., rise above, with respect to the Z vertical direction) than the second corner 2106 in response to convex substrate 1114 warpage. The other IC dies 904b, 904c, 904d may react to the convex warpage in the same way as the first level-two IC die 904a just described. FIG. 24 illustrates a flowchart 2400 for a method of manufacturing a multi-chip IC package according to one aspect of the disclosure. At step 2402, a substrate is provided. At step 2404, a surface of a level-one IC die is electrically coupled to the substrate. In one example, an active surface of the level-one IC die may face the substrate (e.g., as illustrated in FIGS. 13 and 19). In other examples, the active surface of the level-one IC die may face up, opposite the substrate. At step 2406, a plurality of level-two IC dies is stacked above the level-one IC die, where the plurality of level-two IC dies each has an active surface that is electrically coupled to the substrate. At step 2408, the plurality of level-two IC dies are arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. Note that the description of the method in FIG. 24 assumes that the level-one IC die is in a flip-chip arrangement, such that its active surface is coupled to the substrate. However, this method may also be implemented even when the active surface is on the top side (opposite the substrate), by using bond wires and/or through-substrate vias (TSVs) to electrically couple it to the substrate. FIG. 25 illustrates various electronic devices that may be integrated with any of the aforementioned IC packages 500, 900, 1500, 1700. For example, a mobile telephone 2502, a laptop computer 2504, and a fixed location terminal 2506 may include an IC package 2500 featuring a plurality of level-two and level-three IC dies. The IC package 2500 may be, for example, any of the packages 500, 900, 1500, 1700 described herein. The devices 2502, 2504, 2506 illustrated in FIG. 25 are merely exemplary. Other electronic devices may also feature the IC package 2500 including, but not limited to, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, GPS-enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Also, it is noted that the aspects of the present disclosure may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. Moreover, a storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices, and/or other machine-readable mediums, processor-readable mediums, and/or computer-readable mediums for storing information. The terms "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" may include, but are not limited to, non-transitory mediums such as portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" and executed by one or more processors, machines, and/or devices. Furthermore, aspects of the disclosure may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the invention. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art. The following numbered clauses are hereby included to give further description of the invention: 1. A multi-chip integrated circuit (IC) package, comprising: a substrate; a level-one IC die having a surface that is electrically coupled to the substrate; and a plurality of level-two IC dies stacked above the level-one IC die, the plurality of level-two IC dies each having an active surface that is electrically coupled to the substrate, the plurality of level-two IC dies arranged side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. 2. The IC package of clause 1, further comprising: a plurality of electrical conductors that electrically couple the plurality of level-two IC dies to the substrate, the plurality of electrical conductors disposed on at least one active surface perimeter overhang region of each of the plurality of level-two IC dies. 3. The IC package of clause 2, wherein the plurality of electrical conductors are at least one of soldering bumps, soldering balls, pillars, pins, stud bumps, and/or stacks of stud bumps. 4. The IC package of clause 1, wherein the plurality of level-two IC dies comprises two (2) level-two IC dies. 5. The IC package of clause 4, wherein the two (2) level-two IC dies have at least one of a length and/or a width that is different from one another. 6. The IC package of clause 4, wherein the two (2) level-two IC dies are substantially identical in size. 7. The IC package of clause 4, wherein each of the two (2) level-two IC dies includes three sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the two (2) level-two IC dies to the substrate. 8.
The IC package of clause 7, wherein each of the two (2) level-two IC dies includes at least one side, a portion of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors. 9. The IC package of clause 1, wherein the plurality of level-two IC dies comprises four (4) level-two IC dies. 10. The IC package of clause 9, wherein each of the four (4) level-two IC dies includes two sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the four (4) level-two IC dies to the substrate. 11. The IC package of clause 10, wherein each of the four (4) level-two IC dies includes at least two sides, a portion of each of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors. 12. The IC package of clause 1, further comprising: a plurality of level-three IC dies stacked above the level-two IC dies, the plurality of level-three IC dies each having an active surface that is electrically coupled to the substrate, the plurality of level-three IC dies arranged side by side such that the active surfaces of the plurality of level-three IC dies are positioned substantially in another same plane. 13. The IC package of clause 1, wherein the level-one IC die and the plurality of level-two IC dies are electrically coupled to each other by at least one of electrical interconnections in the substrate and/or through silicon vias. 14. The IC package of clause 1, wherein at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies allows the two (2) level-two IC dies to bend or rotate with respect to one another and remain electrically coupled to the substrate in response to warpage of the substrate. 15. The IC package of clause 1, wherein at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies causes a first corner or a first side of a first level-two IC die to move below a second corner of the first level-two IC die in response to concave substrate warpage, and further causes the first corner or the first side of the first level-two IC die to move above the second corner of the first level-two IC die in response to convex substrate warpage. 16. The IC package of clause 1, wherein the IC package is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. 17. A method for manufacturing a multi-chip integrated circuit (IC) package, the method comprising: providing a substrate; electrically coupling a surface of a level-one IC die to the substrate; stacking a plurality of level-two IC dies above the level-one IC die, the plurality of level-two IC dies each having an active surface that is electrically coupled to the substrate; and arranging the plurality of level-two IC dies side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. 18. The method of clause 17, further comprising: electrically coupling the plurality of level-two IC dies to the substrate with a plurality of electrical conductors, the plurality of electrical conductors disposed on at least one active surface perimeter overhang region of each of the plurality of level-two IC dies. 19.
The method of clause 17, wherein the plurality of level-two IC dies comprises two (2) level-two IC dies. 20. The method of clause 19, wherein each of the two (2) level-two IC dies includes three sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the two (2) level-two IC dies to the substrate. 21. The method of clause 20, wherein each of the two (2) level-two IC dies includes at least one side, a portion of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors. 22. The method of clause 17, wherein the plurality of level-two IC dies comprises four (4) level-two IC dies. 23. The method of clause 22, wherein each of the four (4) level-two IC dies includes two sides having an active surface perimeter overhang region that includes a plurality of electrical conductors that electrically couple each of the four (4) level-two IC dies to the substrate. 24. The method of clause 23, wherein each of the four (4) level-two IC dies includes at least two sides, a portion of each of which is positioned directly above a back side surface of the level-one IC die and lacks the plurality of electrical conductors. 25. The method of clause 17, further comprising: stacking a plurality of level-three IC dies above the level-two IC dies, the plurality of level-three IC dies each having an active surface that is electrically coupled to the substrate; and arranging the plurality of level-three IC dies side by side such that the active surfaces of the plurality of level-three IC dies are positioned substantially in another same plane. 26. The method of clause 17, further comprising: providing at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies that allows the two (2) level-two IC dies to bend or rotate with respect to one another and remain electrically coupled to the substrate in response to warpage of the substrate. 27. The method of clause 17, further comprising: providing at least one spacing between two (2) level-two IC dies of the plurality of level-two IC dies that causes a first corner or a first side of a first level-two IC die to move below a second corner of the first level-two IC die in response to concave substrate warpage, and that further causes the first corner or the first side of the first level-two IC die to move above the second corner of the first level-two IC die in response to convex substrate warpage. 28. A multi-chip integrated circuit (IC) package, comprising: a substrate; means for electrically coupling a surface of a level-one IC die to the substrate; means for stacking a plurality of level-two IC dies above the level-one IC die, the plurality of level-two IC dies each having an active surface that is electrically coupled to the substrate; and means for arranging the plurality of level-two IC dies side by side such that the active surfaces of the plurality of level-two IC dies are positioned substantially in a same plane. 29. The multi-chip integrated circuit package of clause 28, further comprising: means for electrically coupling the plurality of level-two IC dies to the substrate with a plurality of electrical conductors, the plurality of electrical conductors disposed on at least one active surface perimeter overhang region of each of the plurality of level-two IC dies. 30.
The multi-chip integrated circuit (IC) package of clause 28, further comprising: means for stacking a plurality of level-three IC dies above the level-two IC dies, the plurality of level-three IC dies each having an active surface that is electrically coupled to the substrate; and means for arranging the plurality of level-three IC dies side by side such that the active surfaces of the plurality of level-three IC dies are positioned substantially in another same plane.
Methods, systems, and devices are described for supporting unknown peripheral function protocols (PFPs) with a wireless docking station. A wireless docking station may facilitate connections between a wireless dockee and peripherals employing both recognized and unrecognized PFPs. A docking station may request one or more service discovery parameters from a peripheral having an unrecognized PFP. The docking station may receive service discovery parameters in response, convey the received discovery parameters to a wireless dockee, and facilitate discovery and a connection between the device and the peripheral. The discovery parameters may include various identifiers related to peripheral function, identity, and location.
CLAIMS
What is claimed is: 1. A method for wireless communications, the method comprising: requesting, by a wireless docking station, at least one service discovery parameter from a peripheral device having an unrecognized peripheral function protocol (PFP), the at least one service discovery parameter comprising a transport protocol identifier or a port identifier; receiving the at least one service discovery parameter from the peripheral device in response to the requesting; and transmitting service discovery information to a wireless dockee, the service discovery information based at least in part on the at least one service discovery parameter received from the peripheral device. 2. The method of claim 1, further comprising: generating the service discovery information by repackaging the at least one service discovery parameter at the wireless docking station. 3. The method of claim 1, further comprising: facilitating a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information. 4. The method of claim 1, wherein the unrecognized PFP comprises a proprietary PFP. 5. The method of claim 1, wherein the at least one service discovery parameter from the peripheral device comprises both the transport protocol identifier and the port identifier. 6. The method of claim 1, wherein the at least one service discovery parameter from the peripheral device further comprises at least one of: a PFP name of an unrecognized PFP associated with the peripheral device, an advertisement identifier associated with the peripheral device, a service name associated with the peripheral device, a network address associated with the peripheral device, application service information data associated with the peripheral device, or a network role associated with the peripheral device. 7. A wireless docking station apparatus, comprising: a parameter requester configured to request at least one service discovery parameter from a peripheral device having an unrecognized peripheral function protocol (PFP), the at least one service discovery parameter comprising a transport protocol identifier or a port identifier; a parameter receiver configured to receive the at least one service discovery parameter from the peripheral device in response to a request from the parameter requester; and a transmitter configured to transmit service discovery information to a wireless dockee, the service discovery information based at least in part on the at least one service discovery parameter received from the peripheral device. 8. The wireless docking station apparatus of claim 7, further comprising: a parameter repackager configured to generate the service discovery information by repackaging the at least one service discovery parameter at the wireless docking station apparatus. 9. The wireless docking station apparatus of claim 7, further comprising: an antenna configured to facilitate a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information. 10. The wireless docking station apparatus of claim 7, wherein the unrecognized PFP comprises a proprietary PFP. 11.
A wireless docking station apparatus, comprising: means for requesting at least one service discovery parameter from a peripheral device having an unrecognized peripheral function protocol (PFP), the at least one service discovery parameter comprising a transport protocol identifier or a port identifier; means for receiving the at least one service discovery parameter from the peripheral device in response to a request; and means for transmitting service discovery information to a wireless dockee, the service discovery information based at least in part on the at least one service discovery parameter received from the peripheral device. 12. The wireless docking station apparatus of claim 11, further comprising: means for generating the service discovery information by repackaging the at least one service discovery parameter at the wireless docking station apparatus. 13. The wireless docking station apparatus of claim 11, further comprising: means for facilitating a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information. 14. The wireless docking station apparatus of claim 11, wherein the unrecognized PFP comprises a proprietary PFP. 15. The wireless docking station apparatus of claim 11, wherein the at least one service discovery parameter from the peripheral device comprises both the transport protocol identifier and the port identifier. 16. The wireless docking station apparatus of claim 11, wherein the at least one service discovery parameter from the peripheral device further comprises at least one of: a PFP name of an unrecognized PFP associated with the peripheral device, an advertisement identifier associated with the peripheral device, a service name associated with the peripheral device, a network address associated with the peripheral device, application service information data associated with the peripheral device, or a network role associated with the peripheral device. 17. A computer program product, comprising: a non-transitory computer-readable medium comprising computer-readable program code embodied thereon, the computer-readable program code configured to cause at least one processor to: request at least one service discovery parameter from a peripheral device having an unrecognized peripheral function protocol (PFP), the at least one service discovery parameter comprising a transport protocol identifier or a port identifier; receive the at least one service discovery parameter from the peripheral device in response to the request; and transmit service discovery information to a wireless dockee, the service discovery information based at least in part on the at least one service discovery parameter received from the peripheral device. 18. The computer program product of claim 17, wherein the computer-readable program code is further configured to: generate the service discovery information by repackaging the at least one service discovery parameter. 19. The computer program product of claim 17, wherein the computer-readable program code is further configured to: facilitate a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information. 20. The computer program product of claim 17, wherein the unrecognized PFP comprises a proprietary PFP.
SUPPORTING UNRECOGNIZED PERIPHERAL FUNCTION PROTOCOL IN A WIRELESS DOCKING STATION
CROSS REFERENCES
[0001] The present Application for Patent claims priority to U.S. Patent Application No. 14/224,451 by Huang et al., entitled "Supporting Unrecognized Protocol in Wireless Docking," filed March 25, 2014; U.S. Provisional Patent Application No. 61/889,014 by Huang et al., entitled "Supporting Unrecognized PFPs in Wireless Docking," filed October 9, 2013; and U.S. Provisional Patent Application No. 61/902,519 by Huang et al., entitled "Supporting Unrecognized PFPs in Wireless Docking," filed November 11, 2013; each of which is assigned to the assignee hereof.
BACKGROUND
[0002] The following relates generally to wireless communication, and more specifically to wireless docking stations for electronic devices. Wireless docking stations, which are also referred to as docking stations, wireless docking centers, or docks, may be used to connect a computer to various peripheral devices, including monitors, keyboards, mice, printers, scanners, cameras, and the like. Docking stations may be used in conjunction with, or to facilitate communication with, laptop computers, notebook computers, netbooks, tablets, smartphones, PDAs, and other similar electronic devices. [0003] In some cases, docking stations are able to communicate with peripherals using a peripheral function protocol (PFP) known to the docking station. But sometimes, peripherals may employ PFPs (e.g., proprietary PFPs) that are unknown to a docking station. In such cases, it may be beneficial for a docking station to gather certain information from the peripheral and convey that information to another device, notwithstanding the unknown PFP.
SUMMARY
[0004] The described features generally relate to one or more methods, systems, and apparatuses for supporting unknown PFPs with a docking station. A docking station may request one or more service discovery parameters from a peripheral having an unknown PFP; and the docking station may convey those parameters to an electronic device in order to facilitate discovery and a connection between the device and the peripheral. [0005] Further scope of the applicability of the described methods and apparatuses will become apparent from the following detailed description, claims, and drawings. The detailed description and specific examples are given by way of illustration only, since various changes and modifications within the scope of the description will become apparent to those skilled in the art. [0006] According to a set of illustrative embodiments, a method for wireless communications may include: requesting, by a wireless docking station, a service discovery parameter from a peripheral device having an unrecognized peripheral function protocol, the parameter comprising a transport protocol identifier or a port identifier; receiving the parameter from the peripheral device in response to the request; and transmitting service discovery information to a wireless dockee, where the service discovery information is based at least in part on the parameter received from the peripheral device. In certain examples, the service discovery information may be generated by repackaging the parameter at the wireless docking station. [0007] In certain examples, the wireless docking station may facilitate a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information.
In certain examples, the unrecognized protocol may include a proprietary peripheral function protocol (PFP). In certain examples, the parameter from the peripheral device may include both the transport protocol identifier and the port identifier. [0008] In certain examples, the parameter may include at least one of: a PFP name of an unrecognized PFP associated with the peripheral device, an advertisement identifier associated with the peripheral device, a service name associated with the peripheral device, a network address associated with the peripheral device, application service information data associated with the peripheral device, or a network role associated with the peripheral device. [0009] According to a second set of illustrative embodiments, a wireless docking station apparatus may include: a parameter requester configured to request a service discovery parameter from a peripheral device having an unrecognized peripheral function protocol, the parameter comprising a transport protocol identifier or a port identifier; a parameter receiver configured to receive the service discovery parameter from the peripheral device in response to a request from the requester; and a transmitter configured to transmit service discovery information to a wireless dockee, the service discovery information based at least in part on the parameter received from the peripheral device. [0010] In certain examples, the wireless docking station apparatus may implement one or more aspects of the method described above with respect to the first set of illustrative embodiments. For example, the apparatus may include additional modules and/or a processor configured to implement one or more of the examples of the method described above with respect to the first set of illustrative embodiments. [0011] According to a third set of illustrative embodiments, a wireless docking station apparatus may include: means for requesting a service discovery parameter from a peripheral device having an unrecognized peripheral function protocol, the parameter comprising a transport protocol identifier or a port identifier; means for receiving the parameter from the peripheral device in response to the request; and means for transmitting service discovery information to a wireless dockee, the service discovery information based at least in part on the parameter received from the peripheral device. [0012] In certain examples, the wireless docking station apparatus may implement one or more aspects of the method described above with respect to the first set of illustrative embodiments. For example, the wireless docking station may include means for implementing one or more of the examples of the method described above with respect to the first set of illustrative embodiments. [0013] According to a fourth set of illustrative embodiments, a computer program product may include a non-transitory computer-readable medium comprising computer-readable program code embodied thereon.
The computer-readable program code may be configured to cause a processor to: request a service discovery parameter from a peripheral device having an unrecognized peripheral function protocol, the parameter comprising a transport protocol identifier or a port identifier; receive the parameter from the peripheral device in response to the request; and transmit discovery information to a wireless dockee, the discovery information based at least in part on the parameter received from the peripheral device. [0014] In certain examples, the computer program product may implement one or more aspects of the method described above with respect to the first set of illustrative embodiments. For example, the computer-readable program code may cause the processor to implement one or more of the examples of the method described above with respect to the first set of illustrative embodiments. [0015] Further scope of the applicability of the described methods and apparatuses will become apparent from the following detailed description, claims, and drawings. The detailed description and specific examples are given by way of illustration only, since various changes and modifications within the scope of the description will become apparent to those skilled in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] A further understanding of the nature and advantages of the present invention may be realized by reference to the following drawings. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. [0017] FIG. 1 shows a block diagram of a wireless communications system according to various aspects of the present disclosure; [0018] FIG. 2 shows a block diagram of a wireless communications system according to various aspects of the present disclosure; [0019] FIG. 3 shows a call flow diagram illustrating communication in a wireless communication system according to various aspects of the present disclosure; [0020] FIG. 4 shows a call flow diagram illustrating communication in a wireless communication system according to various aspects of the present disclosure; [0021] FIG. 5 shows a call flow diagram illustrating communication in a wireless communication system according to various aspects of the present disclosure; [0022] FIG. 6 shows a block diagram of a device configured for communication in a wireless network according to various aspects of the present disclosure; [0023] FIG. 7 shows a block diagram of a wireless communications system according to various aspects of the present disclosure; [0024] FIG. 8 shows a flowchart diagram of an illustrative method for wireless communications according to various aspects of the present disclosure; and [0025] FIG. 9 shows a flowchart diagram of an illustrative method for wireless communications according to various aspects of the present disclosure.
DETAILED DESCRIPTION
[0026] Methods, systems, and apparatuses are described for supporting unknown PFPs in wireless docking. A docking station may request one or more service discovery parameters from a peripheral having an unknown PFP.
The docking station may convey those parameters to an electronic device in order to facilitate discovery and a connection between the device and the peripheral. [0027] The various techniques described herein for supporting unknown PFPs are generally described with respect to WLAN or Wi-Fi networks. A WLAN or Wi-Fi network may refer to a network that is based on the protocols described in the various IEEE 802.11 standards (e.g., IEEE 802.11a/g, 802.11n, 802.11ac, 802.11ad, 802.11ah, etc.), for example. However, the same or similar techniques may also be used in any wireless network (e.g., a cellular network). For example, the same or similar techniques may be used for various wireless communications systems such as cellular wireless systems, Peer-to-Peer wireless communications, ad hoc networks, satellite communications systems, and other systems. The terms "system" and "network" are often used interchangeably. These wireless communications systems may employ a variety of radio communication technologies such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal FDMA (OFDMA), Single-Carrier FDMA (SC-FDMA), and/or other radio technologies. Generally, wireless communications are conducted according to a standardized implementation of one or more radio communication technologies called a Radio Access Technology (RAT). A wireless communications system or network that implements a Radio Access Technology may be called a Radio Access Network (RAN). [0028] Examples of Radio Access Technologies employing CDMA techniques include CDMA2000, Universal Terrestrial Radio Access (UTRA), etc. CDMA2000 covers the IS-2000, IS-95, and IS-856 standards. IS-2000 Releases 0 and A are commonly referred to as CDMA2000 1X, etc. IS-856 (TIA-856) is commonly referred to as CDMA2000 1xEV-DO, High Rate Packet Data (HRPD), etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. Examples of TDMA systems include various implementations of the Global System for Mobile Communications (GSM). Examples of Radio Access Technologies employing OFDM and/or OFDMA include Ultra Mobile Broadband (UMB), Evolved UTRA (E-UTRA), Wi-Fi, IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of the Universal Mobile Telecommunication System (UMTS). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are new releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, and GSM are described in documents from an organization named the "3rd Generation Partnership Project" (3GPP). CDMA2000 and UMB are described in documents from an organization named the "3rd Generation Partnership Project 2" (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned above as well as other systems and radio technologies. [0029] Thus, the following description provides examples, and is not limiting of the scope, applicability, or configuration set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to certain embodiments may be combined in other embodiments. [0030] Referring first to FIG.
1, a block diagram illustrates a wireless communications system 100 according to various embodiments. The system 100 includes a wireless docking station 105, peripheral devices 110, and a wireless dockee 115. The peripheral devices 110 may be electronic devices that each has one or more peripheral functions. For example, the peripheral device 110-a may be a mouse with a peripheral function of controlling a pointer on a graphical user interface. In some embodiments, the peripheral device 110-b is a keyboard with a peripheral function of user input. The peripheral device 110-c may be a multi-function printer, for example, with peripheral functions of printing and scanning. Additionally or alternatively, the wireless docking station 105 may include embedded peripherals, such as the peripheral device 110-d. Some or all of the peripheral devices 110 may be connected to and/or in communication with the wireless docking station 105. [0031] The wireless dockee 115 may wirelessly connect to the wireless docking station 105, for example, utilizing Wi-Fi. The wireless dockee 115 may seek out or connect to the wireless docking station 105 based at least in part on the peripheral functions available via the wireless docking station 105. Thus, the wireless docking station 105 may advertise the peripheral functions, and thus the peripheral devices, available to a wireless dockee 115. Once connected (e.g., docked) to the wireless docking station 105, the wireless dockee 115 may exploit the peripheral functions available through the wireless docking station 105. [0032] The wireless docking station 105 may support a variety of known and/or common PFPs. For example, the wireless docking station 105 may support Miracast, universal serial bus (USB), IEEE 802.11ad ("WiGig"), Universal Plug and Play (UPnP), and/or Wi-Fi Direct Services Application Service Platform (WFDS ASP). Some of the peripherals 110 may employ such known PFPs, and the wireless docking station 105 may thus readily transmit service discovery information (also referred to as discovery information) related to these peripherals 110 to the wireless dockee 115. In some cases, however, a peripheral device 110 may utilize an unknown PFP. For example, a peripheral device 110 may employ a proprietary PFP. The wireless docking station 105 may therefore request one or more service discovery parameters from a peripheral device 110. [0033] Next, turning to FIG. 2, a block diagram depicts a wireless communication system 200 according to various embodiments. The system 200 may be an example of aspects of the system 100. The system 200 includes a wireless docking station 105-a, a peripheral device 110-e, and a wireless dockee 115-a. Each of these may be examples of the corresponding devices of the system 100. [0034] In some embodiments, the peripheral device 110-e is a peripheral device with an unrecognized or unknown PFP. The peripheral device 110-e may be external to the wireless docking station 105-a, or it may be embedded in the wireless docking station 105-a. The wireless docking station 105-a may request from the peripheral device 110-e service discovery parameters, including a transport protocol identifier or a port identifier, or both. The peripheral device 110-e may respond by sending the requested service discovery parameters to the wireless docking station 105-a. The wireless docking station 105-a, in turn, may transmit to the wireless dockee 115-a service discovery information pertaining to the peripheral device 110-e. The service discovery information may be based at least in part on the received service discovery parameters. For example, the service discovery information may include the received service discovery parameters. In some embodiments, the wireless docking station 105 generates the service discovery information by repackaging the received parameters, and then transmits the service discovery information to the wireless dockee 115.
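By way of illustration only, the following Python sketch shows one possible shape for a record of the service discovery parameters discussed here and below; the class and field names are hypothetical, and the disclosure does not prescribe any particular data representation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceDiscoveryParameters:
    # Per the description, at least a transport protocol identifier or a
    # port identifier is requested from the peripheral device.
    transport_protocol: Optional[str] = None  # e.g., "UDP" or "IP"
    port: Optional[int] = None                # e.g., an IP port number
    # Additional parameters that may also be requested, per the description.
    pfp_name: Optional[str] = None            # name of the unrecognized PFP
    advertisement_id: Optional[str] = None
    service_name: Optional[str] = None
    network_address: Optional[str] = None     # e.g., an IP address
    service_information: Optional[str] = None
    network_role: Optional[str] = None        # e.g., intended master or slave role

# Hypothetical example of a peripheral's response:
params = ServiceDiscoveryParameters(transport_protocol="UDP", port=9100,
                                    pfp_name="ExamplePrintPFP")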
[0035] The wireless docking station 105-a may request any of several service discovery parameters from the peripheral device 110-e. By way of example, the wireless docking station 105-a may request a PFP name, an advertisement identifier, a service name, a network address, application service information data, or a network role; or it may request any combination of such parameters. The wireless docking station 105-a may incorporate any or all of these service discovery parameters into the service discovery information that it transmits to the wireless dockee 115-a. In some embodiments, the wireless docking station 105-a facilitates a connection between the wireless dockee 115-a and the peripheral device 110-e based at least in part on the service discovery information. The wireless dockee 115-a may thus connect with and utilize one or more peripheral functions of the peripheral device 110-e based at least in part on service discovery information provided by the wireless docking station 105-a, notwithstanding the unrecognized PFP of the peripheral device 110-e. [0036] In some cases, the wireless docking station 105-a may also request that a peripheral device 110-e using an unrecognized PFP provide additional information. For example, if the peripheral device 110-e employs an unrecognized (e.g., a proprietary) PFP, the wireless docking station 105-a may request that the peripheral device 110-e provide the name of the unrecognized PFP. The peripheral device 110-e may provide the name of the unrecognized PFP in a specific element or sub-element of a network protocol known and utilized by the wireless docking station 105-a, the peripheral device 110-e, and/or the wireless dockee 115-a. The wireless docking station 105-a may, in turn, provide the name of the unrecognized PFP to the wireless dockee 115-a as part of the service discovery information. [0037] The wireless docking station 105-a, the peripheral device 110-e, and the wireless dockee 115-a may employ one or more networking protocols for requesting and exchanging PFP parameters and/or service discovery information. In various embodiments, the devices may utilize UPnP, WFDS ASP, and/or Extensible Mark-up Language (XML). The various parameters and service discovery information may thus occupy specific elements or sub-elements of a WFDS ASP or XML string. By way of example, a simple XML type "PfpName" may identify the name of a PFP. The simple XML type "PfpNameEnum" may identify the name of a standard PFP; and the simple XML type "PfpNameAnyString" may identify the names of unknown or unrecognized PFPs. The wireless docking station 105-a may thus advertise both standardized and non-standardized (e.g., proprietary) PFPs.
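To illustrate the "PfpName" typing just described, the following Python sketch emits one possible serialization. The element structure here is an assumption made for illustration only; the disclosure does not fix an exact schema.

import xml.etree.ElementTree as ET

def pfp_name_element(name: str, standardized: bool) -> ET.Element:
    """Wrap a PFP name in a PfpName element: PfpNameEnum carries the name of a
    standard PFP, while PfpNameAnyString carries the name of an unknown or
    unrecognized (e.g., proprietary) PFP."""
    element = ET.Element("PfpName")
    child_tag = "PfpNameEnum" if standardized else "PfpNameAnyString"
    ET.SubElement(element, child_tag).text = name
    return element

print(ET.tostring(pfp_name_element("ExamplePrintPFP", standardized=False), encoding="unicode"))
# <PfpName><PfpNameAnyString>ExamplePrintPFP</PfpNameAnyString></PfpName>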
[0038] FIG. 3 is a call flow diagram 300 illustrating communication in a wireless communication system according to various embodiments. The diagram 300 may illustrate aspects of the systems 100 and 200 described with reference to FIGS. 1 and 2. The diagram 300 includes a wireless docking station 105-b, a peripheral device 110-f, and a wireless dockee 115-b. Each of these may be examples of corresponding devices of systems 100 and 200. [0039] The wireless docking station 105-b may establish communication 305 with the peripheral device 110-f. The peripheral device 110-f may be external to or embedded in the wireless docking station 105-b. Upon establishing communication 305, the wireless docking station 105-b may determine that the peripheral device 110-f employs an unrecognized or unknown PFP 310. The wireless docking station 105-b may thus request service discovery information or parameters 315 from the peripheral device 110-f. If the peripheral device 110-f is embedded in the wireless docking station 105-b, the wireless docking station 105-b may also request parameters of the PFP that drive the use of the peripheral device 110-f. In response, the peripheral device 110-f may transmit service discovery information or parameters 320. An embedded peripheral device 110-f may also transmit additional driver parameters. Upon receiving the service discovery information, the wireless docking station 105-b may transmit the service discovery information 325 to the wireless dockee 115-b. The service discovery information 325 includes, for example, a transport protocol or a port number, or both. [0040] Next, FIG. 4 depicts a call flow diagram 400 illustrating communication in a wireless communication system according to various embodiments. The diagram 400 may illustrate aspects of the systems 100 and 200 described with reference to FIGS. 1 and 2. The diagram 400 includes a wireless docking station 105-c, a peripheral device 110-g, and a wireless dockee 115-c. Each of these may be examples of corresponding devices of systems 100 and 200. [0041] The wireless docking station 105-c may establish communication 405 with the peripheral device 110-g. The peripheral device 110-g may be external to or embedded in the wireless docking station 105-c. Upon establishing communication 405, the wireless docking station 105-c may determine that the peripheral device 110-g employs an unrecognized or unknown PFP 410. The wireless docking station 105-c may thus request service discovery information or parameters 415. If the peripheral device 110-g is embedded in the wireless docking station 105-c, the wireless docking station 105-c may also request parameters of the PFP that drive the use of the peripheral. The request 415 may be made in a standardized format known a priori to both the wireless docking station 105-c and the peripheral device 110-g. In response, the peripheral device 110-g may transmit service discovery information or parameters 420. An embedded peripheral device 110-g may also transmit additional driver parameters. The service discovery information 420 may include one or more of: a transport protocol, a port number, advertisement identification, a service name, a network or IP address, service information, a network role, or other suitable parameters. In some embodiments, the service discovery information 420 includes a PFP name. [0042] Each of the service discovery parameters may convey particular information about the peripheral device 110-g. By way of example, the transport protocol parameter indicates what transport protocol the peripheral device 110-g employs, and the port number indicates which IP port the peripheral device 110-g utilizes. The PFP name may be the name of an unrecognized PFP that the peripheral device 110-g utilizes. The advertisement identification may be an indication that the peripheral device 110-g is available for connection.
The service name may indicate a name of the peripheral function the peripheral device 110-g offers. In some cases, the service information includes information about the peripheral function of the peripheral device 110-g. The network address may be an IP address of the peripheral device 110-g. And, in some embodiments, the network role is the intended role of the peripheral device 110-g in a device-to-device communication scenario. For example, the network role may indicate that the peripheral device 110-g intends to assume a master or slave role, as between it and another device. [0043] In some embodiments, the wireless docking station 105-c repackages 425 the service discovery information and then transmits the service discovery information 430 to the wireless dockee 115-c. The wireless dockee 115-c may utilize the service discovery information to select a peripheral device 110, and the wireless dockee 115-c may indicate the peripheral device selection 435 to the wireless docking station 105-c. In some embodiments, the wireless docking station 105-c then facilitates a connection 440 between the wireless dockee 115-c and the peripheral device 110-g. [0044] FIG. 5 depicts a call flow diagram 500 illustrating communication in a wireless communication system according to various embodiments. The diagram 500 may illustrate aspects of the systems 100 and 200 described with reference to FIGS. 1 and 2. The diagram 500 includes a wireless docking station 105-d, peripheral devices 110-h and 110-i, and a wireless dockee 115-d. The peripheral devices 110-h and 110-i may be external to or embedded in the wireless docking station 105-d. Each of these may be examples of corresponding devices of systems 100 and 200. [0045] The wireless docking station 105-d may establish communication 505 with the peripheral device 110-h. Upon establishing communication 505, the wireless docking station 105-d may recognize the PFP 510 that the peripheral device 110-h utilizes. The wireless docking station 105-d may thus convey service discovery information 515 to the wireless dockee 115-d according to the recognized PFP. [0046] The wireless docking station 105-d may also establish communication 520 with the peripheral device 110-i. Upon establishing communication 520, the wireless docking station 105-d may determine that the peripheral device 110-i employs an unrecognized or unknown PFP 525. The wireless docking station 105-d may thus request service discovery information or parameters 530. If the peripheral device 110-i is embedded in the wireless docking station 105-d, the wireless docking station 105-d may also request parameters of the PFP that drive the use of the peripheral device 110-i. The request 530 may be of a non-proprietary format, which, for example, is known a priori to both the wireless docking station 105-d and the peripheral device 110-i. In response, the peripheral device 110-i may transmit service discovery information or parameters 535. An embedded peripheral device 110-i may also transmit additional driver parameters. The service discovery information 535 may include one or more of: a transport protocol, a port number, a PFP name, advertisement identification, a service name, a network address, service information, a network role, or other suitable parameters. The wireless docking station 105-d may, in turn, transmit the service discovery information 550 to the wireless dockee 115-d.
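As a minimal, purely illustrative sketch of the repackaging step in these call flows (the dictionary keys and the recognition flag below are assumptions; the disclosure does not prescribe a data format):

def repackage(raw_params: dict) -> dict:
    """Repackage parameters received from a peripheral with an unrecognized
    PFP into service discovery information for the wireless dockee."""
    # Keep only the parameters the peripheral actually supplied, and tag the
    # record so the dockee knows the PFP was not recognized by the dock.
    info = {key: value for key, value in raw_params.items() if value is not None}
    info["pfp_recognized"] = False
    return info

raw = {"transport_protocol": "UDP", "port": 9100,
       "pfp_name": "ExamplePrintPFP", "network_role": None}
print(repackage(raw))
# {'transport_protocol': 'UDP', 'port': 9100, 'pfp_name': 'ExamplePrintPFP', 'pfp_recognized': False}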
[0047] Next, FIG. 6 depicts a block diagram of a device 105-e configured for communication in a wireless network according to various embodiments. The device 105-e may be a wireless docking station, and it may be an example of the wireless docking stations 105 described with reference to the preceding figures. The device 105-e may include a receiver module 610, a service discovery management module 615, and a transmitter module 620. Each of the modules may be in communication with one another. In some embodiments, the device 105-e includes a processor module (not shown). [0048] The various modules of the device 105-e may be means for performing the functions described herein. For instance, the transmitter module 620 may be configured to transmit a request for one or more service discovery parameters to a peripheral device. In some embodiments, the receiver module 610 is configured to receive parameters from the peripheral device transmitted in response to the request. The service discovery management module 615 may be configured to package the received parameters as service discovery information. And, in some cases, the transmitter module 620 is further configured to transmit the service discovery information to a wireless dockee. The device 105-e may communicate with external peripheral devices and embedded peripheral devices. [0049] In some embodiments, the service discovery management module 615 is configured to facilitate a connection between the wireless dockee and the peripheral device. The service discovery management module 615 may, in combination with the receiver module 610 and the transmitter module 620, receive peripheral input/output signals from a wireless dockee and transmit them to a peripheral device. Likewise, the device 105-e may receive input/output signals from a peripheral device and transmit them to a wireless dockee. [0050] The components of the device 105-e may, individually or collectively, be implemented with one or more ASICs adapted to perform some or all of the applicable functions in hardware. Alternatively, the functions may be performed by one or more other processing units (or cores), on one or more integrated circuits. In other embodiments, other types of integrated circuits may be used (e.g., Structured/Platform ASICs, FPGAs, and other Semi-Custom ICs), which may be programmed in any manner known in the art. The functions of each unit may also be implemented, in whole or in part, with instructions embodied in a memory, formatted to be executed by one or more general or application-specific processors. [0051] Turning now to FIG. 7, a wireless communications system 700 is depicted according to various embodiments. The system 700 may include a wireless docking station 105-f, peripheral devices 110-j, and wireless dockees 115-e. The peripheral devices 110-j may be external to the wireless docking station 105-f; but in some cases the peripheral devices 110-j are embedded in the wireless docking station 105-f. The docking station 105-f may be an example of the devices 105 described with reference to the preceding FIGS. 1, 2, 3, 4, 5, and/or 6. The docking station 105-f may include a processor module 705, a memory module 710 (including a software module 715), a transceiver module 720, antenna(s) 725, a peripheral device communication module 730, a wireless dockee communication module 735, a peripheral function exposure module 740, and a connection management module 745. In some embodiments, the transceiver module(s) 720 may be referred to as a transmitter.
[0052] The transceiver module 720, in conjunction with antenna(s) 725, may facilitate wireless transmission with wireless dockees 115-e and/or peripheral device(s) 110-j. Additionally or alternatively, the peripheral device communication module 730 may facilitate wireline communication with the peripheral devices 110-j. In some embodiments, the wireless dockee communication module 735 facilitates wireline communications with a wireless dockee 115-e. For example, a wireless dockee 115-e may be temporarily connected via wireline to the wireless docking station 105-f for certain synchronization operations. [0053] The peripheral function exposure module 740 may identify or otherwise determine peripheral functions of the peripheral devices 110-j. The peripheral function exposure module 740 may facilitate advertisement of the peripheral functions available via the wireless docking station 105-f. [0054] The connection management module 745 may facilitate connections between wireless dockees 115-e and peripheral devices 110-j. For example, the connection management module 745 may, in combination with the transceiver module 720, receive peripheral input/output signals from a wireless dockee and transmit them to a peripheral device. Likewise, the connection management module 745 may receive input/output signals from a peripheral device and transmit them to a wireless dockee. [0055] In some embodiments, the wireless docking station 105-f includes a service discovery management module 615-a. The service discovery management module 615-a may perform substantially the same functions as the corresponding module described with reference to FIG. 6. By way of example, the service discovery management module 615-a determines whether a connected peripheral is using a recognized or unrecognized PFP. The service discovery management module 615-a may also include a recognized PFP module 750 and an unrecognized PFP module 755. The recognized PFP module 750 may further include a parameter determining module 760. In some cases, the recognized PFP module 750 is configured to operate when a connected peripheral employs a recognized PFP. For those peripherals employing recognized or known PFPs, the parameter determining module 760 may determine parameters for use by a wireless dockee in discovery of a peripheral.
For example, the service discovery management module 615-a may be configured to implement a call flow sequence substantially similar to the call flow diagram 500.
[0058] The memory module 710 may include random access memory (RAM) and read-only memory (ROM). The memory module 710 may store computer-readable, computer-executable software/firmware code 715 containing instructions that are configured to, when executed, cause the processor module 705 to perform various functions described herein (e.g., requesting, receiving, and transmitting service discovery parameters, etc.). Alternatively, the software/firmware code 715 may not be directly executable by the processor module 705 but may be configured to cause a computer (e.g., when compiled and executed) to perform functions described herein. The processor module 705 may include an intelligent hardware device, e.g., a central processing unit (CPU), a microcontroller, an application-specific integrated circuit (ASIC), etc.
[0059] Each of the modules of the wireless docking station 105-f may be in communication with each other, for example via a communication bus 780.
[0060] FIG. 8 shows a flowchart diagram of an illustrative method 800 for wireless communications according to an aspect of the principles described above. The method 800 may be implemented by one or more of the wireless docking stations 105 described above with reference to the previous Figures. In certain examples, one or more of the wireless docking stations 105 of FIGS. 1-7; modules 610, 615, or 620 of FIG. 6; and/or modules 705, 710, 715, 720, 725, 730, 735, 740, 745, 750, 755, 760, 765, 770, 775, or 615-a of FIG. 7 may be means for performing the blocks 805, 810, 815 illustrated in connection with the method 800 of FIG. 8.
[0061] At block 805, a wireless docking station may request a service discovery parameter from a peripheral device having an unrecognized peripheral function protocol (PFP). The unrecognized PFP may be a proprietary PFP or other PFP with which the wireless docking station is unfamiliar. The parameter may include a transport protocol identifier or a port identifier for the peripheral device. At block 810, the parameter may be received from the peripheral device in response to the request. At block 815, service discovery information may be transmitted to a wireless dockee. The service discovery information may be based at least in part on the parameter received from the peripheral device.
[0062] FIG. 9 shows a flowchart diagram of an illustrative method 900 for wireless communications according to an aspect of the principles described above. The method 900 may be an example of the method 800 described above with reference to FIG. 8. The method 900 may be implemented by one or more of the wireless docking stations 105 described above with reference to the previous Figures. In certain examples, one or more of the wireless docking stations 105 of FIGS. 1-7; modules 610, 615, or 620 of FIG. 6; and/or modules 705, 710, 715, 720, 725, 730, 735, 740, 745, 750, 755, 760, 765, 770, 775, or 615-a of FIG. 7 may be means for performing the blocks 905, 910, 915, 920, 925, 930, 935 illustrated in connection with the method 900 of FIG. 9.
[0063] At block 905, a wireless docking station may detect a peripheral device.
At block 910, the wireless docking station may determine whether a peripheral function protocol (PFP) employed by the peripheral device is recognized.
[0064] If the PFP is recognized (block 910, Yes), the wireless docking station may generate service discovery information at block 915 for the detected peripheral device based at least in part on known information about the recognized PFP and the detected peripheral device. The generated service discovery information for the detected peripheral device may be transmitted to a wireless dockee at block 930, and the wireless docking station may facilitate a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information at block 935.
[0065] If the PFP is not recognized (block 910, No), the wireless docking station may request from the peripheral device, at block 920, one or more of: a PFP name, an advertisement identifier associated with the peripheral device, a service name associated with the peripheral device, an Internet Protocol (IP) address or other network address associated with the peripheral device, a transport protocol (e.g., User Datagram Protocol (UDP) or IP) associated with the peripheral device, a port number (e.g., an IP port) associated with the peripheral device, service information associated with the peripheral device, a network role associated with the peripheral device, or the like. At block 925, service discovery information for the detected peripheral device may be generated based at least in part on the requested parameter(s), as received from the peripheral device. At block 930, the service discovery information generated for the detected peripheral device may be transmitted to a wireless dockee, and at block 935 the wireless docking station may facilitate a connection between the wireless dockee and the peripheral device based at least in part on the service discovery information.
[0066] The detailed description set forth above in connection with the appended drawings describes exemplary embodiments and does not represent the only embodiments that may be implemented or that are within the scope of the claims. The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
[0067] Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0068] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0069] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list such that, for example, a list of "at least one of A, B, or C" means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
[0070] Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.
[0071] The previous description of the disclosure is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Throughout this disclosure the term "example" or "exemplary" indicates an example or instance and does not imply or require any preference for the noted example.
Thus, the disclosure is not to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Methods and apparatus to detect proximity of objects to computing devices using near ultrasonic sound waves are disclosed. An example apparatus includes a signal generator to cause a speaker of a computing device to produce a series of pulses. Successive ones of the pulses are spaced at fixed intervals. Ones of the pulses have a central frequency between 18 kHz and 24 kHz. The example apparatus includes an echo profile generator to process noise information sensed by a microphone of the computing device. The noise information includes the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device. The example apparatus further includes an object detection analyzer to determine whether a first object is within an activation region associated with the computing device based on the pulses and the echoes sensed by the microphone. |
1. An apparatus comprising:
a signal generator to cause a speaker of a computing device to produce a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz;
an echo profile generator to process noise information sensed by a microphone of the computing device, the noise information including the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device; and
an object detection analyzer to determine whether a first object is within an activation region associated with the computing device based on the pulses and the echoes sensed by the microphone.
2. The apparatus of claim 1, further including an activation operation controller to, in response to the detection of the first object within the activation region, implement an operation in the computing device.
3. The apparatus of claim 1, further including an environment profile analyzer to generate a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device, the object detection analyzer to:
compare the echoes to the static environment echo profile; and
identify a presence of the first object based on the comparison.
4. The apparatus of claim 3, wherein the environment profile analyzer is to identify a change in the environment based on the echoes sensed by the microphone, and update the static environment echo profile based on the change in the environment.
5. The apparatus of claim 3, wherein the echo profile generator is to generate a full echo profile based on the pulses and the corresponding echoes sensed by the microphone, the object detection analyzer to:
remove static data corresponding to the static environment echo profile from the full echo profile to generate a non-static echo profile; and
determine whether the first object is within the activation region based on the non-static echo profile.
6. The apparatus of claim 1, wherein the signal generator is to cause the speaker to produce successive ones of the pulses spaced at first fixed intervals during a first time period and to, in response to the object detection analyzer detecting the first object within the activation region during the first time period, cause the speaker to produce additional ones of the pulses spaced at second fixed intervals during a second time period after the first time period, the second fixed intervals being shorter than the first fixed intervals, the object detection analyzer to verify the first object is within the activation region based on the pulses and corresponding echoes sensed during the second time period.
7. The apparatus of claim 6, wherein the signal generator is to, in response to the object detection analyzer no longer detecting the first object within the activation region during the second time period, cause the speaker to produce additional ones of the pulses spaced at the first fixed intervals during a third time period after the second time period.
8. The apparatus of any one of claims 1-7, wherein the echo profile generator is to:
generate a full echo profile based on the pulses and the corresponding echoes sensed by the microphone; and
identify peaks in the full echo profile, different ones of the peaks corresponding to either the pulses or the corresponding echoes, the object detection analyzer to:
identify repeating reference signals based on the peaks identified in the full echo profile, the repeating reference signals corresponding to the pulses sensed by the microphone;
identify an echo signal between separate occurrences of the repeating reference signals, the echo signal corresponding to one of the echoes; and
determine whether the first object is within the activation region based on a time difference between the echo signal and a preceding one of the repeating reference signals.
9. The apparatus of claim 8, wherein the object detection analyzer is to identify the repeating reference signals by:
identifying a first subset of the peaks associated with an intensity that satisfies a threshold; and
identifying a second subset of the peaks from among the first subset that are detected at a periodicity corresponding to the fixed intervals of the pulses.
10. The apparatus of claim 8, wherein the object detection analyzer is to:
identify the repeating reference signals at a first point in time;
verify whether subsequent ones of the peaks identified after the first point in time are associated with an intensity and a periodicity corresponding to subsequent occurrences of the repeating reference signals;
in response to verification that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, determine whether the first object is within the activation region; and
in response to an inability to verify that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, inhibit the determination of whether the first object is within the activation region until the repeating reference signals are again identified at a second point in time.
11. A method comprising:
producing, via a speaker of a computing device, a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz;
sensing, via a microphone of the computing device, the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device; and
determining, by executing an instruction with at least one processor, whether a first object is within an activation region associated with the computing device based on the pulses and the echoes sensed by the microphone.
12. The method of claim 11, further including:
generating a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device;
comparing the echoes to the static environment echo profile; and
identifying a presence of the first object based on the comparison.
13. A non-transitory computer readable medium comprising instructions that, when executed, cause a computing device to at least:
produce a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz;
sense the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device; and
determine whether a first object is within an activation region associated with the computing device based on the pulses and the echoes.
14. The non-transitory computer readable medium of claim 13, wherein the instructions further cause the computing device to:
generate a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device;
compare the echoes to the static environment echo profile; and
identify a presence of the first object based on the comparison.
15. The non-transitory computer readable medium of any one of claims 13 or 14, wherein the instructions further cause the computing device to:
produce successive ones of the pulses spaced at first fixed intervals during a first time period;
detect the first object within the activation region based on the pulses and corresponding echoes sensed during the first time period;
in response to detecting the first object within the activation region during the first time period, produce additional ones of the pulses spaced at second fixed intervals during a second time period after the first time period, the second fixed intervals being shorter than the first fixed intervals; and
verify the first object is within the activation region based on the pulses and corresponding echoes sensed during the second time period. |
FIELD OF THE DISCLOSURE
This disclosure relates generally to proximity sensing, and, more particularly, to methods and apparatus to detect proximity of objects to computing devices using near ultrasonic sound waves.
BACKGROUND
There are a number of different human-machine interfaces that enable people to interact with a computing device. Some example human-machine interfaces include a keyboard or keypad, a mouse or other pointing device, a touchscreen, etc. Other techniques have been developed that do not require a person to physically touch the device such as, for example, through voice commands and/or based on detection of the proximity and/or gestures of a user near the device.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example computing device implemented in accordance with teachings disclosed herein.
FIG. 2 illustrates an example static environment echo profile generated in accordance with teachings disclosed herein based on actual data.
FIG. 3 illustrates an example full echo profile generated in accordance with teachings disclosed herein based on actual data.
FIG. 4 illustrates an example non-static echo profile generated in accordance with teachings disclosed herein based on actual data.
FIG. 5 is a table providing experimental results from implementing teachings disclosed herein.
FIG. 6 illustrates an example implementation of the example computing device of FIG. 1.
FIGS. 7-10 are flowcharts representative of example machine readable instructions that may be executed to implement the example computing device of FIGS. 1 and/or 6.
FIG. 11 is a block diagram of an example processing platform structured to execute the example instructions of FIGS. 7-10 to implement the example computing device of FIGS. 1 and/or 6.
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
DETAILED DESCRIPTION
There are a variety of techniques that may be implemented by a computing device to identify and/or detect an object in proximity to the computing device. In some instances, infrared (IR) depth sensors may be employed. However, many existing computing devices do not include IR depth sensors, thus limiting the applicability of such approaches and/or imposing increased costs on the development and manufacture of new devices that include the additional components to implement this technique. Further, the effective range of detection possible by IR sensors is relatively limited.
Further still, processing IR sensor data is relatively computationally intensive, thereby requiring increased computational capacity and a corresponding increase in power capacity relative to devices that do not implement IR sensors.
A second technique to detect the proximity of objects involves the processing of images captured by a camera. Image processing for object detection is computationally intensive. Indeed, many image processing applications implement dedicated hardware (e.g., a specialized image processor) to improve efficiencies in the heavy computations involved. Therefore, as with IR sensors, there are significant costs to include the components needed for effective image sensing. Furthermore, the relatively high computational burdens associated with image processing result in relatively significant power requirements.
A third approach to detecting proximity of objects is based on high bandwidth ultrasonic technologies. Such techniques involve specialized speakers, microphones, and/or associated circuitry that are not implemented in many known computing devices. For example, traditional speakers and microphones for mobile devices (e.g., laptops, tablets, smartphones, etc.) support only 48 kHz and/or 44.1 kHz sampling frequencies. However, some high bandwidth ultrasonic technologies employ sensors with high bandwidth (e.g., greater than 96 kHz) CODECs (e.g., speaker driving circuits) in the excitation path. Further, the sound capturing microphone and associated driving circuit also need to support high bandwidths beyond what is traditionally implemented in many existing computing devices. Therefore, there are increased costs in manufacturing devices capable of implementing such techniques because additional and/or more expensive components are required. Further, the higher bandwidths associated with such techniques produce more data to be analyzed, thereby resulting in increased computational burdens and associated increases in power requirements.
One particular application for object proximity detection is to enable a user of a computing device to cause the device to perform or initiate some action without the user having to physically touch the device. An example action might be to wake up the device from a sleep or idle state (e.g., a low power state) to an active (full power) state. As mentioned above, IR depth sensing techniques, image processing techniques, and high bandwidth ultrasonic sensing techniques require relatively significant amounts of power such that they are unsuitable for implementation in a low power state (e.g., a sleep state or idle state), particularly in association with mobile computing devices that rely on a battery for power.
Examples disclosed herein overcome the limitations of the above approaches by implementing a methodology that does not require any specialized components. Rather, examples disclosed herein may be implemented using typical speakers and microphones (that support 48 kHz/44.1 kHz sampling frequencies) commonly found in the vast majority of mobile devices and other computing devices that exist today. As a result, examples disclosed herein do not result in any additional costs to manufacture the devices that may implement the disclosed methodologies. Furthermore, the computational burden of examples disclosed herein is relatively small such that specialized processing components are not required.
Further, the power requirements for examples disclosed herein are sufficiently low to enable implementation when the computing device is in a low power or idle state (e.g., asleep). That is, examples disclosed herein may be implemented while a computing device is in a lower power state (e.g., a sleep state) to wake up the device from that lower power state to the fully active state.
More particularly, examples disclosed herein detect the presence and/or proximity of objects near a computing device based on near ultrasonic sound waves. Ultrasonic sound waves are sound waves with frequencies higher than the upper limit of the frequency ranges for sound that is audible to humans. While the upper limit of audible sound waves varies from person to person, the limit for most people is around 20 kilohertz (kHz). As used herein, near ultrasonic sound waves refer to sound waves within a region that is close to the upper limit of human hearing. More specifically, as used herein, near ultrasonic sound waves are sound waves having a frequency between 18 kHz and 24 kHz. By contrast, known high bandwidth ultrasonic sensing techniques mentioned above are typically implemented at frequencies well above the human limit of hearing (e.g., at frequencies above 40 kHz). Existing ultrasonic techniques operate at such high frequencies because operating in the near ultrasonic range (e.g., between 18 kHz and 24 kHz) has presented significant challenges due to noise in the environment. That is, while many devices already include speakers and microphones capable of operating in this frequency range, the noise that is picked up by microphones in this range has made it difficult to reliably identify relevant signals needed for accurate depth sensing. As described below, examples disclosed herein enable the identification and/or isolation of relevant signals within the near ultrasonic frequency range from among other noises that may be in the environment to allow for accurate and reliable object detection. Further, the processing of such signals is accomplished in a computationally and power efficient manner that is suitable for implementation when a computing device is in a low power sleep state. As a result, examples disclosed herein may be used to detect the presence of an object (e.g., a user's hand) in the vicinity of a computing device in an idle state to trigger the device to wake up to a full power active state.
FIG. 1 illustrates an example computing device 100 implemented in accordance with teachings disclosed herein. The example computing device 100 includes a speaker 102 and a microphone 104. In the illustrated example, the computing device 100 is shown as a laptop computer. However, the computing device may be any type of computing device (e.g., a desktop computer, a tablet, a smartphone, etc.) that includes both a speaker 102 and a microphone 104. The speaker and microphone may be standard components that are built into the originally manufactured device. Although only one speaker 102 and one microphone 104 are shown, teachings disclosed herein may be implemented by a device that includes more than one speaker and/or more than one microphone.
The speaker 102 may emit or produce sound waves that propagate in the environment surrounding the computing device 100. In some examples, such acoustic signals may be detected by the microphone 104.
More particularly, such acoustic signals may follow a direct signal path 106 in which the signals are sensed directly by the microphone 104. Additionally or alternatively, the signals may follow an indirect or echo signal path 108 in which the signals reflect off objects in the vicinity of the computing device 100 as an echo of the initial sound wave that is then sensed by the microphone 104. In the illustrated example, the echo signal path 108 is shown as reflected off the hand 110 of a user. However, the same acoustic signals may also reflect off other objects in the vicinity of the device 100 that are not represented in the illustrated example of FIG. 1. For example, the same acoustic signal produced by the speaker 102 may also reflect off the user's arm and/or other parts of the user's body (e.g., torso, face, etc.), with such echoes being sensed by the microphone 104. Further, the same acoustic signal may reflect as an echo off of furniture (e.g., a desk, a chair, etc.), walls, ceilings, and/or any other object(s) within the vicinity of the computing device 100.
For purposes of explanation, small waveforms are shown on each of the direct and echo signal paths 106, 108 to represent individual acoustic pulses (collectively identified by reference numeral 112) generated in series by the speaker 102 at fixed intervals. While separate waveforms are shown on both the direct signal path 106 and the echo signal path 108, corresponding ones of the waveforms on both paths 106, 108 are associated with the same acoustic pulses 112. That is, the two waveforms identified by reference numeral 112a correspond to a single first acoustic pulse 112a (i.e., both are generated from a single excitation of the speaker 102 at a single point in time). Similarly, the two waveforms identified by reference numeral 112b correspond to a single second acoustic pulse 112b generated a period of time (corresponding to the fixed interval for the repeating pulses 112) after the first acoustic pulse 112a. Further, the two waveforms identified by reference numeral 112c correspond to a single third acoustic pulse 112c generated a period of time after the second acoustic pulse 112b.
The illustrated example of FIG. 1 also includes additional waveforms along the echo signal path 108 after being reflected off the user's hand 110 to represent echoes 114a, 114b, 114c associated with additional acoustic pulses 112 generated before the first acoustic pulse 112a. The waveforms corresponding to the echoes 114a, 114b, 114c do not have a corresponding waveform shown on the direct signal path 106 in FIG. 1 because the associated acoustic pulses 112 have already reached the microphone 104 at the time represented in the illustrated example. That is, as shown in the illustrated example, the direct signal path 106 is shorter than the echo signal path 108 such that the microphone 104 will sense an acoustic pulse 112 propagating along the direct signal path 106 before sensing echoes 114 corresponding to the same acoustic pulse 112 propagating along the echo signal path 108. The time delay between when an acoustic pulse 112 is sensed directly by the microphone 104 and when an echo 114 of the same acoustic pulse 112 is sensed by the microphone 104 after reflecting off an object is proportional to the distance of the object from the computing device 100.
The waveforms representative of the echoes 114 in FIG.
1 are shown as being smaller (e.g., having less amplitude or power) than the first, second, and third acoustic pulses 112a, 112b, 112c because objects do not perfectly reflect acoustic signals. Rather, some power in the incident signal is lost when it is reflected as an echo. Furthermore, the strength of an echo is proportional to the size of the obstacle from which the echo was reflected. Another factor affecting the strength of an echo is the distance of the object. More particularly, the strength of an echo is inversely proportional to the distance of the object from the original source of the acoustic signal (e.g., the speaker 102). The time delay between the detection of acoustic pulses 112 (via the direct signal path 106) and the detection of corresponding echoes 114, in conjunction with the strength of the echoes 114, is used herein to identify the presence and/or proximity of an object (e.g., the user's hand 110) to the computing device 100. The acoustic pulses 112 sensed directly by the microphone 104 (via the direct signal path 106) are referred to herein as reference signals because they serve as reference points to which subsequently detected echoes 114 are compared to determine depth information indicative of the proximity or distance of objects (e.g., the user's hand 110) to the computing device 100.
As mentioned above, in some examples, the acoustic pulses 112 are generated at a fixed interval. The fixed interval establishes a consistent periodicity for the acoustic pulses 112 to enable reliable identification of the acoustic pulses 112 as they are sensed by the microphone 104 (as reference signals) after propagating along the direct signal path 106. More particularly, because the distance between the speaker 102 and the microphone 104 is fixed, the time for an acoustic pulse 112 to travel along the direct signal path 106 from the speaker 102 and be detected by the microphone 104 as a reference signal is also fixed. Therefore, the interval between subsequent reference signals detected by the microphone 104 will match the fixed interval between the acoustic pulses 112 as produced by the speaker 102. In some examples, the fixed interval changes depending on whether the system is operating in a standby (lower power) mode or an active (higher power) mode. For instance, in some examples, the acoustic pulses 112 are generated by the speaker 102 at intervals of 125 milliseconds (or eight times a second) during the active mode and at intervals of 500 milliseconds (or twice a second) during the standby mode. In some examples, the fixed periodicity of the acoustic pulses 112 during the active mode may be more or less than 125 milliseconds. Likewise, the fixed periodicity of the acoustic pulses 112 during the standby mode may be more or less than 500 milliseconds. Regardless of the particular period of successive acoustic pulses 112 in each of the standby mode and active mode, the active mode is associated with a shorter interval than the standby mode. The shorter period or interval during the active mode serves to increase the accuracy and/or precision of the object detection process while the longer period or interval during the standby mode serves to reduce power consumption of the process.
Although the active mode consumes more power than the standby mode because the speaker 102 is excited more frequently, as described more fully below, even the active mode is relatively power efficient because the duration of each individual acoustic pulse 112 is less than 1 millisecond (e.g., approximately 400 microseconds). Assuming a pulse duration of 400 microseconds with a repetition period of 125 milliseconds (during the active mode), the total amount of time the speaker 102 is excited each second is just over 3 milliseconds. Therefore, even during the active mode, the speaker 102 is actively producing acoustic pulses 112 less than 1 percent of the time such that relatively little power is consumed.
In some examples, in addition to the fixed periodicity, each successive acoustic pulse 112 is generated with a central frequency corresponding to near ultrasonic sound waves (e.g., in the range of 18 kHz to 24 kHz). For instance, in some examples, the acoustic pulses 112 are centered at approximately 22 kHz. In other examples, the central or nominal frequency may be lower than 22 kHz (e.g., 20 kHz) but at least above 18 kHz. In other examples, the central frequency of the acoustic pulses 112 may be higher than 22 kHz (e.g., 23 kHz) but no greater than 24 kHz. Further, in some examples, the acoustic pulses 112 are defined by a particular shape and power level so that the pulses remain substantially inaudible to humans. More particularly, in some examples, the acoustic pulses are shaped with sufficient ramp-up time and ramp-down time so that the pulses remain inaudible to humans. In some examples, the basic shape of the acoustic pulses 112 is defined by Equation 1:

$$x[n] = A \sin\left(2\pi \frac{f}{F_s} n\right) \quad \text{(Equation 1)}$$

where $f$ is the excitation frequency that is centered within the near ultrasonic frequency range between 18 kHz and 24 kHz (e.g., centered at 22 kHz); $F_s$ is the sampling frequency that corresponds to the sampling frequency supported by the microphone 104 (e.g., a 48 kHz sampling frequency); $n$ corresponds to the sample number along the signal length $N$, which may be represented by any number of samples based on the sampling frequency and the duration of the sample; and $A$ is the amplitude, which may have a value ranging between 0.5 and 1 (e.g., 0.8). Further, the shape and generation of the acoustic pulses 112 are defined by an autocorrelation smoothening and a scaling factor defined as follows:

$$x_1[n] = x[n] \otimes x[n] \quad \text{(Equation 2)}$$

$$\mathrm{ScaleFactor} = \max(x_1[n]) / 2 \quad \text{(Equation 3)}$$

A final pulse value at sample $n$ in each acoustic pulse 112 may be defined by dividing Equation 2 by Equation 3:

$$y[n] = x_1[n] / \mathrm{ScaleFactor} \quad \text{(Equation 4)}$$

While the acoustic pulses 112 generated by the speaker 102 have a consistent form and are produced at a consistent periodicity, the resulting echoes 114 corresponding to different ones of the pulses 112 do not necessarily have a consistent form (e.g., intensity) and may not be detected at consistent time intervals. Variation between different echoes 114 arises from the nature (e.g., size, shape, and material) of the objects off which the acoustic pulses 112 reflect and the distance of such objects from the speaker 102. For example, echoes 114 reflecting off a distant object will be weaker and arrive at a later point in time than echoes 114 reflecting off a closer object.
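As one concrete illustration, Equations 1-4 might be implemented as in the following sketch. The sampling rate, center frequency, pulse duration, and amplitude are the example values given above; the function name is illustrative, and the autocorrelation is computed with a generic numpy call rather than any particular routine from the disclosure.

```python
import numpy as np

def generate_pulse(f: float = 22_000.0, fs: float = 48_000.0,
                   duration_s: float = 400e-6, amplitude: float = 0.8) -> np.ndarray:
    """Sketch of the pulse shaping in Equations 1-4: a near-ultrasonic tone
    burst smoothed by autocorrelation and rescaled."""
    n = np.arange(int(duration_s * fs))               # sample indices over length N
    x = amplitude * np.sin(2 * np.pi * (f / fs) * n)  # Equation 1
    x1 = np.correlate(x, x, mode="full")              # Equation 2: autocorrelation smoothening
    scale_factor = np.max(x1) / 2                     # Equation 3
    return x1 / scale_factor                          # Equation 4: final pulse samples

pulse = generate_pulse()
# Duty-cycle check from the text: eight 400-microsecond pulses per second in
# the active mode excite the speaker for about 8 * 0.4 ms = 3.2 ms per second,
# i.e., well under 1 percent of the time.
```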
In some examples, the variations in time and/or intensity of the echoes 114 detected by the microphone 104 are compared against the consistent acoustic pulses 112 detected by the microphone 104 to determine the presence and/or proximity of objects in the vicinity of the computing device 100.
In some examples, the proximity detection system of the computing device 100 is designed to detect when an object (e.g., the user's hand 110) is within an activation region 116 associated with the computing device 100. In some examples, the activation region 116 corresponds to an area within a threshold distance 118 of the computing device 100. The threshold distance 118 may be any suitable distance (e.g., 6 inches, 12 inches, 18 inches, 2 feet, 3 feet, etc.). If an object is detected within the activation region 116 (e.g., the object is within the threshold distance 118), the computing device 100 may activate or initiate an operation that is associated with a detected object. In some examples, the operation triggered by an object being detected within the activation region 116 includes waking up the computing device 100 from a low powered sleep state or idle state to a full powered active state.
A challenge to identifying a particular object (e.g., the user's hand 110) in the vicinity of the computing device 100 arises from the fact that the microphone 104 is likely to detect many other echoes 114 reflected off other objects in the surrounding environment of the computing device 100. Furthermore, independent of the echoes 114 corresponding to the acoustic pulses 112, the environment may contain many other sources of noise (e.g., machines, people, etc.) that may also be detected by the microphone 104. Such environmental noises may supersede and/or mimic the acoustic pulses 112 and/or the echoes 114, resulting in errors in detecting an intended object such as, for example, the user's hand 110. Errors may be false negatives (in which an object in the activation region 116 is not detected) or false positives (in which an object is detected in the activation region 116 when no object is actually present). Of the two types of errors, false positives are more problematic because a false positive will trigger the operation of the computing device 100 when the user did not intend such operation to occur. Accordingly, examples disclosed herein are designed to reduce (e.g., minimize) the likelihood of a false positive occurring.
Noise is a significant challenge in the near ultrasonic frequency range (e.g., between 18 kHz and 24 kHz) because there are many sources in everyday environments that produce noises in that range. This is a primary reason why known ultrasonic proximity detection systems are typically implemented at much higher frequencies (e.g., above 40 kHz). However, as mentioned above, such techniques come at increased cost and complexity due to the need for specialized components capable of handling the high frequencies.
Examples disclosed herein overcome the challenges of detecting objects at frequencies where a lot of noise may exist, while still using standard components already incorporated into many computing devices. In some examples, a robust and error-resilient object detection scheme is accomplished by generating and storing a static environment echo profile for the environment in which the computing device 100 is located.
A static environment echo profile represents the echoes 114 associated with the acoustic pulses 112 reflected off fixed (e.g., static) objects in the environment surrounding the computing device 100. An example static environment echo profile 200 based on actual data is shown in the illustrated example of FIG. 2. As shown in the illustrated example, the very tall peaks 202 in the signal stream correspond to reference signals (e.g., acoustic pulses 112 directly sensed by the microphone 104 without being reflected). A period of time after each reference signal there are much lower intensity peaks 204 corresponding to echoes 114 reflected off of fixed or static objects in the surrounding environment. The relatively low intensity of the echoes 114 and their distance from the preceding reference signal is indicative of the objects being at a relatively substantial distance from the computing device 100.
In some examples, the static environment echo profile 200 is generated as the result of several stages of preprocessing of the audio data captured by the microphone 104. For instance, in addition to directly sensing the acoustic pulses 112 (e.g., the reference signals) and the echoes 114, the microphone 104 is likely to pick up other noises generated in the environment surrounding the computing device 100. Accordingly, in some examples, the computing device 100 removes substantially all humanly audible noises by processing the input signal through one or more signal filters. In some examples, the computing device 100 processes the input signal using a band pass filter with a lower cutoff frequency of 18 kHz and an upper cutoff frequency of 24 kHz to isolate noise information captured within the near ultrasonic range as defined above. Further, in some examples, the band pass filter is implemented with a central frequency of 22 kHz and uses an elliptic infinite impulse response filter with a 1 decibel passband ripple. Further, in some examples, the output of the band pass filter is analyzed to identify significant signal peaks in the preprocessed signal (e.g., the peaks 202, 204 of FIG. 2).
Assuming that the computing device 100 does not move relative to its environment, the echoes 114 reflected off static objects in the environment should be relatively consistent over time. Thus, as shown in the illustrated example of FIG. 2, the lower intensity peaks 204 (e.g., echoes 114) after the first reference signal (the high intensity peak 202) are substantially the same (in terms of intensity and relative timing) as the lower intensity peaks 204 (e.g., echoes 114) following the second reference signal. However, if there is a non-static object in the environment (e.g., a human moving around in the same room as the computing device 100), the echoes 114 reflected off the non-static object will change in intensity and/or time of detection based on changes in the movement and/or position of the object relative to the computing device 100. Echoes 114 corresponding to non-static objects are identified as being separate from the static environment echo profile and further analyzed for the possibility of corresponding to an object within the activation region 116 of the computing device 100 as described further below.
In some examples, during an object detection process, the computing device 100 generates a full echo profile that is representative of all acoustic pulses 112 and corresponding echoes 114 detected by the microphone 104 over a most recent period of time.
That is, in contrast to the static environment echo profile 200 that represents echoes from static objects in the environment, a full echo profile represents echoes from all objects (whether static or not) in the environment. A full echo profile can be expressed mathematically as follows:

$$\mathrm{EchoProfile}[n] = \mathrm{RefSig}[n] + \sum_{m=0}^{M-1} \mathrm{echo}[m]$$

where $\mathrm{echo}[m]$ refers to each particular echo 114 captured by the microphone 104 from the environment. A similar mathematical expression may be provided for the static environment echo profile, except that the summation of echoes 114 is limited to echoes reflected from static objects in the environment.
An example full echo profile 300 based on actual data is shown in the illustrated example of FIG. 3. As with the example static environment echo profile 200 of FIG. 2, the full echo profile 300 includes high intensity peaks 302 corresponding to reference signals (associated with directly sensed acoustic pulses 112) and lower intensity peaks 304 corresponding to echoes 114 of the acoustic pulses 112. As compared with the low intensity peaks 204 in FIG. 2, the low intensity peaks 304 in FIG. 3 are considerably larger when viewed as a proportion of the intensity of the associated reference signals (e.g., the high intensity peaks 302). Further, the low intensity peaks 304 in FIG. 3 are relatively close to the preceding reference signal. The somewhat higher intensity of the low intensity peaks 304 and the relatively short duration after the corresponding reference signal are indicative of an echo 114 reflected off an object that is relatively close to the computing device 100.
As shown in FIG. 3, the intensity of the low intensity peaks 304 varies considerably from one peak to the next, indicating that the object reflecting the corresponding echoes 114 is a non-static object (e.g., an object that is moving relative to the computing device 100). In this particular example, the data reflected in FIG. 3 is based on a person moving their hand away from and towards an associated computing device implementing the processes disclosed herein. It should be noted that there is also some variability in the intensity of the high intensity peaks 302 corresponding to the reference signals. In ideal conditions, the reference signals should be substantially identical in intensity as described above. However, some variability is expected due to imperfect environmental conditions and/or as a result of some measure of error introduced by the preprocessing of the input data captured by the microphone 104. While there is some variability in the intensity of the reference signals, the periodicity of the reference signals is substantially consistent over time.
In the full echo profile 300, some of the low intensity peaks 304 may correspond to echoes 114 reflected off static objects in the environment. These same echoes 114 are represented in the static environment echo profile 200. Accordingly, the presence of non-static objects in the environment can be identified by comparing and identifying the differences between the full echo profile 300 and the static environment echo profile 200. More particularly, in some examples, the static environment echo profile is subtracted from the full echo profile to remove the echoes 114 reflected off static objects. That is, the static signal data represented by the static environment echo profile is removed from the full echo profile. The output of this calculation is referred to herein as a non-static echo profile.
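The preprocessing chain described above (band-pass isolation of the 18 kHz to 24 kHz band, followed by subtraction of the static signal data) might be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the filter order, stopband attenuation, residual threshold, and function names are assumptions, and the upper band edge is placed just below 24 kHz because 24 kHz is exactly the Nyquist frequency at a 48 kHz sampling rate.

```python
import numpy as np
from scipy import signal

FS = 48_000  # sampling rate supported by typical device speakers/microphones

# Elliptic IIR band-pass with a 1 dB passband ripple, per the text above.
# The order (8) and stopband attenuation (60 dB) are assumed design values.
SOS = signal.ellip(8, 1, 60, [18_000, 23_900], btype="bandpass",
                   output="sos", fs=FS)

def isolate_near_ultrasonic(mic_samples: np.ndarray) -> np.ndarray:
    """Remove substantially all audible content, keeping the 18-24 kHz band."""
    return signal.sosfilt(SOS, mic_samples)

def non_static_echo_profile(full_profile: np.ndarray,
                            static_profile: np.ndarray,
                            variability_threshold: float = 0.05) -> np.ndarray:
    """Subtract the static environment echo profile from the full echo
    profile; residuals below the (assumed) threshold are zeroed to absorb
    the minor variability noted in the text."""
    residual = full_profile - static_profile
    residual[np.abs(residual) < variability_threshold] = 0.0
    return residual

# Any surviving residual echo indicates a non-static object and triggers the
# subsequent object detection analysis.
```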
An example non-static echo profile 400 is shown in FIG. 4. The presence of any residual echoes 114 in the non-static echo profile, based on differences (above a certain threshold to account for the minor variability noted above) between the static environment echo profile 200 and the full echo profile 300, serves as a trigger to implement subsequent analysis for object detection purposes as described more fully below. Thus, in some examples, the static environment echo profile 200 serves as a baseline for comparison with echoes 114 detected by the microphone 104 at any particular time to determine when additional processing and analysis is appropriate. In some examples, when a non-static object is detected and further processing and analysis is warranted, the further processing and analysis is based on the non-static echo profile to simplify the computations by first isolating the echoes 114 associated with non-static objects from static objects.
In some situations, the computing device 100 may dynamically monitor and update the static environment echo profile 200 in substantially real time based on changes to static objects in the environment (e.g., the relocation of a chair or other piece of furniture) and/or changes in the location of the computing device 100 relative to the environment (including the relocation of the computing device to a new environment). In this manner, the computing device 100 is able to adapt to any particular environment by updating the static environment echo profile to reflect current environmental conditions to increase the accuracy with which non-static objects may be identified and analyzed as described herein. In some examples, if a static environment echo profile cannot be reliably generated (or updated) due to too much variability in the echoes 114 detected by the microphone 104, the computing device 100 may enter an error state until a reliable static environment echo profile may again be generated. In some such examples, the subsequent processing of echo data may be prevented while the device is in the error state to avoid the possibility of an inaccurate detection of an object in the vicinity of the computing device 100.
As mentioned above, once a difference (that satisfies a threshold) between the static environment echo profile and the full echo profile has been detected as indicative of an echo 114 corresponding to a non-static object, the computing device 100 may initiate subsequent analysis and processing of the noise information captured by the microphone 104. In some examples, the computing device 100 may automatically switch between different modes while processing the noise information to reduce power consumption. More particularly, in some examples, when the computing device 100 initially begins analyzing the noise information, the computing device 100 may operate in a low power standby sensing mode. In some examples, the computing device 100 performs a relatively coarse analysis of the echo data in the standby sensing mode to determine whether the non-static object detected based on the difference between the static environment echo profile and the full echo profile is located within the activation region 116.
If the computing device 100 determines that the non-static object is in the activation region 116, the computing device 100 may then switch to an active sensing mode in which a more accurate analysis is performed to confirm or validate the determination made during the standby sensing mode that the non-static object is within the activation region 116. In some examples, only after the computing device 100 has confirmed there is an object within the activation region 116 using the analysis of the active sensing mode does the computing device 100 activate or initiate the operation associated with the detection of such an object (e.g., wake up the computing device from a low power idle state).
As outlined above, in some examples, the processing of echo data captured by the microphone 104 involves a two-stage process that passes through a standby sensing mode and an active sensing mode before the computing device 100 implements a particular operation in response to a detected object. The different modes associated with the separate stages in this process serve to increase the power efficiency of the system. In particular, while both modes consume relatively little power (e.g., both may be implemented while the computing device 100 is in a low power sleep state or idle state), the standby sensing mode consumes less power than the active sensing mode. In some examples, the standby sensing mode is more power efficient because the acoustic pulses 112 are generated less frequently (e.g., spaced apart at longer intervals) during the standby sensing mode than during the active sensing mode. For example, during the active sensing mode, the speaker 102 may generate eight acoustic pulses 112 every second, whereas the speaker may generate fewer (e.g., 4, 2, etc.) acoustic pulses 112 every second during the standby sensing mode. Generating fewer acoustic pulses 112 in the standby mode reduces power consumption because the speaker 102 is being excited less frequently. Further, the standby mode uses less power because the microphone 104 is collecting less echo data to be processed. While the standby sensing mode reduces power consumption, the lower time resolution renders the standby sensing mode less accurate than the active mode. Accordingly, in some examples, once an object within the activation region 116 has been detected in the standby mode, the system automatically switches to the active mode to confirm the object detection is accurate using a higher resolution for increased accuracy. If the active sensing mode does not confirm the presence of the object or the object is removed from within the activation region, the computing device 100 may switch back to the standby sensing mode to continue monitoring for an object in proximity to the computing device 100.
In addition to using echoes 114 based on acoustic pulses 112 generated at a shorter periodicity, the active mode also includes more complex computations than in the standby mode to increase the accuracy and resilience of object detection even in the presence of significant environmental noises.
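The standby/active switching just described can be viewed as a small state machine, sketched below. The interval values come from the examples above; the class and method names are illustrative, and a real implementation would additionally gate the wake-up operation on the active-mode confirmation.

```python
STANDBY_INTERVAL_S = 0.500  # two pulses per second (value from the text)
ACTIVE_INTERVAL_S = 0.125   # eight pulses per second (value from the text)

class SensingModeController:
    """Coarse standby detection promotes to the active mode; a failed
    confirmation (or the object leaving the region) demotes back to standby."""

    def __init__(self) -> None:
        self.pulse_interval_s = STANDBY_INTERVAL_S

    def on_detection_result(self, object_in_region: bool) -> None:
        if self.pulse_interval_s == STANDBY_INTERVAL_S and object_in_region:
            # Switch to the higher-resolution active mode to confirm.
            self.pulse_interval_s = ACTIVE_INTERVAL_S
        elif self.pulse_interval_s == ACTIVE_INTERVAL_S and not object_in_region:
            # Fall back to standby and keep monitoring at lower power.
            self.pulse_interval_s = STANDBY_INTERVAL_S
```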
More particularly, in some examples, the object detection processing during the active stage monitors input noise levels and abnormal echoes to automatically switch between a lock-in state and a lock-out state to maintain accurate object detection while enabling relatively quick recovery from error conditions due to, for example, environmental noises that may disrupt the monitoring of the acoustic pulses 112 and/or associated echoes 114.
In some examples, the output of the speaker 102 is not synchronized or timed to the noise information collected by the microphone 104. Accordingly, the system analyzing the noise information does not have any way to directly identify when an acoustic pulse 112 is sensed directly by the microphone 104 (i.e., a reference signal) and when noises captured by the microphone 104 correspond to echoes 114 (or other environmental noise). Thus, the lock-out state of the active sensing mode serves to identify reference signals in the noise information that can be used to synchronize the timing of the acoustic pulses 112 and the corresponding echoes 114 going forward. In some examples, the reference signals are detected by analyzing the noise information over time until a repeating signal is identified that satisfies criteria corresponding to the repeating acoustic pulses 112. More particularly, the criteria that must be satisfied for a signal to constitute a reference signal include (1) that the signal repeats with a substantially consistent periodicity corresponding to the time interval of successive ones of the acoustic pulses 112 and (2) that the repeating signal has an intensity that is within a threshold of an expected signal level for the acoustic pulses 112. In some examples, the repeating signal must fall between an upper threshold and a lower threshold. In other examples, the repeating signal only needs to exceed a lower threshold that is higher than a maximum intensity expected for an echo 114.
In some examples, identification of the reference signals is based on an analysis of the non-static echo profile that includes signals corresponding to the acoustic pulses 112 and echoes 114 reflected from non-static objects but excludes echoes 114 of static objects that have been subtracted out from a corresponding full echo profile. Because the echoes 114 included in the analysis correspond to non-static objects, the timing at which subsequent ones of the echoes 114 are detected will not be consistent. As a result, the echoes 114 will not satisfy the first criterion of a consistent periodicity corresponding to the time interval between separate acoustic pulses. By contrast, because the acoustic pulses 112 are repeated at consistent intervals and sensed directly by the microphone 104, the acoustic pulses can be recognized as the reference signals as outlined above. In some examples, the computations of the analysis during the standby mode are simplified by ignoring the first criterion used during the active mode. That is, while both a consistent periodicity and an expected intensity of signals are used to identify and track reference signals in the active mode, individual reference signals are detected independently based on their intensity without reference to their relative spacing during the standby mode. This simplified approach during the standby mode provides a rough analysis for detecting objects that may be within the activation region that can then be confirmed or validated by the more robust and accurate methodology used during the active mode.
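The two lock-out criteria might be applied to a list of detected peaks roughly as follows; the timing tolerance and intensity threshold are assumed parameters, since the text specifies the criteria but not concrete values.

```python
import numpy as np

def find_reference_peaks(peak_times_s: np.ndarray, peak_levels: np.ndarray,
                         pulse_interval_s: float, level_threshold: float,
                         interval_tolerance_s: float = 0.002) -> np.ndarray:
    """Return times of peaks accepted as reference signals by applying the
    two criteria from the text: (1) repetition at the fixed pulse interval
    and (2) an intensity consistent with a directly sensed pulse."""
    # Criterion (2): keep only peaks strong enough to be pulses, not echoes.
    strong = peak_times_s[peak_levels >= level_threshold]
    refs = []
    for t in strong:
        # Criterion (1): another strong peak roughly one interval later.
        later = strong[strong > t]
        if later.size and np.any(
                np.abs(later - (t + pulse_interval_s)) <= interval_tolerance_s):
            refs.append(t)
    return np.asarray(refs)
```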
Alternatively, in some examples, the analysis during the standby mode may identify the reference signals based on their repeating nature at a consistent periodicity in a manner similar to the active mode. However, in some such examples, the reference signals may be identified with a lower threshold of confidence such that the process may be performed with reduced computational power relative to the active mode. When the computing device 100 is in the active mode and has identified the repeating reference signals (corresponding to the acoustic pulses 112) as described above, the device may be said to have "locked-in" to the reference signals and, therefore, may switch to the lock-in state. Typically, the lock-out state, during which the computing device 100 searches for and identifies the reference signals, lasts for a relatively short duration (e.g., less than a few seconds or even shorter) corresponding to a sufficient number of repetitions of the acoustic pulses 112 to enable the device to detect the repeating sequence to verify the first criterion mentioned above. The particular duration for the lock-out state may depend on the periodicity of the acoustic pulses 112 and the amount and/or nature of noise in the environment. For instance, as shown in the example non-static echo profile 400 of FIG. 4 , the first seven reference signals correspond to a lock-out period 402 and then all reference signals thereafter are associated with a lock-in period 404. In some examples, the first reference signal positively identified by the computing device 100 as satisfying the criteria indicative of a reference signal in the active mode is referred to herein as the pilot reference signal or simply pilot signal as identified by reference numeral 406 in FIG. 4 . The pilot reference signal 406 is used as a reference to identify and validate subsequent reference signals identified while the computing device 100 is operating in the lock-in state. That is, even after the reference signals have been identified within the non-static echo profile, the computing device 100 continues to monitor and detect subsequent reference signals to verify that the system remains synchronized to the timing at which the speaker 102 produces the acoustic pulses 112 by detecting subsequent reference signals at the expected frequency and intensity (within a certain threshold) corresponding to parameters defined by the pilot reference signal 406. As long as the computing device 100 is able to identify and verify each successive reference signal, the device 100 remains in the lock-in state. If the computing device 100 is unable to identify a reference signal at the expected point in time (based on the fixed periodicity of the signals), the computing device 100 may revert to the lock-out state to again search for the reference signals before returning to the lock-in state. In some examples, the computing device 100 may wait a threshold period of time after failing to identify an expected reference signal (e.g., the duration of a particular number (e.g., 2, 3, 5, etc.) of intervals of acoustic pulses 112) on the assumption that the missing reference signal(s) were lost due to an abnormal noise in the environment but can be detected again at the time expected for a subsequent reference signal. Aside from continuing to identify subsequent reference signals, the computing device 100 also performs object depth calculations on the echoes 114 contained in the non-static echo profile being analyzed.
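Before turning to those depth calculations, the lock-in/lock-out behavior just described can be sketched as a small state machine. This is a hypothetical reading rather than the patented implementation; the tolerances and the grace period of missed slots are illustrative:

```python
class ReferenceSignalTracker:
    """Sketch of lock-in/lock-out tracking: after a pilot reference is
    found, each subsequent reference must arrive near the expected time
    with an intensity close to the pilot's; after too many misses the
    tracker drops back to the lock-out (searching) state."""

    def __init__(self, interval, time_tol, level_tol, max_missed=3):
        self.interval = interval      # fixed pulse spacing (seconds)
        self.time_tol = time_tol      # timing tolerance (illustrative)
        self.level_tol = level_tol    # intensity tolerance (illustrative)
        self.max_missed = max_missed  # grace period before reverting
        self.locked = False
        self.pilot_level = None
        self.next_expected = None
        self.missed = 0

    def lock_in(self, pilot_time, pilot_level):
        """Record the pilot reference signal and enter the lock-in state."""
        self.locked = True
        self.pilot_level = pilot_level
        self.next_expected = pilot_time + self.interval
        self.missed = 0

    def observe_slot(self, peak_time=None, peak_level=None):
        """Report the peak (if any) nearest the next expected slot.
        Returns True while the tracker remains locked in."""
        hit = (peak_time is not None and peak_level is not None
               and abs(peak_time - self.next_expected) <= self.time_tol
               and abs(peak_level - self.pilot_level) <= self.level_tol)
        self.missed = 0 if hit else self.missed + 1
        self.next_expected += self.interval
        if self.missed > self.max_missed:
            self.locked = False  # revert to lock-out and search again
        return self.locked
```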
As mentioned above, the time delay between an acoustic pulse 112 (e.g., a reference signal) and an echo 114 of the acoustic pulse 112 is proportional to the distance of the object reflecting the echo 114. As a result, by determining the time between a reference signal and a following echo, the computing device 100 may determine the distance of an associated object from the computing device 100. This can be expressed mathematically as follows:

EchoDuration[m] = EchoTime[m] − ReferenceTime[n]
EchoDepth[m] = EchoDuration[m] / DepthScaleFactor

where ReferenceTime[n] is the time index of the reference signal preceding the particular echo 114 being analyzed, and EchoTime[m] is the time index of the particular echo 114. For the above equations to hold, EchoTime[m] must be greater than ReferenceTime[n], which ensures that EchoDuration[m] is greater than 0. In some examples, when the calculated distance of an object (e.g., EchoDepth[m]) is less than the threshold distance 118 for the activation region 116, the computing device 100 may generate an output that causes the activation or initiation of an operation associated with an object being detected in the activation region 116. Experimental testing has shown that teachings disclosed herein provide accurate and robust results regardless of the level or nature of noise in the environment. Particular test results are shown in the table 500 of FIG. 5 . As shown in the table 500, accuracy remained at or above 90% across all different types of noise. Further, false positives did not occur under any type of noise conditions. As mentioned above, examples disclosed herein are specifically designed to reduce (e.g., prevent) false positives from occurring because a false positive means that an object was incorrectly detected as being within the activation region 116 of FIG. 1 , thereby incorrectly triggering the operation associated with the presence of an object. FIG. 6 illustrates an example implementation of the example computing device 100 of FIG. 1 . As shown in the illustrated example, the computing device 100 includes the example speaker 102, the example microphone 104, an example signal generator 602, an example echo profile generator 604 (that includes an example signal filter analyzer 606, an example signal smoothening analyzer 608, and an example signal peak detector 610), an example environment profile analyzer 612, an example object detection analyzer 614 (that includes an example power state controller 616, an example echo profile comparator 618, an example proximity calculator 620, an example reference signal identifier 622, and an example timer 624), an example activation operation controller 626, and an example database 628. The example signal generator 602 controls the excitation of the speaker 102 to produce the acoustic pulses 112 at fixed intervals corresponding to the current sensing mode (e.g., active or standby) in which the computing device 100 is operating. In some examples, the signal generator 602 generates acoustic pulses based on Equations 1-4 outlined above. In some examples, to reduce computational burdens, the signal generator 602 simply causes a recording of the acoustic pulses 112 to be played at the relevant periodicity. In such examples, the recording may be stored in the example database 628.
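As a worked complement to the EchoDuration/EchoDepth equations above, the following sketch computes a depth from a reference/echo time pair. The source leaves DepthScaleFactor abstract; the interpretation below, taking it as 2 divided by the speed of sound to account for the round trip, is an assumption for illustration:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate value in air at ~20 degrees C (assumption)

def echo_depth_m(echo_time_s, reference_time_s):
    """Distance implied by the delay between a reference signal and its echo.
    DepthScaleFactor is taken as 2 / speed_of_sound (assumption), since the
    pulse travels to the reflecting object and back."""
    echo_duration = echo_time_s - reference_time_s
    if echo_duration <= 0:
        raise ValueError("echo must follow its reference signal")
    depth_scale_factor = 2.0 / SPEED_OF_SOUND_M_PER_S
    return echo_duration / depth_scale_factor
```

Under this interpretation, a delay of 3 milliseconds corresponds to a depth of roughly 0.5 meters.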
The example echo profile generator 604 of FIG. 6 performs the preprocessing of noise information captured by the microphone 104 to generate echo profiles associated with the current circumstances of the computing device 100. As shown in the illustrated example, the signal filter analyzer 606 processes the noise information based on one or more filters. In some examples, the filters include a band pass filter to remove all data outside of the near ultrasonic range (e.g., between 18 kHz and 24 kHz). The example signal smoothening analyzer 608 analyzes the noise information to define the signal envelope for the signal samples contained in the noise information. The example signal peak detector 610 analyzes the noise information to identify significant peaks within the noise information. The output of the echo profile generator 604 corresponds to the full echo profile associated with the current environment in which the computing device 100 is located. In some examples, the full echo profile is stored in the example database 628. In some examples, the environment profile analyzer 612 uses the full echo profile to generate a static environment echo profile that is limited to reference signals and echoes 114 reflected off of static objects in the environment. In some examples, the static environment echo profile is stored in the example database 628. In some examples, when there are no non-static objects in the environment, the static environment echo profile is the same as the full echo profile. Accordingly, in some examples, the environment profile analyzer 612 may simply store the full echo profile as the static environment echo profile after confirming there are no non-static objects represented in the profile. In other examples, where non-static objects are present in the environment, the example environment profile analyzer 612 may identify and remove echoes 114 corresponding to the non-static objects before storing the static environment echo profile. Further, in some examples, the environment profile analyzer 612 monitors changes to the full echo profile (as output by the echo profile generator 604) to determine whether there are changes to the static environment. If so, the environment profile analyzer 612 may update the static environment echo profile. In some examples, the circumstances associated with the computing device 100 at any given point in time may be such that the static environment echo profile cannot be reliably generated and/or updated due, for example, to frequent changes in the static environment and/or overly noisy conditions. In some such examples, the environment profile analyzer 612 may enter an error state to prevent inaccurate depth sensing and processing from occurring based on an invalid static environment echo profile. The example object detection analyzer 614 analyzes echo profiles generated by the echo profile generator 604 and the environment profile analyzer 612 to identify objects within the vicinity of the computing device 100. More particularly, in some examples, the object detection analyzer 614 is interested in determining whether an object is within the activation region 116 of the computing device 100. As described above, in some examples, object detection may be done in two stages corresponding to different power modes for the object detection analyzer 614, including a standby sensing mode and an active sensing mode. In the illustrated example, the power state controller 616 determines and controls when the object detection analyzer 614 is to operate in the standby mode and when to operate in the active mode.
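The filter-envelope-peaks pipeline described above, together with the static-profile subtraction used by the comparator described next, can be sketched as follows. The Butterworth band-pass and Hilbert-transform envelope are common choices assumed here for illustration; the source fixes neither the filter design nor the smoothing method, and the peak thresholds are placeholders:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

def full_echo_profile(samples, fs_hz, lo_hz=18_000.0, hi_hz=24_000.0):
    """Sketch of the three preprocessing steps: band-pass to the near
    ultrasonic range, envelope extraction, then peak picking.
    fs_hz must exceed 2 * hi_hz; at 48 kHz sampling, set hi_hz slightly
    below 24 kHz."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs_hz, output="sos")
    band = sosfiltfilt(sos, np.asarray(samples, dtype=float))  # signal filter analyzer
    envelope = np.abs(hilbert(band))                           # signal smoothening analyzer
    peaks, _ = find_peaks(envelope,
                          height=0.1 * envelope.max(),          # illustrative threshold
                          distance=max(1, int(0.002 * fs_hz)))  # illustrative min spacing
    return envelope, peaks

def non_static_echo_profile(full_profile, static_profile):
    """Residual profile after removing the static environment component
    (illustrative elementwise subtraction clipped at zero)."""
    return np.clip(full_profile - static_profile, 0.0, None)
```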
The example echo profile comparator 618 compares the current full echo profile with the current static echo profile to identify any differences. Differences between the full echo profile and the current static echo profile may be indicative of a non-static object within the environment. Accordingly, when differences are identified, the echo profile comparator 618 causes subsequent processing and analysis to confirm whether the non-static object is within the activation region 116. In some examples, the echo profile comparator 618 subtracts the static environment echo profile from the full echo profile to generate a non-static echo profile that is used during the subsequent analysis and processing. The example proximity calculator 620 determines a proximity or distance of an object reflecting an echo 114 represented in the non-static echo profile based on the duration of time between a particular reference signal in the profile and a following echo signal. The example reference signal identifier 622 identifies and tracks reference signals as they occur in the non-static echo profile to maintain synchronization with the timing of when the acoustic pulses 112 (associated with the reference signals) are generated by the speaker 102. As described above, the active mode is associated with two internal states including the lock-in state and the lock-out state. The example reference signal identifier 622 determines when and whether the object detection analyzer 614 is to switch between the lock-in and lock-out states based on whether the reference signals can be identified from the input stream of the microphone 104. In some examples, switching from the lock-in state to the lock-out state after the reference signals are lost is based on the elapsing of a threshold period of time determined by the example timer 624. The example activation operation controller 626 implements or causes to be implemented an operation in the computing device 100 in response to the object detection analyzer 614 determining an object is within the activation region 116 of the computing device 100. In some examples, the operation includes waking up the computing device 100 from a low power sleep state or idle state to a full power active state. While an example manner of implementing the computing device 100 of FIG. 1 is illustrated in FIG. 6 , one or more of the elements, processes and/or devices illustrated in FIG. 6 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example speaker 102, the example microphone 104, the example signal generator 602, the example echo profile generator 604, the example signal filter analyzer 606, the example signal smoothening analyzer 608, the example signal peak detector 610, the example environment profile analyzer 612, the example object detection analyzer 614, the example power state controller 616, the example echo profile comparator 618, the example proximity calculator 620, the example reference signal identifier 622, the example timer 624, the example activation operation controller 626, the example database 628 and/or, more generally, the example computing device 100 of FIG. 1 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
Thus, for example, any of the example speaker 102, the example microphone 104, the example signal generator 602, the example echo profile generator 604, the example signal filter analyzer 606, the example signal smoothening analyzer 608, the example signal peak detector 610, the example environment profile analyzer 612, the example object detection analyzer 614, the example power state controller 616, the example echo profile comparator 618, the example proximity calculator 620, the example reference signal identifier 622, the example timer 624, the example activation operation controller 626, the example database 628 and/or, more generally, the example computing device 100 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example speaker 102, the example microphone 104, the example signal generator 602, the example echo profile generator 604, the example signal filter analyzer 606, the example signal smoothening analyzer 608, the example signal peak detector 610, the example environment profile analyzer 612, the example object detection analyzer 614, the example power state controller 616, the example echo profile comparator 618, the example proximity calculator 620, the example reference signal identifier 622, the example timer 624, the example activation operation controller 626, and/or the example database 628 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example computing device 100 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 6 , and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. A flowchart representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the computing device 100 of FIGS. 1 and/or 6 is shown in FIGS. 7-10 . The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor 1112 shown in the example processor platform 1100 discussed below in connection with FIG. 11 .
The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 1112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 1112 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 7-10 , many other methods of implementing the example computing device 100 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, etc. in order to make them directly readable and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein. In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. As mentioned above, the example processes of FIGS.
7-10 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. "Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. The example process of FIG. 7 begins at block 702 where the example power state controller 616 determines whether the object detection analyzer 614 is in the standby mode or the active mode. In some examples, when the process first begins, the object detection analyzer 614 begins in the standby mode. If the object detection analyzer 614 is in the standby mode, control advances to block 704 where the speaker 102 produces acoustic pulses (e.g., the acoustic pulses 112) at a long periodicity. As used in this context, the term "long periodicity" is used in a relative sense to the "short periodicity" discussed at block 706.
As described above, the acoustic pulses 112 may be generated based on signals from the example signal generator 602. In some examples, the long periodicity of the acoustic pulses 112 in the standby mode corresponds to a period or fixed interval between successive pulses of 250 milliseconds or four times per second. In other examples, the period may be 500 milliseconds or twice per second. After producing the acoustic pulses 112 at block 704, control advances to block 708. Returning to block 702, if the object detection analyzer 614 is in the active mode, control advances to block 706 where the speaker 102 produces acoustic pulses 112 at a short periodicity. In some examples, the short periodicity corresponds to a period or fixed interval between pulses of 125 milliseconds or eight times per second. After producing the acoustic pulses 112 at block 706, control advances to block 708. At block 708, the example microphone 104 captures noise information from the environment, including the acoustic pulses 112 and corresponding echoes (e.g., the echoes 114). At block 710, the example echo profile generator 604 generates and/or updates a full echo profile (e.g., the full echo profile 300 of FIG. 3 ) based on the acoustic pulses 112 and the corresponding echoes 114. Further detail regarding the implementation of block 710 is provided below in connection with FIG. 8 . At block 712, the example environment profile analyzer 612 determines whether a static environment echo profile (e.g., the static environment echo profile 200 of FIG. 2 ) is available and suitable for the current circumstances. If not, control advances to block 714 where the example environment profile analyzer 612 determines whether a suitable static environment echo profile can be generated based on current circumstances. If not, the example environment profile analyzer 612 enters an error state and prevents the process from proceeding by passing control back to block 702. If a suitable static environment echo profile 200 can be generated (block 714), control advances to block 716 where the example environment profile analyzer 612 generates and/or updates the static environment echo profile 200. Returning to block 712, if the static environment echo profile 200 is available and suitable for the current circumstances, control advances directly to block 716 to update the static environment echo profile 200. In some such examples, if the currently available static environment echo profile 200 does not need updating, block 716 may be skipped. At block 718, the example power state controller 616 determines whether the object detection analyzer 614 is in standby mode or active mode. If the object detection analyzer 614 is in the standby mode, control advances to block 720 where the example object detection analyzer 614 analyzes the data in the standby mode. Further detail regarding the implementation of block 720 is provided below in connection with FIG. 9 . If the object detection analyzer 614 is in the active mode, control advances to block 722 where the example object detection analyzer 614 analyzes the data in the active mode. Further detail regarding the implementation of block 722 is provided below in connection with FIG. 10 . After the completion of blocks 720 and 722 (described further below), control returns to block 702 to continue the example process. FIG. 8 illustrates an example implementation of block 710 of FIG. 7 to generate and/or update a full echo profile. The example process of FIG.
8 begins at block 802 where the example signal filter analyzer 606 processes the captured noise information through a band pass filter. At block 804, the example signal smoothening analyzer 608 applies a signal smoothener to the filtered noise information. At block 806, the example signal peak detector 610 identifies peaks in the processed signal. Thereafter, the example process of FIG. 8 ends and returns to continue with the process of FIG. 7 . FIG. 9 illustrates an example implementation of block 720 of FIG. 7 to analyze data in the standby mode. The example process of FIG. 9 begins at block 902 where the example echo profile comparator 618 generates a non-static echo profile based on the differences between the static environment echo profile 200 and the full echo profile 300. In some examples, the non-static echo profile corresponds to the static environment echo profile subtracted from the full echo profile. At block 904, the example echo profile comparator 618 determines whether the non-static echo profile includes at least one residual echo indicative of a non-static object. If not, the example process of FIG. 9 ends and returns to continue the process of FIG. 7 . If so, control advances to block 906 where the example proximity calculator 620 calculates the distance of the non-static object based on the time delay between the most recently detected reference signal and the at least one echo in the non-static echo profile. At block 908, the example proximity calculator 620 determines whether the distance of the object is within an activation region (e.g., the activation region 116 of FIG. 1 ). In some examples, the distance of the object is determined to be within the activation region when the distance is less than a threshold distance 118 defining the activation region 116. If the distance of the object is not within the activation region 116, the example process of FIG. 9 ends and returns to continue the process of FIG. 7 . If the distance of the object is within the activation region 116, control advances to block 910 where the example power state controller 616 transitions the object detection analyzer 614 to the active mode. Thereafter, the example process of FIG. 9 ends and returns to continue with the process of FIG. 7 . FIG. 10 illustrates an example implementation of block 722 of FIG. 7 to analyze data in the active mode. The example process of FIG. 10 begins at block 1002 where the example reference signal identifier 622 determines whether repeating reference signals have been identified. If the reference signal identifier 622 determines that repeating reference signals have not been identified (indicative of the lock-out state described above), control advances to block 1004 where the example reference signal identifier 622 analyzes the non-static echo profile to identify repeating reference signals satisfying the criteria associated with the acoustic pulses 112. As described above, in some examples, the criteria include that the intensity of the signals satisfies a threshold and that the signals have a fixed periodicity corresponding to the interval between the acoustic pulses 112. If, at block 1002, the reference signal identifier 622 determines that repeating reference signals have been identified (indicative of the lock-in state described above), control advances to block 1006 where the example reference signal identifier 622 determines whether a threshold period of time has elapsed since the last detected occurrence of the repeating reference signals.
In some examples, the threshold period of time may be longer than the fixed interval between successive ones of the acoustic pulses 112. As such, the threshold period of time having elapsed indicates that one or more reference signals were not detected when expected, to the point where the object detection analyzer 614 needs to revert to the lock-out state to again search for the reference signals. Accordingly, in such a situation, control advances to block 1004 to again identify the repeating reference signals. At block 1008, the example reference signal identifier 622 determines whether the repeating reference signals can be identified based on the current non-static echo profile. In some examples, there may not be enough data (e.g., enough cycles of the acoustic pulse 112) to reliably identify the reference signals. Thus, if the repeating reference signals cannot be identified, the example process of FIG. 10 ends and returns to continue the process of FIG. 7 to gather additional noise information including additional acoustic pulses. If the repeating reference signals can be identified at block 1008, control advances to block 1010 where the example database 628 stores parameters defining the repeating reference signals. In some examples, the parameters are defined by a pilot reference signal (e.g., the pilot reference signal 406). Thereafter, control advances to block 1012. Returning to block 1006, if the threshold period of time has not elapsed, control advances directly to block 1012. At block 1012, the example proximity calculator 620 calculates the distance of the non-static object based on the time delay between the reference signals and the corresponding echoes 114 in the non-static echo profile. In some examples, multiple distance calculations are performed for multiple successive echoes 114 associated with multiple successive reference signals. The multiple data points serve to increase the accuracy and reliability of the output. Furthermore, as this process is associated with the active mode when the acoustic pulses 112 are produced at a relatively short periodicity, there is a finer time resolution to further increase the accuracy of the analysis. At block 1014, the example proximity calculator 620 determines whether the distance of the object is within the activation region 116. If so, control advances to block 1016 where the example activation operation controller 626 generates an output to initiate an operation on the computing device 100 associated with an object being detected within the activation region. In some examples, the operation includes waking up the computing device 100 from a sleep or idle state. At block 1018, it is determined whether to continue the process. If so, the example process of FIG. 10 ends and returns to continue with the process of FIG. 7 . Otherwise, the example process of FIG. 10 ends as does the higher level process of FIG. 7 . Returning to block 1014, if the example proximity calculator 620 determines that the distance of the object is not within the activation region 116, control advances to block 1020 where the example power state controller 616 transitions the object detection analyzer 614 to the standby mode. Thereafter, the example process of FIG. 10 ends and returns to continue with the process of FIG. 7 . FIG. 11 is a block diagram of an example processor platform 1100 structured to execute the instructions of FIGS. 7-10 to implement the computing device 100 of FIGS. 1 and/or 6.
The processor platform 1100 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a headset or other wearable device, or any other type of computing device. The processor platform 1100 of the illustrated example includes a processor 1112. The processor 1112 of the illustrated example is hardware. For example, the processor 1112 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example signal generator 602, the example echo profile generator 604, the example signal filter analyzer 606, the example signal smoothening analyzer 608, the example signal peak detector 610, the example environment profile analyzer 612, the example object detection analyzer 614, the example power state controller 616, the example echo profile comparator 618, the example proximity calculator 620, the example reference signal identifier 622, the example timer 624, and the example activation operation controller 626. The processor 1112 of the illustrated example includes a local memory 1113 (e.g., a cache). The processor 1112 of the illustrated example is in communication with a main memory including a volatile memory 1114 and a non-volatile memory 1116 via a bus 1118. The volatile memory 1114 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 1116 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1114, 1116 is controlled by a memory controller. The processor platform 1100 of the illustrated example also includes an interface circuit 1120. The interface circuit 1120 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In the illustrated example, one or more input devices 1122 are connected to the interface circuit 1120. The input device(s) 1122 permit(s) a user to enter data and/or commands into the processor 1112. The input device(s) can be implemented by, for example, an audio sensor, a microphone (e.g., the microphone 104 of FIGS. 1 and/or 6), a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices 1124 are also connected to the interface circuit 1120 of the illustrated example. The output devices 1124 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker (e.g., the speaker 102 of FIGS. 1 and/or 6).
The interface circuit 1120 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor. The interface circuit 1120 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1126. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc. The processor platform 1100 of the illustrated example also includes one or more mass storage devices 1128 for storing software and/or data. Examples of such mass storage devices 1128 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. In this example, the mass storage device 1128 implements the example database 628. The machine executable instructions 1132 of FIGS. 7-10 may be stored in the mass storage device 1128, in the volatile memory 1114, in the non-volatile memory 1116, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that enable the robust and error resilient detection of objects within the vicinity of a computing device based on near ultrasonic sound waves (e.g., at frequencies ranging between 18 kHz and 24 kHz) despite the presence of environmental noises that are common at such frequencies. Further, examples disclosed herein are able to achieve reliable results using standard speakers, microphones, and processing circuitry commonly used in many computing devices today, thereby reducing costs relative to other methodologies that require specialized components. Further, examples disclosed herein are more power efficient than other known methodologies, thereby enabling examples disclosed herein to be implemented on a computing device that is in a low power idle state or sleep state. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by implementing an adaptive multi-stage object detection scheme that automatically switches between a low power and low compute standby mode and a slightly higher power and higher compute active mode (though still sufficiently low powered for implementation while the device is in an inactive sleep state). The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer. Example methods, apparatus, systems, and articles of manufacture to detect proximity of objects to computing devices using near ultrasonic sound waves are disclosed herein.
Further examples and combinations thereof include the following: Example 1 includes an apparatus comprising a signal generator to cause a speaker of a computing device to produce a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz, an echo profile generator to process noise information sensed by a microphone of the computing device, the noise information including the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device, and an object detection analyzer to determine whether a first object is within an activation region associated with the computing device based on the pulses and the echoes sensed by the microphone. Example 2 includes the apparatus of example 1, further including an activation operation controller to, in response to the detection of the first object within the activation region, implement an operation in the computing device. Example 3 includes the apparatus of example 2, wherein the operation includes transitioning the computing device from a first state to a second state, the first state being a lower power state than the second state. Example 4 includes the apparatus of any one of examples 1-3, further including an environment profile analyzer to generate a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device, the object detection analyzer to compare the echoes to the static environment echo profile, and identify a presence of the first object based on the comparison. Example 5 includes the apparatus of example 4, wherein the environment profile analyzer is to identify a change in the environment based on the echoes sensed by the microphone, and update the static environment echo profile based on the change in the environment. Example 6 includes the apparatus of any one of examples 4 or 5, wherein the echo profile generator is to generate a full echo profile based on the pulses and the corresponding echoes sensed by the microphone, the object detection analyzer is to remove static data corresponding to the static environment echo profile from the full echo profile to generate a non-static echo profile, and determine whether the first object is within the activation region based on the non-static echo profile. Example 7 includes the apparatus of any one of examples 1-6, wherein the signal generator is to cause the speaker to produce successive ones of the pulses spaced at first fixed intervals during a first time period and to, in response to the object detection analyzer detecting the first object within the activation region during the first time period, cause the speaker to produce additional ones of the pulses spaced at second fixed intervals during a second time period after the first time period, the second fixed intervals being shorter than the first fixed intervals, the object detection analyzer to verify the first object is within the activation region based on the pulses and corresponding echoes sensed during the second time period. Example 8 includes the apparatus of example 7, wherein the signal generator is to, in response to the object detection analyzer no longer detecting the first object within the activation region during the second time period, cause the speaker to produce additional ones of the pulses spaced at the first fixed intervals during a third time period after the second time period. Example 9 includes the apparatus of any one of examples 1-8,
wherein the echo profile generator is to generate a full echo profile based on the pulses and the corresponding echoes sensed by the microphone, and identify peaks in the full echo profile, different ones of the peaks corresponding to either the pulses or the corresponding echoes, the object detection analyzer to identify repeating reference signals based on the peaks identified in the full echo profile, the repeating reference signals corresponding to the pulses sensed by the microphone, identify an echo signal between separate occurrences of the repeating reference signals, the echo signal corresponding to one of the echoes, and determine whether the first object is within the activation region based on a time difference between the echo signal and a preceding one of the repeating reference signals. Example 10 includes the apparatus of example 9, wherein the object detection analyzer is to identify the repeating reference signals by identifying a first subset of the peaks associated with an intensity that satisfies a threshold, and identifying a second subset of the peaks from among the first subset that are detected at a periodicity corresponding to the fixed intervals of the pulses. Example 11 includes the apparatus of any one of examples 9 or 10, wherein the object detection analyzer is to identify the repeating reference signals at a first point in time, verify whether subsequent ones of the peaks identified after the first point in time are associated with an intensity and a periodicity corresponding to subsequent occurrences of the repeating reference signals, in response to verification that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, determine whether the first object is within the activation region, and in response to an inability to verify that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, inhibit the determination of whether the first object is within the activation region until the repeating reference signals are again identified at a second point in time. Example 12 includes a method comprising producing, via a speaker of a computing device, a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz, sensing, via a microphone of the computing device, the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device, and determining, by executing an instruction with at least one processor, whether a first object is within an activation region associated with the computing device based on the pulses and the echoes sensed by the microphone. Example 13 includes the method of example 12, further including, in response to the determination of the first object being within the activation region, implementing an operation in the computing device. Example 14 includes the method of example 13, wherein implementing the operation includes transitioning the computing device from a first state to a second state, the first state being a lower power state than the second state. Example 15 includes the method of any one of examples 12-14, further including generating a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device, comparing the echoes to the static environment echo profile, and identifying a presence of the first object based on the comparison. Example 16 includes the
method of example 15, further including identifying a change in the environment based on the echoes sensed by the microphone, and updating the static environment echo profile based on the change in the environment. Example 17 includes the method of any one of examples 15 or 16, further including generating a full echo profile based on the pulses and the corresponding echoes sensed by the microphone, removing static data corresponding to the static environment echo profile from the full echo profile to generate a non-static echo profile, and determining whether the first object is within the activation region based on the non-static echo profile. Example 18 includes the method of any one of examples 12-17, further including producing successive ones of the pulses spaced at first fixed intervals during a first time period, detecting the first object within the activation region based on the pulses and corresponding echoes sensed during the first time period, in response to detecting the first object within the activation region during the first time period, producing additional ones of the pulses spaced at second fixed intervals during a second time period after the first time period, the second fixed intervals being shorter than the first fixed intervals, and verifying the first object is within the activation region based on the pulses and corresponding echoes sensed during the second time period. Example 19 includes the method of example 18, further including, in response to no longer detecting the first object within the activation region during the second time period, producing additional ones of the pulses spaced at the first fixed intervals during a third time period after the second time period. Example 20 includes the method of any one of examples 12-19, further including generating a full echo profile based on the pulses and the corresponding echoes sensed by the microphone, identifying peaks in the full echo profile, different ones of the peaks corresponding to either the pulses or the corresponding echoes, identifying repeating reference signals based on the peaks identified in the full echo profile, the repeating reference signals corresponding to the pulses sensed by the microphone, identifying an echo signal between separate occurrences of the repeating reference signals, the echo signal corresponding to one of the echoes, and determining whether the first object is within the activation region based on a time difference between the echo signal and a preceding one of the repeating reference signals. Example 21 includes the method of example 20, wherein identifying the repeating reference signals includes identifying a first subset of the peaks associated with an intensity that satisfies a threshold, and identifying a second subset of the peaks from among the first subset that are detected at a periodicity corresponding to the fixed intervals of the pulses. Example 22 includes the method of any one of examples 20 or 21, further including identifying the repeating reference signals at a first point in time, verifying whether subsequent ones of the peaks identified after the first point in time are associated with an intensity and a periodicity corresponding to subsequent occurrences of the repeating reference signals, in response to verification that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, determining whether the first object is within the activation region, and in response to an inability to verify that the
subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, inhibiting the determination of whether the first object is within the activation region until the repeating reference signals are again identified at a second point in time. Example 23 includes a non-transitory computer readable medium comprising instructions that, when executed, cause a computing device to at least produce a series of pulses, successive ones of the pulses spaced at fixed intervals, ones of the pulses having a central frequency between 18 kHz and 24 kHz, sense the pulses and echoes of the pulses reflected off objects in a vicinity of the computing device, and determine whether a first object is within an activation region associated with the computing device based on the pulses and the echoes. Example 24 includes the non-transitory computer readable medium of example 23, wherein the instructions further cause the computing device to, in response to the determination of the first object being within the activation region, implement an operation in the computing device. Example 25 includes the non-transitory computer readable medium of example 24, wherein the operation includes transitioning the computing device from a first state to a second state, the first state being a lower power state than the second state. Example 26 includes the non-transitory computer readable medium of any one of examples 23-25, wherein the instructions further cause the computing device to generate a static environment echo profile based on ones of the echoes reflected off static objects in the vicinity of the computing device, compare the echoes to the static environment echo profile, and identify a presence of the first object based on the comparison. Example 27 includes the non-transitory computer readable medium of example 26, wherein the instructions further cause the computing device to identify a change in the environment based on the echoes, and update the static environment echo profile based on the change in the environment. Example 28 includes the non-transitory computer readable medium of any one of examples 26 or 27, wherein the instructions further cause the computing device to generate a full echo profile based on the pulses and the corresponding echoes, remove static data corresponding to the static environment echo profile from the full echo profile to generate a non-static echo profile, and determine whether the first object is within the activation region based on the non-static echo profile. Example 29 includes the non-transitory computer readable medium of any one of examples 23-28, wherein the instructions further cause the computing device to produce successive ones of the pulses spaced at first fixed intervals during a first time period, detect the first object within the activation region based on the pulses and corresponding echoes sensed during the first time period, in response to detecting the first object within the activation region during the first time period, produce additional ones of the pulses spaced at second fixed intervals during a second time period after the first time period, the second fixed intervals being shorter than the first fixed intervals, and verify the first object is within the activation region based on the pulses and corresponding echoes sensed during the second time period. Example 30 includes the non-transitory computer readable medium of example 29, wherein the instructions further cause the computing device to, in response to no longer
detecting the first object within the activation region during the second time period, produce additional ones of the pulses spaced at the first fixed intervals during a third time period after the second time period. Example 31 includes the non-transitory computer readable medium of any one of examples 23-30, wherein the instructions further cause the computing device to generate a full echo profile based on the pulses and the corresponding echoes, identify peaks in the full echo profile, different ones of the peaks corresponding to either the pulses or the corresponding echoes, identify repeating reference signals based on the peaks identified in the full echo profile, the repeating reference signals corresponding to the pulses, identify an echo signal between separate occurrences of the repeating reference signals, the echo signal corresponding to one of the echoes, and determine whether the first object is within the activation region based on a time difference between the echo signal and a preceding one of the repeating reference signals. Example 32 includes the non-transitory computer readable medium of example 31, wherein the instructions further cause the computing device to identify the repeating reference signals by identifying a first subset of the peaks associated with an intensity that satisfies a threshold, and identifying a second subset of the peaks from among the first subset that are detected at a periodicity corresponding to the fixed intervals of the pulses. Example 33 includes the non-transitory computer readable medium of any one of examples 31 or 32, wherein the instructions further cause the computing device to identify the repeating reference signals at a first point in time, verify whether subsequent ones of the peaks identified after the first point in time are associated with an intensity and a periodicity corresponding to subsequent occurrences of the repeating reference signals, in response to verification that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, determine whether the first object is within the activation region, and in response to an inability to verify that the subsequent ones of the peaks correspond to the subsequent occurrences of the repeating reference signals, inhibit the determination of whether the first object is within the activation region until the repeating reference signals are again identified at a second point in time. Example 34 includes a computing device comprising a speaker to produce a series of repeating pulses at a consistent periodicity, the repeating pulses having a central frequency between 18 kHz and 24 kHz, a microphone to sense noise information including the pulses and echoes of the pulses reflected off objects in an environment surrounding the computing device, and at least one processor to determine a proximity of a first one of the objects based on the noise information. Example 35 includes the computing device of example 34, wherein the at least one processor is to, in response to detection of the first object within an activation region associated with the computing device, implement an operation in the computing device. Example 36 includes the computing device of example 35, wherein the operation includes waking up the computing device from a sleep state to an active state. Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto.
On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. |
The invention relates to a high frequency ceramic package with an improved pheasant wall and metal layer architecture. In some examples, a semiconductor package (500) includes a ceramic substrate (100) and a horizontal metal layer (104) covered by the ceramic substrate. The metal layer is configured to carry signals in a frequency range of 5 GHz to 38 GHz. The package also includes a vertical castellated wall (110) on the outer surface of the ceramic substrate, the castellated wall coupled to the metal layer and having a height in the range of from 0.10 mm to 0.65 mm. |
1. A semiconductor package, comprising:ceramic substrate;a horizontal metal layer covered by the ceramic substrate, the metal layer configured to carry signals in the frequency range of 5 GHz to 38 GHz; andA vertical castellation on an outer surface of the ceramic substrate, the castellation coupled to the metal layer and having a height in the range of 0.10 mm to 0.65 mm.2. The semiconductor package of claim 1, further comprising a second horizontal metal layer covered by the ceramic substrate and coupled to the horizontal metal layer by one or more vias layer, the second horizontal metal layer is configured to carry signals in the frequency range of 5 GHz to 38 GHz, the horizontal metal layer is coupled to the castellation at a first location, and the second horizontal metal layer is at a second position to couple to the castellation wall.3. The semiconductor package of claim 2, wherein the first location and the second location are separated by a vertical distance of at least 50% of the height of the castellation wall.4. The semiconductor package of claim 3, wherein the vertical distance is 100% of the height of the castellation.5. The semiconductor package of claim 4, wherein the second horizontal metal layer is above the horizontal metal layer, and wherein the second horizontal metal layer is not the topmost layer covered by the ceramic substrate metal layer.6. A semiconductor package comprising:ceramic substrate;A first horizontal metal layer and a second horizontal metal layer, which are covered by the ceramic substrate and coupled to each other through one or more via holes, the first metal layer and the second metal layer are configured to carry 5 GHz to Signals in the 38GHz frequency range; anda vertical castellation on an outer surface of the ceramic substrate, the castellation coupled to the first metal layer at a first location and coupled to the second metal layer at a second location, the The first location and the second location are separated by a vertical distance of at least 50% of the height of the castellation wall.7. The semiconductor package of claim 6, wherein the castellation has a height in the range of 0.10 mm to 0.65 mm.8. The semiconductor package of claim 6, wherein the vertical distance is 100% of the height of the castellation.9. The semiconductor package of claim 8, wherein the second metal layer is not a topmost metal layer covered by the ceramic substrate.10. The semiconductor package of claim 6, wherein the first metal layer is a bottommost metal layer covered by the ceramic substrate.11. The semiconductor package of claim 6, wherein the second metal layer is a topmost metal layer covered by the ceramic substrate.12. An electronic device comprising:Printed circuit boards or PCBs with conductive traces; anda semiconductor package coupled to the PCB and the conductive traces by a solder fillet, the semiconductor package comprising:ceramic substrate;semiconductor die; anda horizontal metal layer covered by the ceramic substrate and coupled to the semiconductor die through one or more vias, the metal layer configured to carry signals in the frequency range of 5 GHz to 38 GHz, the metal layer coupled to the solder fillets without coupling to the vertical castellation walls. |
High Frequency Ceramic Package with Improved Castellation and Metal Layer ArchitectureBackground techniqueSemiconductor chips are packaged in packages that protect the chips from harmful environmental influences such as heat, moisture, and debris. A packaged chip typically communicates with electronics outside the package via conductive members (eg, leads) exposed to the surface of the package. Some packages include a substrate on which a semiconductor die is positioned. The substrate may include multiple metal layers or traces that carry electrical signals or power.Contents of the inventionIn some examples, a semiconductor package includes a ceramic substrate and a horizontal metal layer covered by the ceramic substrate. The metal layer is configured to carry signals in the frequency range of 5GHz to 38GHz. The package also includes a vertical castellation on the outer surface of the ceramic substrate, the castellation coupled to the metal layer and having a height ranging from 0.10 mm to 0.65 mm.In some examples, a semiconductor package includes a ceramic substrate, and first and second horizontal metal layers covered by the ceramic substrate and coupled to each other through one or more vias. The first metal layer and the second metal layer are configured to carry signals in a frequency range of 5GHz to 38GHz. The package also includes a vertical castellation on the outer surface of the ceramic substrate, the castellation coupled to the first metal layer at a first location and coupled to the second metal layer at a second location, the first location and The second locations are separated by a vertical distance of at least 50% of the height of the castellation wall.In some examples, an electronic device includes a printed circuit board (PCB) having conductive traces, and a semiconductor package coupled to the PCB and the conductive traces by solder fillets. The semiconductor package includes a ceramic substrate, a semiconductor die, and a horizontal metal layer covered by the ceramic substrate and coupled to the semiconductor die through one or more vias. The metal layer is configured to carry signals in the frequency range of 5GHz to 38GHz. This metal layer is coupled to the solder fillet and not to the vertical castellation.Description of drawingsFor a detailed description of various examples, reference will now be made to the accompanying drawings, in which:1A-1C are perspective, top and cross-sectional views of a semiconductor package according to various examples.2 is a network diagram of metal layers and vias in a semiconductor package according to various examples.3A1-3E1 are perspective views of metal layers in a semiconductor package according to various examples, and FIGS. 
3A2-3E2 are top views of metal layers in a semiconductor package according to various examples.4 is a graph depicting improvements in insertion loss associated with semiconductor packages according to various examples.5A-5C are perspective, top and cross-sectional views of a semiconductor package according to various examples.6 is a network diagram of metal layers and vias in a semiconductor package according to various examples.7A-7C are perspective, top and cross-sectional views of a semiconductor package according to various examples.8 is a network diagram of metal layers and vias in a semiconductor package according to various examples.9 is a graph depicting improvements in insertion loss associated with semiconductor packages according to various examples.10 is a graph depicting phase change improvements associated with semiconductor packages according to various examples.11 is a graph depicting phase and amplitude improvements associated with semiconductor packages according to various examples.12 is a graph depicting peaking and loss improvements associated with semiconductor packages according to various examples.13A-13C are perspective, top and cross-sectional views of a semiconductor package according to various examples.14 is a network diagram of metal layers and vias in a semiconductor package according to various examples.15 is a graph depicting improvements in insertion loss associated with semiconductor packages according to various examples.16 is a flowchart of a method according to various examples.17 is a block diagram of an electronic device according to various examples.Detailed waysA ceramic semiconductor package is a hermetically sealed package comprising a ceramic substrate covered with multiple metal layers. The ceramic substrate in such packages may include a cavity at the top of the package, and the semiconductor die may be positioned on the floor of the cavity. The metal layers of the ceramic substrate may be coupled to each other and to the semiconductor die by a network of metal vias. The one or more metal layers may be configured to carry high frequency signals, such as signals in the frequency range of 5 gigahertz (GHz) to 38 GHz.The bottom metal layer in the ceramic substrate is used to couple the metal layer, the network of vias, and the semiconductor die inside the package to the electronic components outside the package (eg, conductive traces on a printed circuit board (PCB)). However, the bottom metal layer is thin, making it difficult to couple the bottom metal layer to conductive traces on the PCB using solder fillets. To facilitate the coupling of the solder fillet to the bottom metal layer of the package, vertical conductive members known as castellations are provided on the outer surface of the package. The castellation is coupled to the bottom metal layer, thus providing a larger and more vertical surface area to which the solder fillet can couple. In this way, the solder fillet forms a more mechanically stable connection with the bottom metal layer of the package.However, the structural configuration of the crenelated walls introduces significant disadvantages. In particular, the thin horizontal bottom metal layer of the package is coupled to the vertical castellation, and the vertical castellation and bottom metal layer are coupled together to a solder fillet, which in turn is coupled to a conductive trace on the PCB. 
The resulting structure can be broadly described as a thin conductive base layer and vertical features (eg, castellation and solder fillets) coupled to the thin conductive base layer that substantially form a conductive "T" shape. When carrying signals in the GHz range, this structure behaves as a quarter-wavelength resonator, which means that the resonances created by the structure (and more specifically, by the vertical crenellations) significantly attenuate high-frequency signals and This results in very problematic insertion loss in the frequency band of interest (eg, 20GHz to 30GHz). Insertion loss can have a significant negative impact on package performance.Disclosed herein are various examples of ceramic packages with improved castellation and/or improved metal layer architectures that alleviate the above-mentioned challenges. The ceramic packages described herein enable the use of crenelated walls in high frequency applications (eg, providing a stable solder fillet connection) while addressing the quarter-wave resonance and insertion loss challenges described above. In some examples, the castellation height is reduced relative to that in other solutions, thereby pushing the resonant frequency outside the frequency band of interest and mitigating insertion loss. In some examples, multiple metal layers carrying high frequency signals are coupled to the castellation walls, thereby reducing the length of the castellation walls that can generate resonant signals. Therefore, the resonant frequency is pushed outside the frequency band of interest and insertion loss is mitigated. In some examples, the castellation height is reduced as described above, and multiple metal layers are coupled to the castellation as described above, thereby achieving a significant reduction in insertion loss. In some examples, the castellation is omitted, resulting in a significant reduction in insertion loss.FIG. 1A is a perspective view of a semiconductor package 98 according to various examples. In some examples, package 98 is a ceramic package including ceramic substrate 100 . In some examples, enclosure 98 may be hermetically sealed. In some examples, package 98 includes a plurality of conductive contacts 102 . Conductive contacts 102 are adapted to be coupled to a semiconductor die (not explicitly shown). For example, the conductive contacts 102 may extend through or be exposed to the bottom surface of the cavity 103 in the ceramic substrate 100 . A semiconductor die may be positioned in cavity 103 and coupled to conductive contacts 102 .Conductive contacts 102 are coupled to metal layer 104 and a network of vias 108 in package 98 . The specific configuration of the metal layer 104 and the network of vias 108 may vary depending on the application. Vias 108 couple the different metal layers 104 to each other, and at least some of the metal layers 104 terminate in conductive members, such as vertical castellations 110 , exposed outside of the package 98 . In this manner, semiconductor die within package 98 can communicate with and/or receive power from electronic devices external to package 98 . Metal layers 104 may have different configurations and may be positioned in different horizontal planes relative to each other. At least some of the metal layers 104 are configured to carry high frequency signals, such as signals in the range of 5 GHz to 38 GHz. Metal layer 104 may include conductive traces, such as conductive trace 106 , configured to carry high frequency signals. 
Although exemplified using numeral 106 , conductive trace 106 is an instance or portion of metal layer 104 . Conductive trace 106 is coupled to conductive contact 102 , or alternatively, conductive trace 106 is coupled to other conductive members (eg, vias 108 ) that are coupled to conductive contact 102 . The conductive traces 106 may terminate in the castellation 110 like the other metal layers 104 . In some examples, only one metal layer 104 (eg, the bottom-most metal layer of metal layers 104 ) is coupled to castellation 110 . In some examples, two metal layers 104 are coupled to castellation 110 . In some examples, three or more metal layers 104 are coupled to castellation 110 . In the example of FIGS. 1A-1C , the bottommost metal layer 104 and conductive trace 106 are coupled to the castellation 110 , and the remaining metal layers 104 are not coupled to the castellation 110 .As mentioned above, in other solutions, vertical castellation may create undesired resonances of high frequency signals in frequency bands of interest (eg, frequency bands intended for a particular application). Thus, in some examples, multiple metal layers may be coupled to the castellation, thereby reducing the length of the castellation in which resonant signals may be generated. Because the length of the castellation wall in which the resonant signal may be generated is reduced, the resonant frequency is increased and pushed beyond the frequency range of interest. For example, as shown in FIG. 1A , the bottommost metal layer 104 and conductive traces 106 are coupled to the castellation 110 , while the remaining metal layers 104 are not coupled to the castellation 110 . Because multiple metal layers 104 (including conductive traces 106 ) are coupled to the castellations 110 , the portion of each castellation 110 between points of contact with the metal layers 104 does not resonate. Instead, only the portion of each castellation 110 that extends above the metal layer 104 furthest away from the bottommost metal layer 104 resonates. The length of the part of the castellation wall 110 that resonates is important because it determines the resonant frequency according to the expression (1):where L is the length of the portion of the crenelated wall 110 that resonates, λ is the wavelength of the signal in the crenelated wall 110, c is the speed of light in vacuum, fres is the resonant frequency generated by the crenelated wall 110, and ε is the surrounding metal layer The dielectric constant of the ceramic material. In examples including multiple metal layers 104 coupled to the castellation 110 (eg, bottommost metal layer 104 and conductive traces 106 ), L is reduced and equal to the The length of the castellation wall 110 . When L is decreased, fres increases. Therefore, L can be controlled to generate fres beyond the frequency band of interest. When fres exceeds the frequency band of interest, the insertion loss also exceeds the frequency band of interest, thereby significantly improving the insertion loss within the frequency band of interest. FIG. 1B is a top view of the structure of FIG. 1A , and FIG. 1C is a cross-sectional view of the structure of FIG. 1A .FIG. 2 is a simplified schematic diagram of a network of metal layers and vias in a ceramic substrate 100 . In particular, FIG. 2 shows conductive contacts 102 , vias 108 coupled to conductive contacts 102 , metal layer 104 coupled to various vias 108 , and castellation 110 . 
Metal layer 104 includes metal layers 104a and 104b located in different horizontal planes. Metal layer 104a is the bottommost metal layer 104 in ceramic substrate 100, while metal layer 104b is neither the topmost nor the bottommost metal layer 104 in ceramic substrate 100 (although in some examples, metal layer 104b may be a ceramic the topmost metal layer 104 in the substrate 100). Metal layer 104b includes conductive traces 106 (FIG. 1A). Both metal layers 104a, 104b are coupled to the castellation wall 110 . The remaining metal layers 104 are not coupled to the castellation 110 , although in some examples additional metal layers 104 may be coupled to the castellation 110 .In operation and as indicated by the arrows shown in FIG. 2 , a semiconductor die coupled to conductive contact 102 provides a high frequency signal (eg, 5 GHz to 38 GHz) to via 108 , which in turn provides a signal to metal layer 104 b. Signal. Vias 108 provide signals to metal layer 104a. The metal layers 104a and 104b provide high frequency signals to the castellation wall 110 . The section of castellation 110 between metal layers 104a and 104b does not produce a resonant signal, but the section of castellation 110 extending above metal layer 104b does produce a resonant signal. However, the coupling of the metal layer 104b to the castellation 110 reduces the portion of the castellation 110 that will resonate from the entire castellation 110 to only the section of the castellation 110 that extends above the metal layer 104b. Therefore, the amount L in the above expression (1) decreases, so that fres in the expression (1) increases. The specific location at which the metal layer 104b is coupled to the castellation 110 can be tuned to result in a value of L that produces a fres value outside the frequency band of interest.The distance between where the metal layers 104a and 104b contact the castellation wall 110 can vary, but will be at least 50% of the overall height of the castellation wall 110 . Distances below this range are disadvantageous at least because it results in unacceptably low resonance frequencies and thus unacceptably insertion loss in the frequency band of interest. In some examples, this distance is 100% of the overall height of the castellation wall 110 for optimal insertion loss mitigation.3A1-3E1 are perspective views of metal layers in a semiconductor package according to various examples, and FIGS. 3A2-3E2 are top views of metal layers in a semiconductor package according to various examples. Each pair of figures (eg, 3A1 and 3A2; 3B1 and 3B2; etc.) The portion of the conductive trace 106 of the via 108 of the wall 110 .FIG. 4 is a graph depicting improvements in insertion loss associated with package 98 . Curve 400 depicts insertion loss as a function of frequency of a signal carried through a network of metal layers and vias in a conventional package. As shown, the insertion loss is significant in the 20GHz to 30GHz range, which is the frequency range where the castellation of conventional packages resonates. Insertion losses in the 45GHz and above ranges are not relevant because they are outside the frequency band of interest (for example, 5GHz to 38GHz). Curve 402 depicts insertion loss as a function of frequency of a signal carried through the network of metal layers and vias in package 98 . As shown, the insertion losses still exist, but they have been pushed to and beyond the upper end of the frequency band of interest (eg, 5GHz to 38GHz). 
In the frequency band of interest, eg, from 5 GHz to 38 GHz, the insertion loss in package 98 is generally better than that in conventional packages.As noted above, the length of the section of castellation 110 that generates the resonant signal determines the resonant frequency. Therefore, reducing the length of this section L by coupling another metal layer 104 to the castellation 110 (expression (1) above) increases the resonance frequency fres to a range outside the frequency band of interest. In some examples, however, this principle is exploited differently. Specifically, instead of coupling another metal layer 104 to the castellation 110 to reduce L as described above, in some examples, L may be reduced by reducing the height of the castellation 110 . In such an example, a single metal layer 104 (eg, the bottommost metal layer 104, such as metal layer 104a in FIG. 2 ) is coupled to the castellation 110, but the height of the castellation 110 is reduced, thereby reducing L in (1) also achieves improvement in insertion loss in the frequency band of interest as described above.FIG. 5A is a perspective view of a semiconductor package 500 according to various examples. Package 500 is similar but not identical to package 98 described above, wherein like reference numerals refer to like components, except for the exceptions described below. In FIG. 5A , only the bottommost metal layer 104 is coupled to the castellation 110 . However, the height of the castellation wall 110 is reduced relative to that used in other solutions. The height of the castellation 110, measured from the bottom surface of the ceramic substrate 100, is in the range of 0.10 mm to 0.65 mm, heights above this range result in unacceptably high levels of resonance and insertion loss, and heights below this range result in The height of the solder fillet creates an unacceptably low level of mechanical stability as the solder fillet is used to couple the castellation 110 to the PCB. 5B is a top view of the structure of FIG. 5A, and FIG. 5C is a cross-sectional view of the structure of FIG. 5A. 6 is a schematic diagram of a network of metal layers and vias in a semiconductor package according to various examples. As shown in FIG. 6 , the overall height of the castellation wall 110 is reduced compared to other solutions in which the castellation wall 110 generally extends along the entire height of the ceramic substrate 100 . The height of the castellation wall 110 is within the ranges provided above. Furthermore, as shown, the only metal layer 104 in contact with the castellation 110 is the bottommost metal layer 104 , although in some examples a different metal layer 104 may contact the castellation 110 . As the height of the castellation wall 110 is reduced, L (expression (1) above) is reduced, thereby increasing the resonant frequency Fres (expression (1) above) and alleviating the insertion loss challenge described above.The reduced castellation height of package 500 and the multiple metal layers of package 98 to the castellation contacts may be combined to mitigate the insertion loss described above. These insertion losses are mitigated since the distance L in expression (1) is reduced relative to other existing solutions. FIG. 7A is a perspective view of a semiconductor package 700 according to various examples. Package 700 is similar but not identical to packages 98 and 500 described above, wherein like reference numerals refer to like components, except for the exceptions described below. 
In package 700, the height of castellation 110 is reduced as in package 500 (FIGS. 5A-5C and 6), and as in package 98 (FIGS. 1A-1C and 2), There are a plurality of metal layers 104 in contact with the castellation walls 110 . 7B is a top view of the structure of FIG. 7A , FIG. 7C is a cross-sectional view of the structure of FIG. 7A , and FIG. 8 is a schematic diagram of the network of metal layers and vias in package 700 . In the package 700, the height of the castellation wall 110 ranges from 0.10mm to 0.65mm, and castellation wall heights outside this range have the disadvantages described above. In addition, the distance between the points where the metal layers 104a and 104b contact the castellation wall 110 ( FIG. 8 ) can vary, but is at least 50% of the overall height of the castellation wall 110 . Distances below this range are disadvantageous at least because it results in unacceptably low resonance frequencies and thus unacceptably insertion loss in the frequency band of interest. In some examples, this distance is 100% of the overall height of the castellation wall 110 for optimal insertion loss mitigation.FIG. 9 is a graph depicting improvements in insertion loss associated with a semiconductor package 700 according to various examples. Curve 900 plots insertion loss as a function of operating frequency in the existing solution, and curve 902 plots insertion loss as a function of operating frequency in package 700 . As shown, both curves 900, 902 exhibit insertion loss, but the insertion loss in curve 902 is outside the frequency band of interest (eg, 5GHz to 38GHz). The insertion loss of curve 902 is generally better than curve 900 in the frequency band of interest (eg, 5GHz to 38GHz).10 is a graph depicting phase change improvements associated with semiconductor packages according to various examples. 11 is a graph depicting phase and amplitude improvements associated with semiconductor packages according to various examples. 12 is a graph depicting peaking and loss improvements associated with semiconductor packages according to various examples. In particular, the graph of FIG. 10 includes a top graph and a bottom graph. The top plot shows the phase behavior in degrees as a function of frequency in Hertz (Hz). The bottom plot shows loop gain in decibels (dB) as a function of frequency in Hz. Curves 1000 and 1004 illustrate the behavior of other solutions, and curves 1002 and 1006 illustrate the behavior of packages according to various examples of the present disclosure. Curve 1000 exhibits significant phase change, while curve 1002 exhibits greater phase stability. This improvement in phase stability is due to the absence of observed in-band resonances from package parasitics, resulting in improved package insertion loss and return loss. Curve 1004 exhibits less ringing, while curve 1006 exhibits greater ringing.The graph of FIG. 11 includes a top graph and a bottom graph. The bottom plot shows the phase characteristic in degrees as a function of frequency in Hertz (Hz). The top plot shows loop gain in decibels (dB) as a function of frequency in Hz. Curves 1100 and 1104 demonstrate the behavior of other solutions, and curves 1102 and 1106 demonstrate the behavior of packages according to various examples of the present disclosure. Curve 1104 exhibits significant phase change, while curve 1106 exhibits greater phase stability. This improvement in phase stability is due to the absence of observed in-band resonances from package parasitics. 
Curve 1100 exhibits more ringing, indicating less stability, while curve 1102 exhibits less ringing, indicating better stability.The graph of Figure 12 includes a top graph and a bottom graph. Top plot shows chip and package amplification gain in dB on the y-axis as a function of frequency in Hz, bottom plot shows chip, package and PCB amplification gain in dB on the y-axis Function of frequency in Hz. Curves 1200 and 1204 demonstrate the behavior of other solutions, while curves 1202 and 1206 demonstrate the behavior of packages according to various examples of the present disclosure. Curve 1200 exhibits better ringing and poorer amplification performance due to resonances induced by encapsulation castellations, while curve 1202 exhibits less ringing and improved Amplify performance. Curve 1204 exhibits greater ringing and poorer amplification performance due to resonances induced by the encapsulation castellations, while curve 1206 exhibits less ringing and improved amplification performance due to resonance mitigation using structures described herein .In some examples, one or more of the castellations may be omitted to eliminate resonances associated with the castellations. In such examples, the solder fillets on the PCB are directly coupled to one or more metal layers (eg, the bottommost metal layer in the package). FIG. 13A is a perspective view of a semiconductor package 1300 according to an example. As shown, the castellation walls are omitted from enclosure 1300 . Therefore, the above-mentioned resonance and accompanying insertion loss generated by the castellation wall 110 are absent in the package 1300, thereby significantly improving the insertion loss performance. FIG. 13B is a top view of the package 1300 , and FIG. 13C is a cross-sectional view of the package 1300 . 14 is a schematic diagram of a network of metal layers and vias in a semiconductor package according to various examples. As shown, the absence of castellation results in no resonance caused by the castellation. The bottommost metal layer 104a may be coupled directly to the solder fillet on the PCB, rather than to the castellation.FIG. 15 is a graph depicting improvements in insertion loss associated with a semiconductor package 1300 according to various examples. The y-axis represents insertion loss, while the x-axis represents frequency (in GHz). Curve 1500 shows the behavior of other solutions, while curve 1502 shows the behavior of package 1300 . As shown, curve 1500 exhibits significant insertion loss in the frequency band of interest (eg, 5GHz to 38GHz), while curve 1502 exhibits no significant insertion loss.16 is a flowchart of a method 1600 according to various examples. Method 1600 begins with layer-by-layer formation of a ceramic substrate array, including punching and drilling vias and castellation openings, filling the vias and castellation openings with metal, and screen printing metal layers (1602). Step 1602 is iteratively performed layer by layer until the ceramic substrate array is completed. The precise manner in which the holes are punched and filled with metal, and the precise screen-printed pattern used is application specific. Castellation and metal layers may be formed according to one or more examples described herein. Method 1600 includes performing a singulation technique on the array to produce individual ceramic substrates (1604). 
Method 1600 includes cofiring a ceramic substrate (1606) (eg, at temperatures up to 1600 degrees Celsius) and brazing and plating the ceramic substrate (1608). Method 1600 includes positioning a semiconductor die in a cavity of a ceramic substrate (1610) and covering the cavity with a lid using vacuum techniques to hermetically seal the cavity (1612).FIG. 17 is a block diagram of an electronic device 1700 according to various examples. Electronic device 1700 may include personal electronic devices (e.g., smartphones, laptops, desktop computers, tablets, notebooks, artificial intelligence assistants), appliances (e.g., refrigerators, microwave ovens, ovens, dishwashers), network or Enterprise-grade electronic equipment or systems (for example, servers, routers, modems, mainframe computers, wireless access points), automotive or aviation equipment or systems (for example, control panels, entertainment equipment, navigation equipment, power electronics), or various Any other electronic device or system. Electronic device 1700 may include PCB 1702 . A semiconductor package 1704 (eg, any package described herein) may be coupled to PCB 1702 .A device "configured" to perform a task or function may be configured (e.g., programmed and/or hardwired) at the time of manufacture by the manufacturer to perform that function and/or may be user-configurable (or reconfigurable) after manufacture. configurable) to perform this function and/or other additional or alternative functions. Configuration may be accomplished by firmware and/or software programming of the device, by construction and/or layout of hardware components and interconnection of the device, or a combination thereof. "About," "approximately," or "substantially" preceding a numerical value means +/- 10% of the stated numerical value, unless otherwise indicated. Modifications to the examples described are possible, and other examples are possible, within the scope of the claims. |
A processor includes a front-end with an instruction set that operates at a first bit width and a floating point unit coupled to receive the instruction set in the processor that operates at the first bit width. The floating point unit operates at a second bit width and, based upon a bit width assessment of the instruction set provided to the floating point unit, the floating point unit employs a shadowlatch configured floating point register file to perform bit width reconfiguration. The shadow-latch configured floating point register file includes a plurality of regular latches and a plurality of shadow latches for storing data that is to be either read from or written to the shadow latches. The bit width reconfiguration enables the floating point unit that operates at the second bit width to operate on the instruction set received at the first bit width. |
WHAT IS CLAIMED IS:1. A processor, comprising: a front-end with an instruction set operating at a first bit width; and a floating point unit coupled to receive the instruction set, the floating point unit operating at a second bit width, wherein, based upon a bit width assessment of the instruction set provided to the floating point unit, the floating point unit employs a shadow-latch configured register file to perform bit width reconfiguration.2. The processor of claim 1 , wherein: the bit width reconfiguration enables the floating point unit that operates at the second bit width to operate on the instruction set received at the first bit width.3. The processor of claim 1 , wherein: the shadow-latch configured register file includes a plurality of regular latches and a plurality of shadow latches for storing data that is to be either read from or written to the shadow latches.4. The processor of claim 3, wherein: at least one of the plurality of regular latches store a plurality of lower bits of a first bit width operation and at least one of the plurality of shadow latches store a plurality of upper bits associated with the first bit width operation.5. The processor of claim 4, wherein: the first bit width operation is a 512-bit width operation and the second bit width is 256 bits.6. The processor of claim 3, wherein: the shadow-latch configured register file includes a plurality of shadow multiplexers (MUXs) coupled to the plurality of shadow latches.
7. The processor of claim 6, wherein: during a read operation, at least one of the plurality of shadow MUXs are used to select at least one shadow latch of the plurality of shadow latches to read from. 8. The processor of claim 3, wherein: during a write operation, at least one shadow latch of the plurality of shadow latches is activated using a write control signal during a second clock cycle of a plurality of clock cycles.9. The processor of claim 3, wherein: during a read operation, the read operation of at least one shadow latch of the plurality of shadow latches is activated by a read control signal during a second clock cycle, the read control signal causing the shadow select multiplexers to select the shadow latches for the read operation during a second clock cycle of a plurality of clock cycles. 10. The processor of claim 1 , wherein: read logic is used to select a shadow latch to read data from plurality of shadow latches.11. The processor of claim 3, wherein: wherein the shadow latches are located in a single entry in the shadow-latch configured register file.12. A method, comprising: receiving an instruction set operating at a first bit width; operating a floating point unit at a second bit width; and based on a bit width assessment of the instruction set, employing a shadow- latch configured register file to perform bit width reconfiguration.
13. The method of claim 12, wherein: the bit width reconfiguration enables the floating point unit that operates at the second bit width to operate on the instruction set received at the first bit width. 14. The method of claim 12, wherein: the shadow-latch configured register file includes a plurality of regular latches and a plurality of shadow latches for storing data that is to be either read from or written to the shadow latches.15. The method of claim 14, further comprising: storing a plurality of lower bits of a first bit width operation in at least one of the plurality of regular latches and storing a plurality of upper bits associated with the first bit width operation in at least one of the plurality of shadow latches.16. The method of claim 14, wherein: the first bit width is a 512-bit width and the second bit width is a 256-bit width.17. The method of claim 14, wherein: the shadow-latch configured register file includes a plurality of shadow multiplexers (MUXs) coupled to the plurality of shadow latches.18. The method of claim 17, wherein: during a read operation, at least one of the plurality of shadow MUXs are used to select at least one shadow latch of the plurality of shadow latches to read from; and during a write operation, the write operation to at least one shadow latch of the plurality of shadow latches is activated using a write control signal during a second clock cycle of a plurality of clock cycles.
19. A floating point unit, comprising: a scheduler unit; and a shadow-latch configured register file coupled to the scheduler unit, wherein based upon a bit width assessment of an instruction set provided to the floating point unit at a first bit width, the floating point unit employs the shadow-latch configured register file to perform bit width reconfiguration using a second bit width.20. The processor of claim 1 , wherein: the shadow-latch configured register file includes a plurality of regular latches and a plurality of shadow latches for storing data that is to be either read from or written to the shadow latches. |
BIT WIDTH RECONFIGURATION USING A SHADOW-LATCH CONFIGURED REGISTER FILEBACKGROUNDProcessors employ various structures to store data for use during processing activities. One type of data structure is a register file. A typical register file stores data in functional latches that are associated with an entry that may be written to or read from in parallel. In order to access the data stored in the functional latches, typical processors utilize split renaming in order to “split” registers into high bit registers and low bit registers. Split-renaming allows the processor to implement registers wider than the native width of the processor. In particular, the high bit portion of a register and low bit portion of the register are assigned different identifiers, or names, by the microprocessor, so that the register is logically treated as two different registers. For example, several currently available microprocessors split rename 256-bit registers into a high 128-bit register and a low 128-bit register. Split renaming registers into high and low registers results in an increase in the amount register space required to perform computational operations. For example, split renaming the 256-bit register described above into high and low 128-bit registers requires twice the number of entries and area in the physical register file. The increased size of the physical register file required for split renaming results in an increase in manufacturing costs as more microprocessor space is required to perform the split renaming operations.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.FIG. 1 is a block diagram of a processor core that supports bit width reconfiguration of a register using shadow latches in accordance with some embodiments.
FIG. 2 is a bitcell layout of a shadow-latch configured floating point register file in the processor core of FIG. 1 in accordance with some embodiments.FIG. 3 is a flow diagram of a method employing bit width reconfiguration using shadow latches in the processor core of FIG. 1 in accordance with some embodiments.FIG. 4 is a block diagram of a shadow-latch configured floating point register file in the processor core of FIG. 1 in accordance with some embodiments.FIG. 5 is a timing diagram utilized in the shadow-latch configured floating point register file in the processor core of FIG. 4 in accordance with some embodiments.DETAILED DESCRIPTIONFIGs. 1-5 illustrate systems and techniques that support bit width reconfiguration of registers in a processor core of a processor in accordance with some embodiments. A floating point unit in the processor includes a shadow-latch configured floating point register file that reconfigures a bit width from a first bit width (e.g., 256-bit width) to a second bit width (e.g., 512-bit width) based on the availability of shadow latches in the shadow-latch configured floating point register file, so that the floating point unit that operates at the first bit width is usable in a processor that operates at a second bit width. The shadow-latch configured floating point register file includes shadow latches, regular latches, and shadow select multiplexers (MUXs) that are used for bit width reconfiguration during, for example, read and write data operations that utilize the floating point unit.In order to perform the bit width reconfiguration, during a first and second clock cycle operation, the first 256-bits of the 512-bit operation are stored in the regular latches and the second 256-bits are stored in the shadow latches within the shadow- latch configured floating point register file of the same single entry. During, for example, a 512-bit read or write operation, the first 256-bits are accessed from the shadow-latch configured floating point register file during a first clock cycle and the second 256-bits are accessed during a second clock cycle, where both accesses occur from the same entry. Because both the first 256-bits and the second 256-bits are stored in a single entry in the shadow-latch configured floating point register file,
split-renaming is not required in order to reconfigure the bit width for 512-bit operation. That is, by utilizing the shadow-latch configured floating point register file, split-renaming that normally splits 512-bit instructions into two separate registers (i.e., a high bit register and a low-bit register) is not required in order to have the floating point unit operate on the 512-bit instruction set.Figure 1 illustrates a processor core 100 of a processor having an execution pipeline 105 that supports bit width reconfiguration in accordance with some embodiments. In some embodiments, the illustrated processor core 100 includes, for example, a central processing unit (CPU) core based on an x86 instruction set architecture (ISA), an ARM ISA, and the like. The processor implements a plurality of such processor cores, and the processor is implemented in one of a variety of electronic devices, such as a notebook computer, desktop computer, tablet computer, server, computing-enabled cellular phone, personal digital assistant (PDA), set-top box, game console, and the like.In some embodiments, the processor utilized for processor core 100 supports the x86 architecture that supports execution of two types of vector arithmetic instructions: Streaming Single Instruction Multiple Data (SIMD) Extensions (SSE) instructions and Advanced Vector extension (AVX) instructions. AVX instructions manipulate 256-bit operands and SSE instructions manipulate 128-bit operands. AVX-512 instructions are 512-bit extensions to the 256-bit AVX SIMD instructions forx86 instruction set architecture (ISA). Accordingly, a processor that employs a register file with 512-bit registers supports execution of both AVX and SSE instructions. In some embodiments, utilizing the shadow-latch configured floating point register file described herein, a processor or processing unit (such as the floating point unit 120) that employs a register file with 256-bit registers, also supports 512-bit operations.In the depicted example, the execution pipeline 105 includes an instruction cache 110 (“lcache”), a front end 115, floating point unit 120, and fixed point unit 125 (also commonly referred to as “integer execution units”). The processor core 100 also includes a load store unit (LSU) 130 coupled to a memory hierarchy (not shown), including one or more levels of cache (e.g., L1 cache, L2, cache, etc.), a system
memory, such as system RAM, and one or more mass storage devices, such as a solid-state drive (SSD) or an optical drive.The instruction cache 110 stores instruction set data that is fetched by a fetch unit (not shown) of the front end 115 in response to demand fetch operations (e.g., a fetch to request the next instruction in the instruction stream identified by the program counter) or in response to speculative prefetch operations. The front end 115 decodes instructions fetched by the fetch unit into one or more operations that are to be performed, or executed, by either the floating point unit 120 or the fixed point unit 125. Those operations involving floating point calculations are dispatched to the floating point unit 120 for execution, whereas operations involving fixed point calculations are dispatched to the fixed point unit 125.As used herein, a type of instruction refers to a size of the operands manipulated by the instruction. Thus, instructions of different types manipulate operands of different sizes. For example, in some embodiments the floating point unit 120 executes operations decoded from instructions that manipulate 128-bit operands (referred to as 128-bit instructions) and also executes operations decoded from instructions that manipulate 256-bit operands (referred to as 256-bit instructions). In addition, floating point unit 120, utilizing the bit width reconfiguration techniques described herein, executes operations decoded from instructions that manipulate 512-bit operands (referred to as 512-bit instructions).In some embodiments, the floating point unit (FPU) 120 includes a map unit 135, a scheduler unit 140, a shadow-latch configured floating point register file (SC- FPRF) 145, and one or more execution (EX) units 150. In some embodiments, FPU 120 carries out operations on floating point numbers and performs operations including addition, subtraction, multiplication, division, square root, and bit shifting or broadcasting, as well as transcendental functions such as exponential functions, trigonometric functions, and the like. In various embodiments, the FPU 120 supports operation of various graphics processing units (GPUs) and central processing units (CPUs). For example, if the CPU encounters an instruction that requires performing a floating-point operation, the CPU transmits a request to the FPU 120, which carries out the operation and returns the results to the CPU. Although the FPU 120 shown in
FIG. 1 is implemented internally to the processor core 100, in other embodiments FPU 120 is implemented externally to the GPU and the CPU.The SC-FPRF 145, utilizing the additional shadow latches 147 and shadow select MUXs 148, stores instructions, operands used by the instructions, and results of executed instructions. Entries in the SC-FPRF 145 are indicated by physical register numbers. In some embodiments, the physical register numbers are mapped (or renamed) using map unit 135 to architectural register numbers that are defined by an instruction set architecture. Typically, a queue entry maintained by the scheduler unit 140 includes a field to store the operation payload or operation identifier (e.g., the opcode for the operation), fields for the addresses or other identifiers of physical registers that contain the source operand(s) for the operation, fields to store any immediate or displacement values to be used with the operation, a destination field that identifies the physical register in which the result of the execution of the corresponding operation is to be stored, and at least one field to store instruction dependency information. For example, a load instruction includes address information indicating the target of the load instruction and an architected register operand indicating the PRN in the SC-FPRF 145 that receives the data from the target address.In addition to operating on instructions that operate at a first bit width (256-bit width), the FPU 120 operates on instructions that operate at a second bit-width that include a relatively large number of bits, e.g., on 512-bit instructions. That is, in some embodiments, even though the datapaths of FPU 120 are limited to 256-bit instructions, FPU 120 is able to utilize the SC-FPRF145 to reconfigure the 256-bit datapath to operate on 512-bit instructions by extending the instruction operation or transaction from a single clock cycle to two clock cycles (e.g., a first clock cycle and a second clock cycle). Thus, in some embodiments, when the SC-FPRF 145 is a 512- bit registerfile (i.e., stores the lower 256 bits in regular latches 146 and the upper 256 bits in the shadow latches 147), access to the 512 bits occurs over two 256-bit cycles, instead on one 512-bit cycle.In some embodiments, for example, during a read operation, when the execution units 150 read data from the SC-FPRF 145, the lower 256 bits are read from the regular latches 146 in the first cycle of the transaction and the upper 256-
bits are read from the shadow latches 147 in the second cycle of the transaction. Using a read address provided to the shadow select MUXs 148, the shadow select MUXs 148 utilize a read function to select which shadow latch of the shadow latches 147 to read during the second cycle of the read operation. In some embodiments, in order to perform the read operation, the read function is added to the SC-FPRF 145 that is used to determine whether to read the shadow data stored in the shadow latches or the normal data stored in the regular latches. Thus, the read function allows the execution units 150 to select the data to read using the shadow select MUXs 148.Similarly, during a write operation, when either the schedule unit 140 or the execution units 150 perform a write operation to SC-FPRF 145, the lower 256 bits are written to the regular latches 146 during the first cycle of the transaction and the upper 256 bits are written to the shadow latches 147 during the second cycle of the transaction. During the write operation, no additional write logic is required compared to traditional register files because the additional 256 bits that are being written are not being written as a separate entry, i.e. , the additional 256 bits are a shadow piece of data associated with the regular latches in the same entry.In some embodiments, at the input to the interface to SC-FPRF 145, a write control signal and a read control signal are provided from a SC-FPRF controller 127 that dictates whether the read operation or the write operation is going to occur during the second cycle. During the write operation, if a write control signal (e.g.,Is512 write control input signal) provided from SC-FPRF controller 127 is set to a high logic value when the transaction starts, the clock for the shadow write is activated during the second cycle. That is, the Is512 write control input signal causes the shadow write clock to fire in the second cycle of the two cycles. For a read operation, when a read control signal (e.g., Is512 read control input signal) provided from SC- FPRF controller 127 is set to a high logic value when the transaction starts, the shadow select MUX 148 selects the shadow latch to be read based upon the read address provided to the shadow select MUX 148 during the second cycle. That is, the Is512 read control input signal causes the shadow select MUX 148 to choose the shadow latch 147 corresponding to the requested address for reading in the second cycle. In other words, in the second cycle of the transaction data from the shadow
latch 147 is selected by the shadow select MUX 148. As a result of using the SC- FPRF 145, in various embodiments, the read decoders and the write decoders are not clocked for the second cycle, holding the decoded values steady and saving power while executing instructions in processor core 100.In some embodiments, since the control signal for the shadow select MUX 148 arrives ahead of schedule, i.e. , within the first cycle of the transaction, the signal provided to the shadow select MUX (i.e., a shadow select MUX signal) provided by, for example, a flip flop, hides the timing associated with adding the additional shadow select MUX 148, essentially nullifying the effect of having to switch the additional shadow select MUX 148 that has been added to the register file.In some embodiments, activation of FPU 120 for 512-bit operations or 256-bit operations is dependent on the configuration of SC-FPRF controller 127. When the micro-operation to be executed is a 512-bit instruction, then SC-FPRF controller 127 enables the FPU 120 for 512-bit operations. When the micro-operation to be executed is a 256-bit instruction, then SC-FPRF controller 127 enables the FPU 120 for 512-bit operations. That is, in order for the FPU 120 to determine whether a 512- bit operation or 256-bit operation is to occur, SC-FPRF controller 127 activates the FPU 120 as either a 512-bit operator or a 256-bit operator. When the FPU 120 is not enabled for 512-bit read or write operations, a 256-bit read or write operation is activated and occurs in a single cycle. When the FPU 120 is enabled for 512-bit read or write operations, 512-bit read or write operation is activated and it takes two clock cycles on a given port to do the 512 operation.In some embodiments, since FPU 120 is a 256-bit wide FPU with two cycles of 256-bits being used to execute the 512-bit operation, scheduler unit 140 in FPU 120 blocks acceptance of a second micro-op during the second cycle in order to allow the first micro-op to complete during the first and second cycle. That is, since execution of the 512-bit operation by FPU 120 takes two cycles, scheduler unit 140 in FPU 120 is flagged by SC-FPRF controller 127 that the 512-bit micro-ops take two cycles and prevents another micro-op or another transaction from commencing during the second cycle.
Similarly, load store unit 130 operates in both 512-bit operations and 256-bit operations. Load store unit 130 is flagged by SC-FPRF controller 127 that FPU 120 is executing 512-bit micro-ops. As load store unit 130 handles the 512-bit loads and store with internal 256-bit datapaths, the lower 256-bits of the 512-bit operation are executed during the first cycle and the upper 256-bits are executed during the second cycle, matching the SC-FPRF 145 and execution pipes. Thus, in some embodiments, both the load store unit 130 interface and the FPU 120 interface are 256-bits wide.In some embodiments, executing 512-bit micro-ops in FPU 120 allows 512-bit instructions to use a single entry in the retire queue (not shown) and many other structures in processor core 100, such as, for example, a load queue, and a scheduler in EX 150. Using a single entry improves performance over, for example, split renaming, which splits 512-bit instructions into two 256-bit micro-ops. In some embodiments, the shadow-latch configured floating point register file scheme described herein is extended to multiple latches and cycles, such as, four latches and four cycles to perform 512-bit operations with 128-bit datapaths.In order to use the SC-FPRF 145 to implement 512-bit renaming and 512-bit micro-ops, with 256-bit datapaths, in addition to the regular latches that normally used store data in a register file, and an additional set of shadow latches are added per entry in the register file (depicted in detail with reference to FIG. 2). Further, a second write clock is added to the floating point unit 120 to allow the shadow latch to be written to.The scheduler unit 140 schedules instructions for execution in the FPU 120. In addition, because the SC-FPRF 145 uses two cycles to perform a single cycle operation, scheduler unit 140 is adapted to accommodate for the additional cycle needed to perform the two cycle operation. As a result, scheduler unit 140 in the floating point unit 120 blocks or delays accepting another micro-op for an additional cycle, until the two cycle operation has completed. That is, in one embodiment, scheduler unit 140 understands that 512-bit micro-ops take two cycles and block taking another micro-op or another transaction in that second cycle. In some embodiments, the floating point unit 120 also requires the scheduler (scheduler unit 140) to discern that 512-bit micro-ops take two cycles in the register file and execution pipelines.
Load store unit 130 performs load and store operations over two cycles instead of a single cycle in order to adjust for the additional cycle added for the shadow latch operations. Thus, for example, for a 512-bit operation, the load store unit 130 performs 512-bit loads and stores with 256-bit data paths over two cycles, instead of a single cycle.In various embodiments, although the FPU 120 performs its entire operations using 256-bit datapaths, the decoder (not shown) decodes the 512-bit operation using 512-bit datapaths, instead of 256 bits. In other words, the decoder is not aware that the FPU 120 operates using a 256-bit datapath, and instead performs as the decoder normally would for a 512-bit operation.In some embodiments, the shadow select multiplexer signal is output by a local flip-flop, since the shadow select multiplexer signal comes along with the first cycle transaction. In some embodiments, outputting the shadow select multiplexer signal from the local flip-flop allows the processor to be faster than the read decode, and hides the timing through the extra or additional shadow select multiplexer.Although the following description is related to a shadow-latch configured floating point register file 145 that is implemented in the floating point unit 120, it applies to any type of register file or shadow-latch configured register file that is implemented for, for example, the fixed point unit 125, or an entirely different type of processing unit, such as a digital signal processor, a graphics processor, an application specific integrated circuit (ASIC), etc. The SC-FPRF 145 includes functional latches, shadow latches and shadow select multiplexers that allow data to be read to and written from the functional latches and shadow latches (discussed further below with reference to FIG. 2).FIG. 2 is a bitcell layout of the SC-FPRF 145 of FIG. 1 that employs bit width reconfiguration using shadow latches in accordance with some embodiments. SC- FPRF 145 includes shadow latches 147, shadow select MUXs 148, functional or regular latches 146, read logic units (read logic) 265, and write logic units (write logic) 270. In the illustrated example, shadow latches 147 include a plurality of shadow latches, shadow select MUXs 148 include a plurality of shadow select multiplexers, regular latches 146 include a plurality of regular latches, read logic units 265 include
a plurality of read logic units, write logic units 270 include a plurality of write logic units. In some embodiments, each shadow latch 147 and regular latch 146 perform the latching operations that store the data that is to be written and read to the SC- FPRF 145 during bit width reconfiguration operations. Each shadow select MUX 148 is used to select the data that is be read from the SC-FPRF 145 during bit width reconfiguration operations. In some embodiments, read logic unit 265 and write logic unit 270 include the logic that is used to perform the read and write operations as is generally known in the art.As depicted in FIG. 2, for a bit width reconfiguration of a 512-bit operation, FPU 120 (having a 256-datapath), stores the first 256-bits in regular latches 146 and the second 256-bits in shadow latches 147 during two clock cycles. That is, during a write operation, in the first cycle, regular latches 146 store the lower 256-bits of associated with the 512-bit operation. In the second cycle, shadow latches 147 store the upper 256-bits of the 512-bit operation. During a read operation, in the first cycle, data is read from the regular latches 146. In the second cycle, the shadow latches 147 that have been selected by shadow select MUXs 148 are read from and provided to EX 150 (as illustrated in FIGs. 1 and 3).FIG. 3 illustrates a method 300 of employing shadow latching using the processor core of FIG. 1 in accordance with some embodiments. At block 310, an instruction set having a data operation (e.g., a read operation or write operation) is initiated by processor core 100 to the floating point unit 120. At block 330, the floating point unit 120 operates on the instruction set at a second bit width. At block 340, based upon on a bit width operation assessment of the instruction set, the floating point unit 120 employs a SC-FPRF 145 to perform bit width reconfiguration. For example, in some embodiments, the bit width operation assessment of the instruction set includes determining whether the bit width of the instruction set is a 512-bit operation or a 256-bit operation and determining whether the data operation to be performed by the floating point unit 120 is a read operation, a write operation, or both a read operation and a write operation. In some embodiments, determining whether the bit width of the instruction set is 512-bits or 256-bits dictates whether the floating point unit 120 will be performing bit width reconfiguration during floating point operations, or simply performing floating point operations at the prescribed 256-bits
(the bitwidth of the datapath of the floating point unit 120). In some embodiments, determining whether the data operation to be performed by floating point unit 120 is a read operation or a write operation, will activate either a write control signal or a read control signal that dictates the timing of accessing the shadow latches and the shadow latches that are to be accessed using the shadow select MUXs 148 for read and write operations in the floating point unit 120.FIG. 4 is a block diagram of SC-FPRF 145 of the processor core 100 of FIG. 1 in accordance with some embodiments. SC-FPRF 145 includes a write MUX 470, regular latch 446, shadow latch 447, shadow select MUX 448. In various embodiments, the two latches (e.g., regular latch 446 and shadow latch 447) share a single write MUX 470, but utilize different write clocks (e.g., write clock signal 410 and shadow write clock signal 420) during the writing process.During a write operation, at the write port of SC-FPRF 145, write MUX 470 receives write data (e.g., 512-bit data) that is to be written to regular latch 446 and shadow latch 447. Based on write MUX signal 440, when the write clock signal 410 logic value is high, write MUX 470 directs write data 491 to be written to regular latch 446. When the shadow write clock signal 420 logic value is high, write MUX 470 directs write data 492 to shadow latch 447. Regular latch 446 and shadow latch 447 store the received write data 491 and write data 492, respectively. During a read operation, regular latch 446 and shadow latch 447 release latch data 461 and shadow latch data 471 based on, for example, the logic value of shadow select MUX signal 430 that controls shadow select MUX 448. In some embodiments, when, for example, the logic value of shadow select MUX signal 430 is low, latch data 461 is read from latch 446 as read data 499. When shadow select MUX signal 430 is high, shadow latch data 471 is read from shadow latch 447 as read data 499. Read data 499 is then provided via read port MUXs to execution unit 150 as output of SC-FPRF 145.FIG. 5 is a timing diagram 500 of read and write operations utilizing SC-FPRF 145 of FIG. 4 in accordance with some embodiments. Timing diagram 500 depicts a clock signal 510, shadow select MUX signal 430, read data 499, write clock signal 410, latch data 461 , shadow write clock signal 420, and shadow latch data 471. In
the embodiment shown, timing diagram 500 illustrates four clock cycles, however, a varying number of clock cycles is utilized in alternate embodiments.For the write operation, during the first clock cycle, write clock signal 410 and shadow write clock signal 420 are low and data is not being written to regular latch 446 or shadow latch 447. At the end of the first clock cycle, write clock signal 410 transitions from low to high and, as a result, write data 491 is written to regular latch 446. Shadow write clock signal 420 remains low during the transition and data is not written to shadow latch 447 during the second cycle. At the end of the second clock cycle, write clock signal 410, which transitioned to low mid-second clock cycle remains low and no data is written to regular latch 446 during the third cycle. Shadow write clock signal 420, at the end of the second clock cycle, transitions from low to high and write data 492 is written to shadow latch 447. At the end of the third clock cycle, write clock signal 410 and shadow write clock signal 420 remain low and no data is written to regular latch 446 and shadow latch 447 during the fourth clock cycle, respectively. For the read operation, during the first clock cycle, shadow select MUX signal 430 is low and data is not being read from shadow latch 447, while latch data 461 is being read from regular latch 446. At the end of the first cycle, when shadow select MUX signal 430 transitions from low to high, shadow latch data 471 is read from shadow latch 447. Together, latch data 461 and shadow latch data 471 are combined to provide the desired bit width configuration at the output of SC-FPRF 145 of FIG. 1.A computer readable storage medium may include any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media includes, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc , magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical
disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).In some embodiments, certain aspects of the techniques described above may implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed are not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular
embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below. |
A method of forming a silicon-on-insulator substrate is disclosed, including providing a silicon substrate; depositing a first insulation layer over the silicon substrate; forming a conductive layer over the first insulation layer to a first structure; providing a second structure comprising a silicon device layer and a second insulation layer; bonding the first structure and the second structure together so that the conductive layer is located between the first and second insulation layers; and removing a portion of the silicon device layer thereby providing the silicon-on-insulator substrate having two discrete insulation layers. In one embodiment, the method further includes forming at least one conductive plug through the silicon substrate and the first insulation layer and/or the second insulation layer so as to contact the conductive layer. Methods of facilitating heat removal from the device layer are disclosed. |
What is claimed is:1. A method of forming a silicon-on-insulator substrate, comprising:providing a silicon substrate;depositing a first insulation layer over the silicon substrate;forming a conductive layer over the first insulation layer to form a first structure;providing a second structure comprising a silicon device layer and a second insulation layer;bonding the first structure and the second structure together so that the conductive layer is located between and in contact with the first and second insulation layers; andremoving a portion of the silicon device layer thereby providing the silicon-on-insulator substrate having two discrete insulation layers.2. The method of claim 1, wherein each of the first and the second insulation layers independently has a thickness of about 50 Ȧ to about 2,500 Ȧ.3. The method of claim 1, wherein the conductive layer has a thickness that is: (1) less than about 15% of the thickness of at least one of the first and second insulation layers; or (2) greater than 50% of the thickness of at least one of the first and second insulation layers.4. The method of claim 1, wherein the conductive layer comprises at least one of chromium, molybdenum, platinum, tantalum, titanium, and tungsten.5. The method of claim 1, wherein the conductive layer comprises at least one of chromium silicide, molybdenum silicide, platinum silicide, tantalum silicide, titanium silicide, and tungsten silicide.6. The method of claim 1, wherein the conductive layer has a thickness from about 150 Ȧ to about 500 Ȧ.7. The method of claim 1, wherein the conductive layer has a thickness from about 1,500 Ȧ to about 3,500 Ȧ.8. The method of claim 1, further comprising forming at least one conductive plug through the silicon substrate and the first insulation layer so as to contact the conductive layer.9. The method of claim 1, further comprising forming at least one conductive plug through the silicon device layer and the second insulation layer so as to contact the conductive layer.10. The method of claim 1, wherein at least one of the first insulation layer and the second insulation layer comprise silicon dioxide.11. The method of claim 1, wherein the silicon-on-insulator substrate comprises the silicon substrate; the first insulation layer; the conductive layer; the second insulation layer; and a device layer comprising silicon.12. A method of facilitating heat removal from a device layer of a silicon-on-insulator substrate comprising bulk silicon, a first insulation layer over the bulk silicon, a conductive layer over the first insulation layer, a second insulation layer over the conductive layer, and a silicon device layer over the second insulation layer, comprising:forming at least one conductive plug through the bulk silicon and the first insulation layer so as to contact the conductive layer.13. The method of claim 12, wherein the conductive layer comprises at least one of chromium, molybdenum, platinum, tantalum, titanium, and tungsten.14. The method of claim 13, wherein the conductive layer comprises at least one of titanium, platinum and tungsten.15. The method of claim 12, wherein the conductive layer comprises at least one of chromium silicide, molybdenum silicide, platinum silicide, tantalum silicide, titanium silicide, and tungsten silicide.16. The method of claim 12, wherein the conductive layer has a thickness of from about 100 Ȧ to about 4,000 Ȧ.17. The method of claim 16, wherein the conductive layer has a thickness from about 100 Ȧ to about 1,000 Ȧ.18. 
The method of claim 16, wherein the conductive layer has a thickness from about 1,200 Ȧ to about 4,000 Ȧ.19. The method of claim 12, wherein the conductive plug comprises at least one of titanium, platinum and tungsten.20. The method of claim 12, wherein the conductive plug includes a barrier layer.21. A method of facilitating heat removal from a device layer of a silicon-on-insulator substrate comprising bulk silicon, a first insulation layer over the bulk silicon, a conductive layer over the first insulation layer, a second insulation layer over the conductive layer, and a silicon device layer over the second insulation layer, comprising:forming at least one first conductive plug through the silicon device layer and the second insulation layer so as to contact the conductive layer, andforming at least one second conductive plug through the bulk silicon and the first insulation layer so as to contact the conductive layer.22. The method of claim 21, wherein the conductive layer comprises at least one of chromium, molybdenum, platinum, tantalum, titanium, and tungsten.23. The method of claim 22, wherein the conductive layer comprises at least one of titanium, platinum and tungsten.24. The method of claim 21, wherein the conductive layer comprises at least one of chromium silicide, molybdenum silicide, platinum silicide, tantalum silicide, titanium silicide, and tungsten silicide.25. The method of claim 21, wherein the conductive layer has a thickness of from about 100 Ȧ to about 4,000 Ȧ.26. The method of claim 21, wherein the conductive layer has a thickness from about 100 Ȧ to about 1,000 Ȧ.27. The method of claim 21, wherein the conductive layer has a thickness from about 1,200 Ȧ to about 4,000 Ȧ.28. The method of claim 21, wherein each conductive plug independently comprises at least one of titanium, platinum and tungsten.29. The method of claim 21, wherein at least one of the first and the second conductive plug includes a barrier layer. |
RELATED APPLICATION DATAThis application is a division of and claims priority under 35 U.S.C. [section]120 to commonly assigned U.S. application Ser. No. 10/174,328, filed Jun. 18, 2002, now U.S. Pat. No. 6,833,587, which in turn claims priority under 35 U.S.C. [section]119(e) to previously filed U.S. Provisional Application No. 60/298,980, filed on Jun. 18, 2001, entitled "Heat Removal in SOI Devices Using a Buried Oxide Layer/Conductive Layer Combination", the disclosures of which are hereby incorporated herein by reference in their entirety.FIELD OF THE INVENTIONThe present invention generally relates to improved Silicon-on-Insulator (SOI) devices. More particularly, the present invention relates to methods for removing heat from Silicon-on-Insulator devices and devices having such characteristics.BACKGROUND OF THE INVENTIONSilicon-on-Insulator (SOI) technology is of growing importance in the field of integrated circuits. SOI technology involves forming transistors in a relatively thin layer of semiconductor material overlying a layer of insulating material. More particularly, SOI technology is characterized by the formation of a thin silicon layer (device region) for formation of the active devices over an insulating layer, such as an oxide, which is in turn formed over a substrate. Transistor sources and drains are formed, for example, by implantations into the silicon layer while transistor gates are formed by forming a patterned oxide and conductor layer structure.Such structures provide a significant gain in performance compared to bulk silicon structures by having lower parasitic capacitance (due to the insulator layer) and increased drain current due to floating body charging effects. This is because no connection is made to the channel region and charging of the floating body provides access towards a majority of carriers which dynamically lower the threshold voltage, resulting in increased drain current. Devices, such as metal oxide silicon field effect transistors (MOSFET), have a number of advantages when formed on SOI wafers versus bulk silicon MOS transistors. These advantages include: reduced source/drain capacitance and hence improved speed performance at higher-operating frequencies; reduced N<+> to P<+> spacing and hence higher packing density due to ease of isolation; absence of latch-up; lower voltage applications; and higher "soft error" upset immunity (i.e., the immunity to the effects of alpha particle strikes).Although there are significant advantages associated with SOI technology, there are significant disadvantages as well. For example, poor heat removal from devices on an SOI substrate is a significant disadvantage. Electrical devices generate heat, and the inability to remove or dissipate the heat results in poor and/or inconsistent performance of the electrical devices, or even in some instances device and/or substrate degradation.There is poor heat removal for devices on SOI substrates primarily because of the oxide insulation layer. More specifically, the oxide insulation layer has a markedly lower thermal conductivity than the thermal conductivity of conventional bulk silicon (typically used as semiconductor substrates), which typically surrounds semiconductor devices. For example, the thermal conductivity of silicon dioxide is about 1.4 W/m[deg.] C., while the thermal conductivity of conventional bulk silicon is about 150 W/m[deg.] C. 
As a result, the buried oxide layer can undesirably thermally insulate the device region in SOI substrates.In view of the aforementioned disadvantages, there is a need for SOI devices of improved quality, particularly SOI devices having improved heat removal characteristics, and more efficient methods of making such SOI devices.SUMMARY OF THE INVENTIONAs a result of the present invention, an SOI substrate having improved heat removal characteristics (from the device layer) is provided. By forming an SOI substrate according to the present invention, improved performance of devices subsequently formed on the SOI substrate is facilitated. Moreover, forming an SOI substrate in accordance with the present invention does not degrade or deleteriously effect the advantageous properties and characteristics commonly associated with SOI technology (improved speed performance at higher-operating frequencies, higher packing density, absence of latch-up, lower voltage applications, and higher "soft error" upset immunity).According to an aspect of the invention, a silicon-on-insulator substrate is disclosed which comprises: a silicon substrate layer; a first insulation layer over the silicon substrate layer; a conductive layer over the first insulation layer comprising at least one metal or metal silicide over the first insulation layer; a second insulation layer over the conductive layer; a silicon device layer comprising silicon over the second insulation layer; and at least one conductive plug through the silicon substrate layer and the first insulation layer contacting the conductive layer, or at least one conductive plug through the silicon device layer and the second insulation layer contacting the conductive layer.According to another aspect of the invention, a method of forming a silicon-on-insulator substrate is disclosed which comprises the steps of: providing a silicon substrate; depositing a first insulation layer over the silicon substrate; forming a conductive layer over the first insulation layer to a first structure; providing a second structure comprising a silicon device layer and a second insulation layer; bonding the first structure and the second structure together so that the conductive layer is located between the first and second insulation layers; and removing a portion of the silicon device layer thereby providing the silicon-on-insulator substrate having two discrete insulation layers.According to another aspect of the invention, a method of facilitating heat removal from a device layer of a silicon-on-insulator substrate comprising bulk silicon, a first insulation layer over the bulk silicon, a second insulation layer over the conductive layer, and a silicon device layer over the second insulation layer, is disclosed which comprises: forming a conductive layer between the first and second insulation layers; and forming at least one conductive plug through the bulk silicon and the first insulation layer so as to contact the conductive layer.According to yet another aspect of the invention, a method of facilitating heat removal from a device layer of a silicon-on-insulator substrate comprising bulk silicon, a first insulation layer over the bulk silicon, a second insulation layer over the conductive layer, and a silicon device layer over the second insulation layer, is disclosed which comprises: forming a conductive layer between the first and second insulation layers; and forming at least one conductive plug through the silicon device layer and the second insulation layer so as 
to contact the conductive layer.Due in part to the above methods, silicon-on-insulator substrates can be formed which have improved heat transfer capabilities. Additionally, devices formed from such silicon-on-insulator substrates yield SOI devices of improved quality and reliability.To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.BRIEF DESCRIPTION OF THE DRAWINGSIn the annexed drawings:FIG. 1 is a cross-sectional view of a portion of an SOI substrate according to one embodiment of the present invention;FIG. 2 is a cross-sectional view of a portion of a first structure used to make an SOI substrate according to one embodiment of the present invention;FIG. 3 is cross-sectional view of a portion of a second structure used to make an SOI substrate according to one embodiment of the present invention;FIG. 4 is cross-sectional view of a portion of a bonded structure used to make an SOI substrate according to one embodiment of the present invention;FIG. 5 is cross-sectional view of a portion of an SOI substrate according to one embodiment of the present invention;FIG. 6 is cross-sectional view of a portion of an SOI substrate according to another embodiment of the present invention;FIG. 7 is cross-sectional view of a portion of an SOI substrate according to yet another embodiment of the present invention;FIG. 8 is cross-sectional view of a portion of an SOI substrate according to still another embodiment of the present invention; andFIG. 9 is a flow chart showing the process steps used to produce a SOI substrate according to one embodiment of the present invention.DETAILED DESCRIPTIONThe present invention generally relates to improved Silicon-on-Insulator (SOI) devices. More particularly, the present invention relates to methods for removing heat from Silicon-on-Insulator devices and devices having such characteristics. As used throughout the specification and claims, the term conductive layer means a layer that is at least thermally conductive, and the term conductive plug means a plug that is at least thermally conductive. Such a layer and/or plug may, in some embodiments of the present invention, also be electrically conductive. Additionally, it should be noted that in the following text, range limits may be combined.By forming an SOI substrate having improved heat removal characteristics, the performance of devices subsequently formed on the SOI substrate can be substantially improved. While not wishing to be bound to any theory, it is believed that by forming a conductive layer between two insulation layers (e.g., two buried oxide layers) according to the present invention, it is consequently possible to increase the amount of heat that may be removed (and/or increase the rate at which heat may be removed) from the device layer of the SOI substrate by spreading the heat through the conductive layer and/or conductive plugs. 
Improving the removal of heat from the device layer consequently improves the performance and increases the life of devices, such as MOSFETs, formed on the device layer of the SOI substrate.As is illustrated in FIG. 1, the present invention involves positioning a conductive layer 106 between two insulation layers 104 and 204 (e.g., two buried oxide layers) of an SOI substrate. In the completed SOI substrate 275, the conductive layer 106 acts as a heat spreader or dissipation layer. The conductive layer 106 has a relatively high thermal conductivity and thus facilitates the transfer of heat away from and/or evenly spreads (preventing local build-up of) heat generated in the device layer of the SOI substrate 275. If desired, contacts or conductive plugs 220 (FIG. 6 or 8) or 230 (FIG. 7) can be employed to further draw any heat away from the conductive layer 106, either up through plugs in the device layer or down into the bulk silicon layer.The conductive layer 106 contains a conductive material (e.g., a metal) that forms a stable layer and adheres well to bulk silicon and/or an insulator material (such as silicon dioxide). In one embodiment, the conductive layer 106 is formed from at least one metal. Such metals include, but are not limited to, one or more of chromium, molybdenum, platinum, tantalum, titanium, and tungsten. The thermal conductivity of the conductive layer 106 is relatively high compared to the thermal conductivity of at least one of the insulation layers (104 and/or 204) and the bulk silicon. In one embodiment, the thermal conductivity of the conductive layer 106 is at least 100 times higher than the thermal conductivity of at least one of the insulation layers (104 and/or 204). In another embodiment, the conductive layer 106 has a thermal conductivity of at least about 150 W/m[deg.] C., or even at least about 200 W/m[deg.] C. In yet another embodiment, the thermal conductivity of the conductive layer 106 is at least 200 times higher than the thermal conductivity of at least one of the insulation layers (104 and/or 204).The conductive layer 106 can be formed to any thickness suitable for facilitating heat removal from the subsequently formed device layer 210. In one embodiment, generally, the thickness of the conductive layer 106 is from about 100 Ȧ to about 4,000 Ȧ. In another embodiment, the thickness of the conductive layer 106 is from about 200 Ȧ to about 3,000 Ȧ. In one embodiment, the conductive layer has a thickness from about 150 Ȧ to about 500 Ȧ. In another embodiment, the conductive layer has a thickness from about 1,500 Ȧ to about 3,500 Ȧ. In another embodiment, the conductive layer has a thickness from about 100 Ȧ to about 1,000 Ȧ. In yet another embodiment, the conductive layer has a thickness from about 1,200 Ȧ to about 4,000 Ȧ. In another embodiment the thickness of the conductive layer 106 is based on the thickness of at least one of the insulation layers (104 and/or 204) located on either side of the conductive layer 106. In one embodiment, the conductive layer 106 is less than 15% the thickness of at least one of the insulation layers (104 and/or 204) on either side of the conductive layer 106. In another embodiment, the conductive layer 106 is less than 15% the thickness of both of the insulation layers (104 and/or 204) on either side of the conductive layer 106. In yet another embodiment, the conductive layer 106 is greater than 50% the thickness of at least one of the insulation layers (104 and/or 204) on either side of the conductive layer 106. 
In another embodiment, the conductive layer 106 is greater than 50% the thickness of both of the insulation layers (104 and/or 204) on either side of the conductive layer 106.A first structure 100 is produced by forming an insulation layer 104 and a conductive layer 106 thereon in any suitable manner over a bulk or monocrystalline silicon layer 102. Initially, the insulation layer 104 (e.g., an oxide layer) is formed over the bulk or monocrystalline silicon layer 102 using methods known in the art, such as chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), spin on depositing, thermal oxidation, or a wet and dry oxidation process. In one embodiment, the insulation layer 104 can be formed from, but is not limited to, any one of silicon dioxide, a variation of silicon dioxide, silicon nitride, hydrogen silsesquioxane (HSQ), methyl silsesquioxane (MSQ), benzocyclobutene (BCB), fluorinated aromatic ether (FLARE), SILK(R), NANOGLASS(R) and fluorinated glass (FSG).Next the conductive layer 106 is formed over the insulation layer from one or more of the materials previously discussed above. The conductive layer 106 is formed in any suitable manner over the insulation layer 104 including direct metal deposition. Direct metal deposition simply involves depositing a metal on the insulation layer 104. This can be accomplished by physical vapor deposition (PVD) and particularly sputtering or chemical vapor deposition (CVD). Such methods are known in the art. This structure 100 containing the conductive layer 106 is then bonded to a second structure 200 (see FIG. 3) containing an insulation layer 204 on a bulk silicon layer 202 (typically the same type of structure, but without conductive layer 106 formed over the insulation layer 104). The two structures are fused so that the conductive layer 106 on the first structure is bonded to the insulation layer 204 of the second structure to yield a conductive layer 106 sandwiched between two insulation layers 104 and 204 (see FIG. 4), and the bulk silicon layer 202 of the second structure is etched back to a desired thickness to form an SOI substrate 250a having a silicon device layer 210 (see FIG. 5).In another embodiment, the conductive layer 106 of the first structure 100 can be formed from a metal silicide rather than a metal. The metal silicide layer may, for example, be formed by PVD or CVD techniques. After the metal silicide layer is formed over the above-mentioned insulation layer 104, this first structure 100 containing the silicide conductive layer 106 is then bonded to the second structure 200 containing the insulation layer 204 on bulk silicon layer 202 (typically the same type of structure, but without the silicide conductive layer). The two structures 100 and 200 are fused together as noted above, and the bulk silicon layer 202 of the second structure 200 is etched back to a desired thickness to form the device layer 210 on an SOI substrate 250a. The SOI substrate 250a formed in accordance with the present invention has a bulk or monocrystalline silicon layer 102, a first buried insulation layer 104 over the bulk silicon layer 102, a conductive layer 106 over the first buried insulation layer 104, a second buried insulation layer 204 over and on the other side of the conductive layer 106, and a silicon layer 210 (device layer) over the second buried insulation layer 204. The first and second buried insulation layers 104 and 204, respectively, typically contain silicon dioxide. 
Although, as noted above, the buried insulation layers may contain any suitable insulating or oxide material. Each buried insulation layer has thickness from about 100 Ȧ to about 5,000 Ȧ. In another embodiment, each buried insulation layer has a thickness from about 1,000 Ȧ to about 4,000 Ȧ. In yet another embodiment, each buried insulation layer has thickness from about 2,000 Ȧ to about 3,500 Ȧ. In one embodiment, each of the first and the second insulation layers independently has a thickness of about 50 Ȧ to about 2500 Ȧ.The device layer has thickness from about 500 Ȧ to about 5,000 Ȧ. In another embodiment, the device layer has thickness from about 1,000 Ȧ to about 3,000 Ȧ, or even from about 1,000 Ȧ to about 2,000 Ȧ.In one embodiment, the conductive layer has a thickness that is one of less than 15% of the thickness of at least one of the insulation layers and greater than 50% of the thickness of at least one of the insulation layers. In another embodiment, the conductive layer has a thickness that is one of less than 10% of the thickness of at least one of the insulation layers and greater than 60% of the thickness of at least one of the insulation layers.One or more conductive plugs 220 (FIG. 6) or 230 (FIG. 7) may be formed above or below the conductive layer 106. The conductive plugs 220 and/or 230 serve to further facilitate the transfer of heat away from the device layer, and particularly away from the conductive layer. Heat removed via the conductive plugs 220 and/or 230 is dissipated in the bulk silicon layer 102 or in overlying layers or structures. The conductive plugs 220 and/or 230 have a thermal conductivity of at least about 150 W/m[deg.] C., or even at least about 200 W/m[deg.] C.Referring to FIGS. 2 to 6, one embodiment of the present invention is described. Specifically as is illustrated in FIG. 2, the first structure 100 is formed which contains the bulk silicon layer 102, the first buried insulation layer 104, over the bulk silicon layer 102, and the conductive layer 106 over the first buried insulation layer 104 as is described below. Initially, the bulk silicon substrate or wafer 102 is provided and the insulation layer 104 containing silicon dioxide is then formed over the bulk silicon substrate or wafer 102 by CVD techniques. Either low pressure chemical vapor deposition (LPCVD) or plasma enhanced chemical vapor deposition (PECVD) may be employed. In this embodiment, the insulation layer 104 is formed by PECVD using either silane and oxygen or silane and nitrous oxide. In this embodiment, the insulation layer 104 has a thickness of about 1,500 Ȧ. Next, the conductive layer 106 is formed over the insulation layer 104 from a suitable metal or metal silicide. In this embodiment, platinum is sputtered over the insulation layer 104 to a thickness of about 400 Ȧ. Alternatively, one or more of chromium, molybdenum, tantalum, titanium, and tungsten can be used in place of or in addition to platinum.Referring to FIG. 3, the second structure 200 is provided. The second structure 200 contains a bulk silicon layer 202 and an insulation layer 204 there over. In this embodiment, the insulation layer 204 contains silicon dioxide. Also in this embodiment, the thickness of the insulation layer 204 is about 1,500 Ȧ.Referring to FIG. 4, the first structure 100 is bonded to the second structure 200 via the first structure's conductive layer 106 and the second structure's insulation layer 204 to yield a combined structure 250. 
The conductive layer 106 and the insulation layer 204 are fused by application of heat for a sufficient period of time to bond the first and second structures 100 and 200. For example, the first and second structures 100 and 200 are held together for about 2 hours under a temperature of about 1,100[deg.] C.Referring to FIG. 5, the bulk silicon layer 202 of FIG. 4 of the second structure 200 is etched to a desired thickness to provide an SOI substrate 250a and specifically a device layer 210. The SOI substrate 250a contains the bulk silicon layer 102, the first buried insulation layer 104, the conductive layer 106, the second buried insulation layer 204, and the device layer 210. The thickness of the device layer 210 is about 1,500 Ȧ. The thickness of each of the first and second buried insulation layers 104 and 204 (formerly insulation layers 104 and 204) is about 1,500 Ȧ. The thickness of the conductive layer 106 remains about the same as initially deposited. In this embodiment, the conductive layer 106 has a thickness that is about 13% of the combined thickness of both the first and second buried insulation layers 104 and 204.The SOI substrate 250a has good heat removal properties due to the presence of the conductive layer 106. In particular, the high thermal conductivity of platinum or even platinum silicide (relative to silicon dioxide) removes heat that may locally accumulate in certain areas (typically near or under devices and/or conductive structures) of the device layer and the buried insulation layers. The high thermal conductivity of platinum also dissipates heat that may locally accumulate in certain areas of the device layer and the buried insulation layers (or distributes the heat throughout the platinum silicide layer).Referring to FIG. 6, additional heat may be removed from SOI substrate 250a by optionally forming at least one conductive plug 220 in the bulk silicon substrate 102 and the first buried insulation layer 104 to thermally contact the conductive layer 106 to form structure 275. In one embodiment, conductive plug 220 contains an optional barrier layer and a conductive material. Use of an optional barrier layer (not shown) depends upon the identity of the conductive material of the conductive plug 220. The barrier layer, if employed, serves as a diffusion barrier layer preventing the conductive material of the conductive plug 220 from diffusing into the bulk silicon substrate 102. The barrier layer may be made of any suitable conductive material or materials. Examples of suitable conductive materials for the barrier layer include titanium nitride, tungsten, tantalum, tungsten-titanium alloys such as an alloy containing about 90% tungsten and about 10% titanium, tantalum silicon nitride, tungsten nitride, niobium, molybdenum and combinations thereof. The barrier layer may be formed using any suitable technique to a thickness sufficient to serve as a diffusion barrier for conductive plug 220. For example, the thickness of the barrier layer may be in the range from about 100 Ȧ to about 1,500 Ȧ.The conductive plug 220 is formed in the substrate 102 and the overlying first buried insulation layer 104 (by initially etching a contact hole using suitable lithography and etching techniques) to yield structure 275. The conductive plug 220 may be made of any suitable conductive material or materials. Examples of suitable conductive materials include one or more of copper, tungsten, gold, silver, aluminum, and any alloys thereof. 
In one embodiment, the conductive material is tungsten. The barrier layer and the conductive plug 220 may be deposited using CVD or PVD techniques. The conductive plug removes heat from the conductive layer 106 and transfers it up through the structure to other layers or structures (not shown).Referring to FIG. 8, another embodiment of the present invention is illustrated in which at least two conductive plugs 220 are formed in the substrate 102 and the overlying first buried insulation layer 104.Referring to FIGS. 2 to 5 and 7, another specific example of the present invention is described. Specifically referring to FIG. 1, the first structure 100 is formed which contains the bulk silicon layer 102, the first buried insulation layer 104, over the bulk silicon layer 102, and the conductive layer 106 over the first buried insulation layer 104 as is described below. Initially, the bulk silicon substrate or wafer 102 is provided and the insulation layer 104 containing silicon dioxide is then formed over the bulk silicon substrate or wafer 102 by CVD techniques. Either low pressure chemical vapor deposition (LPCVD) or plasma enhanced chemical vapor deposition (PECVD) may be employed. In this embodiment, the insulation layer 104 is formed by PECVD using either silane and oxygen or silane and nitrous oxide. In this embodiment, the insulation layer 104 has a thickness of about 1,000 Ȧ. The conductive layer 106 is formed over the insulation layer 104 from a suitable metal or metal silicide. In this embodiment, titanium is sputtered over the insulation layer 104 to a thickness of about 1,100 Ȧ. Alternatively, one or more of chromium, molybdenum, tantalum, platinum, and tungsten can be used in place of or in addition to titanium.Referring to FIG. 3, the second structure 200 is provided. The second structure 200 contains a bulk silicon layer 202 and an insulation layer 204 there over. In this embodiment, the insulation layer 204 contains silicon dioxide. Also in this embodiment, the thickness of the insulation layer 204 is about 1,000 Ȧ.Referring to FIG. 4, the first structure 100 is bonded to the second structure 200 via the first structure's conductive layer 106 and the second structure's insulation layer 204 to yield a combined structure 250. The conductive layer 106 and the insulation layer 204 are fused by application of heat for a sufficient period of time to bond the first and second structures 100 and 200. For example, the first and second structures 100 and 200 are held together for about 3 hours under a temperature of about 1,050[deg.] C.Referring to FIG. 5, the bulk silicon layer 202 of FIG. 4 of the second structure 200 is etched to a desired thickness to provide an SOI substrate 250a and specifically a device layer 210. The SOI substrate 250a contains the bulk silicon layer 102, the first buried insulation layer 104, the conductive layer 106, the second buried insulation layer 204, and the device layer 210. The thickness of the device layer 210 is about 2,000 Ȧ. The thickness of each of the first and second buried insulation layers 104 and 204 (formerly insulation layers 104 and 204) is about 1,000 Ȧ. The thickness of the conductive layer 106 remains about the same as initially deposited. In this embodiment, the conductive layer 106 has a thickness that is about 55% of the combined thickness of both the first and second buried insulation layers 104 and 204.The SOI substrate 250a has good heat removal properties due to the presence of the conductive layer 106. 
In particular, the high thermal conductivity of titanium or even titanium silicide (relative to silicon dioxide) removes heat that may locally accumulate in certain areas (typically near or under devices and/or conductive structures) of the device layer and the buried insulation layers. The high thermal conductivity of titanium also dissipates heat that may locally accumulate in certain areas of the device layer and the buried insulation layers (or distributes the heat throughout the titanium silicide layer).Referring to FIG. 7, additional heat removal may be removed from SOI substrate 250a by optionally forming at least one conductive plug 230 in the device layer 210 and the second buried insulation layer 204 to thermally contact the conductive layer 106 to form structure 290. In one embodiment, conductive plug 230 contains an optional barrier layer and a conductive material. Use of an optional barrier layer (not shown) depends upon the identity of the conductive material of the conductive plug 230. The barrier layer, if employed, serves as a diffusion barrier layer preventing the conductive material of the conductive plug 230 from diffusing into the device layer 210. The barrier layer may be made of any suitable conductive material or materials. Examples of suitable conductive materials for the barrier layer include titanium nitride, tungsten, tantalum, tungsten-titanium alloys, tantalum silicon nitride, tungsten nitride, niobium, molybdenum and combinations thereof. The barrier layer may be formed using any suitable technique to a thickness sufficient to serve as a diffusion barrier for conductive plug 230. For example, the thickness of the barrier layer may be in the range from about 100 Ȧ to about 1,500 Ȧ.The conductive plug 230 is formed in device layer 210 and the underlying second buried insulation layer 204 (by initially etching a contact hole using suitable lithography and etching techniques) to yield structure 290. The conductive plug 230 may be made of any suitable conductive material or materials. Examples of suitable conductive materials include one or more of copper, tungsten, gold, silver, aluminum, and any alloys thereof. In one embodiment, the conductive material is tungsten. The barrier layer and the conductive plug 230 may be deposited using CVD or PVD techniques. The conductive plug removes heat from the conductive layer 106 and transfers it up through the structure to other layers or structures (not shown).Although the invention has been shown and described with respect to a certain embodiment or embodiments, it is obvious that equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In particular regard to the various functions performed by the above described elements (components, assemblies, devices, compositions, etc.), the terms (including a reference to a "means") used to describe such elements are intended to correspond, unless otherwise indicated, to any element which performs the specified function of the described element (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary embodiment or embodiments of the invention. 
In addition, while a particular feature of the invention may have been described above with respect to only one or more of several illustrated embodiments, such feature may be combined with one or more other features of the other embodiments, as may be desired and advantageous for any given or particular application. |
A semiconductor package having a mechanical fuse therein and methods to form a semiconductor package having a mechanical fuse therein are described. For example, a semiconductor structure includes a semiconductor package. A semiconductor die is housed in the semiconductor package. A microelectromechanical system (MEMS) device is housed in the semiconductor package. The MEMS device has a suspended portion. A mechanical fuse is housed in the semiconductor package and either coupled to, or decoupled from, the suspended portion of the MEMS device. |
CLAIMS What is claimed is: 1. A semiconductor structure, comprising: a semiconductor package; a semiconductor die housed in the semiconductor package; a microelectromechanical system (MEMS) device housed in the semiconductor package, the MEMS device having a suspended portion; and a mechanical fuse housed in the semiconductor package and coupled to the suspended portion of the MEMS device. 2. The semiconductor structure of claim 1, wherein the semiconductor package comprises a bumpless build-up layer (BBUL) substrate, wherein the semiconductor die is embedded in the BBUL substrate, and the MEMS device and mechanical fuse are disposed in one or more layers of the BBUL substrate, wherein the MEMS device and mechanical fuse are disposed above an active surface of the semiconductor die, and wherein the BBUL substrate is a coreless substrate. 3. The semiconductor structure of claim 1, wherein the MEMS device comprises a singly- clamped cantilever or doubly-clamped beam structure, and the mechanical fuse is coupled to the cantilever or beam structure. 4. The semiconductor structure of claim 1, wherein the suspended portion of the MEMS device has an effective spring constant, and wherein the mechanical fuse modifies the effective spring constant of the suspended portion. 5. The semiconductor structure of claim 1, wherein the suspended portion of the MEMS device has a resonance frequency, and wherein the mechanical fuse modifies the resonance frequency of the suspended portion. 6. The semiconductor structure of claim 1, further comprising: one or more additional mechanical fuses housed in the semiconductor package and coupled to the suspended portion of the MEMS device. 7. The semiconductor structure of claim 1, wherein the mechanical fuse and the MEMS device comprise copper. 8. The semiconductor structure of claim 1, wherein the MEMS device is electrically coupled to the semiconductor die. 9. A semiconductor structure, comprising: a semiconductor package; a semiconductor die housed in the semiconductor package; a microelectromechanical system (MEMS) device housed in the semiconductor package, the MEMS device having a suspended portion; and a mechanical fuse housed in the semiconductor package and decoupled from the suspended portion of the MEMS device. 10. The semiconductor structure of claim 9, wherein the semiconductor package comprises a bumpless build-up layer (BBUL) substrate, wherein the semiconductor die is embedded in the BBUL substrate, and the MEMS device and mechanical fuse are disposed in one or more layers of the BBUL substrate, wherein the MEMS device and mechanical fuse are disposed above an active surface of the semiconductor die, and wherein the BBUL substrate is a coreless substrate. 11. The semiconductor structure of claim 9, wherein the MEMS device comprises a singly- clamped cantilever or doubly-clamped beam structure, and the mechanical fuse is decoupled from the cantilever or beam structure. 12. The semiconductor structure of claim 9, further comprising: one or more additional mechanical fuses housed in the semiconductor package and decoupled from the suspended portion of the MEMS device. 13. The semiconductor structure of claim 9, further comprising: one or more additional mechanical fuses housed in the semiconductor package and coupled to the suspended portion of the MEMS device. 14. The semiconductor structure of claim 9, wherein the mechanical fuse and the MEMS device comprise copper. 15. 
The semiconductor structure of claim 9, wherein the MEMS device is electrically coupled to the semiconductor die. 16. A method of modifying a mechanical property for a microelectromechanical system (MEMS) device of a semiconductor structure, the method comprising: applying a voltage to a MEMS structure comprising the MEMS device and a mechanical fuse coupled to a suspended portion of the MEMS device; and decoupling the mechanical fuse from the suspended portion of the MEMS device by the applying of the voltage. 17. The method of claim 16, wherein decoupling the mechanical fuse comprises using a thermal rupture mechanism. 18. The method of claim 17, the mechanical fuse and the MEMS device comprise copper, and the thermal rupture mechanism comprises melting a portion of the mechanical fuse, but not melting the MEMS device. 19. The method of claim 16, wherein decoupling the mechanical fuse comprises using an electromigration rupture mechanism. 20. The method of claim 16, wherein the MEMS device comprises a singly-clamped cantilever or doubly-clamped beam structure, and decoupling the mechanical fuse comprises decoupling from the cantilever or beam structure. |
Semiconductor Package with Mechanical Fuse TECHNICAL FIELD Embodiments of the invention are in the field of semiconductor packages and, in particular, semiconductor packages with mechanical fuses. BACKGROUND Today's consumer electronics market frequently demands complex functions requiring very intricate circuitry. Scaling to smaller and smaller fundamental building blocks, e.g. transistors, has enabled the incorporation of even more intricate circuitry on a single die with each progressive generation. Semiconductor packages are used for protecting an integrated circuit (IC) chip or die, and also to provide the die with an electrical interface to external circuitry. With the increasing demand for smaller electronic devices, semiconductor packages are designed to be even more compact and must support larger circuit density. For example, some semiconductor packages now use a coreless substrate, which does not include the thick resin core layer commonly found in conventional substrates. Furthermore, the demand for higher performance devices results in a need for an improved semiconductor package that enables a thin packaging profile and low overall warpage compatible with subsequent assembly processing. Furthermore, for the past several years, microelectromechanical systems (MEMS) structures have been playing an increasingly important role in consumer products. For example, MEMS devices, such as sensors, actuators, and mirrors, can be found in products ranging from air-bag triggers in vehicles to displays in the visual arts industry. As these technologies mature, the demands on precision and functionality of the MEMS structures have escalated. For example, optimal performance may depend on the ability to fine-tune the characteristics of various components of these MEMS structures. Furthermore, consistency requirements for the performance of MEMS devices (both intra-device and device-to-device) often dictates that the processes used to fabricate such MEMS devices need to be extremely sophisticated. Although packaging scaling is typically viewed as a reduction in size, the addition of functionality in a given space is also considered. However, structural issues may arise when attempting to package semiconductor die with additional functionality also housed in the package. For example, the addition of packaged MEMS devices may add functionality, but ever decreasing space availability in a semiconductor package may provide obstacles to adding such functionality. BRIEF DESCRIPTION OF THE DRAWINGS Figures 1A-1C illustrate plan views of a cantilever MEMS device having a mechanical fuse (m-FUSE), and subsequent fusing of the cantilever, in accordance with an embodiment of the present invention. Figure 2 illustrates a variety of single-clamped cantilever or double-clamped beam MEMS structures (a-g) having mechanical fuses included therein, in accordance with an embodiment of the present invention. Figures 3A-3C illustrate a MEMS device having five fuses/fuse pairs, and corresponding resonant frequency plot and spring constant plot, in accordance with an embodiment of the present invention. Figures 4A (shown as 4A-1, 4A-2 and 4A-3) and 4B (shown as 4B-1, 4B-2 and 4B-3) depict results from ANSYS finite element mechanical simulations for a MEMS device having associated fuse widths of 5 microns and 2 microns, respectively, in accordance with an embodiment of the present invention. 
Figures 5A and 5B depict results from ANSYS finite element thermal simulations for a MEMS device having associated fuse widths of 2 microns and 5 microns, respectively, in accordance with an embodiment of the present invention.
Figures 6A-6O illustrate cross-sectional views of various operations in a process of fabricating a packaged MEMS device having an m-FUSE, in accordance with an embodiment of the present invention.
Figure 7 includes scanning electron microscope (SEM) images of singly-clamped cantilever structures formed in a laminate layer of a bumpless build up layer (BBUL) package, in accordance with an embodiment of the present invention.
Figure 8 includes SEM images of doubly-clamped beam structures formed in a BBUL laminate layer, in accordance with an embodiment of the present invention.
Figure 9 is a schematic of a computer system, in accordance with an embodiment of the present invention.

DESCRIPTION OF THE EMBODIMENTS

Semiconductor packages with mechanical fuses are described. In the following description, numerous specific details are set forth, such as packaging architectures, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present invention. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale. One or more embodiments described herein are directed to semiconductor packages having one or more microelectromechanical systems (MEMS) structures incorporated therein. In an embodiment, one or more mechanical fuses is included in the semiconductor package along with the MEMS structure. Spring constant and resonance tuning may be performed by including a MEMS mechanical fuse in packaging build up layers, such as bumpless build up layers (BBUL). The MEMS structures may include, but are not limited to, actuators and sensors. One or more embodiments described herein may be applicable to enabling a method to tune, calibrate, or program an effective spring constant or resonance (e.g., by use of a mechanical fuse, or m-FUSE) of a MEMS actuating or sensing system fabricated based on packaging build up layers. In an embodiment, the tuning, calibrating or programming of the effective spring constant or resonance is used for one or more of the following: (a) redundancy implementation for packaged MEMS-based systems, (b) self-repair for packaged MEMS-based systems, (c) reconfiguration of packaged MEMS-based systems at a factory or at a customer site, or (d) improved yields for packaged MEMS-based systems by integrating one or more m-FUSEs with advanced built-in self-test and self-repair approaches. One or more embodiments described herein may be applicable to enabling advanced mechanical array functionality, e.g., non-volatile based memory systems, by mechanically fusing and mechanical sensing read out using a combination of the appropriate m-FUSE systems. One or more embodiments described herein may be applicable to enabling programming of mechanical fuses to modify effective spring constants or resonance(s) using a thermal rupture mechanism or an electromigration rupture mechanism.
For example, applying voltage to such an m-FUSE may allow "mechanical" programming for laminate MEMS systems embedded within buildup layers of a semiconductor package. Such an approach is distinguished from line-of-sight laser ablation typically used for electrical fusing in silicon-based platforms. Single or multiple mechanical fuse programming in MEMS sensor or actuator systems may, in an embodiment, be facilitated by fabricating such m-FUSEs in buildup layers of packaging technology (e.g., a specific embodiment based on BBUL is described below in association with Figures 6A-6O), but conventional packaging technologies may also be used. Once fabricated, an m-FUSE may be subjected to an applied voltage to invoke either a thermal rupture mechanism or an electromigration rupture mechanism. The specific mechanism may depend on design rules and material selection. In either case, fusing type behavior may be derived based on a highest stress state being within an actuating beam or sensor, and not on the mechanical fuses themselves. Accordingly, in an embodiment, buildup layers of packaging (BBUL or conventional substrates) are used to fabricate a series of mechanical fuses, which are ultimately programmed (e.g., to tune effective spring constants or resonance frequencies of applicable systems) using a thermal or electromigration rupture mechanism by applying a voltage to the mechanical fuses. Approaches described herein may be conceptualized as a mechanical analogy of conventional electrical fusing in silicon technology. However, the fusing is performed in packaging build up layers and not in the silicon platform. As mentioned above, embodiments of such off-silicon (e.g., in-package) fusing include applications in redundancy implementations, MEMS self-repair, MEMS reconfiguration based on need or application, and improvements in packaged MEMS yield by integrating m-FUSEs with advanced built-in self-test and self-repair approaches. A mechanical fuse as contemplated herein may be conceptualized based on a cantilever MEMS device model. For example, Figures 1A-1C illustrate plan views of a cantilever MEMS device having an m-FUSE, and subsequent fusing of the cantilever, in accordance with an embodiment of the present invention. Referring to Figure 1A, an initial fuse condition is shown for a laminate MEMS structure 100, e.g., a singly clamped cantilever (cantilever 102 and clamp 104) actuator with an initial effective length (L). The MEMS structure 100 is surrounded by mechanical fuses 106 on the periphery of the main actuator. Referring to Figure 1B, sufficient current 108 is applied to break one or more of the mechanical fuses 106. For example, the current 108 breaks fuses 106A-106D and changes the mechanical properties of the MEMS structure 100 by modifying the effective length from L to L', as depicted in Figure 1C. By modifying the effective length, a spring constant and resonance characteristic are fused for the MEMS structure 100 in a final use condition different from the spring constant and resonance characteristic in the initial fuse condition. Thus, electrical energy is used to selectively break mechanical fuses to provide a modified MEMS structure. It is to be understood that not all fuses need be broken for a fusing process. For example, in the structure of Figure 1C, fuses 106A-106D are broken while fuses 106E-106H are not. A variety of possibilities exist for geometries of mechanical fuses associated with MEMS structures.
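Before turning to those geometries, a brief aside (classical Euler-Bernoulli beam theory, offered here only as a hedged illustration; these formulas are not stated in the patent text): for a singly clamped cantilever of uniform rectangular cross-section with width w, thickness t, Young's modulus E, density rho, cross-sectional area A = wt, and second moment of area I = wt^3/12, the tip stiffness and first-mode resonant frequency are

$$k = \frac{3EI}{L^{3}}, \qquad f_{0} = \frac{(1.875)^{2}}{2\pi}\sqrt{\frac{EI}{\rho A L^{4}}}.$$

So if breaking fuses lengthens the compliant section from L to L' > L, the spring constant scales by (L/L')^3 and the resonance by (L/L')^2; if fusing shortens it, both move the other way. This is consistent with the monotonic steps in the resonant frequency and spring constant plots discussed next.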
As examples of such geometries, Figure 2 illustrates a variety of single-clamped cantilever or double-clamped beam MEMS structures (a-g) having mechanical fuses included therein, in accordance with an embodiment of the present invention. The mechanical fuses are illustrated as being in an initial fuse (e.g., unbroken) state in Figure 2. In a general embodiment, each mechanical fuse has a thin, breakable section, but with sufficient structural stiffness to ensure integrity of the fuse even if the MEMS structure (e.g., cantilever, beam, actuator) is in a resonance mode. In one such embodiment, the breakable section of each fuse is breakable only upon targeted application of a current therethrough, such as a programming current. Stiffness and resonant frequency changes arising from process variation may be compensated for in a packaged MEMS device post-fabrication. In an embodiment, such compensation is achieved by programming the associated fuses of a MEMS device, e.g., by design at a factory or customer site. In one such embodiment, the programming is performed by selectively breaking mechanical fuses, as described above in association with Figures 1A-1C. In an embodiment, a MEMS device having mechanical fuses associated therewith is designed such that fuses are broken sequentially. As an example, Figures 3A-3C illustrate a MEMS device 300 having five fuses/fuse pairs labeled Fuse 1 - Fuse 5, and corresponding resonant frequency plot 302 and spring constant plot 304, in accordance with an embodiment of the present invention. Referring to plots 302 and 304, both the resonant frequency and the spring constant of MEMS structure 300 decrease with increasing number of fuse breaks, assuming that for a later fuse to break (e.g., Fuse 3) earlier fuses must first or simultaneously be broken (e.g., Fuse 1 and then Fuse 2). Each of plots 302 and 304 shows results for maximum (302A and 304A), nominal (302B and 304B), and minimum (302C and 304C) values due to process variation of copper thickness (e.g., 15um +/- 5um), processes for which are described in greater detail below. In an embodiment, the MEMS/fuse structures are designed to have maximum stresses occur in a beam or cantilever of a MEMS device, as opposed to within an associated fuse. Such a design enables user control over the timing of fuse breakage, or control to not fuse the device at all, even under operating conditions of the MEMS structure. As an example, Figures 4A (shown as 4A-1, 4A-2 and 4A-3) and 4B (shown as 4B-1, 4B-2 and 4B-3) depict results from ANSYS finite element mechanical simulations for a MEMS device having associated fuse widths of 5 microns and 2 microns, respectively, in accordance with an embodiment of the present invention. Referring to Figures 4A-1 - 4A-3, a starting structure 400A has a 12 micron beam width and an associated 5 micron fuse width. Structure 402A depicts the case for two broken fuse traces (or a pair of broken fuse traces), while structure 404A depicts the case for four broken fuse traces (or two pairs of broken fuse traces). Corresponding beam bending in the Y axis or Z axis is depicted below each corresponding structure in Figures 4A-1 - 4A-3. Referring to Figures 4B-1 - 4B-3, a starting structure 400B has a 9 micron beam width and an associated 2 micron fuse width. Structure 402B depicts the case for two broken fuse traces (or a pair of broken fuse traces), while structure 404B depicts the case for four broken fuse traces (or two pairs of broken fuse traces).
Corresponding beam bending in the Y axis or Z axis is depicted below each corresponding structure in Figures 4B-1 - 4B-3. In both examples of Figures 4A and 4B, the maximum stress (Pa) occurs in the beam, not in the fuse. As such, the fuse provides sufficient mechanical support for the mechanical structure for different fuse/beam parameters. Furthermore, the ANSYS finite element simulations indicate changes in resonant frequency and mechanical spring constant as a result of opening or breaking fuses, consistent with classical beam bending theory. In an embodiment, fusing is performed by melting a portion of the fuse without impacting the remaining features of the associated MEMS device. As an example, Figures 5A and 5B depict results from ANSYS finite element thermal simulations for a MEMS device having associated fuse widths of 2 microns and 5 microns, respectively, in accordance with an embodiment of the present invention. Referring to Figures 5A and 5B, a starting structure 500A has a 9 micron beam width and an associated 2 micron fuse width. Another starting structure 500B has a 12 micron beam width and an associated 5 micron fuse width. In either case, plots of voltage distribution and temperature distribution across the structures indicate highest voltage and temperature within the fuse portions of the structure. As an example, in the case that a temperature of 1083 degrees Celsius is exceeded in the fuse, a copper-based fuse may melt, effectively breaking the fuse and, hence, programming the MEMS device. In an embodiment, a voltage of approximately 0.2V is applied to break the fuse. A packaged MEMS device and associated one or more fuses may be housed in a variety of packaging options. One such option is housing in a coreless substrate formed by a BBUL process. For example, Figures 6A-6O illustrate cross-sectional views of various operations in a process of fabricating a packaged MEMS device having an m-FUSE, in accordance with an embodiment of the present invention. Referring to Figure 6A, a simplified view of a coreless carrier 600 including two panel sides 602 and 602' is depicted. A fully embedded process may be performed to package die 604/604' on either panel 602/602', respectively. As an example, Figure 6B depicts a BBUL fully embedded die process up to level 2 (L2) metal layer definition. BBUL is a processor packaging technology that is bumpless since it does not use the usual small solder bumps to attach the silicon die to the processor package wires. It has build-up layers since it is grown or built-up around the silicon die. Some semiconductor packages now use a coreless substrate, which does not include the thick resin core layer commonly found in conventional substrates. In an embodiment, as part of the BBUL process, electrically conductive vias and routing layers are formed above the active side of the semiconductor die 604/604' using a semi-additive process (SAP) to complete remaining layers. Thus, referring again to Figure 6B, a semiconductor die may be packaged on a panel of a carrier. Carrier 600 may be provided having planar panels or panels with a plurality of cavities disposed therein, each sized to receive a semiconductor die 604/604'. During processing, identical structures (e.g., 602 and 602') may be mated in order to build a back-to-back apparatus for processing utility. Consequently, processing throughput is effectively doubled.
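As a hedged aside on the thermal rupture numbers above (copper melting at 1083 degrees Celsius, a programming voltage of roughly 0.2V), the following back-of-envelope sketch shows why a sub-volt source can melt a micron-scale copper fuse. The fuse dimensions and the thermal resistance used are assumptions chosen for illustration; they are not values from the patent or from the ANSYS simulations it describes.

```python
# Back-of-envelope Joule-heating estimate for a copper mechanical fuse.
# All geometry and the thermal-resistance value below are illustrative
# assumptions, not values given in the patent text.
RHO_CU = 1.7e-8      # electrical resistivity of copper near room temp, ohm*m
T_MELT_CU = 1083.0   # melting point of copper, degrees Celsius

length = 30e-6                    # assumed fuse length, m
width, thickness = 2e-6, 15e-6    # assumed fuse cross-section, m

resistance = RHO_CU * length / (width * thickness)  # ~17 milliohms
voltage = 0.2                                       # applied programming voltage, V
power = voltage ** 2 / resistance                   # dissipated power, ~2.4 W

# With an assumed ~500 K/W thermal resistance from the fuse to the
# surrounding laminate, the steady-state temperature rise would be:
temp_rise = power * 500.0                           # ~1200 K
print(f"R = {resistance * 1e3:.1f} mOhm, P = {power:.2f} W, "
      f"rise ~ {temp_rise:.0f} K vs. melt point {T_MELT_CU:.0f} C")
```

Copper's resistivity rises several-fold between room temperature and its melting point, so the dissipated power self-limits somewhat in practice; the point is only that 0.2V across a milliohm-scale fuse concentrates watts of heating in the fuse, matching the temperature distributions of Figures 5A and 5B.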
The structure shown in Figure 6B may form part of a larger carrier/panel structure with a plurality of identical regions having a similar or the same cross-section. For example, a carrier may include panels with 1000 recesses on either side, allowing for fabrication of 2000 individual packages from a single carrier. The panel may include an adhesion release layer and an adhesive binder. A cutting zone may be provided at each end of the apparatus 602 or 602' for separation processing. A backside of a semiconductor die may be bonded to the panel with a die-bonding film. Encapsulating layers may be formed by a lamination process. In another embodiment, one or more encapsulation layers may be formed by spinning on and curing a dielectric upon a wafer-scale array of apparatuses of which the apparatus 602/602' is merely a subset for illustrative simplicity. Referring to Figure 6C, a MEMS bottom electrode 606/606' is formed, e.g., by a sequence of electroless plating, dry film resist (DFR) patterning, electroplating, and flash etch processing. The MEMS bottom electrode 606/606' may be provided for ultimate electrostatic actuation or capacitive sensing detection of a subsequently formed MEMS actuator/sensor structure. A release etch stop layer lamination layer 608 (e.g., low-E ABF or an ABF derivative having a lower plasma etch rate than a standard ABF film) is then provided, as depicted in Figure 6D. It is noted that only one side of the BBUL panel is shown for simplicity from Figure 6D and on. Referring to Figure 6E, a BBUL MEMS bottom permanent dielectric layer 610 is formed, e.g., by deposition or lamination. In an embodiment, for this flow, the permanent dielectric layer 610 is an alumina (AlOx)-filled dielectric film having an order of magnitude lower etch rate (e.g., in a CF4/O2 plasma) compared with a standard ABF film. The BBUL MEMS bottom permanent dielectric layer 610 may then be patterned to form patterned dielectric layer 612, as depicted in Figure 6F. In an embodiment, the BBUL MEMS bottom permanent dielectric layer 610 is patterned to form patterned dielectric layer 612 using DFR lithography and etch processing. Referring to Figure 6G, a BBUL MEMS bottom sacrificial layer 614 is defined (e.g., to nominally define a gap height for a subsequently formed actuator gap). Subsequently, anchor patterning is performed by, e.g., a CO2 laser via process and electroless copper plating to form copper layer 616. A BBUL MEMS structure 618 (e.g., an anchor 620 and cantilever 622) is then fabricated, e.g., by DFR patterning and copper electroplating on electroless copper layer 616, as depicted in Figure 6H. Referring to Figure 6I, remaining portions of electroless copper layer 616 are removed, e.g., by a selective flash etch. A BBUL MEMS top sacrificial layer 624 is then defined, e.g., via an ABF lamination process, as depicted in Figure 6J. Referring to Figure 6K, a copper seal layer 626 is formed, e.g., by electroless copper plating, followed by patterned DFR film 628 formation to define open regions in the copper seal layer 626. A patterned copper seal layer 630 having release holes therein is then provided, e.g., by copper electroplating, followed by selective flash etch of copper remaining from electroless copper layer 626, as depicted in Figure 6L. Referring to Figure 6M, a DFR layer 634 (or other patterning material layer) lamination followed by photo-patterning is performed to aid in initiating release of structure 630.
A full BBUL MEMS release operation may be used to release structure 630, e.g., by plasma ashing through the release holes of the copper seal layer, as depicted in Figure 6N. As depicted, in an embodiment, the plasma release operation removes the top ABF sacrificial layer 624 and the bottom ABF sacrificial layer while leaving the AlOx dielectric layer to remain. It is to be understood that similar release holes may be patterned on a MEMS cantilever if the dimensions of the cantilever structure are too large for successful isotropic plasma undercutting. Referring to Figure 6O, a BBUL MEMS cavity seal operation is performed, e.g., using a copper foil lamination layer 632 above the copper seal layer 630. The cavity thus formed, and depicted in Figure 6O, defines a local environment for the BBUL MEMS device 618. Also depicted is a fuse feature 699. The fuse feature 699 is depicted with dashed lines to indicate that the fuse feature may or may not still be present in the structure of Figure 6O. As described in association with earlier Figures, the exact location of the fuse feature 699 may vary. Although not depicted, an array of external contacts may then be formed above the structure depicted in Figure 6O. Regarding the overall packaging process described in association with Figures 6A-6O, in an embodiment, the substrate thus formed is a coreless substrate since a panel is used to support packaging of semiconductor die 604 through to formation of an array of external conductive contacts. The panel is then removed to provide a coreless package for the semiconductor die. Accordingly, in an embodiment, the term "coreless" is used to mean that the support upon which the package was formed for housing a die is ultimately removed at the end of a build-up process. In a specific embodiment, a coreless substrate is one that does not include a thick core after completion of the fabrication process. As an example, a thick core may be one composed of a reinforced material such as is used in a motherboard and may include conductive vias therein. It is to be understood that die-bonding film may be retained or may be removed. In either case, inclusion or exclusion of a die-bonding film following removal of the panel provides a coreless substrate. Still further, the substrate may be considered a coreless substrate because it does not include a thick core such as a fiber reinforced glass epoxy resin. In an embodiment, an active surface of semiconductor die 604 includes a plurality of semiconductor devices, such as but not limited to transistors, capacitors and resistors interconnected together by a die interconnection structure into functional circuits to thereby form an integrated circuit. As will be understood to those skilled in the art, the device side of the semiconductor die includes an active portion with integrated circuitry and interconnections. The semiconductor die may be any appropriate integrated circuit device including but not limited to a microprocessor (single or multi-core), a memory device, a chipset, a graphics device, or an application specific integrated circuit according to several different embodiments. In another embodiment, more than one die is embedded in the same package. For example, in one embodiment, a packaged semiconductor die further includes a secondary stacked die. The first die may have one or more through-silicon vias disposed therein (TSV die). The second die may be electrically coupled to the TSV die through the one or more through-silicon vias.
In one embodiment, both dies are embedded in a coreless substrate. The packaged semiconductor die 604 may, in an embodiment, be a fully embedded and surrounded semiconductor die. As used in this disclosure, "fully embedded and surrounded" means that all surfaces of the semiconductor die are in contact with an encapsulating film (such as a dielectric layer) of a substrate, or at least in contact with a material housed within the encapsulating film. Said another way, "fully embedded and surrounded" means that all exposed surfaces of the semiconductor die are in contact with the encapsulating film of a substrate. The packaged semiconductor die 604 may, in an embodiment, be a fully embedded semiconductor die. As used in this disclosure, "fully embedded" means that an active surface and the entire sidewalls of the semiconductor die are in contact with an encapsulating film (such as a dielectric layer) of a substrate, or at least in contact with a material housed within the encapsulating film. Said another way, "fully embedded" means that all exposed regions of an active surface and the exposed portions of the entire sidewalls of the semiconductor die are in contact with the encapsulating film of a substrate. However, in such cases, the semiconductor die is not "surrounded" since the backside of the semiconductor die is not in contact with an encapsulating film of the substrate or with a material housed within the encapsulating film. In a first embodiment, a back surface of the semiconductor die protrudes from the global planarity surface of the die side of a substrate. In a second embodiment, no surface of the semiconductor die protrudes from the global planarity surface of the die side of a substrate. In contrast to the above definitions of "fully embedded and surrounded" and "fully embedded," a "partially embedded" die is a die having an entire surface, but only a portion of the sidewalls, in contact with an encapsulating film of a substrate (such as a coreless substrate), or at least in contact with a material housed within the encapsulating film. In further contrast, a "non-embedded" die is a die having at most one surface, and no portion of the sidewalls, in contact with an encapsulating film of a substrate (such as a coreless substrate), or in contact with a material housed within the encapsulating film. As mentioned briefly above, an array of external conductive contacts may subsequently be formed. In an embodiment, the external conductive contacts couple the formed substrate to a foundation substrate. The external conductive contacts may be used for electrical communication with the foundation substrate. In one embodiment, the array of external conductive contacts is a ball grid array (BGA). In other embodiments, the array of external conductive contacts is an array such as, but not limited to, a land grid array (LGA) or an array of pins (PGA). In an embodiment, as described above, the substrate is a BBUL substrate, as depicted in Figure 6O. In one such embodiment, mechanical fuses for programming a MEMS device are embedded within the buildup layers along with the MEMS device. The programming may be performed by later modifying effective spring constants/resonance frequencies of the MEMS device using a thermal rupture mechanism or electromigration rupture mechanism. In an embodiment, applying a voltage to the MEMS/fuse structure enables mechanical programming for laminate MEMS systems. Although described in detail above for a BBUL process, other process flows may be used instead.
For example, in another embodiment, die 604 is housed in a core of a substrate. In another embodiment, fan-out layers are used. The term "MEMS" generally refers to an apparatus incorporating some mechanical structure having a dimensional scale that is comparable to microelectronic devices. The mechanical structure is typically capable of some form of mechanical motion and has dimensions below approximately 250 microns. However, in an embodiment, a MEMS on package structure has a total size exceeding approximately 1 mm, but has a beam width on an order of approximately 10 microns. Thus, MEMS structures contemplated herein are, in an embodiment, any device that falls within the scope of MEMS technologies. For example, a MEMS structure may be any mechanical and electronic structure having a critical dimension of less than approximately 250 microns and fabricated using lithography, deposition, and etching processes above a substrate. In accordance with an embodiment of the present invention, the MEMS structure is a device such as, but not limited to, a resonator, a sensor, a detector, a filter or a mirror. In one embodiment, the MEMS structure is a resonator. In a specific embodiment, the resonator is one such as, but not limited to, a beam, a plate, a tuning fork, or a cantilever arm. In an embodiment, the fabricated MEMS device includes a cantilever structure. For example, Figure 7 includes scanning electron microscope (SEM) images of singly-clamped cantilever structures formed in a BBUL laminate layer, in accordance with an embodiment of the present invention. In another example, Figure 8 includes SEM images of doubly-clamped beam structures formed in a BBUL laminate layer, in accordance with an embodiment of the present invention. Embodiments of the present invention may be suitable for fabricating a system on a chip (SOC), e.g., for a smartphone or a tablet. In an embodiment, an m-FUSE structure is integrated and fabricated in a BBUL packaging fab. The same backend processing used for existing BBUL coreless packaging may be used as a base flow. Alternatively, the process flow for fuse integration with MEMS may be applicable to other packaging substrate technologies. Overall, in an embodiment, programmable mechanical fuses are used to tune the spring constant/resonance behavior for MEMS sensors and actuating systems fabricated by BBUL packaging buildup layer technology or other packaging technology. Figure 9 is a schematic of a computer system 900, in accordance with an embodiment of the present invention. The computer system 900 (also referred to as the electronic system 900) as depicted can embody a semiconductor package having a mechanical fuse therein according to any of the several disclosed embodiments and their equivalents as set forth in this disclosure. The computer system 900 may be a mobile device such as a netbook computer. The computer system 900 may be a mobile device such as a wireless smart phone. The computer system 900 may be a desktop computer. The computer system 900 may be a hand-held reader. In an embodiment, the electronic system 900 is a computer system that includes a system bus 920 to electrically couple the various components of the electronic system 900. The system bus 920 is a single bus or any combination of busses according to various embodiments. The electronic system 900 includes a voltage source 930 that provides power to the integrated circuit 910.
In some embodiments, the voltage source 930 supplies current to the integrated circuit 910 through the system bus 920. The integrated circuit 910 is electrically coupled to the system bus 920 and includes any circuit, or combination of circuits according to an embodiment. In an embodiment, the integrated circuit 910 includes a processor 912 that can be of any type. As used herein, the processor 912 may mean any type of circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor, or another processor. In an embodiment, the processor 912 includes or is included in a semiconductor package having a mechanical fuse therein, as disclosed herein. In an embodiment, SRAM embodiments are found in memory caches of the processor. Other types of circuits that can be included in the integrated circuit 910 are a custom circuit or an application-specific integrated circuit (ASIC), such as a communications circuit 914 for use in wireless devices such as cellular telephones, smart phones, pagers, portable computers, two-way radios, and similar electronic systems. In an embodiment, the processor 910 includes on-die memory 916 such as static random-access memory (SRAM). In an embodiment, the processor 910 includes embedded on-die memory 916 such as embedded dynamic random-access memory (eDRAM). In an embodiment, the integrated circuit 910 is complemented with a subsequent integrated circuit 911. Useful embodiments include a dual processor 913 and a dual communications circuit 915 and dual on-die memory 917 such as SRAM. In an embodiment, the dual integrated circuit 910 includes embedded on-die memory 917 such as eDRAM. In an embodiment, the electronic system 900 also includes an external memory 940 that in turn may include one or more memory elements suitable to the particular application, such as a main memory 942 in the form of RAM, one or more hard drives 944, and/or one or more drives that handle removable media 946, such as diskettes, compact disks (CDs), digital versatile disks (DVDs), flash memory drives, and other removable media known in the art. The external memory 940 may also be embedded memory 948 such as the first die in an embedded TSV die stack, according to an embodiment. In an embodiment, the electronic system 900 also includes a display device 950 and an audio output 960. In an embodiment, the electronic system 900 includes an input device such as a controller 970 that may be a keyboard, mouse, trackball, game controller, microphone, voice-recognition device, or any other input device that inputs information into the electronic system 900. In an embodiment, an input device 970 is a camera. In an embodiment, an input device 970 is a digital sound recorder. In an embodiment, an input device 970 is a camera and a digital sound recorder. As shown herein, the integrated circuit 910 may be implemented in a number of different embodiments, including a semiconductor package having a mechanical fuse therein according to any of the several disclosed embodiments and their equivalents, an electronic system, a computer system, one or more methods of fabricating an integrated circuit, and one or more methods of fabricating an electronic assembly that includes a semiconductor package having a mechanical fuse therein according to any of the several disclosed embodiments as set forth herein in the various embodiments and their art-recognized equivalents.
The elements, materials, geometries, dimensions, and sequence of operations can all be varied to suit particular I/O coupling requirements, including array contact count and array contact configuration, for a microelectronic die embedded in a processor mounting substrate according to any of the several disclosed embodiments of a semiconductor package having a mechanical fuse therein and their equivalents. A foundation substrate may be included, as represented by the dashed line of Figure 9. Passive devices may also be included, as is also depicted in Figure 9. Embodiments of the present invention include semiconductor packages with mechanical fuses. In an embodiment, a semiconductor structure includes a semiconductor package. A semiconductor die is housed in the semiconductor package. A microelectromechanical system (MEMS) device is housed in the semiconductor package. The MEMS device has a suspended portion. A mechanical fuse is housed in the semiconductor package and coupled to the suspended portion of the MEMS device. In one embodiment, the semiconductor package includes a bumpless build-up layer (BBUL) substrate. In one embodiment, the semiconductor die is embedded in the BBUL substrate, and the MEMS device and mechanical fuse are disposed in one or more layers of the BBUL substrate. In one embodiment, the MEMS device and mechanical fuse are disposed above an active surface of the semiconductor die. In one embodiment, the BBUL substrate is a coreless substrate. In one embodiment, the MEMS device includes a singly-clamped cantilever or doubly-clamped beam structure, and the mechanical fuse is coupled to the cantilever or beam structure. In one embodiment, the suspended portion of the MEMS device has an effective spring constant, and the mechanical fuse modifies the effective spring constant of the suspended portion. In one embodiment, the suspended portion of the MEMS device has a resonance frequency, and the mechanical fuse modifies the resonance frequency of the suspended portion. In one embodiment, the semiconductor structure further includes one or more additional mechanical fuses housed in the semiconductor package and coupled to the suspended portion of the MEMS device. In one embodiment, the mechanical fuse and the MEMS device are composed of copper. In one embodiment, the MEMS device is electrically coupled to the semiconductor die. In an embodiment, a semiconductor structure includes a semiconductor package. A semiconductor die is housed in the semiconductor package. A microelectromechanical system (MEMS) device is housed in the semiconductor package. The MEMS device has a suspended portion. A mechanical fuse is housed in the semiconductor package and decoupled from the suspended portion of the MEMS device. In one embodiment, the semiconductor package includes a bumpless build-up layer (BBUL) substrate. In one embodiment, the semiconductor die is embedded in the BBUL substrate, and the MEMS device and mechanical fuse are disposed in one or more layers of the BBUL substrate. In one embodiment, the MEMS device and mechanical fuse are disposed above an active surface of the semiconductor die. In one embodiment, the BBUL substrate is a coreless substrate. In one embodiment, the MEMS device includes a singly-clamped cantilever or doubly-clamped beam structure, and the mechanical fuse is decoupled from the cantilever or beam structure.
In one embodiment, the semiconductor structure further includes one or more additional mechanical fuses housed in the semiconductor package and decoupled from the suspended portion of the MEMS device. In one embodiment, the semiconductor structure further includes one or more additional mechanical fuses housed in the semiconductor package and coupled to the suspended portion of the MEMS device. In one embodiment, the mechanical fuse and the MEMS device are composed of copper. In one embodiment, the MEMS device is electrically coupled to the semiconductor die. In an embodiment, a method of modifying a mechanical property for a microelectromechanical system (MEMS) device of a semiconductor structure includes applying a voltage to a MEMS structure including the MEMS device and a mechanical fuse coupled to a suspended portion of the MEMS device. The mechanical fuse is decoupled from the suspended portion of the MEMS device by the applying of the voltage. In one embodiment, decoupling the mechanical fuse includes using a thermal rupture mechanism. In one embodiment, the mechanical fuse and the MEMS device are composed of copper, and the thermal rupture mechanism includes melting a portion of the mechanical fuse, but not melting the MEMS device. In one embodiment, decoupling the mechanical fuse includes using an electromigration rupture mechanism. In one embodiment, one or more additional mechanical fuses is coupled to a suspended portion of the MEMS device. The decoupling of the mechanical fuse further includes decoupling one or more of the additional mechanical fuses from the suspended portion of the MEMS device by the applying of the voltage. In one embodiment, one or more additional mechanical fuses is coupled to a suspended portion of the MEMS device. The decoupling of the mechanical fuse includes decoupling none of the additional mechanical fuses from the suspended portion of the MEMS device. In one embodiment, the MEMS structure is included in one or more layers of a bumpless build-up layer (BBUL) substrate of a semiconductor package. The applying of the voltage includes applying the voltage to external contacts of the semiconductor package. In one embodiment, the MEMS device includes a singly-clamped cantilever or doubly-clamped beam structure. Decoupling the mechanical fuse includes decoupling from the cantilever or beam structure. |
In an aspect, a heterojunction bipolar transistor (HBT) includes a sub-collector disposed on a collector. The collector has a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor. The HBT includes an emitter disposed on an emitter cap. The emitter has an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor. The HBT includes a base having a base contact located on the second side of the heterojunction bipolar transistor. |
CLAIMS
WHAT IS CLAIMED IS:
1. An apparatus comprising a heterojunction bipolar transistor comprising: a sub-collector disposed on a collector, the collector having a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor; an emitter disposed on an emitter cap, the emitter having an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor; and a base having a base contact located on the second side of the heterojunction bipolar transistor.
2. The apparatus of claim 1, wherein: the heterojunction bipolar transistor comprises an epitaxial stack; and the epitaxial stack includes the collector, the emitter, and the base.
3. The apparatus of claim 1, wherein the sub-collector comprises Indium Gallium Arsenide.
4. The apparatus of claim 1, wherein the base comprises at least one of Gallium Arsenide Antimonide or Indium Gallium Arsenide.
5. The apparatus of claim 1, wherein the emitter comprises Indium Phosphide.
6. The apparatus of claim 1, wherein the collector comprises Indium Phosphide.
7. The apparatus of claim 1, further comprising a silicon interposer.
8. The apparatus of claim 7, wherein the silicon interposer is coupled to the second side of the heterojunction bipolar transistor with a hybrid bond interface.
9. The apparatus of claim 7, wherein one or more passive components are embedded in the silicon interposer.
10. The apparatus of claim 7, wherein a plurality of radio frequency front end components are coupled to the silicon interposer.
11. The apparatus of claim 10, wherein the plurality of radio frequency front end components comprise at least one of: a variable gain amplifier; a component of a frequency synthesizer; a complementary metal oxide semiconductor low noise amplifier; a complementary metal oxide semiconductor beamformer; and a silicon-on-insulator switch.
12. The apparatus of claim 7, wherein the silicon interposer is coupled to the heterojunction bipolar transistor via a plurality of solder balls.
13. The apparatus of claim 1, further comprising: an antenna module coupled to the first side of the heterojunction bipolar transistor.
14. The apparatus of claim 13, wherein the antenna module comprises one or more antenna tiles and a package substrate.
15. The apparatus of claim 1, wherein the apparatus is selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.
16. A method of fabricating a semiconductor device comprising a heterojunction bipolar transistor, the method comprising: forming a sub-collector; forming a collector on the sub-collector, the collector having a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor; forming an emitter; forming an emitter cap on the emitter, the emitter having an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor; and forming a base having a base contact located on the second side of the heterojunction bipolar transistor.
17. The method of claim 16, further comprising: coupling, with a hybrid bond interface, a silicon interposer to the second side of the heterojunction bipolar transistor.
18. The method of claim 17, further comprising: hybrid bonding a complementary metal oxide semiconductor device to the silicon interposer.
19. The method of claim 18, wherein the complementary metal oxide semiconductor device comprises: at least one complementary metal oxide semiconductor beamformer; and at least one silicon-on-insulator (SOI) switch.
20. The method of claim 17, further comprising: embedding one or more passive components into the silicon interposer.
21. The method of claim 16, wherein: the heterojunction bipolar transistor comprises an epitaxial stack; and the epitaxial stack includes the collector, the emitter, and the base.
22. The method of claim 16, further comprising: bonding a sacrificial wafer to the second side of the heterojunction bipolar transistor.
23. The method of claim 22, further comprising: debonding the sacrificial wafer from the second side of the heterojunction bipolar transistor; hetero-integrating the heterojunction bipolar transistor with one or more chiplets on the second side of the heterojunction bipolar transistor; and hetero-integrating the heterojunction bipolar transistor with one or more chiplets on the first side of the heterojunction bipolar transistor.
24. The method of claim 16, wherein: the sub-collector comprises Indium Gallium Arsenide; the base comprises at least one of Gallium Arsenide Antimonide or Indium Gallium Arsenide; the emitter comprises Indium Phosphide; and the collector comprises Indium Phosphide.
25. The method of claim 16, further comprising: coupling, with a hybrid bond interface, a silicon interposer to the second side of the heterojunction bipolar transistor; and coupling, with a hybrid bond interface, a reconstituted complementary metal oxide semiconductor wafer to the silicon interposer, wherein one or more passive components are embedded into the silicon interposer, and wherein the reconstituted complementary metal oxide semiconductor wafer comprises a plurality of radio frequency front end components.
26. The method of claim 16, wherein the semiconductor device is incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle. |
RADIO FREQUENCY FRONT END (RFFE) HETERO-INTEGRATION

BACKGROUND OF THE DISCLOSURE

1. Field of the Disclosure

[0001] Aspects of this disclosure relate generally to integrated circuit (IC) fabrication, and particularly to radio frequency front end (RFFE) hetero-integration with Indium Phosphide (InP) on Silicon (Si) to reduce a size of a base collector junction area.

2. Description of the Related Art

[0002] In a semiconductor (also known as a chip or integrated circuit (IC)), at the power amplifier stage or the low noise amplifier stage, power gain decreases as frequency increases, typically about 15 decibels (dB) per decade of frequency. When the frequency goes above 70 Gigahertz (GHz), particularly 100 GHz and beyond, semiconductors have difficulty providing power gain.

[0003] Indium phosphide (InP) is one of the few technologies that can provide adequate power gains beyond 100 GHz (e.g., particularly 140 GHz and above). However, the base collector junction area of an InP heterojunction bipolar transistor (HBT) has a size large enough to accommodate the base contact surrounding the emitter, to reduce the base resistance. The resulting large base collector junction area leads to reduced gain, particularly at higher frequencies. In addition, the large base collector junction area increases the form factor. Semiconductors are typically hetero-integrated on the other side of an antenna tile. In hetero-integration, chips that are functionally different and that use different processes are stacked into a complete system, such as a system-on-a-chip (SOC).

[0004] As frequency increases, the size of antenna tiles decreases. For example, near 70 GHz and above, an antenna tile becomes smaller than the chip on the other side, thereby resulting in unused laminate space. Thus, as frequencies increase and antenna tiles become smaller, the wasted space increases due to the size of the base collector junction area because in a conventional HBT transistor, the base contact is on the same side as the collector contact and the emitter contact.

SUMMARY

[0005] The following presents a simplified summary relating to one or more aspects disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects or to delineate the
scope associated with any particular aspect. Accordingly, the following summary has the sole purpose to present certain concepts relating to one or more aspects relating to the mechanisms disclosed herein in a simplified form to precede the detailed description presented below.

[0006] In a first aspect, an apparatus includes a heterojunction bipolar transistor (HBT). The HBT includes a sub-collector disposed on a collector. The collector has a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor. The HBT includes an emitter disposed on an emitter cap. The emitter has an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor. The HBT includes a base having a base contact located on the second side of the heterojunction bipolar transistor.

[0007] In a second aspect, a method of fabricating a heterojunction bipolar transistor (HBT) includes forming a sub-collector and forming a collector on the sub-collector. The collector has a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor. The method includes forming an emitter and forming an emitter cap on the emitter. The emitter has an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor. The method includes forming a base having a base contact located on the second side of the heterojunction bipolar transistor.

[0008] Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The accompanying drawings are presented to aid in the description of various aspects of the disclosure and are provided solely for illustration of the aspects and not limitation thereof. A more complete understanding of the present disclosure may be obtained by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.

[0010] FIG. 1 illustrates an exemplary cross-section of a semiconductor structure with a backside collector contact, according to various aspects of the disclosure.
[0011] FIGS. 2A, 2B, 2C, and 2D illustrate a portion of a process to create a structure with a backside collector contact, according to aspects of the disclosure.

[0012] FIGS. 3A, 3B, 3C, and 3D illustrate using a sacrificial wafer to complete a process to create a structure with a backside collector contact, according to aspects of the disclosure.

[0013] FIGS. 4A, 4B, 4C, and 4D illustrate using a Silicon (Si) interposer to complete a process to create a structure with a backside collector contact, according to aspects of the disclosure.

[0014] FIG. 5 illustrates a process that includes forming a collector, an emitter, and a base of a semiconductor, according to aspects of the disclosure.

[0015] FIG. 6 illustrates an example process that includes patterning vias and metals, according to aspects of the disclosure.

[0016] FIG. 7 illustrates an example process that includes bonding a frontside to a sacrificial wafer, according to aspects of the disclosure.

[0017] FIG. 8 illustrates an example process that includes hybrid bonding and interconnecting an Si interposer, according to aspects of the disclosure.

[0018] FIG. 9 illustrates components of an integrated device in accordance with one or more aspects of the disclosure.

[0019] FIG. 10 illustrates an exemplary mobile device in accordance with one or more aspects of the disclosure.

[0020] FIG. 11 illustrates various electronic devices that may be integrated with an integrated device or a semiconductor device in accordance with one or more aspects of the disclosure.

DETAILED DESCRIPTION

[0021] Disclosed are systems and techniques to reduce a base collector junction area of a heterojunction bipolar transistor (HBT) by moving a collector contact to the opposite side of an emitter contact and a base contact. Technical advantages of this include, for example, reducing the base collector junction area by about 30% and increasing power gain by about 2 decibels (dB). A first mesa is used on an emitter-to-base side and a second mesa is used on a collector-to-base side. Various aspects are disclosed for hetero-integrating an HBT wafer (e.g., that includes Indium Phosphide (InP)), including using either (1) a sacrificial wafer or (2) a Silicon (Si) interposer on a backside.
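As a hedged plausibility check on the ~30% area reduction and ~2 dB gain figures quoted in [0021] (using standard small-signal HBT relations, not equations stated in this disclosure): the base-collector capacitance C_bc scales with junction area, and

$$f_{\max} \approx \sqrt{\frac{f_T}{8\pi R_b C_{bc}}}, \qquad U(f) \approx \left(\frac{f_{\max}}{f}\right)^{2},$$

where f_T is the transit frequency, R_b the base resistance, and U Mason's unilateral gain. Under these assumptions, a 30% reduction in C_bc raises f_max by a factor of 1/sqrt(0.7), roughly 1.20, i.e. about 20 log10(1.20), roughly 1.6 dB of gain at a fixed frequency, the same order as the approximately 2 dB improvement cited above.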
[0022] In one aspect, a sacrificial wafer is bonded to the HBT wafer to provide mechanical stability during the process to create the chip. After the process is complete, the sacrificial wafer is de-bonded. After dicing, chiplets (also referred to as a die) may be separately hetero-integrated on either side of the silicon wafer.

[0023] In another aspect, a Si interposer is used to interconnect HBTs created on the 300 millimeter (mm) Si wafer to chiplets on the other side. A chiplet is an integrated circuit block that has been designed to work with other chiplets to form larger, more complex chips. For example, a conventional chip is subdivided into functional circuit blocks, called chiplets. Thus, chiplets refer to the independent constituents which make up a large chip built out of multiple smaller dies. The Si interposer provides mechanical stability during the process and enables base and collector contacts and interconnects to be provided. The Si interposer may be used for hetero-integrating chips on the other side. For example, after the process is complete, hetero-integration may be performed with the InP-on-Si wafer on one side and chiplets on the other side of the Si interposer.

[0024] The systems and techniques described herein differ from a conventional InP structure in several ways. First, unlike a conventional semiconductor in which collector contacts are on the same side as the base contacts and the emitter contacts, the systems and techniques provide (e.g., on a 300mm Si substrate) collector contacts on an opposite side of the base contacts and the emitter contacts. Second, the systems and techniques provide a mesa on both the emitter side and the collector side. A mesa is an area on a semiconductor wafer where the semiconductor has not been etched away, thereby creating a flat-topped protrusion. In contrast, in a conventional semiconductor, the mesa is only on the emitter side. Third, the systems and techniques enable hetero-integration. Conventionally, a separate bonded wafer or interconnects are not provided. The systems and techniques use a Si interposer that is bonded on the emitter side with chiplets on the other side. The advantages of the systems and techniques described herein include about 30% reduction in base collector junction area, a 2 dB power gain increase (e.g., at 100 gigahertz (GHz)), lower cost, and a wafer scalable to 300mm (e.g., because of the use of a Si substrate).

[0025] Aspects of the disclosure are provided in the following description and related drawings directed to various examples provided for illustration purposes. Alternate aspects may be devised without departing from the scope of the disclosure. Additionally, well-known elements of the disclosure will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure.
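To make the contact-side asymmetry of [0021]-[0024] concrete, the following minimal Python sketch lists the epitaxial stack and contact placement described by the claims and the first example given later in this disclosure; the `Layer` helper, its field names, and the exact ordering shown are illustrative assumptions, not structures defined anywhere in this disclosure.

```python
# Minimal sketch of the backside-collector HBT stack. The "second side"
# (frontside) carries the emitter and base contacts; the "first side"
# (backside) carries the collector contact. Materials follow the claims;
# the Layer helper and field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str
    material: str
    doping: str
    contact_side: Optional[str]  # "frontside", "backside", or None

# Listed from the frontside (emitter) end toward the backside (collector) end.
hbt_stack = [
    Layer("emitter cap", "graded InP/InGaAs", "N+", "frontside"),  # emitter contact
    Layer("emitter", "InP", "N-", None),
    Layer("base", "GaAsSb or InGaAs", "P+", "frontside"),          # base contact
    Layer("collector", "InP", "N-", None),
    Layer("sub-collector", "InGaAs", "N+", "backside"),            # collector contact
]

# The structural point: only the collector contact sits on the backside, so
# the base contact no longer competes with it for frontside area, allowing
# the base-collector junction to shrink (~30% per this disclosure).
for layer in hbt_stack:
    side = layer.contact_side or "no contact"
    print(f"{layer.doping:>2} {layer.name:<13} ({layer.material:<17}) -> {side}")
```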
[0026] The words “exemplary” and/or “example” are used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” and/or “example” is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term “aspects of the disclosure” does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation.

[0027] Those of skill in the art will appreciate that the information and signals described below may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description below may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof, depending in part on the particular application, in part on the desired design, in part on the corresponding technology, etc.

[0028] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequence(s) of actions described herein can be considered to be embodied entirely within any form of non-transitory computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause or instruct an associated processor of a device to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, “logic configured to” perform the described action.

[0029] As used herein, the terms “user equipment” (UE) and “base station” are not intended to be specific or otherwise limited to any particular radio access technology (RAT), unless otherwise noted. In general, a UE may be any wireless communication device (e.g., a mobile phone, router, tablet computer, laptop computer, consumer asset tracking device, wearable device (e.g., smartwatch, glasses, augmented reality (AR) / virtual reality (VR) headset, etc.), vehicle (e.g., automobile, motorcycle, bicycle, etc.), Internet of Things (IoT) device, etc.) used by a user to communicate over a wireless communications network. A UE may be mobile or may (e.g., at certain times) be stationary, and may
communicate with a radio access network (RAN). As used herein, the term “UE” may be referred to interchangeably as an “access terminal” or “AT,” a “client device,” a “wireless device,” a “subscriber device,” a “subscriber terminal,” a “subscriber station,” a “user terminal” or UT, a “mobile device,” a “mobile terminal,” a “mobile station,” or variations thereof. Generally, UEs can communicate with a core network via a RAN, and through the core network the UEs can be connected with external networks such as the Internet and with other UEs. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over wired access networks, wireless local area network (WLAN) networks (e.g., based on Institute of Electrical and Electronics Engineers (IEEE) 802.11, etc.) and so on.[0030] A base station may operate according to one of several RATs in communication with UEs depending on the network in which it is deployed, and may be alternatively referred to as an access point (AP), a network node, a NodeB, an evolved NodeB (eNB), a next generation eNB (ng-eNB), a New Radio (NR) Node B (also referred to as a gNB or gNodeB), etc. A base station may be used primarily to support wireless access by UEs, including supporting data, voice, and/or signaling connections for the supported UEs. In some systems a base station may provide purely edge node signaling functions while in other systems it may provide additional control and/or network management functions. A communication link through which UEs can send RF signals to a base station is called an uplink (UL) channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). A communication link through which the base station can send RF signals to UEs is called a downlink (DL) or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to either an uplink / reverse or downlink / forward traffic channel.[0031] The term “base station” may refer to a single physical transmission-reception point (TRP) or to multiple physical TRPs that may or may not be co-located. For example, where the term “base station” refers to a single physical TRP, the physical TRP may be an antenna of the base station corresponding to a cell (or several cell sectors) of the base station. Where the term “base station” refers to multiple co-located physical TRPs, the physical TRPs may be an array of antennas (e.g., as in a multiple-input multiple-output (MIMO) system or where the base station employs beamforming) of the base station. Where the term “base station” refers to multiple non-co-located physical TRPs, the physical TRPs
may be a distributed antenna system (DAS) (a network of spatially separated antennas connected to a common source via a transport medium) or a remote radio head (RRH) (a remote base station connected to a serving base station). Alternatively, the non-co-located physical TRPs may be the serving base station receiving the measurement report from the UE and a neighbor base station whose reference RF signals (or simply “reference signals”) the UE is measuring. Because a TRP is the point from which a base station transmits and receives wireless signals, as used herein, references to transmission from or reception at a base station are to be understood as referring to a particular TRP of the base station.[0032] An “RF signal” comprises an electromagnetic wave of a given frequency that transports information through the space between a transmitter and a receiver. As used herein, a transmitter may transmit a single “RF signal” or multiple “RF signals” to a receiver. However, the receiver may receive multiple “RF signals” corresponding to each transmitted RF signal due to the propagation characteristics of RF signals through multipath channels. The same transmitted RF signal on different paths between the transmitter and receiver may be referred to as a “multipath” RF signal. As used herein, an RF signal may also be referred to as a “wireless signal,” a “radar signal,” a “radio wave,” a “waveform,” or the like, or simply a “signal” where it is clear from the context that the term “signal” refers to a wireless signal or an RF signal.[0033] As a first example, a semiconductor includes an N+ doped InGaAs sub-collector region located at a top of the semiconductor and an N- doped InP collector located below the N+ sub-collector. The N+ sub-collector has a collector contact located on a backside of the semiconductor. The semiconductor includes a P+ doped InGaAs or GaAsSb base located below the N- collector. The P+ base has a base contact located on the frontside of the semiconductor. The semiconductor includes an N- doped InP emitter located below the P+ base and an N+ doped emitter cap (graded InP and InGaAs) located below the emitter. The N+ emitter cap has an emitter contact located on the frontside of the semiconductor. The semiconductor implements a heterojunction bipolar transistor (HBT) on a Silicon (Si) substrate. The semiconductor includes (1) a first mesa located on the frontside of the semiconductor and associated with the emitter and (2) a second mesa located on the backside of the semiconductor and associated with the collector. The N+ sub-collector comprises Indium Gallium Arsenide (InGaAs), the N- collector comprises Indium Phosphide (InP), and the P+ base comprises Gallium Arsenide Antimonide (GaAsSb) or
Indium Gallium Arsenide (InGaAs). The semiconductor hetero-integrates an antenna module to the backside of the semiconductor, with a Silicon interposer hybrid bonded and interconnected to the frontside of the semiconductor, and a reconstituted complementary metal oxide semiconductor (CMOS) wafer hybrid bonded to the Silicon interposer. One or more passive components are embedded into the Silicon interposer. The reconstituted CMOS wafer includes at least one CMOS beamformer, and at least one silicon-on-insulator (SOI) switch. The semiconductor is incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a base station, a server, and a device in an automotive vehicle.[0034] As a second example, a semiconductor that includes a heterojunction bipolar transistor (HBT) may be fabricated. The fabrication includes: growing an epitaxial stack comprising the aforementioned layers of the InP HBT on a Silicon substrate, depositing emitter metal, performing a photo-resist strip, performing a lithography of an emitter mesa, performing an etch of the emitter mesa, performing a photo-resist strip, depositing Silicon Nitride, performing the lithography of a metal of a base, performing an etch of the Silicon Nitride, performing an etch of Indium Phosphide, depositing base metal, performing a lift-off, depositing additional Silicon Nitride, bonding a frontside of the semiconductor to a sacrificial wafer, grinding and etching the Silicon substrate down to a sub-collector, patterning a base, patterning a collector, depositing a dielectric, and patterning vias and metals. The collector contact is on the backside, whereas the emitter and base contacts are on the front side of the semiconductor. The fabrication process further includes debonding the sacrificial wafer from the frontside of the semiconductor and hetero-integrating the chiplets on either side of a Si interposer or laminate. The fabrication process also includes embedding one or more passive components into the Silicon interposer or laminate. The heterojunction bipolar transistor (HBT) includes: (1) an N+ sub-collector comprising Indium Gallium Arsenide (InGaAs), (2) an N- collector comprising Indium Phosphide (InP), and (3) a P+ base comprising Gallium Arsenide Antimonide (GaAsSb) or Indium Gallium Arsenide (InGaAs). The fabrication process includes (1) creating a first mesa located on the frontside of the semiconductor and associated with the emitter and (2) creating a second mesa located on the backside of the
semiconductor and associated with the collector. The fabrication process further includes hetero-integrating with the semiconductor at least one of: (1) a complementary metal oxide semiconductor (CMOS) low noise amplifier (LNA), (2) a complementary metal oxide semiconductor (CMOS) beamformer, (3) a silicon-on-insulator (SOI) switch, or (4) a Silicon interposer that includes one or more embedded passive components. The semiconductor is incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.[0035] As a third example, a semiconductor includes a heterojunction bipolar transistor (HBT) on a Silicon (Si) substrate, the HBT including: (1) a sub-collector located on the Silicon substrate, (2) a collector located below the sub-collector, the collector having a collector contact located on a backside of the semiconductor, (3) a base located below the collector, the base having a base contact located on a frontside of the semiconductor, (4) an emitter located below the base, the emitter having an emitter contact located on the frontside of the semiconductor, and (5) an emitter cap located below the emitter. The semiconductor includes a first mesa associated with the emitter that is located on the frontside of the semiconductor, and a second mesa associated with the collector that is located on the backside of the semiconductor. The sub-collector comprises Indium Gallium Arsenide, the collector comprises Indium Phosphide, the base comprises either: (i) Gallium Arsenide Antimonide or (ii) Indium Gallium Arsenide, and the emitter comprises Indium Phosphide. The semiconductor includes hetero-integration of an antenna module to the backside of the semiconductor, hybrid bonding a Silicon interposer to the frontside of the semiconductor, and hybrid bonding a reconstituted complementary metal oxide semiconductor (CMOS) wafer to the Silicon interposer. Passive components are embedded into the Silicon interposer. The reconstituted CMOS wafer includes multiple radio frequency front end (RFFE) components, such as, for example, a CMOS low noise amplifier (LNA), a CMOS beamformer, and a silicon-on-insulator (SOI) switch. The semiconductor is incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal
digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.[0036] As a fourth example, a semiconductor comprising a heterojunction bipolar transistor (HBT) is fabricated. The fabrication includes (1) forming a sub-collector on a Silicon substrate, (2) forming a collector located below the sub-collector, the collector having a collector contact located on a backside of the semiconductor, (3) forming a base located below the collector, the base having a base contact located on a frontside of the semiconductor, (4) forming an emitter located below the base, the emitter having an emitter contact located on the frontside of the semiconductor, (5) forming an emitter cap located below the emitter, (6) forming a first mesa associated with the emitter that is located on the frontside of the semiconductor, and (7) forming a second mesa associated with the collector that is located on the backside of the semiconductor. In fabricating the semiconductor, the sub-collector comprises Indium Gallium Arsenide, the collector comprises Indium Phosphide, the base comprises either: (i) Gallium Arsenide Antimonide or (ii) Indium Gallium Arsenide, and the emitter comprises Indium Phosphide. The fabrication includes hetero-integrating an antenna module to the backside of the semiconductor, hybrid bonding a Silicon interposer to the frontside of the semiconductor, and hybrid bonding a reconstituted CMOS wafer to the Silicon interposer. The fabrication includes embedding passive components into the Silicon interposer. For example, the reconstituted CMOS wafer may include multiple radio frequency front end (RFFE) components, such as, for example, a CMOS low noise amplifier (LNA), a CMOS beamformer, and a silicon-on-insulator (SOI) switch. The semiconductor may be incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.[0037] FIG. 1 illustrates an exemplary cross-section of a semiconductor structure 100 with a first side or backside collector contact 113, according to various aspects of the disclosure. For convenience, the description herein will generally use "backside." The semiconductor structure 100 includes a heterojunction bipolar transistor (HBT) 101 that is created using
a Silicon (Si) substrate that is later removed, with a first inter-layer dielectric (ILD) 102 and a second ILD 103 enclosing the HBT 101. In some aspects, the Si substrate may be 300 mm. The semiconductor structure 100 also includes an insulator 110 (e.g., passivation material), such as Silicon Nitride (SiN), disposed over the HBT 101 and between the first ILD 102 and the second ILD 103. In some aspects, the semiconductor structure 100 includes a hybrid bond interface 104 between an active portion (including HBT 101) and an Si interposer 130. In some aspects, e.g., as described below in FIGS. 3A, 3B, 3C, and 3D, a sacrificial wafer may be used during processing instead of the hybrid bond interface 104 to the Si interposer 130. As used herein, the term hybrid bonding refers to an alternative to thermocompression bonding. In hybrid bonding, a permanent (or semi-permanent) bond combines a dielectric bond with embedded metal to form interconnections. Hybrid bonding is also referred to as direct bonding, or fusion bonding, because the wafer bonding process does not use additional intermediate layers. The bonding process is based on chemical bonds between two surfaces of any material where the wafer surface is sufficiently clean, flat, and smooth. In some aspects, hybrid bonding includes the use of adhesives. Hybrid bonding may include the use of various interconnect metals, such as, for example, copper (Cu), indium (In), silver (Ag), or the like.[0038] The semiconductor structure 100 includes the Si interposer 130 having various metals 108 (e.g., metal layers) in a third ILD 106. Vias 139 provide an electrical connection between metals 108 and interconnects 111, which form part of the hybrid bond interface 104. Each via is an opening in an insulating layer of ILD 106 that enables a conductive connection between different layers. Connectors 124 may be formed as bumps, balls, pins, or any suitable configuration for connecting the semiconductor structure 100 through Si interposer 130 to other devices. In some cases, the connectors 124 may be a hybrid bond to connect Radio Frequency Front End (RFFE) components to the semiconductor structure 100.[0039] FIG. 1 illustrates a configuration of an InP HBT wafer on Si with a backside collector contact that reduces the base-collector junction area (e.g., 126) by about 30%, resulting in about a 2 dB increase in power gain. In accordance with some aspects, a sub-collector 112 may be N+ indium gallium arsenide (InGaAs) (also referred to as gallium indium arsenide, GaInAs). Backside collector contact 113 is formed on the sub-collector 112 (e.g., InGaAs) on a backside of the semiconductor structure 100. The collector 114 may be N- InP. A base 116 may be P+ Gallium Arsenide Antimonide (GaAsSb) or indium gallium
arsenide (InGaAs). One or more base contacts 117 may be formed on the frontside of the semiconductor structure 100 and coupled to the P+ GaAsSb base 116.[0040] The collector 114 (e.g., N- InP) and sub-collector 112 (e.g., InGaAs) are configured as a collector mesa, which protrudes from the base 116 (e.g., P+ GaAsSb). The base-collector junction 126 is located between the base 116 and the collector 114. The capacitance of the base-collector junction (Cbc) is reduced by about 30% due to the collector mesa configuration. An emitter 118 may include InP. The base 116, the collector 114, the emitter 118, and the emitter cap 120 may be collectively referred to as an epitaxial stack. The emitter cap 120 may be doped N+ and may be graded from InP to InGaAs. An emitter contact 122 is formed on the emitter cap 120 on a second side or front side of the semiconductor structure 100. For convenience, the description herein will generally use "front side," but it will be appreciated that it should generally be construed to reference relative position, e.g., the backside (first side) is opposite the front side (second side). Connectors 124 may be formed as bumps, balls, pins, or any suitable configuration for connecting the semiconductor structure 100 to other devices. In some aspects, the base contact 117 may be a circular deposition; in other aspects, the base contact 117 may be two rectangular stripes on either side of the emitter 118 (as illustrated in FIG. 1). In other aspects, the base contact 117 may be a single rectangular stripe that is parallel to the emitter cap 120. Accordingly, it will be appreciated that the various aspects disclosed are not limited to the illustrated examples, which are provided solely to aid in explanation of the various aspects.[0041] FIGS. 2A, 2B, 2C, and 2D illustrate a portion of a process to create a structure with a backside collector contact, according to aspects of the disclosure. The process described in FIGS. 2A, 2B, 2C, and 2D is common when using a sacrificial wafer (e.g., FIGS. 3A, 3B, 3C, and 3D) or an Si interposer (e.g., FIGS. 4A, 4B, 4C, and 4D). Thus, the process when using a sacrificial wafer is described by FIGS. 2A, 2B, 2C, and 2D followed by FIGS. 3A, 3B, 3C, and 3D, and the process when using an Si interposer is described by FIGS. 2A, 2B, 2C, and 2D followed by FIGS. 4A, 4B, 4C, and 4D.[0042] In FIG. 2A, an InP epitaxial stack is grown on an Si substrate 202. The Si substrate 202 may be, for example, 300 millimeters (mm) in diameter and of (100) crystal orientation. It will be appreciated that other sizes and orientations of Si substrates may be used as appropriate. On top of the Si substrate 202 is the sub-collector 112. On top of the sub-collector 112 is the collector 114 (e.g., N- InP). On top of the collector 114 is the base
116 (e.g., P+ GaAsSb). On top of the base 116 is the emitter 118 (e.g., InP). On top of the emitter 118 is the emitter cap 120. The process includes forming an emitter contact 122. For example, the process can include depositing a metal, depositing a photoresist and patterning, removing excess metal, and performing a strip (e.g., removal) of photoresist.[0043] FIG. 2B illustrates a portion of the process for forming the emitter cap 120. The process includes performing lithography and etching followed by a photoresist stripping process to create a mesa which forms the emitter cap 120 on the emitter 118. The process may include depositing Silicon Nitride (Si3N4) (not shown).[0044] FIG. 2C illustrates a cross-section after base metal lithography and etching to form the base contact 117 on the base 116.[0045] FIG. 2D illustrates a cross-section of structure 204 after further processing. The process forms the second ILD 103, the insulator 110, the metals 108, and the vias 109, as illustrated. For example, the insulator 110 (e.g., Silicon Nitride) may be deposited over the base contact 117, emitter cap 120, emitter contact 122, and emitter 118. Second ILD 103 can be deposited over the insulator 110. A metal layer can be deposited on second ILD 103 and then patterned and etched to form metals 108. Vias 109 can be formed in openings in the ILD 103 and insulator 110 to couple some of the metals 108 to the base contact 117 and emitter contact 122. Additional deposition of second ILD 103 can be used to cover the metals 108. Further, for ease of understanding, a first layer of metal 108 is illustrated in FIG. 2D. However, it should be understood that the process may add additional (e.g., 2, 3, 4, etc.) layers of metal. The process continues with structure 204 being further processed according to a first aspect (e.g., FIGS. 3A, 3B, 3C, and 3D) or a second aspect (FIGS. 4A, 4B, 4C, and 4D).[0046] FIGS. 3A, 3B, 3C, and 3D illustrate using a sacrificial wafer to complete a process to create an InP HBT with a backside collector contact, according to aspects of the disclosure. In FIGS. 3A, 3B, 3C, and 3D, the structure 204 from FIG. 2D is shown rotated 180 degrees.[0047] In FIG. 3A, the process bonds a wafer 302 to structure 204 on the second ILD 103. The wafer 302 is also referred to as a sacrificial wafer because the wafer 302 is later de-bonded. The process includes grinding and/or etching the Si substrate (202, not illustrated) to expose the sub-collector 112 (e.g., N+ InGaAs), resulting in the structure illustrated in FIG. 3A.
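Because the structure is rotated 180 degrees between FIG. 2D and FIG. 3A, the "on top of / below" language can be hard to track. The short Python sketch below is purely illustrative (the layer labels mirror the reference numerals above; it is not process code) and prints the stack in both orientations:

```python
# Illustrative only: the as-grown InP HBT stack (FIG. 2A) and the flipped
# orientation used for backside processing (FIG. 3A). Layer labels mirror
# the reference numerals in the description.
AS_GROWN = [  # listed bottom -> top, as grown on the Si substrate
    ("Si substrate 202", "removed later by grinding/etching"),
    ("sub-collector 112", "N+ InGaAs"),
    ("collector 114", "N- InP"),
    ("base 116", "P+ GaAsSb or InGaAs"),
    ("emitter 118", "N- InP"),
    ("emitter cap 120", "N+ graded InP/InGaAs"),
]

def show(stack, title):
    print(title)
    for name, note in reversed(stack):  # print the topmost layer first
        print(f"  {name:<18} {note}")

show(AS_GROWN, "As grown (FIG. 2A), top to bottom:")
# After the frontside is bonded to a carrier and the wafer is flipped,
# the former bottom (sub-collector 112) becomes the exposed top, which is
# why collector contact 113 ends up on the backside.
show(list(reversed(AS_GROWN)), "Flipped for backside processing (FIG. 3A):")
```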
[0048] In FIG. 3B, the process continues with forming the base 116 (e.g., P+ GaAsSb), the collector 114 (e.g., N- InP), and sub-collector 112. An insulating layer, which may be Silicon Nitride, is deposited and expands insulator 110 (e.g., insulation layer) to cover the emitter 118, collector 114, and sub-collector 112, which have been patterned and etched. Further, the collector contact 113 (e.g., metal) is formed and coupled to sub-collector 112.[0049] In FIG. 3C, the wafer 302 is de-bonded from InP structure 304 and may be reused or discarded. Before debonding, the process further includes forming the first ILD 102 and patterning additional metal interconnect layers, such as metals 108 on top of ILD 102, and the vias 109, as illustrated in FIG. 3C.[0050] In FIG. 3D, the process hetero-integrates the diced chips of InP structure 304 (e.g., the HBT) with other chiplets or dies on both sides of a substrate or Si interposer 306, which in some aspects may include embedded passive components. The embedded passive components (e.g., inductors, capacitors) may function as the input and output matching networks of an HBT based power amplifier (PA) or a CMOS based low noise amplifier (LNA). In the illustrated configuration, an Si interposer 306 includes a plurality of through vias 307 (e.g., Through Silicon Vias (TSVs)) to provide an electrical connection between contacts on each side of the Si interposer 306. It will be appreciated that in addition to the vias 307, the Si interposer 306 may include multiple metal layers and also embedded passive components (not illustrated). The embedded passive components serve as matching networks for PAs and/or LNAs. Embedded passive components that are directly under or above the PA or LNA chiplet decrease insertion loss due to their proximity to the respective PA or LNA. In contrast, discrete capacitors and inductors placed beside the PA or LNA result in longer interconnects and larger insertion losses. Therefore, such passive components improve the overall RF front end. The diced InP structure 304 may function as a power amplifier and/or a low noise amplifier. The other chiplets may include, for example, a complementary metal oxide semiconductor (CMOS) based low noise amplifier (LNA) 308, silicon-on-insulator (SOI) switch 310, and CMOS beamformer 312. The chiplets shown in FIG. 3D are purely for illustration purposes and it should be understood that other types of chiplets may be used in addition to or instead of the chiplets shown.[0051] FIGS. 4A, 4B, 4C, and 4D illustrate using a Silicon (Si) interposer to complete a process to create an HBT semiconductor structure with a backside collector contact, according to
aspects of the disclosure. In FIGS. 4A, 4B, 4C, and 4D, the structure 204 from FIG. 2D is shown rotated 180 degrees.[0052] In FIG. 4A, the process creates a hybrid bond interface 406 with interconnects 111 that couple an HBT structure 402 to an Si interposer 404. The process grinds and etches the Si substrate to the sub-collector 112 (e.g., N+ InGaAs), resulting in the structure illustrated in FIG. 4B. In addition, in FIG. 4B, the process patterns the P+ GaAsSb base 116, patterns the N- InP collector 114, performs a nitride deposition, and patterns collector contact 113.[0053] In FIG. 4C, the process continues with adding the first ILD 102, and patterns additional ones of the metals 108 and the vias 109, resulting in an HBT wafer 403.[0054] In FIG. 4D, the process uses the hybrid bonding interface 406 (e.g., or another type of connection means) to interconnect the heterojunction bipolar transistor (HBT) wafer 403 (e.g., as shown in FIG. 4C) to Si interposer 404, which includes embedded passive components, to create a semiconductor 400. The process interconnects (407) the Si interposer 404 to a reconstituted CMOS wafer 408 (e.g., using hybrid bonding, solder balls, or another type of connection means). The reconstituted CMOS wafer 408 may include various radio frequency front end (RFFE) components, such as, for example, an SOI switch 410, a CMOS beamformer 412, an SOI switch 420, and a CMOS beamformer 422. A package substrate 454 is attached (e.g., using connections 424, such as solder balls, pins, or the like) to a top surface of the HBT wafer 403. An antenna module 452 is attached (e.g., using connections 426, such as solder balls, pins, or the like) to a top surface of the package substrate 454 and is connected to the HBT wafer 403 through the package substrate 454. The antenna module 452 may include multiple antenna tiles. For a mm-wave 3-dimensional integrated circuit (3DIC) stack, vertical routing may be used directly, without detour. Thus, integration on both sides of an Si interposer, with through-silicon vias (TSVs) and fine-pitch interconnects, provides for chip-on-wafer-on-substrate (CoWoS) integration.[0055] In the flow diagrams of FIGS. 5, 6, 7, and 8, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is
not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the processes 500, 600, 700, and 800 are described with reference to FIGS. 1, 2A-2D, 3A-3D, 4A-4D, and 5 as described above, although other models, frameworks, systems, and environments may be used to implement these processes.[0056] FIG. 5 illustrates a process that includes forming a collector, an emitter, and a base of a semiconductor. The process 500 may be performed as part of a semiconductor manufacturing process.[0057] At 502, the process 500 forms a sub-collector. For example, in FIG. 3A and FIG. 4A, the process grinds and etches the Si substrate to expose the sub-collector 112 (e.g., N+ InGaAs), resulting in the structure illustrated in FIG. 3B and FIG. 4B, respectively.[0058] At 504, the process 500 forms a collector on the sub-collector. The collector has a collector contact disposed on the sub-collector and located on a backside of a heterojunction bipolar transistor (HBT) semiconductor. For example, in FIG. 2A, the N- InP collector 114 is formed on the sub-collector 112 (e.g., N+ InGaAs). The process includes depositing a metal to form the emitter contact 122. The collector 114 and sub-collector 112 are configured as a collector mesa, which protrudes from the P+ GaAsSb base 116 on a backside of the HBT.[0059] At 506, the process 500 forms an emitter. For example, in FIG. 2A, the emitter 118 is formed on top of the P+ GaAsSb base 116.[0060] At 508, the process 500 forms an emitter cap on the emitter. The emitter has an emitter contact disposed on the emitter cap and located on a second side of the HBT. For example, in FIG. 2A, the emitter cap 120 is formed on top of the emitter 118.[0061] At 510, the process 500 forms a base having a base contact located on the second side (e.g., frontside) of the HBT semiconductor. For example, in FIG. 2A, the P+ GaAsSb base 116 is formed on the N- InP collector 114.[0062] Thus, a process grows an epitaxial stack comprising the layers of the InP-based HBT on a Si substrate (e.g., 300 mm diameter, with (100) crystal orientation). The process deposits the emitter contact 122 and performs a photoresist strip. The process then performs mesa lithography and a mesa etch to create the emitter mesa, performs a photoresist strip, and deposits the Silicon Nitride.
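As a compact overview of the flows detailed in FIGS. 6, 7, and 8 below, the following Python sketch lists the common front-end sequence and the two back-end branches. The step strings paraphrase the figures for illustration; they are descriptive labels, not equipment recipes:

```python
# Illustrative summary of the two fabrication flows described above.
COMMON_FRONT_END = [                     # FIGS. 2A-2D
    "grow InP epitaxial stack on 300 mm Si substrate",
    "deposit emitter metal; strip photoresist",
    "pattern and etch emitter mesa; strip photoresist",
    "deposit SiN; pattern base metal; etch SiN and InP; lift-off",
    "deposit ILD; pattern vias and metal layer(s)",
]
BACK_END = {
    "sacrificial wafer (FIGS. 3A-3D, process 700)": [
        "bond frontside to sacrificial wafer",
        "grind/etch Si substrate to expose N+ InGaAs sub-collector",
        "pattern base and collector; deposit SiN; pattern collector metal",
        "deposit dielectric; pattern vias and metals",
        "de-bond sacrificial wafer",
        "hetero-integrate chiplets on both sides of an Si interposer",
    ],
    "Si interposer (FIGS. 4A-4D, process 800)": [
        "hybrid bond and interconnect frontside to Si interposer",
        "grind/etch Si substrate to expose N+ InGaAs sub-collector",
        "pattern base and collector; deposit SiN; pattern collector metal",
        "deposit dielectric; pattern vias and metals",
        "bond reconstituted CMOS wafer to the Si interposer",
    ],
}
for path, steps in BACK_END.items():
    print(f"\n{path}:")
    for i, step in enumerate(COMMON_FRONT_END + steps, start=1):
        print(f"  {i}. {step}")
```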
[0063] A technical advantage includes a hetero-integrated HBT structure in which the HBT has a base-collector junction that is about 30% smaller in area as compared to a conventional structure in which the collector contacts are on the same side as the emitter contacts and the base contacts. The reduction in the size of the base-collector junction is achieved by moving the collector contacts to an opposite side of the emitter contacts and the base contacts. Another technical advantage is that, by reducing the base-collector junction area, a power gain improvement of about 2 dB is realized (e.g., compared to the conventional structure). The HBT structure described herein provides for a mesa on both the emitter side and the collector side and enables hetero-integration. For example, an Si interposer may be bonded on the emitter side with chiplets on the other side.[0064] FIG. 6 illustrates an example process that includes patterning vias and metals, according to aspects of the disclosure. The process 600 may be performed as part of a semiconductor manufacturing process.[0065] At 602, the process performs lithography of the base metal. At 604, the process etches the Silicon Nitride (e.g., that was deposited earlier) and Indium Phosphide (InP). For example, in FIG. 2C, the process patterns the base contact 117 (e.g., metal) on the P+ GaAsSb base 116.[0066] At 606, the process deposits base metals and performs lift-off. Lift-off is a technique used to pattern a target material (e.g., a metal) using a sacrificial layer (e.g., photoresist) to define the pattern. The sacrificial layer is applied and patterned, after which the target material is deposited on top. The final step is the removal of the sacrificial material by lifting off the overlying target material. At 608, the process deposits Silicon Nitride. At 610, the process patterns vias and metals. For example, in FIG. 2D, the process adds the second ILD 103, the insulator 110 (e.g., insulation layer), and patterns the metals 108 and the vias 109. For ease of understanding, a first level of metal is illustrated in FIG. 2D. However, it should be understood that the process may add additional (e.g., 2, 3, 4, or 5) layers of metal.[0067] FIG. 7 illustrates an example process 700 that includes bonding a frontside to a sacrificial wafer, according to aspects of the disclosure. The process 700 may be performed as part of a semiconductor manufacturing process.[0068] At 702, the process bonds a front side to a sacrificial wafer. At 704, the process grinds and etches a silicon substrate down to an N+ InGaAs sub-collector. At 706, the process patterns a base. At 708, the process patterns a collector. At 710, the process deposits
silicon nitride. At 712, the process patterns collector metal. For example, in FIG. 3A, the process bonds the wafer 302 to the second ILD 103. The wafer 302 is referred to as a sacrificial wafer because the wafer is later de-bonded. The process grinds and etches the Si substrate to the sub-collector 112 (e.g., N+ InGaAs), resulting in the structure illustrated in FIG. 3B. In addition, in FIG. 3B, the process patterns the P+ GaAsSb base 116, patterns the N- InP collector 114, deposits Silicon Nitride, and patterns the collector contact 113.[0069] At 714, the process deposits a dielectric. At 716, the process patterns vias and metals. At 718, the process de-bonds the sacrificial wafer. For example, in FIG. 3C, the process adds the first ILD 102, patterns metals 108 and the vias 109, and de-bonds the wafer 302 from the InP structure 304.[0070] At 720, the process hetero-integrates with chiplets on both sides of the Si substrate (e.g., Si interposer). At 722, the process embeds passive components into the Si substrate. For example, in FIG. 3D, the process hetero-integrates the InP structure 304 with chiplets on both sides and embeds passive components into the substrate or interposer. The Si interposer 306 is illustrated with the embedded passives.[0071] FIG. 8 illustrates an example process 800 that includes hybrid bonding and interconnecting an Si substrate to an Si interposer, according to aspects of the disclosure. The process 800 may be performed as part of a semiconductor manufacturing process.[0072] At 802, the process hybrid bonds and interconnects the Si substrate to an Si interposer. At 804, the process grinds and etches the Si substrate down to an N+ InGaAs sub-collector. At 806, the process patterns a base. At 808, the process patterns a collector. At 810, the process deposits silicon nitride. At 812, the process patterns collector metal. For example, in FIG. 4A, the process uses hybrid bonding to connect an interface (e.g., hybrid bond interface 406) to the Si interposer 404. FIG. 4B illustrates a result of the process grinding and etching the Si substrate to the sub-collector 112 (e.g., N+ InGaAs). In FIG. 4B, the process patterns the P+ GaAsSb base 116, patterns the N- InP collector 114, performs a deposit of silicon nitride, and patterns collector contact 113.[0073] At 814, the process deposits a dielectric. At 816, the process patterns vias and metals. For example, in FIG. 4C, the process adds the first ILD 102, and patterns additional ones of the metals 108 and the vias 109, resulting in the HBT wafer 403.[0074] At 818, the process bonds a reconstituted CMOS wafer to the Si interposer. At 820, the process embeds passive components into the Si interposer. For example, in FIG. 4D, the
process interconnects the HBT wafer 403 (of FIG. 4C) to the Si interposer 404 and then embeds the passive components. The HBT wafer 403 is a structure that uses InP and includes radio frequency front end (RFFE) components such as, for example, a power amplifier, a low noise amplifier, a varactor, and the like. The process interconnects (407) the Si interposer 404 to the reconstituted CMOS wafer 408. The reconstituted CMOS wafer 408 may include, for example, the SOI switch 410, the CMOS beamformer 412, the SOI switch 420, and the CMOS beamformer 422. The package substrate 454 is attached to the top surface of the HBT wafer 403. The antenna module 452 is attached to the top surface of the package substrate 454 and is connected to the sub-collector 112 (e.g., N+ InGaAs) of the HBT wafer 403 through the package substrate 454.[0075] FIG. 9 illustrates components of an integrated device 900 according to one or more aspects of the disclosure. Regardless of the various techniques discussed above, it will be appreciated that the semiconductor 400 (which may contain multiple dies / chiplets, etc.) may be configured to couple to a PCB 970. The PCB 970 is also coupled to a power supply 980 (e.g., a power management integrated circuit (PMIC)), which allows the semiconductor 400 to be electrically coupled to the PMIC 980. Specifically, one or more power supply (VDD) lines 971 and one or more ground (GND) lines 972 may be coupled to the PMIC 980 to distribute power to the PCB 970 and the semiconductor 400 via VDD BGA pin 925 and GND BGA pin 927. The VDD line 971 and GND line 972 each may be formed from traces, shapes, or patterns in one or more metal layers of the PCB 970 (e.g., layers 1-6) coupled by one or more vias through insulating layers separating the metal layers 1-6 in the PCB 970. The PCB 970 may have one or more PCB capacitors (PCB cap) 975 that can be used to condition the power supply signals, as is known to those skilled in the art. Additional connections and devices may be coupled to and/or pass through the PCB 970 to the semiconductor 400 via one or more additional BGA pins (not illustrated) on the semiconductor 400. It will be appreciated that the illustrated configuration and descriptions are provided merely to aid in the explanation of the various aspects disclosed herein. For example, the PCB 970 may have more or fewer metal and insulating layers, there may be multiple lines providing power to the various components, etc. Accordingly, the foregoing illustrative examples and associated figures should not be construed to limit the various aspects disclosed and claimed herein.[0076] In accordance with the various aspects disclosed herein, at least one aspect includes a hetero-integrated HBT structure that is created using either a sacrificial wafer or an Si
interposer. A technical advantage includes a hetero-integrated HBT structure in which the HBT has a base-collector junction that is about 30% smaller in area as compared to a conventional structure in which the collector contacts are on the same side as the emitter contacts and the base contacts. The reduction in the size of the base-collector junction is achieved by moving the collector contacts to an opposite side of the emitter contacts and the base contacts. Another technical advantage is that, by reducing the base-collector junction area, a power gain improvement of about 2 dB is realized (e.g., compared to the conventional structure). The HBT structure described herein provides for a mesa on both the emitter side and the collector side and enables hetero-integration. For example, an Si interposer may be bonded on the emitter side with chiplets on the other side.[0077] Other technical advantages will be recognized from various aspects disclosed herein and these technical advantages are merely provided as examples and should not be construed to limit any of the various aspects disclosed herein.[0078] FIG. 10 illustrates an exemplary mobile device 1000 in accordance with some examples of the disclosure. Referring now to FIG. 10, a block diagram of a mobile device that is configured according to exemplary aspects is depicted and generally designated mobile device 1000. In some aspects, mobile device 1000 may be configured as a wireless communication device. As shown, mobile device 1000 includes processor 1001. Processor 1001 may be communicatively coupled to memory 1032 over a link, which may be a die-to-die or chip-to-chip link. Processor 1001 is a hardware device capable of executing logic instructions. Mobile device 1000 also includes display 1028 and display controller 1026, with display controller 1026 coupled to processor 1001 and to display 1028.[0079] In some aspects, FIG. 10 may include coder/decoder (CODEC) 1034 (e.g., an audio and/or voice CODEC) coupled to processor 1001; speaker 1036 and microphone 1038 coupled to CODEC 1034; and wireless circuits 1040 (which may include a modem, RF circuitry, filters, etc., which may be implemented using hetero-integration with InP) coupled to wireless antenna 1042 and to processor 1001.[0080] In a particular aspect, where one or more of the above-mentioned blocks are present, processor 1001, display controller 1026, memory 1032, CODEC 1034, and wireless circuits 1040 can be implemented in whole or part using the hetero-integration techniques disclosed herein. Input device 1030 (e.g., physical or virtual keyboard), power supply 1044 (e.g., battery), display 1028, speaker 1036, microphone 1038,
and wireless antenna 1042 may be external to device 1000 and may be coupled to a component of device 1000, such as an interface or a controller.[0081] It should be noted that although FIG. 10 depicts a mobile device 1000, processor 1001 and memory 1032 may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices.[0082] FIG. 11 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices or semiconductor devices in accordance with various examples of the disclosure. For example, a mobile phone device 1102, a laptop computer device 1104, and a fixed location terminal device 1106 may each be considered generally user equipment (UE) and may include an electronic device 1100 including hetero-integration with an HBT structure (e.g., the integrated device 900 of FIG. 9), as described herein. The electronic device 1100 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 1102, 1104, 1106 illustrated in FIG. 11 are merely exemplary. Other devices may also include electronic device 1100 having the hetero-integration with an HBT structure including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), an Internet of things (IoT) device, or any other device that stores or retrieves data or computer instructions, or any combination thereof.[0083] It can be noted that, although particular frequencies, integrated circuits (ICs), hardware, and other features are described in the aspects herein, alternative aspects may vary. That is, alternative aspects may utilize additional or alternative frequencies (e.g., other than the 60 GHz and/or 28 GHz frequency bands), antenna elements (e.g., having different size/shape of antenna element arrays), scanning periods (including both static and dynamic scanning periods), electronic devices (e.g., WLAN APs, cellular base stations, smart speakers, IoT
devices, mobile phones, tablets, personal computers (PCs), etc.), and/or other features. A person of ordinary skill in the art will appreciate such variations.[0084] It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements may comprise one or more elements. In addition, terminology of the form “at least one of A, B, or C” or “one or more of A, B, or C” or “at least one of the group consisting of A, B, and C” used in the description or the claims means “A or B or C or any combination of these elements.” For example, this terminology may include A, or B, or C, or A and B, or A and C, or A and B and C, or 2A, or 2B, or 2C, and so on.[0085] In view of the descriptions and explanations above, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0086] In the detailed description above, it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the example clauses have more features than are explicitly mentioned in each clause. Rather, the various aspects of the disclosure may include fewer than all features of an individual example clause disclosed. Therefore, the following clauses should hereby be deemed to be incorporated in the description, wherein each clause by itself can stand as a separate example. Although each dependent clause may refer to a specific combination with one of the other clauses, the aspect(s) of that dependent clause are not
limited to the specific combination. It will be appreciated that other example clauses can also include a combination of the dependent clause aspect(s) with the subject matter of any other dependent clause or independent clause or a combination of any feature with other dependent and independent clauses. The various aspects disclosed herein expressly include these combinations, unless it is explicitly expressed or can be readily inferred that a specific combination is not intended (e.g., contradictory aspects, such as defining an element as both an insulator and a conductor). Furthermore, it is also intended that aspects of a clause can be included in any other independent clause, even if the clause is not directly dependent on the independent clause. Implementation examples are described in the following numbered clauses:[0087] Clause 1. An apparatus comprising a heterojunction bipolar transistor comprising: a sub-collector disposed on a collector, the collector having a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor; an emitter disposed on an emitter cap, the emitter having an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor; and a base having a base contact located on the second side of the heterojunction bipolar transistor.[0088] Clause 2. The apparatus of clause 1, wherein: the heterojunction bipolar transistor comprises an epitaxial stack; and the epitaxial stack includes the collector, the emitter, and the base.[0089] Clause 3. The apparatus of any of clauses 1 to 2, wherein the sub-collector comprises Indium Gallium Arsenide.[0090] Clause 4. The apparatus of any of clauses 1 to 3, wherein the base comprises at least one of Gallium Arsenide Antimonide or Indium Gallium Arsenide.[0091] Clause 5. The apparatus of any of clauses 1 to 4, wherein the emitter comprises Indium Phosphide.[0092] Clause 6. The apparatus of any of clauses 1 to 5, wherein the collector comprises Indium Phosphide.[0093] Clause 7. The apparatus of any of clauses 1 to 6, further comprising a silicon interposer.[0094] Clause 8. The apparatus of clause 7, wherein the silicon interposer is coupled to the second side of the heterojunction bipolar transistor with a hybrid bond interface.[0095] Clause 9. The apparatus of any of clauses 7 to 8, wherein one or more passive components are embedded in the silicon interposer.
[0096] Clause 10. The apparatus of any of clauses 7 to 9, wherein a plurality of radio frequency front end components are coupled to the silicon interposer.[0097] Clause 11. The apparatus of clause 10, wherein the plurality of radio frequency front end components comprise at least one of: a variable gain amplifier; a component of a frequency synthesizer; a complementary metal oxide semiconductor low noise amplifier; a complementary metal oxide semiconductor beamformer; and a silicon-on-insulator switch.[0098] Clause 12. The apparatus of any of clauses 7 to 11, wherein the silicon interposer is coupled to the heterojunction bipolar transistor via a plurality of solder balls.[0099] Clause 13. The apparatus of any of clauses 1 to 12, further comprising: an antenna module coupled to the first side of the heterojunction bipolar transistor.[0100] Clause 14. The apparatus of clause 13, wherein the antenna module comprises one or more antenna tiles and a package substrate.[0101] Clause 15. The apparatus of any of clauses 1 to 14, wherein the apparatus is selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.[0102] Clause 16. A method of fabricating a semiconductor device comprising a heterojunction bipolar transistor, the method comprising: forming a sub-collector; forming a collector on the sub-collector, the collector having a collector contact disposed on the sub-collector and located on a first side of the heterojunction bipolar transistor; forming an emitter; forming an emitter cap on the emitter, the emitter having an emitter contact disposed on the emitter cap and located on a second side of the heterojunction bipolar transistor; and forming a base having a base contact located on the second side of the heterojunction bipolar transistor.[0103] Clause 17. The method of clause 16, further comprising: coupling, with a hybrid bond interface, a silicon interposer to the second side of the heterojunction bipolar transistor.[0104] Clause 18. The method of clause 17, further comprising: hybrid bonding a complementary metal oxide semiconductor device to the silicon interposer.
[0105] Clause 19. The method of clause 18, wherein the complementary metal oxide semiconductor device comprises: at least one complementary metal oxide semiconductor beamformer; and at least one silicon-on-insulator (SOI) switch.[0106] Clause 20. The method of any of clauses 17 to 19, further comprising: embedding one or more passive components into the silicon interposer.[0107] Clause 21. The method of any of clauses 16 to 20, wherein: the heterojunction bipolar transistor comprises an epitaxial stack; and the epitaxial stack includes the collector, the emitter, and the base.[0108] Clause 22. The method of any of clauses 16 to 21, further comprising: bonding a sacrificial wafer to the second side of the heterojunction bipolar transistor.[0109] Clause 23. The method of clause 22, further comprising: debonding the sacrificial wafer from the second side of the heterojunction bipolar transistor; hetero-integrating the heterojunction bipolar transistor with one or more chiplets on the second side of the heterojunction bipolar transistor; and hetero-integrating the heterojunction bipolar transistor with one or more chiplets on the first side of the heterojunction bipolar transistor.[0110] Clause 24. The method of any of clauses 16 to 23, wherein: the sub-collector comprises Indium Gallium Arsenide; the base comprises at least one of Gallium Arsenide Antimonide or Indium Gallium Arsenide; the emitter comprises Indium Phosphide; and the collector comprises Indium Phosphide.[0111] Clause 25. The method of clause 16, further comprising: coupling, with a hybrid bond interface, a silicon interposer to the second side of the heterojunction bipolar transistor; and coupling, with a hybrid bond interface, a reconstituted complementary metal oxide semiconductor wafer to the silicon interposer, wherein one or more passive components are embedded into the silicon interposer, and wherein the reconstituted complementary metal oxide semiconductor wafer comprises a plurality of radio frequency front end components.[0112] Clause 26. The method of any of clauses 16 to 25, wherein the semiconductor device is incorporated into an apparatus selected from the group consisting of: a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, a base station, and a device in an automotive vehicle.
[0113] Accordingly, it will be appreciated, for example, that an apparatus or any component of an apparatus may be configured to (or made operable to or adapted to) provide functionality as taught herein. This may be achieved, for example: by manufacturing (e.g., fabricating) the apparatus or component so that it will provide the functionality; by programming the apparatus or component so that it will provide the functionality; or through the use of some other suitable implementation technique. As one example, an integrated circuit may be fabricated to provide the requisite functionality. As another example, an integrated circuit may be fabricated to support the requisite functionality and then configured (e.g., via programming) to provide the requisite functionality. As yet another example, a processor circuit may execute code to provide the requisite functionality.[0114] Moreover, the methods, sequences, and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An example storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor (e.g., cache memory).[0115] While the foregoing disclosure shows various illustrative aspects, it should be noted that various changes and modifications may be made to the illustrated examples without departing from the scope defined by the appended claims. The present disclosure is not intended to be limited to the specifically illustrated examples alone. For example, unless otherwise noted, the functions, steps, and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although certain aspects may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
An interferometric modulator ("IMOD") display utilizes ambient light and incorporates touch sensing without reducing the amount of ambient light that reaches the MEMS modulators, and without introducing any optical distortion or loss of performance. Electrodes for touch sensing are located at a back glass of the interferometric display, and are used in conjunction with electrodes whose primary function is to activate the pixels of the MEMS display, in order to sense a touch. The touch deflects the IMOD layers and is sensed through the various display layers at the rear of the display. |
1.A device, including:A first substrate;An array of microelectromechanical elements disposed on the first substrate;A first plurality of electrodes configured for conducting electrical signals to the array of microelectromechanical elements;A first control circuit configured to apply, via the first plurality of electrodes, electrical signals for controlling the array of microelectromechanical elements;A second substrate;A second plurality of electrodes disposed on the second substrate; andA second control circuit configured to detect a change in capacitance between the first plurality of electrodes and the second plurality of electrodes, and to determine a deflection area of the first substrate based at least in part on the change in capacitance.2.The apparatus of claim 1, wherein the array of microelectromechanical elements includes interferometric modulation elements, each including two walls defining a cavity, one of the walls being movable relative to the other wall within a range of positions, the walls causing the cavity to operate interferometrically in at least one of the positions, thereby producing a predetermined optical response to visible light.3.The apparatus according to claim 1 or claim 2, wherein the first substrate is substantially transparent.4.The apparatus according to any one of claims 1 to 3, wherein the second control circuit is further configured to calculate a centroid of the capacitance change.5.The device of claim 4, wherein the second control circuit is further configured to reference the centroid of the capacitance change against a stored mapping of intersection data and to determine the location of a touch.6.The device of claim 5, wherein the second control circuit is further configured to calculate centroids for a multi-touch.7.The apparatus according to any one of claims 1 to 6, wherein the first plurality of electrodes is part of an optical stack disposed on the first substrate.8.The apparatus according to any one of claims 1 to 7, wherein the first plurality of electrodes is adjacent to the first substrate.9.The apparatus of claim 8, wherein the first plurality of electrodes is located between the array of microelectromechanical elements and the first substrate.10.The apparatus according to any one of claims 1 to 9, wherein a plurality of adjacent electrodes in the first plurality of electrodes are connected together and sensed at the same time.11.The device according to any one of claims 1 to 10, wherein the device comprises:A display;A processor configured to communicate with the display, the processor being configured to process image data; andA memory device configured to communicate with the processor.12.The device of claim 11, further comprising:A driver circuit configured to send at least one signal to the display.13.The device of claim 12, further comprising:A controller configured to send at least a portion of the image data to the driver circuit.14.The device of claim 11, further comprising:An image source module configured to send the image data to the processor.15.The apparatus of claim 14, wherein the image source module includes at least one of a receiver, a transceiver, and a transmitter.16.The device of claim 11, further comprising:An input device configured to receive input data and communicate the input data to the processor.17.A device with a display having a front side and a rear side, the device comprising:A first substantially transparent substrate;An array of interferometric
modulation elements disposed on the first substantially transparent substrate, each interferometric modulation element including two walls defining a cavity, one of the walls being movable relative to the other wall within a range of positions, the walls causing the cavity to operate interferometrically in at least one of the positions, thereby producing a predetermined optical response to visible light;A first plurality of electrodes configured for conducting electrical signals to the array of interferometric modulation elements;A first control circuit configured to apply, via the first plurality of electrodes, electrical signals for controlling the array of interferometric modulation elements;A second substrate at the rear side of the display; andA planar sensing member positioned at the rear side of the display, the planar sensing member for sensing a touch via an associated change in an electrical parameter between a portion of the planar sensing member and the first plurality of electrodes.18.The apparatus of claim 17, further comprising a second plurality of electrodes configured for conducting electrical signals to the array of interferometric modulation elements.19.The apparatus according to claim 18, wherein the planar sensing member at the rear side of the display is further configured to sense an associated change in the electrical parameter between a portion of the planar sensing member and the second plurality of electrodes.20.A method of manufacturing and operating an interferometric display device, the method comprising:Providing a front substrate at the front of the display, the front substrate being substantially transparent;Providing a rear substrate at the rear of the display, the rear substrate being substantially transparent;Providing an array of interferometric modulation elements disposed between the front and rear substrates, the array being disposed on the front substrate at the front of the display, each interferometric modulation element including two walls defining a cavity, one of the walls being movable relative to the other wall within a range of positions, the walls causing the cavity to operate interferometrically in at least one of the positions, thereby producing a predetermined optical response to visible light;Providing a first plurality of electrodes oriented along a first axis and configured for conducting electrical signals to the array of interferometric modulation elements, the first plurality of electrodes being in contact with the rear substrate;Providing a second plurality of electrodes oriented along a second axis that is substantially orthogonal to the first axis; andUsing one or both of the first or second plurality of electrodes as a touch screen panel, and sensing a change in a parameter, resulting from a touch, at an intersection between an electrode in the first plurality of electrodes and an electrode in the second plurality of electrodes.21.The method of claim 20, wherein sensing a change in a parameter resulting from a touch comprises: sensing a change in capacitance at the intersection between the electrode in the first plurality of electrodes and the electrode in the second plurality of electrodes.22.The method according to claim 20 or claim 21, wherein sensing the change in the parameter resulting from the touch includes sensing a change in resistance at the intersection between the electrode in the first plurality of electrodes and the electrode in the second plurality of electrodes.
23.The method of claim 21, further comprising calculating a centroid of the capacitance change.24.The method of claim 23, further comprising referencing the centroid of the capacitance change against a stored mapping of intersection data and determining the location of the touch.25.The method of claim 24, further comprising calculating centroids for a multi-touch.26.The method of claim 20, further comprising providing a plurality of posts within the rear substrate.27.The method of any one of claims 20 to 26, wherein the first plurality of electrodes is placed on top of the posts.28.The method of claim 26, further comprising providing a desiccant between the plurality of posts within the rear substrate.29.A method of manufacturing and operating an interferometric display device, the method comprising:Forming an interferometric modulator array on an array substrate;Forming an absorber layer;Providing a second substrate opposite the array substrate;Forming an electrode at the second substrate; andForming a seal and attaching the array substrate to the opposing second substrate.30.The method of claim 29, further comprising forming a top plate electrode.31.The method of claim 29 or claim 30, further comprising forming pillars within the second substrate and providing a desiccant between the pillars.32.The method of claim 31, wherein forming the electrode at the second substrate includes patterning the electrode on top of the pillars.33.The method of claim 30, further comprising supplying a reference voltage to the top plate electrode, and determining the location of a touch by detecting a change in capacitance between the top plate and a conductor of the interferometric display.34.The method of claim 30, wherein forming the electrode at the second substrate comprises providing a matrix of touch sensors on the second substrate, and the method further comprises supplying a reference voltage to the electrodes of the interferometric modulator array and detecting a change in capacitance between one of the touch sensors of the matrix and the electrodes of the array.35.The method of claim 30, wherein forming the electrode at the second substrate comprises: forming a single unpatterned conductive plate, and wherein the method further comprises supplying a reference voltage to patterned line electrodes of the interferometric modulator array and determining the location of a touch from a change in capacitance between the patterned line electrodes and the unpatterned conductive plate. |
Integrated touch screen for interferometric modulator displayThis application claims priority to U.S. Patent Application No. 12/645,379, entitled "INTEGRATED TOUCH FOR IMOD DISPLAYS USING BACK GLASS," filed on December 22, 2009, which is incorporated herein by reference in its entirety and for all purposes.Technical fieldBackgroundMicroelectromechanical systems (MEMS) include micromechanical elements, actuators, and electronics. Micromechanical elements may be created using deposition, etching, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or that add layers, to form electrical and electromechanical devices. One type of MEMS device is called an interferometric modulator. As used herein, the term interferometric modulator or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some embodiments, an interferometric modulator may comprise a pair of conductive plates, one or both of which may be transparent and/or reflective in whole or in part, and capable of relative motion upon application of an appropriate electrical signal. In certain embodiments, one conductive plate may comprise a stationary layer deposited on a substrate, and the other conductive plate may comprise a metallic membrane separated from the stationary layer by an air gap. As described herein in more detail, the position of one conductive plate relative to the other can change the optical interference of light incident on the interferometric modulator. Such devices have a wide range of applications, and it would be beneficial in the art to utilize and/or modify the characteristics of these types of devices so that their features can be exploited in improving existing products and creating new products that have not yet been developed.Summary of the inventionAn interferometric modulator ("IMOD") display utilizes ambient light and incorporates touch sensing without reducing the amount of ambient light and without introducing any optical distortion or loss of performance. The electrodes for touch sensing are located at the rear substrate, or "back glass," of the interferometric display and are used in conjunction with electrodes whose primary function is to activate the pixels of the MEMS display, in order to sense a touch. The touch deflects the IMOD layers and is sensed via the various display layers at the rear of the display.One aspect relates to a method of manufacturing and operating an interferometric display device. The method includes: providing a front substrate at the front of the display, the front substrate being substantially transparent; providing a rear substrate at the rear of the display, the rear substrate being substantially transparent; and providing an array of interferometric modulation elements disposed between the front and rear substrates. The array is disposed on the front substrate at the front of the display, and each interferometric modulation element includes two walls defining a cavity, one of the walls being movable relative to the other wall within a range of positions, the walls causing the cavity to operate interferometrically in at least one of the positions, thereby generating a predetermined optical response to visible light.
The method further includes providing a first plurality of electrodes oriented along a first axis and configured for conducting electrical signals to the array of interferometric modulation elements, the first plurality of electrodes being in contact with the rear substrate; providing a second plurality of electrodes oriented along a second axis that is substantially orthogonal to the first axis; and using one or both of the first or second plurality of electrodes as a touch screen panel, and sensing a change in a parameter, resulting from a touch, at an intersection between an electrode in the first plurality of electrodes and an electrode in the second plurality of electrodes.In some embodiments, the parameter includes a capacitance value, and the method further includes calculating a centroid of the capacitance change. The method may further include referencing the centroid of the capacitance change against a stored mapping of intersection data and determining the location of the touch.Another aspect relates to an apparatus, comprising: a first substantially transparent substrate; and an array of interferometric modulation elements disposed on the first substantially transparent substrate, each interferometric modulation element including two walls defining a cavity, one of the walls being movable relative to the other wall within a range of positions, the walls causing the cavity to operate interferometrically in at least one of the positions, thereby generating a predetermined optical response to visible light. The apparatus further includes: a first plurality of electrodes configured for conducting electrical signals to the array of interferometric modulation elements; a first control circuit configured to apply, via the first plurality of electrodes, electrical signals for controlling the array of interferometric modulation elements; a second substrate; a second plurality of electrodes disposed on the second substrate; and a second control circuit configured to detect a change in capacitance between the first plurality of electrodes and the second plurality of electrodes and to determine a deflection region of the first substantially transparent substrate based at least in part on the change in capacitance.The nature and advantages of the present invention may be further understood with reference to the remainder of the specification and the drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is an isometric view depicting a portion of one embodiment of an interferometric modulator display in which a movable reflective layer of a first interferometric modulator is in a relaxed position and a movable reflective layer of a second interferometric modulator is in an actuated position.FIG. 2 is a system block diagram illustrating one embodiment of an electronic device incorporating a 3×3 interferometric modulator display.FIG. 3 is a diagram of movable mirror position versus applied voltage for one exemplary embodiment of the interferometric modulator of FIG. 1.FIG. 4 is an illustration of a set of row and column voltages that may be used to drive an interferometric modulator display.FIGS. 5A and 5B illustrate one exemplary timing diagram for row and column signals that may be used to write a frame of display data to the 3×3 interferometric modulator display of FIG.
2.FIGS. 6A and 6B are system block diagrams illustrating an embodiment of a visual display device comprising a plurality of interferometric modulators.FIG. 7A is a cross section of the device of FIG. 1.FIG. 7B is a cross section of an alternative embodiment of an interferometric modulator.FIG. 7C is a cross section of another alternative embodiment of an interferometric modulator.FIG. 7D is a cross section of yet another alternative embodiment of an interferometric modulator.FIG. 7E is a cross section of an additional alternative embodiment of an interferometric modulator.FIGS. 8A, 8B, and 8C are cross sections of additional alternative embodiments of an interferometric modulator.FIG. 8D is a cross section of a two-state embodiment of an interferometric modulator.FIGS. 9A-9D are illustrations of embodiments of electrodes used in touch sensing.FIGS. 10A and 10B are cross sections of embodiments in which posts are incorporated into the back glass of an interferometric modulator.FIG. 11 is a flowchart depicting an outline of device manufacture.Detailed descriptionInterferometric modulatorThe following detailed description is directed to certain specific embodiments. However, the teachings herein can be applied in a multitude of different ways. In this description, reference is made to the drawings, wherein like parts are designated with like numerals throughout. The embodiments may be implemented in any device that is configured to display an image, whether in motion (e.g., video) or stationary (e.g., still images), and whether textual or pictorial. More particularly, it is contemplated that the embodiments may be implemented in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, wireless devices, personal data assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer displays, etc.), cockpit controls and/or displays, camera view displays (e.g., the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., the display of images on a piece of jewelry). MEMS devices of similar structure to those described herein can also be used in non-display applications such as in electronic switching devices.An embodiment of an interferometric modulator display comprising interferometric MEMS display elements is illustrated in FIG. 1. In these devices, the pixels are in either a bright or dark state. In the bright ("relaxed" or "open") state, the display element reflects a large portion of incident visible light to a user. When in the dark ("actuated" or "closed") state, the display element reflects little incident visible light to the user.
Depending on the embodiment, the light reflectance properties of the "on" and "off" states may be reversed. MEMS pixels can be configured to reflect predominantly at selected colors, allowing for a color display in addition to black and white.FIG. 1 is an isometric view depicting two adjacent pixels in a series of pixels of a visual display, wherein each pixel comprises a MEMS interferometric modulator. In some embodiments, an interferometric modulator display comprises a row/column array of these interferometric modulators. Each interferometric modulator includes a pair of reflective layers positioned at a variable and controllable distance from each other to form a resonant optical gap with at least one variable dimension. In one embodiment, one of the reflective layers may be moved between two positions. In the first position, referred to herein as the relaxed position, the movable reflective layer is positioned at a relatively large distance from a fixed partially reflective layer. In the second position, referred to herein as the actuated position, the movable reflective layer is positioned more closely adjacent to the partially reflective layer. Incident light that reflects from the two layers interferes constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel.The depicted portion of the pixel array in FIG. 1 includes two adjacent interferometric modulators 12a and 12b. In the interferometric modulator 12a on the left, a movable reflective layer 14a is illustrated in a relaxed position at a predetermined distance from an optical stack 16a, which includes a partially reflective layer. In the interferometric modulator 12b on the right, the movable reflective layer 14b is illustrated in an actuated position adjacent to the optical stack 16b.The optical stacks 16a and 16b (collectively referred to as optical stack 16), as referenced herein, typically comprise several fused layers, which may include an electrode layer such as indium tin oxide (ITO), a partially reflective layer such as chromium, and a transparent dielectric. The optical stack 16 is thus electrically conductive, partially transparent, and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The partially reflective layer can be formed from a variety of materials that are partially reflective (e.g., various metals, semiconductors, and dielectrics). The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials.In some embodiments, the layers of the optical stack 16 are patterned into parallel strips, and may form row electrodes in a display device as described further below. The movable reflective layers 14a, 14b may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of 16a, 16b) deposited on top of posts 18 and an intervening sacrificial material deposited between the posts 18. When the sacrificial material is etched away, the movable reflective layers 14a, 14b are separated from the optical stacks 16a, 16b by a defined gap 19. A highly conductive and reflective material such as aluminum may be used for the reflective layers 14, and these strips may form column electrodes in a display device. It should be noted that FIG. 1 may not be to scale.
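The constructive/destructive interference just described can be made concrete with a toy two-beam model. The sketch below is a simplification: the assumed π phase shift at the absorber and the example gap values are illustrative assumptions, not parameters from this disclosure, and real IMOD optics involve the full thin-film stack.

```python
# Hedged sketch: idealized two-beam interference model of an IMOD pixel.
# Treats the stack as two equal-amplitude reflections and assumes a pi
# phase shift at the absorber; gap values are illustrative only.
import math

def reflected_intensity(gap_nm, wavelength_nm):
    """Relative reflected intensity for the idealized two-beam model.

    Intensity = cos^2(delta / 2), with round-trip phase
    delta = 2*pi*(2*gap)/wavelength + pi (assumed absorber phase shift).
    """
    delta = 2 * math.pi * (2 * gap_nm) / wavelength_nm + math.pi
    return math.cos(delta / 2) ** 2

# Assumed positions: a quarter-wave gap (bright) vs. a collapsed gap (dark)
for gap in (137.5, 0.0):  # nm, illustrative only
    print(f"gap {gap:6.1f} nm -> relative intensity "
          f"{reflected_intensity(gap, 550.0):.2f}")
```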
In some embodiments, the spacing between posts 18 may be on the order of 10-100 μm, while the gap 19 may be less than about 1,000 Angstroms.With no applied voltage, the gap 19 remains between the movable reflective layer 14a and the optical stack 16a, with the movable reflective layer 14a in a mechanically relaxed state, as illustrated by the pixel 12a in FIG. 1. However, when a potential (voltage) difference is applied to a selected row and column, the capacitor formed at the intersection of the row and column electrodes at the corresponding pixel becomes charged, and electrostatic forces pull the electrodes together. If the voltage is high enough, the movable reflective layer 14 is deformed and is forced against the optical stack 16. A dielectric layer (not illustrated in this figure) within the optical stack 16 may prevent shorting and control the separation distance between the layers 14 and 16, as illustrated by the actuated pixel 12b on the right in FIG. 1. The behavior is the same regardless of the polarity of the applied potential difference.FIGS. 2 through 5 illustrate one exemplary process and system for using an array of interferometric modulators in a display application.FIG. 2 is a system block diagram illustrating one embodiment of an electronic device that may incorporate interferometric modulators. The electronic device includes a processor 21, which may be any general purpose single-chip or multi-chip microprocessor, or any special purpose microprocessor such as a digital signal processor, microcontroller, or programmable gate array. As is conventional in the art, the processor 21 may be configured to execute one or more software modules. In addition to executing an operating system, the processor may be configured to execute one or more software applications, including one or more web applications, telephone applications, e-mail programs, or any other software applications.In one embodiment, the processor 21 is also configured to communicate with an array driver 22. In one embodiment, the array driver 22 includes a row driver circuit 24 and a column driver circuit 26 that provide signals to a display array or panel 30. The cross section of the array illustrated in FIG. 1 is shown by the lines 1-1 in FIG. 2. Note that although FIG. 2 illustrates a 3×3 array of interferometric modulators for the sake of clarity, the display array 30 may contain a very large number of interferometric modulators, and may have a different number of interferometric modulators in rows than in columns (e.g., 300 pixels per row by 190 pixels per column).FIG. 3 is a diagram of movable mirror position versus applied voltage for one exemplary embodiment of the interferometric modulator of FIG. 1. For MEMS interferometric modulators, the row/column actuation protocol may take advantage of a hysteresis property of these devices as illustrated in FIG. 3. An interferometric modulator may require, for example, a 10 volt potential difference to cause a movable layer to deform from the relaxed state to the actuated state. However, when the voltage is reduced from that value, the movable layer maintains its state as the voltage drops back below 10 volts. In the exemplary embodiment of FIG. 3, the movable layer does not relax completely until the voltage drops below 2 volts. Thus, there exists a window of applied voltage, about 3 to 7 V in the example illustrated in FIG. 3, within which the device is stable in either the relaxed or actuated state.
This window is referred to herein as the "hysteresis window" or "stability window." For a display array having the hysteresis characteristics of FIG. 3, the row/column actuation protocol can be designed such that during row gating, pixels in the gated row that are to be actuated are exposed to a voltage difference of about 10 volts, and pixels that are to be relaxed are exposed to a voltage difference of close to zero volts. After the gating, the pixels are exposed to a steady state or bias voltage difference of about 5 volts such that they remain in whatever state the row gating put them in. After being written, each pixel in this example sees a potential difference within the "stability window" of 3-7 volts. This feature makes the pixel design illustrated in FIG. 1 stable under the same applied voltage conditions in either an actuated or relaxed pre-existing state. Since each pixel of the interferometric modulator, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers, this stable state can be held at a voltage within the hysteresis window with almost no power dissipation. Essentially no current flows into the pixel if the applied potential is fixed.As described further below, in typical applications a frame of an image may be created by sending a set of data signals (each having a certain voltage level) across the set of column electrodes in accordance with the desired set of actuated pixels in the first row. A row pulse is then applied to the first row electrode, actuating the pixels corresponding to the set of data signals. The set of data signals is then changed to correspond to the desired set of actuated pixels in the second row. A pulse is then applied to the second row electrode, actuating the appropriate pixels in the second row in accordance with the data signals. The first row of pixels is unaffected by the second row pulse, and remains in the state it was set to during the first row pulse. This process may be repeated for the entire series of rows in a sequential fashion to produce the frame. Generally, the frames are refreshed and/or updated with new image data by continually repeating this process at some desired number of frames per second. A wide variety of protocols for driving row and column electrodes of pixel arrays to produce image frames may be used.FIGS. 4 and 5 illustrate one possible actuation protocol for creating a display frame on the 3×3 array of FIG. 2. FIG. 4 illustrates a possible set of column and row voltage levels that may be used for pixels exhibiting the hysteresis curves of FIG. 3. In the FIG. 4 embodiment, actuating a pixel involves setting the appropriate column to -Vbias and the appropriate row to +ΔV, which may correspond to -5 volts and +5 volts, respectively. Relaxing the pixel is accomplished by setting the appropriate column to +Vbias and the appropriate row to the same +ΔV, producing a zero volt potential difference across the pixel. In those rows where the row voltage is held at zero volts, the pixels are stable in whatever state they were originally in, regardless of whether the column is at +Vbias or -Vbias. As is also illustrated in FIG. 4, voltages of opposite polarity to those described above can be used; e.g., actuating a pixel can involve setting the appropriate column to +Vbias and the appropriate row to -ΔV.
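The hysteresis behavior described above can be captured in a few lines of code. The following is a minimal sketch: the 10 V actuation, 2 V release, and 3-7 V stability window follow the example in the text, while the class structure itself is an illustrative assumption.

```python
# Hedged sketch: per-pixel hysteresis model for the drive scheme above.
# The thresholds mirror the example in the text (actuate at ~10 V,
# release below ~2 V, hold anywhere in the ~3-7 V window); the class
# structure itself is an illustrative assumption.
ACTUATE_V = 10.0   # |row - column| at or above this actuates the pixel
RELEASE_V = 2.0    # |row - column| below this relaxes the pixel

class HysteresisPixel:
    def __init__(self):
        self.actuated = False

    def apply(self, row_v, col_v):
        diff = abs(row_v - col_v)
        if diff >= ACTUATE_V:
            self.actuated = True    # pulled in by electrostatic force
        elif diff < RELEASE_V:
            self.actuated = False   # mechanical restoring force wins
        # Between the thresholds the pixel holds its prior state, which is
        # why the ~5 V bias dissipates essentially no power.
        return self.actuated

pixel = HysteresisPixel()
assert pixel.apply(5.0, -5.0) is True   # gating: 10 V difference actuates
assert pixel.apply(0.0, -5.0) is True   # 5 V bias: state held in the window
assert pixel.apply(5.0, 5.0) is False   # 0 V difference releases
```

Row-by-row frame writing, as in the 3×3 example that follows, repeats this update with the column data voltages set anew for each gated row.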
In that opposite-polarity embodiment, releasing the pixel is accomplished by setting the appropriate column to -Vbias and the appropriate row to the same -ΔV, producing a zero volt potential difference across the pixel.FIG. 5B is a timing diagram showing a series of row and column signals applied to the 3×3 array of FIG. 2 which will result in the display arrangement illustrated in FIG. 5A, where actuated pixels are non-reflective. Prior to writing the frame illustrated in FIG. 5A, the pixels can be in any state, and in this example all the rows are initially at 0 volts and all the columns are at +5 volts. With these applied voltages, all pixels are stable in their existing actuated or relaxed states.In the FIG. 5A frame, pixels (1,1), (1,2), (2,2), (3,2), and (3,3) are actuated. To accomplish this, during a "line time" for row 1, columns 1 and 2 are set to -5 volts, and column 3 is set to +5 volts. This does not change the state of any pixels, because all the pixels remain within the 3-7 volt stability window. Row 1 is then gated with a pulse that goes from 0, up to 5 volts, and back to zero. This actuates the (1,1) and (1,2) pixels and relaxes the (1,3) pixel. No other pixels in the array are affected. To set row 2 as desired, column 2 is set to -5 volts, and columns 1 and 3 are set to +5 volts. The same gate applied to row 2 will then actuate pixel (2,2) and relax pixels (2,1) and (2,3). Again, no other pixels of the array are affected. Row 3 is similarly set by setting columns 2 and 3 to -5 volts, and column 1 to +5 volts. The row 3 gate sets the row 3 pixels as shown in FIG. 5A. After writing the frame, the row potentials are zero, and the column potentials can remain at either +5 or -5 volts, and the display is then stable in the arrangement of FIG. 5A. The same procedure can be employed for arrays of dozens or hundreds of rows and columns. The timing, sequence, and levels of voltages used to perform row and column actuation can be varied widely within the general principles outlined above; the above example is exemplary only, and any actuation voltage method can be used with the systems and methods described herein.FIGS. 6A and 6B are system block diagrams illustrating an embodiment of a display device 40. The display device 40 can be, for example, a cellular or mobile telephone. However, the same components of the display device 40, or slight variations thereof, are also illustrative of various types of display devices such as televisions and portable media players.The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 is generally formed from any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to, plastic, metal, glass, rubber, and ceramic, or a combination thereof. In one embodiment, the housing 41 includes removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.The display 30 of the exemplary display device 40 may be any of a variety of displays, including a bi-stable display, as described herein. In other embodiments, the display 30 includes a flat-panel display, such as plasma, EL, OLED, STN LCD, or TFT LCD as described above, or a non-flat-panel display, such as a CRT or other tube device.
However, for purposes of describing the present embodiment, the display 30 includes an interferometric modulator display, as described herein.The components of one embodiment of the exemplary display device 40 are schematically illustrated in FIG. 6B. The illustrated exemplary display device 40 includes a housing 41 and can include additional components at least partially enclosed therein. For example, in one embodiment, the exemplary display device 40 includes a network interface 27 that includes an antenna 43, which is coupled to a transceiver 47. The transceiver 47 is connected to a processor 21, which is connected to conditioning hardware 52. The conditioning hardware 52 may be configured to condition a signal (e.g., filter a signal). The conditioning hardware 52 is connected to a speaker 45 and a microphone 46. The processor 21 is also connected to an input device 48 and a driver controller 29. The driver controller 29 is coupled to a frame buffer 28 and to an array driver 22, which in turn is coupled to a display array 30. A power supply 50 provides power to all components as required by the particular exemplary display device 40 design.The network interface 27 includes the antenna 43 and the transceiver 47 so that the exemplary display device 40 can communicate with one or more devices over a network. In one embodiment, the network interface 27 may also have some processing capabilities to relieve requirements of the processor 21. The antenna 43 is any antenna for transmitting and receiving signals. In one embodiment, the antenna transmits and receives RF signals according to the IEEE 802.11 standard, including IEEE 802.11(a), (b), or (g). In another embodiment, the antenna transmits and receives RF signals according to the BLUETOOTH standard. In the case of a cellular telephone, the antenna is designed to receive CDMA, GSM, AMPS, W-CDMA, or other known signals that are used to communicate within a wireless cellular network. The transceiver 47 pre-processes the signals received from the antenna 43 so that they may be received by, and further manipulated by, the processor 21. The transceiver 47 also processes signals received from the processor 21 so that they may be transmitted from the exemplary display device 40 via the antenna 43.In an alternative embodiment, the transceiver 47 can be replaced by a receiver. In yet another alternative embodiment, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. For example, the image source can be a digital video disc (DVD) or a hard-disc drive that contains image data, or a software module that generates image data.The processor 21 generally controls the overall operation of the exemplary display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The processor 21 then sends the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to the information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.In one embodiment, the processor 21 includes a microcontroller, CPU, or logic unit to control operation of the exemplary display device 40.
The conditioning hardware 52 generally includes amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the exemplary display device 40, or may be incorporated within the processor 21 or other components.The driver controller 29 takes the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and reformats the raw image data appropriately for high speed transmission to the array driver 22. Specifically, the driver controller 29 reformats the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone integrated circuit (IC), such controllers may be implemented in many ways. They may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.Typically, the array driver 22 receives the formatted information from the driver controller 29 and reformats the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands, of leads coming from the display's x-y matrix of pixels.In one embodiment, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, in one embodiment, the driver controller 29 is a conventional display controller or a bi-stable display controller (e.g., an interferometric modulator controller). In another embodiment, the array driver 22 is a conventional driver or a bi-stable display driver (e.g., an interferometric modulator display driver). In one embodiment, the driver controller 29 is integrated with the array driver 22. Such an embodiment is common in highly integrated systems such as cellular phones, watches, and other small area displays. In yet another embodiment, the display array 30 is a typical display array or a bi-stable display array (e.g., a display including an array of interferometric modulators).The input device 48 allows a user to control the operation of the exemplary display device 40. In one embodiment, the input device 48 includes a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a touch-sensitive screen, or a pressure- or heat-sensitive membrane. In one embodiment, the microphone 46 is an input device for the exemplary display device 40. When the microphone 46 is used to input data to the device, voice commands may be provided by a user for controlling operations of the exemplary display device 40.The power supply 50 can include a variety of energy storage devices as are well known in the art. For example, in one embodiment, the power supply 50 is a rechargeable battery, such as a nickel-cadmium battery or a lithium ion battery. In another embodiment, the power supply 50 is a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell and solar-cell paint. In another embodiment, the power supply 50 is configured to receive power from a wall outlet.As described above, in some implementations control programmability resides in a driver controller which can be located in several places in the electronic display system.
In some cases control programmability resides in the array driver 22. The above-described optimization may be implemented in any number of hardware and/or software components and in various configurations.The details of the structure of interferometric modulators that operate in accordance with the principles set forth above may vary widely. For example, FIGS. 7A-7E illustrate five different embodiments of the movable reflective layer 14 and its supporting structures. FIG. 7A is a cross section of the embodiment of FIG. 1, where a strip of metal material 14 is deposited on orthogonally extending supports 18. In FIG. 7B, the movable reflective layer 14 of each interferometric modulator is square or rectangular in shape and attached to supports at the corners only, on tethers 32. In FIG. 7C, the movable reflective layer 14 is square or rectangular in shape and suspended from a deformable layer 34, which may comprise a flexible metal. The deformable layer 34 connects, directly or indirectly, to the substrate 20 around the perimeter of the deformable layer 34. These connections are herein referred to as support posts. The embodiment illustrated in FIG. 7D has support post plugs 42 upon which the deformable layer 34 rests. The movable reflective layer 14 remains suspended over the gap, as in FIGS. 7A-7C, but the deformable layer 34 does not form the support posts by filling holes between the deformable layer 34 and the optical stack 16. Rather, the support posts are formed of a planarization material, which is used to form the support post plugs 42. The embodiment illustrated in FIG. 7E is based on the embodiment shown in FIG. 7D, but may also be adapted to work with any of the embodiments illustrated in FIGS. 7A-7C, as well as additional embodiments not shown. In the embodiment shown in FIG. 7E, an extra layer of metal or other conductive material has been used to form a bus structure 44. This allows signal routing along the back of the interferometric modulators, eliminating a number of electrodes that may otherwise have had to be formed on the substrate 20.In embodiments such as those shown in FIG. 7, the interferometric modulators function as direct-view devices, in which images are viewed from the front side of the transparent substrate 20, the side opposite to that upon which the modulator is arranged. In these embodiments, the reflective layer 14 optically shields the portions of the interferometric modulator on the side of the reflective layer opposite the substrate 20, including the deformable layer 34. This allows the shielded areas to be configured and operated upon without negatively affecting the image quality. Such shielding allows the bus structure 44 in FIG. 7E, which provides the ability to separate the optical properties of the modulator from the electromechanical properties of the modulator, such as addressing and the movements that result from that addressing. This separable modulator architecture allows the structural design and materials used for the electromechanical aspects and the optical aspects of the modulator to be selected and to function independently of each other. Moreover, the embodiments shown in FIGS. 7C-7E have the additional benefit of decoupling the optical properties of the reflective layer 14 from its mechanical properties, which are carried out by the deformable layer 34.
This allows the structural design and materials used for the reflective layer 14 to be optimized with respect to the optical properties, and the structural design and materials used for the deformable layer 34 to be optimized with respect to desired mechanical properties.Integrated touchFIG. 8A illustrates components of an IMOD display 100 in an undeflected (equilibrium) position prior to a touch. FIG. 8B illustrates the display 100 in a deflected state while being touched by an object (e.g., a finger).One advantage of such an IMOD display is that it is easily readable in a wide variety of lighting situations. For example, while some displays may appear faded and become difficult or impossible to read in bright sunlight, an IMOD display is reflective and is easily readable in bright sunlight. Generally, the IMOD display 100 depends on ambient light, although a light source may be integrated beside the display. Because the display generally relies on ambient light, placing touch-sensitive screen elements on the front side of the display (closest to the user and to a possible touch) would reduce the amount of light that reaches the display's pixels and is reflected back to the user. Moreover, such a touch screen element can introduce a certain amount of optical distortion, since light rays pass through the element on the way to the reflective pixels and again on the way back from them. Embodiments of the display 100 avoid these drawbacks by integrating electrodes with other elements of the IMOD display and using those electrodes to determine the location of a touch.Referring to FIGS. 8A and 8B, the display 100 includes a rear substrate 102, also referred to as back glass 102, electrodes 104 in contact with a surface of the back glass 102, and electrodes 108 of a mechanical layer. As described above in the preceding section entitled "Interferometric Modulator," the electrodes 108 may be any of the patterned electrode layers of the display. The electrodes 108 and other associated layers may be referred to below as the "mechanical layer." The electrodes 104 are patterned in such a way as to be substantially orthogonal to the pattern of the mechanical layer electrodes 108 of the display. For example, as can be seen in FIG. 8C, the electrodes 104 on the back glass may be patterned in rows while the mechanical layer electrodes are patterned in columns. Of course, the electrodes 104 and 108 need not run in vertical or horizontal directions, but may be at any angle relative to vertical, and their paths may deviate from straight lines, as long as electrode intersections occur in sufficiently confined areas that touch recognition and resolution are acceptable. Although the electrodes used to sense a touch are described in the context of a display for purposes of description, touch sensing may be implemented in any MEMS device by adding the electrodes (104) at the backplate of the MEMS device; it should be understood that the invention is not limited to display devices.In embodiments where the deflection caused by a touch may cause the mechanical layer to contact the electrodes 104, the display 100 may further include an insulator 106 between the mechanical layer electrodes 108 and the electrodes 104. The display 100 further includes a front (transparent) substrate 112, referred to below as the IMOD substrate, a seal 110, and an absorber/oxide layer 114, which may be patterned, for example, in rows or columns or in other orientations. Depending on the device and application, the substrate 112 may or may not be transparent.
For example, in MEMS devices other than displays, the substrate 112 may not be transparent.As can be seen in FIG. 8B, when an object (e.g., a finger) touches the IMOD substrate 112, the IMOD substrate 112 deflects, together with the absorber/oxide 114 and the mechanical layer/electrodes 108. This deflection, and the associated change in the gap between the mechanical layer electrodes 108 or absorber/oxide layer 114 and the electrode layer 104, results in changes in electrical parameters that can be sensed in order to determine the location of the touch. It should also be noted that deflections produced at the back glass can likewise be sensed, since such deflections also produce changes in capacitance or other parameters. A touch can thus also be made and sensed via the rear substrate 102 of FIG. 8, in which case the mechanical layer 108 need not contact the layer 106. A touch by a finger or a stylus, or even a partial pressure, can be sensed.FIG. 8D is a cross section of a two-state embodiment of an interferometric modulator. This embodiment is referred to as "two-state" because the mirror of the mechanical layer 108 can be driven (e.g., pulled) either toward the back glass 102 or toward the IMOD substrate 112. In this embodiment, the mirror is driven toward the back glass 102 by a top electrode/plate 116. The top electrode 116 is patterned in rows or columns, or at another angle substantially orthogonal to the pattern of the electrodes 104, and therefore can also be used in determining the location of a touch.In one embodiment, the system senses the location of a touch by determining changes in capacitance at the intersections of the column and row (or otherwise orthogonally oriented) electrodes. Using the system's processor, such an embodiment calculates the contour or shape of the deflected substrate by measuring the capacitance at various locations, and then compares the shape against a model to calculate the location of the touch. Such a display may have projected-capacitive or surface-capacitive properties. The embodiments illustrated in FIGS. 9A-D may be used to sense a touch area of interest, which may vary from the scale of a sub-pixel modulator to the entire screen or some portion of it. In a projected-capacitive embodiment, as can be seen in FIG. 9A, because the spatial resolution required to resolve a touch is much coarser than the resolution of the display (and hence of the electrodes of the mechanical or other layers), multiple adjacent mechanical layer lines can be connected together and sensed simultaneously. In another embodiment, as can be seen in FIG. 9B, a matrix of touch sensors on the back glass is used, while the mechanical layer electrodes are used only to supply a common reference voltage. In a surface-capacitive embodiment, the back glass layer may be a single conductor (electrode) rather than being patterned, as seen in FIG. 9C, and may be measured using n probes. For example, n may be four, in which case a four-probe measurement approach is used. In this embodiment, the patterned line electrodes of the mechanical layer are used to supply the reference voltage. Furthermore, in embodiments where the IMOD display is a tri-state or three-dimensional analog IMOD device (with multiple sets of drive electrodes, as in, for example, FIG. 8D), as can be seen in FIG.
9D, the top plate of the device can be used to supply a reference voltage.The centroid of the capacitance change can also be computed from the measured capacitance data to improve touch sensing resolution, and also to allow multi-touch sensing (e.g., two or more fingers or other objects simultaneously). The centroid of the capacitance change need not coincide with the location of the touch. For a multi-touch, the superposition of shapes is a linear combination of the shapes produced by the individual touches. A mapping between centroids and touch locations can be stored in memory and referenced as needed. The mapping data can be based on mathematical (i.e., theoretical) calculations, or on actual calibration values for a particular product line or for an individual display.As an example, for a 3.5-inch panel and a six micron gap between the mechanical layer electrodes and the back glass electrodes, the capacitance of the entire panel is approximately 6 nanofarads. Assuming a two micron deflection occurs due to a touch, a total capacitance change of greater than 1 nanofarad can be produced, which is readily detectable by the embodiments described.Other electrical parameters may also be used, such as the resistance across the back glass electrodes and/or the mechanical layer or absorber/oxide layer electrodes, or of circuits connected to the mechanical layer or absorber/oxide layer. In such embodiments, an insulating layer between the electrodes is preferably absent.Referring to FIGS. 10A and 10B, groups of posts 130 may be formed within the back glass 102. Recesses 120 between the posts are filled with a desiccant. Many different geometries and patterns may be used for the posts and the resulting recesses. For example, a hexagonal array (e.g., the hexagonal array shown in FIG. 10B) may be patterned. Other geometries may include circular, triangular, rectangular, pentagonal, or octagonal posts, and the like. The back glass electrodes 104 would be patterned to fit on top of the posts and would be interconnected in rows or columns or other orientations. The density may also be varied from the center to the edges to aid detection at the panel edges, which are generally harder to resolve than the center portion of the display.In some embodiments, a suitable insulator may be placed on top of the back glass electrodes 104 to aid capacitive sensing and to prevent wear on the mechanical layer. Examples of such an insulating layer include silicon dioxide, liquid crystal polymer, Teflon, and the like.FIG. 11 is a flowchart depicting an outline of device manufacture. The steps below are not necessarily performed in the order described. In step 204, an interferometric modulator array is formed. Next, in step 208, an absorber layer is formed, and in step 212, in those embodiments where a top electrode/plate is present, the top electrode/plate is formed. In step 216, the posts in the back glass are formed, and in step 220, the desiccant between the posts, or in other areas, is provided.
In step 224, the back glass electrodes are formed, and in step 228, a seal is formed and the array substrate is attached to the back glass (opposing substrate).While the invention has been particularly shown and described with reference to specific embodiments thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed embodiments may be made without departing from the spirit or scope of the invention. Moreover, although various advantages, aspects, and objects of the present invention have been discussed herein with reference to various embodiments, it will be understood that the scope of the invention should not be limited by reference to such advantages, aspects, and objects. Rather, the scope of the invention should be determined with reference to the appended claims. |
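As a concrete illustration of the capacitance figures and centroid computation described in the disclosure above, here is a hedged sketch. The ideal parallel-plate model and the assumed active-area dimensions are simplifications, so the numbers only land in the same range as the ~6 nF panel capacitance and >1 nF change quoted in the text.

```python
# Hedged sketch: ideal parallel-plate estimate of panel capacitance change
# under touch, plus the centroid of per-intersection capacitance changes.
# The 53 mm x 40 mm active area is an assumed 3.5-inch-class panel size;
# the gap and deflection follow the example in the text.
EPS0 = 8.854e-12            # F/m, vacuum permittivity
AREA = 0.053 * 0.040        # m^2, assumed active area
GAP = 6e-6                  # m, rest gap between electrode layers
DEFLECTION = 2e-6           # m, assumed touch deflection

def plate_capacitance(gap_m):
    return EPS0 * AREA / gap_m

c_rest = plate_capacitance(GAP)                # ~3 nF with these assumptions
c_touch = plate_capacitance(GAP - DEFLECTION)  # the gap shrinks under touch
print(f"delta C ~ {(c_touch - c_rest) * 1e9:.2f} nF")  # > 1 nF, as in the text

def touch_centroid(delta_c):
    """Centroid (row, col) of a 2-D grid of capacitance changes.

    The result is then looked up in a stored centroid-to-location mapping,
    as the disclosure describes; returns None if nothing changed.
    """
    total = sum(sum(row) for row in delta_c)
    if total == 0:
        return None
    r = sum(i * v for i, row in enumerate(delta_c) for v in row) / total
    c = sum(j * v for row in delta_c for j, v in enumerate(row)) / total
    return r, c

print(touch_centroid([[0, 0, 0], [0, 2, 1], [0, 1, 0]]))  # -> (1.25, 1.25)
```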
A graphics processing unit (GPU) (105) or other device includes a plurality of shader engines (140-143). The apparatus also includes a first front end (FE) circuit (150) and one or more second FE circuits (151). In a first mode, the first FE circuit is configured to schedule a geometric workload for the shader engine. In a second mode, the first FE circuit is configured to schedule geometric workloads for a first subset of the plurality of shader engines, and the one or more second FE circuits are configured to schedule geometric workloads for a second subset of the plurality of shader engines. In some cases, a partition switch (165) is configured to selectively connect the first FE circuit or the one or more second FE circuits to a second subset of the plurality of shader engines depending on whether the device is in a first mode or a second mode. |
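A minimal sketch of the two operating modes this abstract describes; the names, subset sizes, and even division of engines are illustrative assumptions.

```python
# Hedged sketch: a partition switch that gives one FE circuit all shader
# engines in the first mode, or maps FE circuits to disjoint subsets in
# the second mode. Names and subset sizes are illustrative assumptions.
SHADER_ENGINES = ["SE0", "SE1", "SE2", "SE3"]

def partition(mode, fe_circuits):
    """Return a mapping from FE circuit to the engines it schedules."""
    if mode == 1:
        return {fe_circuits[0]: list(SHADER_ENGINES)}
    # Mode 2: split the engines evenly across the FE circuits (any
    # remainder is left unassigned in this toy version).
    chunk = len(SHADER_ENGINES) // len(fe_circuits)
    return {fe: SHADER_ENGINES[i * chunk:(i + 1) * chunk]
            for i, fe in enumerate(fe_circuits)}

print(partition(1, ["FE0"]))         # FE0 schedules all four engines
print(partition(2, ["FE0", "FE1"]))  # FE0 -> SE0/SE1, FE1 -> SE2/SE3
```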
1.A device comprising:a plurality of shader engines; anda first front end (FE) circuit and at least one second FE circuit, wherein in a first mode the first FE circuit is configured to schedule geometry workloads for the plurality of shader engines, and wherein in a second mode the first FE circuit is configured to schedule geometry workloads for a first subset of the plurality of shader engines and the at least one second FE circuit is configured to schedule geometry workloads for a second subset of the plurality of shader engines.2.The apparatus of claim 1, further comprising:a partition switch configured to selectively connect the first FE circuit or the at least one second FE circuit to the second subset of the plurality of shader engines depending on whether the device is in the first mode or the second mode.3.The apparatus of claim 1 or 2, wherein in the first mode the first FE circuit is configured to schedule geometry workloads for concurrent execution by the plurality of shader engines, and wherein in the second mode the first FE circuit is configured to schedule geometry workloads for execution on the first subset concurrently with execution, on the second subset, of geometry workloads scheduled by the at least one second FE circuit.4.The apparatus of claim 1 or 2, wherein the first FE circuit and the at least one second FE circuit are configured based on different user experience levels corresponding to at least one of complexity or graphics resolution.5.The apparatus of claim 4, wherein the first FE circuit is configured based on a first user experience level corresponding to at least one of a first complexity or a first graphics resolution, and wherein the at least one second FE circuit is configured based on at least one second user experience level corresponding to at least one of a second complexity or a second graphics resolution, the at least one of the first complexity or the first graphics resolution being higher than the at least one of the second complexity or the second graphics resolution.6.The apparatus of claim 5, wherein the at least one second FE circuit comprises at least one third FE circuit configured based on at least one of a third complexity or a third graphics resolution that is lower than the at least one of the first complexity or the first graphics resolution, and wherein the at least one second FE circuit comprises at least one fourth FE circuit configured based on at least one of a fourth complexity or a fourth graphics resolution that is lower than the at least one of the third complexity or the third graphics resolution.7.The apparatus of claim 6, wherein for a first application requiring at least one of the first complexity or the first graphics resolution, the first FE circuit is configured to schedule geometry workloads for the plurality of shader engines in the first mode.8.The apparatus of claim 7, wherein for a second application requiring at least one of the third complexity or the third graphics resolution, the first FE circuit and the at least one third FE circuit are configured to schedule geometry workloads for corresponding subsets of the plurality of shader engines.9.
The apparatus of claim 8, wherein for a third application requiring at least one of the fourth complexity or the fourth graphics resolution, the first FE circuit, the at least one third The FE circuit and the fourth FE circuit are configured to schedule geometry workloads for corresponding subsets of the plurality of shader engines.10.6. The apparatus of any preceding claim, wherein at least one of the first FE circuit or the at least one second FE circuit is configured to support multiple concurrent threads using time division multiplexing.11.A method comprising:extracting geometry workload for a plurality of shader engines at a first front end (FE) circuit and at least one second FE circuit;in a first mode, scheduling the geometry workload at the first FE circuit, wherein the first FE circuit schedules the geometry workload for execution on the plurality of shader engines; andIn a second mode, the geometric workload is scheduled at the first FE circuit and the at least one second FE circuit, wherein the first FE circuit schedules the geometric workload for rendering at the plurality of shaders executing on a first subset of shader engines, and the at least one second FE circuit schedules the geometry workload for a second subset of the plurality of shader engines.12.The method of claim 11, further comprising:In the first mode, selectively connecting the first FE circuit to the second subset of the plurality of shader engines, or in the second mode, connecting the at least one first Two FE circuits are connected to the second subset of the plurality of shader engines.13.13. The method of claim 12, wherein scheduling the geometry workload for execution on the plurality of shader engines in the first mode comprises scheduling the geometry workload for execution by the plurality of shader engines executing concurrently, and wherein scheduling the geometric workload in the second mode to execute on the first subset and the second subset includes scheduling the geometric workload in the second mode to execute concurrently on the first subset and the second subset.14.The method of claim 13, further comprising:The geometric workload is selectively scheduled in the first mode or the second mode based on at least one of complexity or graphics resolution of at least one application that is generating the geometric workload.15.15. The method of claim 14, wherein scheduling the geometry workload to execute on the plurality of shader engines in the first mode comprises requiring at least one of a first complexity or a first graphics resolution A first application of the one that schedules the geometry workload in the first mode for execution on the plurality of shader engines.16.16. 
The method of claim 15, wherein the geometry workload is scheduled to execute concurrently on the first subset and the second subset of the plurality of shader engines in the second mode including scheduling the geometry workload on the first subset and the second subset of the plurality of shader engines for a second application requiring at least a second complexity or a second graphics resolution Executing concurrently, the at least one of the second complexity or the second graphics resolution is lower than the at least one of the first complexity or the first graphics resolution.17.A device comprising:multiple shader engines;multiple front-end (FE) circuits; anda partition switch configured to map a subset of the plurality of FE circuits to the plurality of shaders based on characteristics of an application providing commands for execution on the plurality of shader engines corresponding subsets of engines, and wherein the subsets of the plurality of FE circuits are configured to schedule geometry workloads for the corresponding subsets of the plurality of shader engines.18.18. The apparatus of claim 17, wherein the plurality of FE circuits are configured based on different user experience levels corresponding to at least one of different complexity or different graphics resolutions.19.18. The apparatus of claim 17 or 18, wherein a first FE circuit of the plurality of FE circuits is based on a first user experience level corresponding to at least one of a first complexity or a first graphics resolution configured, and wherein at least one second FE circuit of the plurality of FE circuits is configured based on at least one second user experience level corresponding to at least one of a second complexity or a second graphics resolution , the at least one of the first complexity or the first graphics resolution is higher than the at least one of the second complexity or the second graphics resolution.20.19. The apparatus of any one of claims 17 to 19, wherein the partition switch is configured to determine from a plurality of modes based on characteristics of an application providing commands for execution on the plurality of shader engines a mode of operation, and wherein the partition switch is configured to map the subset of the plurality of FE circuits to the corresponding subset of the plurality of shader engines based on the mode of operation. |
Space Partitioning in Multi-Tenant Graphics Processing UnitsBackground techniqueTraditional processing systems include processing units, such as central processing units (CPUs) and graphics processing units (GPUs), that implement audio, video, and multimedia applications, and in some cases general-purpose computing. The GPU's physical resources include shader engines and fixed-function hardware units that implement user-defined reconfigurable virtual pipelines. For example, traditional graphics pipelines for processing three-dimensional (3-D) graphics are formed by a sequence of fixed-function hardware block arrangements supported by programmable shaders. These arrangements are typically specified by a graphics application programming interface (API) such as the Microsoft DX11/12 specification or the Khronos Group OpenGL/Vulkan API.Description of drawingsThe present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference numbers in different figures indicates similar or identical items.1 is a block diagram of a processing system that implements spatial partitioning in a graphics processing unit (GPU), according to some embodiments.2 is a block diagram of a mapping of a front end (FE) circuit of a GPU operating in a first mode to a set of shader engines (SE), according to some embodiments.3 is a block diagram of a mapping of FE circuits to sets of SEs for a GPU operating in a second mode, according to some embodiments.4 is a block diagram of a GPU that includes a set of FE circuits configured based on different characteristics of an application that provides instructions for execution by the GPU, according to some embodiments.5 is a block diagram of a mapping of FE circuits to sets of SEs for a GPU operating with an advanced user experience, according to some embodiments.6 is a block diagram of a mapping of FE circuits to sets of SEs for a GPU operating with a mid-level user experience, according to some embodiments.7 is a block diagram of a mapping of FE circuits to sets of SEs for a GPU operating with a low-level user experience, according to some embodiments.8 is a block diagram of a GPU including a set of FE circuits that schedules instructions in a time division multiplexed thread for execution by a set of SEs in the GPU, according to some embodiments.9 is a flowchart of a method of selectively allocating FE circuits to schedule commands for concurrent execution on a set of SEs, according to some embodiments.Detailed waysProcessing on the GPU is typically initiated by application programming interface (API) calls (eg, draw calls) processed by the CPU. A draw call is a command generated by the CPU and transmitted to the GPU to instruct the GPU to render an object (or part of an object) in a frame. Draw calls include information that defines textures, states, shaders, render objects, buffers, etc., which are used by the GPU to render objects or parts of them. In response to receiving the draw call, the GPU renders the object to produce pixel values that are provided to a display that uses the pixel values to display an image representing the rendered object. Objects are represented by primitives, such as triangles, patches, or other polygons that include multiple vertices connected by corresponding edges. The input assembler extracts vertices based on the topology information indicated in the draw call. 
Vertices are provided to the graphics pipeline for shading according to corresponding commands previously stored in the command buffer for execution by the GPU. Commands in the command buffer are written to a queue (or ring buffer), and the scheduler schedules the command buffer at the head of the queue for execution on the GPU.The hardware used to implement the GPU is usually configured based on the characteristics of the expected workload. For example, if a workload handled by the GPU is expected to produce graphics at 8K resolution, the GPU processes up to eight primitives per clock cycle to guarantee target quality of service and utilization levels. For another example, if the workload processed by the GPU is expected to produce graphics at a much lower 1080p resolution, the GPU guarantees a target quality of service and utilization level when processing the workload at the lower 1080p resolution. While traditional GPUs are optimized for predetermined types of workloads, many GPUs are required to handle workloads with varying levels of complexity and output resolutions. For example, a flexible cloud gaming architecture includes servers implementing sets of GPUs for concurrently executing various games at different levels of user experience, potentially from 1080p resolutions, depending on the gaming application and the level of experience requested by the user All the way up to 8K resolution. Although lower complexity or lower resolution games may execute on GPUs optimized for higher complexity or resolutions, the expected complexity or resolution of the optimized GPU is not the same as the actual complexity or resolution required by the application. The difference between the rates usually results in underutilization of the resources of the higher performance GPU. For example, serial dependencies between commands in lower complexity/resolution games executed on higher performance GPUs reduce the amount of pixel shading executed in parallel, which leads to underutilization of GPU resources.Figures 1-9 disclose embodiments of a reconfigurable graphics processing unit (GPU) that includes front-end (FE) circuitry and shader engines that are spatially partitioned to execute GPUs with different characteristics Multiple concurrent graphics streams. FE circuitry fetches primitives for geometry workloads, performs scheduling geometry workloads for execution on shader engines, and in some cases handles serial synchronization, state updates, draw calls, cache activity and primitives subdivision of the surface. The shader engine shades the vertices of the primitive (as scheduled by the FE circuitry) and shades the pixels generated based on the shaded primitive. In some embodiments, the FE circuitry includes a plurality of FE circuits that selectively schedule geometry workloads to execute concurrently on corresponding subsets of shader engines. The use of different FE circuits to schedule workloads executing on different subsets of a shader engine is referred to herein as a "spatial partition" of a shader engine.The amount of spatial partitioning available in a reconfigurable GPU depends on the number of independent FE circuits implemented in the FE circuitry. For example, if the FE circuitry includes two FE circuits, the first FE circuit schedules the geometry workload for all shader engines in the first mode of operation. 
In the second (partitioned) mode of operation, the first FE circuit schedules the geometry workload to execute on the first subset of shader engines, and the second FE circuit schedules the geometry workload to match the geometry workload on the first subset Execution on is performed concurrently on a second subset of shader engines. In some embodiments, multiple FE circuits are configured based on different user experience levels corresponding to different complexity or graphics resolutions. For example, a GPU including four shader engines includes a first FE circuit, two second FE circuits, and a third FE circuit, where the first FE circuit is optimized for high complexity/resolution and the second FE circuit is optimized for high complexity/resolution Medium complexity/resolution is optimized, and the third FE circuit is optimized for low complexity/resolution. The GPU can thus be reconfigured to support one high complexity/resolution application using the first FE circuit (such as games offering 8K resolution), two medium complexity/resolution applications using the two second FE circuits (such as games that provide 4K resolution), or four low complexity/resolution applications (such as games that provide 1080p resolution) using the first FE circuit, the second FE circuit, and the third FE circuit. In some embodiments, one or more of the plurality of FE circuits supports multiple concurrent threads using time division multiplexing.1 is a block diagram of a processing system 100 that implements spatial partitioning in a multi-tenant graphics processing unit (GPU) 105, according to some embodiments. The processing system 100 includes one or more central processing units (CPUs) 110 , 111 . Although two CPUs 110, 111 are shown in FIG. 1, some embodiments of the processing system 100 include more or fewer CPUs. Extensible data structure (SDF) 115 supports the flow of data between endpoints within processing system 100 . Some implementations of SDF 115 support data flow between connection points such as peripheral component interface (PCI) physical layers, memory controllers, universal serial bus (USB) hubs, including GPU 105 and CPUs 110, 111 computing and execution units, and other endpoints. In the embodiment shown, SDF 115 is connected to input/output (I/O) hub 120 , which in turn is connected to PCI express (PCI-E) bus 125 and NBIF 130 . Processing system 100 also includes an extensible control structure (SCF) 135 , which is a control communication plane that communicates system control signals within processing system 100 . Examples of system control signals are control signals used to support thermal and power management, testing, safety, and the like.GPU 105 includes a set of shader engines (SEs) 140, 141, 142, 143 (collectively referred to herein as "SEs 140-143") for executing commands concurrently or in parallel. Some implementations of SEs 140-143 are configured to use information in draw calls received from one of the CPUs 110, 111 to shade vertices representing primitives of the scene model. SEs 140-143 also shade pixels generated based on the shaded primitives, and provide the shaded pixels (eg, via I/O hub 120) to a display for presentation to a user. Although four shader engines are shown in FIG. 1, some embodiments of GPU 105 include more or fewer shader engines. SEs 140-143 are connected to a graphics L2 cache 145 that stores frequently used data and instructions. 
In some embodiments, L2 cache 145 is connected to one or more L1 caches implemented in SEs 140-143 and one or more L3 caches (or other last level caches) implemented in processing system 100 . The caches form a cache hierarchy including L2 cache 145 . Other caches in the cache hierarchy are not shown in Figure 1 for clarity.Front-end (FE) circuitry in GPU 105 fetches primitives for geometry workloads, performs scheduling of geometry workloads for execution on shader engines, and in some cases handles serial synchronization, state updates, draw calls, caching Tessellation of activities and primitives. The FE circuitry in GPU 105 includes FE circuits 150, 151, although some implementations of FE circuitry are partitioned to include additional FE circuits, as discussed herein. FE circuits 150, 151 include command processors 155, 156 that receive command buffers for execution on SEs 140-143. The FE circuits 150, 151 also include a graphics register bus manager (GRBM) 160, 161 that acts as a hub supporting register read and write operations for multiple masters and multiple slaves.GPU 105 operates in either the first mode or the second spatially partitioned mode. In the first mode, the FE circuit 150 schedules the geometric workload for the SEs 140-143. In the second mode, FE circuit 150 schedules geometric workload for a first subset of SEs 140-143, and FE circuit 150 schedules geometric workload for a second subset of SEs 140-143. The first subset includes SEs 140, 141, and the second subset includes SEs 142, 143, although other groupings of SEs 140-143 to subsets are used in some embodiments. The GPU 105 includes a partition switch 165 that selectively connects the FE circuits 150, 151 to first and second subsets of SEs 140-143 depending on whether the GPU 105 is operating in the first mode or the second mode. In the embodiment shown, the partition switch 165 determines the operational state of the GPU 105 . If the GPU 105 is operating in the first mode, the partition switch 165 connects the FE circuit 150 to the SEs 142, 143, so that the FE circuit 150 schedules operation on all SEs 140-143. If the GPU 105 is operating in the second mode, the partition switch 165 connects the FE circuit 151 to the SEs 142, 143 such that the FE circuit 150 schedules operations on the SEs 140, 141 and the FE circuit 151 schedules operations on the SEs 142, 143 .2 is a block diagram of a mapping 200 of a set of FE circuits 205, 210 to SEs 211, 212, 213, 214 for a GPU operating in a first mode, according to some embodiments. Mapping 200 indicates the mapping of some implementations of FE circuits 150, 151 in GPU 105 shown in FIG. 1 to SEs 140-143. The GPU is operating in the first mode, and the FE circuit 205 is mapped to all SEs 211-214. Therefore, FE circuit 205 schedules commands to execute concurrently on SEs 211-214. FE circuit 210 is not mapped to any of SEs 211-214, and therefore no commands are scheduled for execution on any of SEs 211-214, as indicated by the dashed outline of the box representing FE circuit 210.3 is a block diagram of a mapping 300 of FE circuits 305, 310 to sets of SEs 311, 312, 313, 314 for a GPU operating in a second mode, according to some embodiments. Mapping 300 indicates the mapping of some implementations of FE circuits 150, 151 in GPU 105 shown in FIG. 1 to SEs 140-143. The GPU operates in the second mode, and the FE circuit 305 is mapped to a first subset of SEs 311-314 including SEs 311, 312. Therefore, the FE circuit 305 schedules commands for execution on the SEs 311,312. 
FE circuit 310 is mapped to a second subset of SEs 311-314 including SEs 313, 314. Therefore, the FE circuit 310 schedules commands for execution on the SEs 313,314. The FE circuits 305, 310 schedule commands to execute concurrently on their corresponding first and second subsets of SEs 311-314.4 is a block diagram of a GPU 400 that includes a set of FE circuits configured based on different characteristics of an application that provides instructions for execution by the GPU, according to some embodiments. GPU 400 includes a set of SEs 401, 402, 403, 404, collectively referred to herein as "SEs 401-404", and execute instructions concurrently or in parallel. GPU 400 also includes FE circuits 411, 412, 413, 414, which are collectively referred to herein as "FE circuits 411-414." FE circuits 411-414 are configured based on different user experience levels corresponding to different complexity or graphics resolutions. In the illustrated embodiment, the FE circuit 411 is configured based on the requirements of an application with high complexity or graphics resolution, such as implementing a complex physics engine or providing a game at 8K resolution. The FE circuits 412, 413 are configured based on the requirements of applications with moderate complexity or graphics resolution, such as games providing 4K resolution. The FE circuit 414 is configured based on the requirements of applications with low complexity or graphics solution resolution, such as games providing 1080p resolution.Partition switch 415 selectively maps subsets of FE circuits 411-414 to corresponding subsets of SEs 401-404. The mapping indicates the connections between the FE circuits 411-414 and the SEs 401-404, and which of the FE circuits 411-414 is responsible for dispatching commands to one or more of the SEs 401-404. Some embodiments of partition switch 415 selectively map subsets of FE circuits 411-414 to corresponding subsets of SEs 401-404 based on the characteristics of the application providing the commands for execution on SEs 401-404. For example, GPU 400 may operate in one of multiple modes depending on the characteristics of the application. Partition switch 415 determines the current mode of operation based on signaling associated with GPU 400 or other indications using characteristics of the application. The partition switch 415 then selectively determines the mapping between the SEs 401-404 and the FE circuits 411-414 based on the mode of operation.5 is a block diagram of a mapping 500 of FE circuits 501 , 502 , 503 , 504 to sets of SEs 511 , 512 , 513 , 514 for a GPU operating at an advanced user experience, according to some embodiments. Mapping 500 indicates the mapping of some implementations of FE circuits 411-414 in GPU 400 shown in FIG. 4 to SEs 401-404. The GPU is executing commands provided by applications that require a relatively advanced user experience (eg, advanced complexity or graphics resolution). FE circuit 501 supports advanced user experience, and thus FE circuit 501 is mapped to SEs 511-514. FE circuit 501 schedules commands to execute concurrently on SEs 511-514. FE circuits 502-504 are not mapped to SEs 511-514, and therefore no commands are scheduled for execution on SEs 511-514, as indicated by the dashed boxes representing FE circuits 502-504.6 is a block diagram of a mapping 600 of FE circuits 601, 602, 603, 604 to sets of SEs 611, 612, 613, 614 for a GPU operating at an intermediate user experience, according to some embodiments. 
Mapping 600 indicates the mapping of some implementations of FE circuits 411-414 in GPU 400 shown in FIG. 4 to SEs 401-404. The GPU is executing commands provided by applications that require a mid-level user experience (eg, mid-level complexity or graphics resolution). FE circuits 602, 603 support mid-level user experience. In the embodiment shown, FE circuit 602 is mapped to SEs 611 , 612 and FE circuit 603 is mapped to SEs 613 , 614 . Accordingly, the FE circuits 602, 603 schedule commands to execute concurrently on the corresponding subsets of SEs 611-614. The FE circuits 601, 604 are not mapped to the SEs 611-614, and therefore no commands are scheduled for execution on the SEs 611-614, as indicated by the dashed boxes representing the FE circuits 601, 604. However, in some embodiments, FE circuit 601 is mapped to a subset of SEs 611-614 because FE circuit 601 is capable of dispatching commands for applications requiring a mid-level user experience.7 is a block diagram of a mapping 700 of FE circuits 701 , 702 , 703 , 704 to sets of SEs 711 , 712 , 713 , 714 for a GPU operating at a low level user experience, according to some embodiments. Mapping 700 indicates the mapping of some implementations of FE circuits 411-414 in GPU 400 shown in FIG. 4 to SEs 401-404. The GPU is executing commands provided by applications that require low-level user experience (eg, low-level complexity or graphics resolution). All FE circuits 701-704 are capable of dispatching commands to SE 711-714 from applications requiring low level user experience. FE circuits 701-704 are thus mapped to corresponding SEs 711-714. For example, FE circuit 701 maps to SE 711 (and schedules commands for it), FE circuit 702 maps to SE 712 (and schedules commands for it), FE circuit 703 maps to SE 713 (and schedules commands for it), and FE circuit 704 maps to SE714 (and dispatch commands to it). FE circuits 701-704 schedule commands for concurrent execution on corresponding SEs 711-714.8 is a block diagram of a GPU 800 including a set of FE circuits that schedules instructions in a time division multiplexed thread for execution by a set of SEs in the GPU, according to some embodiments. GPU 800 represents some implementations of GPU 105 shown in FIG. 1 . The set of FE circuits includes the first FE circuit 805 and the second FE circuit 810, although some embodiments of the GPU 800 include more FE circuits in the set. The first FE circuit 805 schedules commands for execution on one or more corresponding SEs including the first SE 815 . In the embodiment shown, the first FE circuit 805 schedules commands for the first thread 817 during the first time interval and the third time interval. The first FE circuit 805 also schedules commands or second threads 818 during a second time interval time-division multiplexed with the first time interval and the third time interval. The second FE circuit 810 schedules commands for execution on one or more corresponding SEs including the second SE 820 . In the embodiment shown, the second FE circuit 810 schedules commands for the third thread 822 during the fourth time interval and the fifth time interval. The second FE unit 810 also schedules commands for the fourth thread 823 during the sixth time interval time division multiplexed with the fourth time interval and the fifth time interval. 
Accordingly, FE circuits 805, 810 schedule commands in threads 817, 818, 822, 823 to execute concurrently on SEs 815, 820.9 is a flowchart of a method 900 of selectively allocating FE circuits to schedule commands for concurrent execution on a set of SEs, according to some embodiments. Method 900 is implemented in some embodiments of GPU 800 shown in FIG. 1 .At block 905, the GPU determines characteristics of one or more workloads (or threads) provided for execution on the GPU. In some embodiments, characteristics include, but are not limited to, the complexity of the workload or the graphics resolution required (or specified or preferred) for the workload. Characteristics are determined based on information provided in the workload (or thread) or using other information that configures the GPU to execute the workload (or thread).At decision block 910, the GPU determines whether one or more workloads (or threads) are to be executed concurrently. Examples of concurrently executing workloads include workloads having a complexity or graphics resolution lower than or equal to the complexity or graphics resolution used to configure multiple FE circuitry implemented in a GPU, as described herein. If the GPU is only executing a single workload, method 900 proceeds to block 915 . If multiple workloads are to be scheduled concurrently, method 900 proceeds to block 920 .At block 915, an FE circuit is assigned to schedule commands to execute concurrently on the set of SEs. The other FE circuits available in the GPU are not allocated to schedule commands for execution on any of the SE sets.At block 920, a set of FE circuits is allocated to schedule commands for concurrent execution by a corresponding subset of the set of SEs. At block 925, the set of FE circuits schedules commands to be executed concurrently by the corresponding subsets. For example, if two FE circuits are allocated, the first FE circuit schedules commands for execution on a first subset of the SE set and the second FE circuit schedules commands for execution on a second subset of the SE set. The first subset and the second subset execute scheduling commands concurrently.Computer-readable storage media includes any non-transitory storage media or combination of non-transitory storage media that can be accessed by a computer system to provide instructions and/or data to the computer system during use. Such storage media may include, but are not limited to, optical media (eg, compact disc (CD), digital versatile disc (DVD), Blu-ray disc), magnetic media (eg, floppy disk, magnetic tape, or magnetic hard disk), volatile memory (eg, , random access memory (RAM) or cache), non-volatile memory (eg, read only memory (ROM) or flash memory), or microelectromechanical systems (MEMS) based storage media. A computer-readable storage medium can be embedded in a computing system (eg, system RAM or ROM), fixedly attached to the computing system (eg, a magnetic hard drive), removably attached to the computing system (eg, an optical disc or based on Universal Serial Bus (USB) flash memory), or coupled to a computer system (eg, Network Accessible Storage (NAS)) through a wired or wireless network.In some implementations, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. Software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. 
The software may include instructions and certain data that, when executed by one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. Non-transitory computer readable storage media may include, for example, magnetic or optical disk storage, solid state storage such as flash memory, cache memory, random access memory (RAM), or other one or more nonvolatile memory devices, and the like. Executable instructions stored on a non-transitory computer-readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executed by one or more processors.It should be noted that not all of the activities or elements described above in the general description are required, that a particular activity or part of a device may not be required and one or more other activities may be performed, or may include other activities than those described components other than components. Also, the order in which activities are listed is not necessarily the order in which they are performed. Additionally, corresponding concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.Benefits, other advantages, and solutions to problems have been described above with reference to specific embodiments. However, neither the stated benefits, advantages, solutions to problems nor any feature that would make any benefit, advantage or solution to the problem appear or be more pronounced should be construed as a key, required or essential feature of any or all claims . Furthermore, the specific embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the appended claims. Therefore, it is evident that the specific embodiments disclosed above may be altered or modified and all such changes are considered to be within the scope of the disclosed subject matter. Accordingly, the protection sought herein is set forth in the following claims. |
A method and an apparatus for power management in a computer system have been disclosed. One embodiment of the method includes monitoring transactions over an interconnect coupling a chipset device and a peripheral device in the system, the transactions being transmitted between the peripheral device and the chipset device according to a flow control protocol to allow the chipset device to keep track of the transactions, and causing a processor in the system to exit from a power state if a plurality of coherent transactions pending in a buffer of the chipset device exceeds a first threshold. Other embodiments are described and claimed. |
CLAIMS What is claimed is: 1. A method to manage power in a system, the method comprising: monitoring transactions over an interconnect coupling a chipset device and a peripheral device in the system, the transactions being transmitted between the peripheral device and the chipset device according to a flow control protocol that allows the chipset device to keep track of the transactions; and causing a processor in the system to exit from a power state if a plurality of coherent transactions pending in a buffer of the chipset device exceeds a first threshold. 2. The method of claim 1, further comprising: determining whether a predetermined period of time has passed if the plurality of coherent transactions pending in the buffer does not exceed the first threshold; and causing the processor to exit from the power state if the predetermined period of time has passed. 3. The method of claim 1, further comprising: <Desc/Clms Page number 20> in response to a request from the processor to enter into the power state, de-asserting an indicator within a message packet to allow the processor to enter into the power state if a plurality of incoherent transactions pending in the buffer of the chipset device exceeds a second threshold. 4. The method of claim 3, further comprising: asserting the indicator within the message packet to prevent the processor from entering the power state if the plurality of incoherent transactions pending in the buffer of the chipset device exceeds the second threshold. 5. The method of claim 3, further comprising: determining whether a second predetermined period of time has passed if the plurality of incoherent transactions pending in the buffer of the chipset device is below the second threshold; and deasserting the indicator within the message packet to allow the processor to enter the power state if the second predetermined period of time has passed. 6. The method of claim 3, wherein the first threshold is substantially equal to the second threshold. <Desc/Clms Page number 21> 7. The method of claim 3, wherein the first threshold is lower than the second threshold. 8. The method of claim 3, wherein the first threshold is higher than the second threshold. 9. The method of claim 1, wherein the flow control protocol is Peripheral Component Interconnect (PCI) Express. 10. The method of claim 1, wherein the chipset device comprises a memory controller. 11. The method of claim 1, wherein the chipset device comprises an input/output controller. 12. An apparatus in a computing system, the apparatus comprising: power management circuitry to monitor transactions over an interconnect coupling a root complex device and a peripheral device in the computing system, the transactions being transmitted between the peripheral device and the root complex <Desc/Clms Page number 22> device according to a flow control protocol to allow the root complex device to keep track of the transactions transmitted; and a digital media interface coupled to the root complex device to send a first message packet to the root complex device to cause a processor in the computing system to exit from a power state if a plurality of coherent transactions pending in a buffer of the root complex device exceeds a first threshold. 13. 
The apparatus of claim 12, wherein in response to a request from the processor to enter into the power state, the power management circuitry de-asserts an indicator within a second message packet to allow the processor to enter into the power state if a plurality of incoherent transactions pending in the buffer of the root complex device exceeds a second threshold. 14. The apparatus of claim 13, wherein in response to the request from the processor, the power management circuitry asserts the indicator within the second message packet to prevent the processor from entering the power state if the plurality of incoherent transactions pending in the buffer of the root complex device is below the second threshold. <Desc/Clms Page number 23> 15. The apparatus of claim 14, further comprising a timer, wherein the power management circuitry asserts the indicator within the second message packet to prevent the processor from entering the power state if the timer has expired. 16. The apparatus of claim 14, wherein the first threshold is substantially equal to the second threshold. 17. The apparatus of claim 12, wherein the flow control protocol is Peripheral Component Interconnect (PCI) Express. 18. A semiconductor chip in a computing system, the semiconductor chip comprising: a memory controller coupled to a peripheral device in the computing system; power management circuitry coupled to the memory controller to monitor transactions between the peripheral device and the memory controller ; and an input/output controller residing on a common substrate with the memory controller to allow a processor in the computing system to enter into the power state in response to a request from the processor to enter into a power state if a plurality of incoherent transactions pending in a buffer of the memory controller exceeds an <Desc/Clms Page number 24> entry threshold and to prevent the processor from entering into the power state if the plurality of incoherent transactions is below the entry threshold. 19. The semiconductor chip of claim 18, wherein the input/output controller causes the processor to exit from the power state if a plurality of coherent transactions pending in the buffer of the memory controller exceeds an exit threshold. 20. The semiconductor chip of claim 19, wherein the entry threshold is substantially equal to the exit threshold. 21. The semiconductor chip of claim 19, wherein the entry and exit thresholds are adaptively modifiable. 22. The semiconductor chip of claim 18, wherein the peripheral device is coupled to the memory controller via a Peripheral Component Interconnect (PCI) Express interconnect. 23. The semiconductor chip of claim 18, wherein the peripheral device is coupled to the memory controller via a bus. <Desc/Clms Page number 25> 24. A system comprising: a processor; a memory controller coupled to the processor; a graphics chip ; an interconnect coupling the graphics chip to the memory controller; an input/output controller, coupled to the memory controller, comprising power management circuitry to monitor transactions over the interconnect, the transactions being transmitted between the graphics chip and the memory controller according to a flow control protocol; and a digital media interface coupled to the memory controller to send a first message packet to the memory controller to cause the processor to exit from a power state if a plurality of coherent transactions pending in a buffer of the memory controller exceeds a first threshold. 25. 
The system of claim 24, wherein the power management circuitry de-asserts an indicator within the second message packet in response to a request from the processor to enter into the power state to allow the processor to enter into the power state if a plurality of incoherent transactions pending in the buffer of the memory controller exceeds a second threshold. <Desc/Clms Page number 26> 26. The system of claim 25, wherein the power management circuitry asserts the indicator within the second message packet in response to the request from the processor to prevent the processor from entering the power state if the plurality of incoherent transactions pending in the buffer of the memory controller is below the second threshold. 27. The system of claim 26, wherein the first threshold is substantially equal to the second threshold. 28. The system of claim 24, wherein the flow control protocol is Peripheral Component Interconnect (PCI) Express. |
<Desc/Clms Page number 1> A Method and An Apparatus for Power Management in a Computer System FIELD OF INVENTION [0001] The present invention relates to computing technology, and more particularly, to power management in a computer system. BACKGROUND [0002] In a typical computer system, a central processing unit (CPU) of the system supports various power states to allow robust power management in the system. For example, a CPU may support five power states, such as the C0, C1, C2, C3, and C4 states. In one system, the CO state is an active power state, in which the CPU executes instructions, while the remaining states, i. e. , the C1, C2, C3, and C4 states, are sleeping states. In the sleeping states, the CPU consumes less power and dissipates less heat than in the CO state because the CPU does not execute any instruction while in the sleeping states. Furthermore, the power consumption in the C4 state is generally less than the power consumption in the C3 state because the CPU supply voltage is lowered when the CPU enters into the C4 state. Each sleeping state has a latency associated with entering and exiting and is related to the power saving in each state. In general, the more circuitry or logic being shutdown to save more power, the more effort and longer exit latency are consumed to re-energize the circuitry and/or logic shutdown. For example, the phase lock loop (PLL) and input/output (IO) of a CPU can be shut down to save more <Desc/Clms Page number 2> power when the CPU is in the C3 or C4 state because the CPU does not snoop while in the C3 or C4 state. However, it typically takes longer to re-energize the PLL and 10 after the CPU exits from the C3 or C4 state. [0004] In an exemplary system, the CPU can access the memory during the CO state or snoop bus-master initiated memory traffic while in the C1 or C2 state. The bus master is a peripheral device having control of the bus at a given time, such as, for example, an external graphic core. The data movement from one device to another over a bus is, therefore, referred to as a bus mastering event. In contrast, in the C3 or C4 state, the CPU suspends snooping or memory access as part of the deeper sleep states. In order to snoop the bus-master initiated memory traffic, a CPU in either the C3 or C4 state has to exit the C3 or C4 state. Because of the higher exit latency of the C3 and C4 states, the system has to verify whether there is an on-going bus mastering event from any peripheral device in the system that may require the CPU to snoop before entering either the C3 or C4 state. If there is an on-going bus mastering event, the CPU has to settle for a power state (e. g., Cl or C2) with higher power consumption but lower exit latency than the C3 or C4 state. [0005] As to the peripheral device, it may be coupled to the CPU through a root complex device via a serial interconnect, such as a PCI Express interconnect. A root complex device includes a host bridge and one or more root ports. Examples of a root complex device include a memory controller or 10 controller functional <Desc/Clms Page number 3> device. An interconnect is an infrastructure that couples one device to another. PCI Express is a high speed, point-to-point serial interconnect standard. For example, the first generation of PCI Express interconnect supports 2.5 Gb/sec per lane data transmission. In one exemplary system, a graphic device is coupled to a chipset of the system (e. g. , a memory controller hub) through a 16-lane PCI Express interconnect. 
Furthermore, PCI Express allows flow control by supporting an accounting scheme with credits to keep track of the traffic over a PCI Express interconnect. The credits indicate the available buffering in a device for various types of transactions over an interconnect. For example, a memory controller can report to the software of the capability of a root complex device to transmit data by writing the information in a number of registers. According to PCI Express protocol, there are a number of prescribed credits for various transactions, such as, read request, write request, completion, etc. For example, when a graphic device issues transactions (e. g. , read requests) towards the root complex device and these transactions are pending, a credit is consumed to reflect the amount of buffering taken up in the memory controller by the pending transactions. When these transactions are handled or retired by the memory controller, the credit is released or freed up. The number of pending transactions, as reflected by the credits consumed, <Desc/Clms Page number 4> indicates the likelihood of a bus mastering event that may prohibit entry into the C3 or C4 state. A prior art technique to indicate on-going bus mastering traffic uses a sideband signal. For example, a graphic device sends a signal AGPBUSY to the root complex device of the computer system to indicate on-going bus mastering traffic for the system that attaches the graphic device using Accelerated Graphics Port (AGP). However, the sideband signals are costly because they require one additional pin per sideband signal on each device. Furthermore, permanent connector infrastructure has to be provided for the sideband signals in the system even though future technological innovation may not use such sideband signals at all. <Desc/Clms Page number 5> BRIEF DESCRIPTION OF THE DRAWINGS [0008] The present invention will be understood more fully from the detailed description that follows and from the accompanying drawings, which however, should not be taken to limit the appended claims to the specific embodiments shown, but are for explanation and understanding only. l0009] Figure 1A shows a flow diagram of one embodiment of a process to manage power in a computer system. Figure 1B shows a flow diagram of one embodiment of a process to manage power in a computer system. [0011] Figure 2A illustrates one embodiment of an entry threshold. Figure 2B illustrates one embodiment of an exit threshold. Figures 3A-3C illustrate various embodiments of chipset partition. Figure 4 shows an exemplary embodiment of a computer system. <Desc/Clms Page number 6> DETAILED DESCRIPTION [0015] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. Reference in the specification to"one embodiment"or"an embodiment"means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase"in one embodiment"in various places in the specification do not necessarily all refer to the same embodiment. A method and an apparatus for power management in a computer system are disclosed. 
In one embodiment, the method includes monitoring transactions over an interconnect coupling a chipset device and a peripheral device in the computer system, the transactions being transmitted between the peripheral device and the chipset device following a flow control protocol that allows the chipset device to keep track of the transactions. The embodiment further includes causing a processor in the computer system to exit from a power state if a number of coherent transactions pending in a buffer of the chipset device exceed a predetermined threshold. In a specific embodiment, the flow control protocol is PCI <Desc/Clms Page number 7> Express. Other features will be apparent from the accompanying figures and the detailed description that follows. ] Figure 1A shows a flow diagram of one embodiment of a process to manage power in a computer system. The process is performed by processing logic that may comprise hardware (e. g. , circuitry, dedicated logic, etc. ), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. As discussed above, an exemplary CPU may not initiate memory access or snoop bus-master initiated traffic while in the C3 or C4 state. Therefore, in response to a CPU request to enter either the C3 or C4 state (processing block 101), processing logic performs a series of operations to determine whether the peripheral devices in the system are likely to request the CPU to snoop a bus master or accesses directly to system memory without snooping. Examples of the peripheral devices include an external graphics core, an Ethernet controller, etc. Processing logic may receive a transaction 103 from one of the peripheral devices (processing block 104). The transaction 103 may be coherent or incoherent. A coherent transaction involves data currently or likely being used or modified in the cache of the CPU. In contrast, an incoherent transaction involves data from the memory and the data is currently not being stored, used, or modified in the cache of the CPU. [0019] Referring to Figure 1A, processing logic checks whether the transaction 103 received is coherent or there is any pending coherent transaction in a <Desc/Clms Page number 8> memory controller in the computer system (processing block 110). If either is true, then processing logic asserts a bus mastering indicator to prevent the CPU from entering the C3 or C4 state (processing block 130). In one embodiment, the CPU then enters into a default state, which may be the C 1 or C2 state. 10020] However, if the transaction 103 received is incoherent and there is no pending coherent transaction in the root complex device, processing logic consumes a number of credits to reflect the portion of buffer in the memory controller taken up by the incoherent transaction 103 and holds the transaction 103 as pending (processing block 112). Processing logic may check whether the total number of credits consumed exceeds or equals to an entry threshold (processing block 120). If the total number of credits consumed exceeds or equals to the entry threshold, the portion of the buffer in the root complex device filled by the pending transactions has exceeded a certain level corresponding to the entry threshold. With less available buffering in the memory controller, the peripheral device is less likely to send additional transactions to the memory controller. Thus, the CPU is less likely to be requested to snoop, and hence, the CPU may enter into either the C3 or C4 state. 
As a result, processing logic de-asserts the bus mastering indicator to allow the CPU to enter into either the C3 or C4 state (processing block 129). On the other hand, if the total number of credits consumed is less than the entry threshold, processing logic may check whether a timer has expired <Desc/Clms Page number 9> (processing block 122). If the timer has expired, processing logic deasserts the bus mastering indicator to allow the CPU to enter into either the C3 or C4 state (processing block 130). Otherwise, processing logic repeats processing block 110. Alternatively, processing logic may not check the timer at all and may simply repeat processing block 110 upon the determination that the total number of consumed credits is below the entry threshold. [0022] Figure 2A illustrates one embodiment of the entry threshold. The entry threshold 210 may be set to modify the bus mastering indicator to cause an exemplary CPU to enter into the C3 or C4 state even when there are pending incoherent transactions in the root complex device. In other words, the transactions may be intentionally held pending in the memory controller with no service attempted until the number of credits consumed exceeds or equals to the entry threshold 210 in order to defer asserting the bus mastering indicator to the CPU. As a result, the CPU has more opportunities to enter into either the C3 or C4 state to reduce average CPU power consumption. The entry threshold 210 may be set at 0% for highly performance sensitive applications, such as graphic applications. However, in a mobile system, such as a laptop computer, the entry threshold may be set at different values depending on the amount of charge remaining in one or more batteries of the system when the system is running solely on such batteries. It is noted that the tradeoff for lower CPU power consumption <Desc/Clms Page number 10> may be degraded CPU performance state. Furthermore, in one embodiment, a timer is used to qualify how long to stall servicing the initial pending transaction in order to mitigate the impact of the tradeoff on some latency sensitive applications. If the timer expires before the entry threshold 210 is reached, then the bus mastering indicator may be reset to allow the CPU to enter the C3 or C4 state for light traffic or idle cases. [0024]. Figure 1B shows a flow diagram of one embodiment of a process to manage power in a computer system : The process is performed by processing logic that may comprise hardware (e. g. , circuitry, dedicated logic, etc. ), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. When the CPU is in either the C3 or C4 state (processing block 105), processing logic may receive a coherent transaction from a peripheral device (processing block 140). Examples of the peripheral device include an external graphic core, an Ethernet controller, etc. Upon receipt of the coherent transaction, processing logic consumes a number of credits to reflect the portion of buffer taken up by the coherent transaction received (processing block 142). Then processing logic checks whether the total number of consumed credits for the coherent transactions exceeds or equals to an exit threshold (processing block 144). If the total number of the consumed credits exceeds or equals to the exit threshold, processing logic causes the CPU to exit from either the C3 or C4 state (processing <Desc/Clms Page number 11> block 150). 
Processing logic may send a signal to the CPU to instruct the CPU to exit from either the C3 or C4 state. After exiting from the C3 or C4 state, in one embodiment, the CPU enters the CO state. However, if the total number of the consumed credits does not exceed or equal to the exit threshold, processing logic checks whether a timer has expired (processing block 146). If the timer has expired, processing logic causes the CPU to exit from either the C3 or C4 state (processing block 150). Otherwise, processing logic queues up the transaction (processing block 148) and repeats processing block 140. Alternatively, processing logic may not check the timer at all and may simply queue up the transaction (processing block 148) and repeat processing block 140 upon the determination that the total number of consumed credits is below the exit threshold. Figure 2B illustrates one embodiment of the exit threshold. Referring to Figure 2B, the exit threshold 250 is set to decide when to set the bus mastering indicator to cause an exemplary CPU to exit from either the C3 or C4 state. Transactions may be queued up when the CPU is in the C3 or C4 state to allow the CPU to spend a certain period of time in the C3 or C4 state in order to achieve a certain level of power saving. If the number of consumed credits is below the exit threshold 250, then the CPU is held off from being notified that an exit condition has occurred. In one embodiment, a timer is used to qualify how long to stall servicing <Desc/Clms Page number 12> the initial pending transactions if the applications are latency sensitive. Once the timer expires, a signal is sent to cause the CPU to exit from the C3 or C4 state even if the total number of consumed credits corresponding to the pending coherent transactions is less than the exit threshold 250. Likewise, for some highly performance sensitive applications, the exit threshold 250 may be set at 0% in order to meet the performance specifications of such applications. Furthermore, in some embodiments, the exit threshold 250 is set at different values depending on the remaining battery charge capacity in the system when the battery alone powers the system. One should appreciate that there are multiple ways to define the entry and exit thresholds. In one embodiment, the entry threshold substantially equals to the exit threshold. For instance, to run a performance-oriented application, the entry and exit thresholds may be hardwired to a single value at 0%. In an alternate embodiment, the entry and exit thresholds are set at different values. For instance, the entry threshold may be higher than the exit threshold. Furthermore, allowing the entry and exit thresholds to be set at different values on the fly enables the CPU to adjust performance based on the remaining battery charge capacity. In addition, the CPU may modify the entry and exit behavior of the CPU adaptively through threshold modification. Adaptive modification of the entry and exit thresholds allows the CPU to steer away from frequent thrashing of <Desc/Clms Page number 13> low power states because of certain periodic traffic that may coincide with the timing of the C3 or C4 state entry decision. Another advantage is to provide for asymmetric entry and exit behavior to tune and increase the residency period of the CPU in either the C3 or C4 state. 
For example, the CPU may take hundreds of microseconds to exit the C3 or C4 state, during which a phase lock loop of the CPU may consume twice the power consumed during the initial ten microseconds to spin up. Therefore, if the C3 or C4 residency of the CPU is less than the exit latency, the net effect may be little or negligible power saving, or worse, more power consumption. Figures 3A-3C illustrate various embodiments of chipset partitions in a computer system. Figure 3A shows a memory controller 310, an input/output controller 320, and power management circuitry 330. The power management circuitry 330 is outside of both the memory controller 310 and the input/output controller 320. The memory controller 310 is coupled to the input/output controller 320 via a link 315. The link 315 may include a digital media interface (DMI) link. The memory controller 310 is further coupled to one or more peripheral devices (not shown) via one or more buses or interconnects (not shown) that adopt a protocol with a credit-based flow control accounting scheme, such as, for example, PCI Express. [00301. In one embodiment, the power management circuitry 330 communicates with the memory controller 310 and/or the input/output controller 320 via the sideband signals 322 and 324. The sideband signals 332 and 334 indicate <Desc/Clms Page number 14> whether there is any bus mastering activity from a peripheral device, such as an advance graphic port (AGP). The sideband signals 332 and 334 are typically denoted as XX BUSY. For example, the sideband signal corresponding to the AGP is denoted as AGP BUSY. One should appreciate that the sideband signals may include one or more shared signals. [0031] In one embodiment, one of the memory controller 310 and the input/output controller 320 acts as a central agent to roll up the bus mastering activity information through one or more message packets sent between the memory controller 310 and the input/output controller 320. The message packets may include DMI message packets 325. However, the central agent still communicates with the power management circuitry 330 via one of the sideband signals 334 and 332. 10032] Figure 3B shows an alternate embodiment of chipset partition in a computer system. The chipset in Figure 3B includes a memory controller 340 and an input/output controller 350 coupling to each other via a link 345, which may include a DMI link. However, one should appreciate that some embodiments of the chipset include additional devices not shown. The memory controller 340 is further coupled to a peripheral device (not shown) via an interconnect (not shown) adopting a credit- based flow control accounting scheme, such as, for example, PCI Express. The peripheral device may include an external graphic core, an Ethernet controller, etc. The input/output controller 350 includes power management circuitry 352 to monitor <Desc/Clms Page number 15> data traffic over the interconnect. Since the power management circuitry 352 is internal to the input/output controller 350, the memory controller 340 has to communicate to the input/output controller 350 on whether the peripheral device has any on-going traffic over the interconnect. In one embodiment, the memory controller 340 sets one or more bits in a message packet 347 sent via the link 345 to the input/output controller 350. The message packet 347 may be a DMI packet. 
Setting the bit (s) in the message packet 347 is also referred to as in-band virtualization of the bus mastering indicator signal, as opposed to the sideband signals (e. g. , sideband signals 332 and 334 in Figure 3A), because the signal is abstracted to eliminate the pin and connector infrastructure on both of the controllers 340 and 350. Furthermore, the power management circuitry 352 may also monitor the bus mastering activity from other peripheral devices (not shown) coupled to the input/output controller 350 via other interconnects (not shown). [0033] Figure 3C shows an alternate embodiment of a chipset partition in a computer system. The chipset shown in Figure 3C includes an integrated memory and input/output controller 360. The integrated memory and input/output controller 360 includes internal power management circuitry 365. Since the power management circuitry 365 is part of the integrated controller 360, the bus mastering indications for peripheral devices coupled to the controller 360 may be internally registered through logic circuitry within the controller 360. <Desc/Clms Page number 16> [0034] One should appreciate that the various embodiments of chipset partition in Figures 3A-3C are merely shown to illustrate the technique disclosed. The technique disclosed may be applied to other embodiments of computer chipset partition. Figure 4 shows an exemplary embodiment of a computer system 400. The computer system 400 includes a central processing unit (CPU) 410, a memory controller (MCH) 420, a number of dual in-line memory modules (DIMMs) 425, a number of memory devices 427, a PCI Express graphic port 430, an input/output controller (ICH) 440, a number of Universal Serial Bus (USB) ports 445, an audio coder-decoder (AUDIO CODEC) 460, a Super Input/Output (Super I/O) 450, and a firmware hub (FWH) 470. [0036] In one embodiment, the CPU 410, the PCI Express graphic port 430, the DIMMs 425, and the ICH 440 are coupled to the MCH 420. The link 435 between the MCH 420 and the ICH 440 may include a DMI link. The MCH 420 routes data to and from the memory devices 427 via the DIMMs 425. The memory devices 427 may include various types of memories, such as, for example, dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate (DDR) SDRAM, or flash memory. In one embodiment, each of the DIMMs 425 is mounted on the same motherboard (not shown) via a DIMM connector (not shown) in order to couple to the MCH 420. In one <Desc/Clms Page number 17> embodiment, the USB ports 445, the AUDIO CODEC 460, and the Super I/O 450 are coupled to the ICH 440. The Super I/O 450 may be further coupled to a firmware hub 470, a floppy disk drive 451, data input devices 453, such as, a keyboard, a mouse, etc. , a number of serial ports 455, and a number of parallel ports 457. [0037] In one embodiment, the ICH 440 includes power management circuitry 442 to monitor data traffic over various interconnects coupling the ICH 440 and the MCH 420 to the peripheral devices, such as, for example, the PCI Express graphic port 430. The power management circuitry 442 may generate a bus mastering indicator to be sent as a virtualized signal within a message packet 437 from the MCH 420 to the ICH 440 via the link 435. Alternatively, the MCH 420 and the ICH 440 may be integrated into a single controller with power management circuitry such that the bus mastering indicator may be internally registered through logic. 
In an alternate embodiment, the MCH 420 and the ICH 440 remain as separate devices and the power management circuitry is external to both of the MCH 420 and the ICH 440. Either one of the MCH 420 and the ICH 440 may act as a central agent to roll up information of bus traffic from the peripheral devices in the system 400 from the other controller using message packets sent between the controllers 420 and 440. Furthermore, the central agent may communicate the <Desc/Clms Page number 18> information to the external power management circuitry via one or more sideband signals. Note that any or all of the components and the associated hardware illustrated in Figure 4 may be used in various embodiments of the computer system 400. However, it should be appreciated that other configuration of the computer system may include one or more additional devices not shown in Figure 4. Furthermore, one should appreciate that the technique disclosed is applicable to different types of system environment, such as a multi-drop environment or a point- to-point environment. Likewise, the disclosed technique is applicable to both mobile and desktop computing systems. [0040] The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting. |
Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing wake lock aware scheduling. The apparatus may receive a wake lock request by a wake lock profiler and acquire wake lock information of a wake lock event associated with the wake lock request. The wake lock information may include a wake lock time parameter. The apparatus may send a hint having the wake lock time parameter. The apparatus may receive the hint, determine whether ready jobs can execute during the wake lock event, and send a request for permission to schedule the ready jobs for execution during the wake lock event in response to determining that the ready jobs can execute during the wake lock event. |
1.A method for implementing wake-up lock aware scheduling on a computing device, comprising:Receiving a wake lock request through the wake lock analyzer;Acquiring, by the wakelock analyzer, wakelock information of a wakelock event associated with the wakelock request, wherein the wakelock information includes a wakelock time parameter;Sending a prompt containing the wake lock time parameter by the wake lock analyzer;Receiving the prompt by a scheduler;Determining, by the scheduler, whether a first ready job is executable during the wake lock event;A request to permit scheduling of the first ready job to execute during the wake lock event is sent by the scheduler in response to determining that the first ready job is executable during the wake lock event.2.The method of claim 1, wherein the request to permit scheduling of the first ready job to execute during the wake lock event comprises an estimate of processor usage for the first ready job, The method further includes:Receiving, by the wakelock analyzer, the request to permit scheduling of the first ready job to execute during the wake lock event;Determining, by the wake lock analyzer, whether a workload including the first ready job exceeds a total processor usage threshold;Transmitting, by the wakelock analyzer, the request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload exceeds the total processor usage threshold Rejected;Transmitting, by the wakelock analyzer, the request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload does not exceed the total processor usage threshold Approval.3.The method of claim 2 further comprising:Receiving, by the scheduler, the rejection of the request for permitting scheduling of the first ready job for execution during the wake lock event;Determining, by the scheduler, whether a second ready job is executable during the wake lock event;Responding to a request by the scheduler to permit scheduling of the second ready job to execute during the wake lock event in response to determining that the second ready job is executable during the wake lock event;Receiving, by the scheduler, the approval of the request to permit scheduling of the second ready job to execute during the wake lock event;The second ready job is scheduled to execute during the wake lock event.4.The method of claim 1 further comprising:Determining, by the scheduler, whether the first ready job exceeds a processor usage threshold, wherein, in response to determining that the first ready job is executable during the wake lock event and in response to determining the first ready job Sending the request to permit scheduling of the first ready job to execute during the wake lock event without exceeding the processor usage threshold;It is determined by the scheduler whether a second ready job is executable during the wake lock event.5.The method of claim 1 further comprising:Determining, by the wake lock analyzer, whether the wake lock information includes a wake lock time parameter;Calculating a wake lock duration estimate of the wake lock event by the wake lock analyzer in response to determining that the wake lock information does not include the wake lock time parameter;The wake lock duration estimate is stored by the wake lock analyzer.6.The method of claim 5 wherein:Storing the wake lock duration estimate includes:Correlating the wake lock duration estimate with a wake lock identifier ID 
of the wake lock event associated with the wake lock request;Storing the wakelock duration estimate in a wakelock information data structure having a corresponding wakelock ID;Acquiring the wake lock information includes retrieving the wake lock duration estimate from the wake lock information data structure.7.The method of claim 5 wherein calculating the wake lock duration estimate for the wake lock event comprises calculating the wake up of the wake lock event using a plurality of wake lock durations of the wake lock event Lock duration estimate.8.The method of claim 7, wherein the plurality of wake lock durations comprises a plurality of wake lock duration estimates, a plurality of wake lock duration observations, or a plurality of wake lock estimates and wake lock observations one of the.9.A wake lock sensing system configured to implement wake lock aware scheduling on a computing device, the wake lock sensing system comprising:A wake lock analyzer configured to perform operations including:Receiving a wake lock request;Acquiring wake lock information of a wake lock event associated with the wake lock request, wherein the wake lock information includes a wake lock time parameter;Sending a prompt containing the wake lock time parameter;A scheduler communicatively coupled to the wakelock analyzer and configured to perform operations including:Receiving the prompt;Determining whether the first ready job is executable during the wake lock event;A request to permit scheduling of the first ready job to execute during the wake lock event is sent in response to determining that the first ready job is executable during the wake lock event.10.The wake lock sensing system of claim 9, wherein the request to permit scheduling of the first ready job to execute during the wake lock event comprises using the first ready job to a processor for a pin Rate estimate, andWherein the wake lock analyzer is configured to perform operations further comprising the following:Receiving the request to permit scheduling of the first ready job to execute during the wake lock event;Determining whether a workload including the first ready job exceeds a total processor usage threshold;Responding to the request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload exceeds the total processor usage threshold;Approving the request to permit scheduling of the first ready job to execute during the wake lock event is sent in response to determining that the workload does not exceed the total processor usage threshold.11.The wake lock sensing system of claim 10, wherein the scheduler is configured to perform operations further comprising:Receiving the rejection of the request to permit scheduling of the first ready job to execute during the wake lock event;Determining whether a second ready job is executable during the wake lock event;Responding to a request to permit scheduling of the second ready job to execute during the wake lock event in response to determining that the second ready job is executable during the wake lock event;Receiving the approval of the request to permit scheduling of the second ready job to execute during the wake lock event;The second ready job is scheduled to execute during the wake lock event.12.The wake lock sensing system of claim 9, wherein the scheduler is configured to perform operations further comprising:Determining whether the first ready job exceeds a processor usage threshold, wherein, in response to determining 
that the first ready job is executable during the wake lock event and in response to determining that the first ready job does not exceed the processing Transmitting the request for permitting scheduling of the first ready job to execute during the wake lock event;A determination is made as to whether the second ready job can be executed during the wake lock event.13.The wake lock sensing system of claim 9, wherein the wake lock analyzer is configured to perform operations further comprising:Determining whether the wake lock information includes a wake lock time parameter;Calculating a wake lock duration estimate of the wake lock event in response to determining that the wake lock information does not include a wake lock time parameter;The wake lock duration estimate is stored.14.The wake lock sensing system of claim 13 wherein said wake lock analyzer is configured to perform operations such that:Storing the wake lock duration estimate includes:Correlating the wake lock duration estimate with a wake lock identifier ID of the wake lock event associated with the wake lock request;Storing the wakelock duration estimate in a wakelock information data structure having a corresponding wakelock ID;Acquiring the wake lock information includes retrieving the wake lock duration estimate from the wake lock information data structure.15.The wake lock sensing system of claim 13 wherein the wake lock analyzer is configured to perform an operation such that calculating the wake lock duration estimate of the wake lock event comprises using the wake lock event The plurality of wakelock durations calculate the wakelock duration estimate of the wakelock event.16.The wake lock sensing system of claim 15 wherein said plurality of wake lock durations comprises a plurality of wake lock duration estimates, a plurality of wake lock duration observations, or a plurality of wake lock estimates and wake up One of the lock observations.17.A wake lock sensing system configured to implement wake lock aware scheduling on a computing device, the wake lock sensing system comprising:Means for receiving a wake lock request;Means for acquiring wake lock information of a wake lock event associated with the wake lock request, wherein the wake lock information includes a wake lock time parameter;Means for transmitting a prompt containing the wake lock time parameter;Means for receiving the prompt;Means for determining whether a first ready job is executable during the wake lock event;Means for transmitting a request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the first ready job is executable during the wake lock event.18.The wake lock sensing system of claim 17, wherein the request to permit scheduling of the first ready job to execute during the wake lock event comprises processor usage for the first ready job The estimation of the wake lock sensing system further includes:Means for receiving the request for permitting scheduling of the first ready job for execution during the wake lock event;Means for determining whether a workload including the first ready job exceeds a total processor usage threshold;Means for transmitting a rejection of the request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload exceeds the total processor usage threshold;Means for transmitting an approval of the request to permit scheduling of the first ready job to execute during the wake lock event in 
response to determining that the workload does not exceed the total processor usage threshold.19.The wake lock sensing system of claim 18, further comprising:Means for receiving the rejection of the request to permit scheduling of the first ready job to execute during the wake lock event;Means for determining whether a second ready job is executable during the wake lock event;Means for transmitting a request for permitting scheduling of the second ready job to execute during the wake lock event in response to determining that the second ready job is executable during the wake lock event;Means for receiving the approval for the request to permit scheduling of the second ready job to execute during the wake lock event;Means for scheduling the second ready job to execute during the wake lock event.20.The wake lock sensing system of claim 17 further comprising:Means for determining whether the first ready job exceeds a processor usage threshold, wherein the means for transmitting the request to permit scheduling of the first ready job to execute during the wake lock event comprises Transmitting, in response to determining that the first ready job is executable during the wake lock event and in response to determining that the first ready job does not exceed the processor usage threshold a device that is ready to perform a request during the wake lock event;Means for determining whether a second ready job is executable during the wake lock event.21.The wake lock sensing system of claim 17 further comprising:Means for determining whether the wake lock information includes a wake lock time parameter;Means for calculating a wake lock duration estimate of the wake lock event in response to determining that the wake lock information does not include a wake lock time parameter;Means for storing the wake lock duration estimate.22.The wake lock sensing system of claim 21 wherein:The means for storing the wake lock duration estimate includes:Means for correlating the wake lock duration estimate with a wake lock identifier ID of the wake lock event associated with the wake lock request;Means for storing the wakelock duration estimate in a wakelock information data structure having a corresponding wakelock ID;The means for obtaining the wake lock information includes means for retrieving the wake lock duration estimate from the wake lock information data structure.23.The wake lock sensing system of claim 21 wherein the means for calculating the wake lock duration estimate of the wake lock event comprises a plurality of wake lock duration calculations for using the wake lock event The means for waking up the wake lock duration estimate, wherein the plurality of wake lock durations comprises a plurality of wake lock duration estimates, a plurality of wake lock duration observations, or a plurality of wake locks One of the estimate and the wake lock observation.24.A non-transitory processor readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations comprising:Receiving a wake lock request;Acquiring wake lock information of a wake lock event associated with the wake lock request, wherein the wake lock information includes a wake lock time parameter;Sending a prompt containing the wake lock time parameter;Receiving the prompt;Determining whether the first ready job is executable during the wake lock event;A request to permit scheduling of the first ready job to execute during the wake lock event is sent in response 
to determining that the first ready job is executable during the wake lock event.25.The non-transitory processor readable storage medium of claim 24, wherein the request to permit scheduling of the first ready job to execute during the wake lock event comprises for the first ready job An estimate of processor usage, and wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:Receiving the request to permit scheduling of the first ready job to execute during the wake lock event;Determining whether a workload including the first ready job exceeds a total processor usage threshold;Responding to the request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload exceeds the total processor usage threshold;Approving the request to permit scheduling of the first ready job to execute during the wake lock event is sent in response to determining that the workload does not exceed the total processor usage threshold.26.The non-transitory processor readable storage medium of claim 25, wherein the stored processor executable instructions are configured to cause the processor to perform operations further comprising:Receiving the rejection of the request to permit scheduling of the first ready job to execute during the wake lock event;Determining whether a second ready job is executable during the wake lock event;Responding to a request to permit scheduling of the second ready job to execute during the wake lock event in response to determining that the second ready job is executable during the wake lock event;Receiving the approval of the request to permit scheduling of the second ready job to execute during the wake lock event;The second ready job is scheduled to execute during the wake lock event.27.The non-transitory processor readable storage medium of claim 24, wherein the stored processor executable instructions are configured to cause the processor to perform operations further comprising:Determining whether the first ready job exceeds a processor usage threshold, wherein transmitting the request to permit scheduling the first ready job to execute during the wake lock event comprises responding to determining the first ready A job may be executed during the wake lock event and in response to determining that the first ready job does not exceed the processor usage threshold, the transmitting for permitting scheduling of the first ready job for the wake lock event a request executed during the period; andA determination is made as to whether the second ready job can be executed during the wake lock event.28.The non-transitory processor readable storage medium of claim 24, wherein the stored processor executable instructions are configured to cause the processor to perform operations further comprising:Determining whether the wake lock information includes a wake lock time parameter;Calculating a wake lock duration estimate of the wake lock event in response to determining that the wake lock information does not include a wake lock time parameter;The wake lock duration estimate is stored.29.The non-transitory processor readable storage medium of claim 28, wherein the stored processor executable instructions are configured to cause the processor to perform operations such that:Storing the wake lock duration estimate includes:Correlating the wake lock duration estimate with a wake lock identifier ID of the wake lock event associated with the wake lock 
request;Storing the wakelock duration estimate in a wakelock information data structure having a corresponding wakelock ID;Acquiring the wake lock information includes retrieving the wake lock duration estimate from the wake lock information data structure.30.The non-transitory processor readable storage medium of claim 28, wherein the stored processor executable instructions are configured to cause the processor to perform an operation such that the calculating the wake lock event is performed The wake lock duration estimate includes calculating the wake lock duration estimate of the wake lock event using a plurality of wake lock durations of the wake lock event, wherein the plurality of wake lock durations comprises a plurality of wakeups Lock duration estimate, multiple wake lock duration observations, or one of multiple wake lock estimates and wake lock observations. |
Wake-up lock-aware system wide job scheduling for energy efficiency on mobile devicesBackground techniqueDifferent parts of the computing system of modern smartphones schedule their individual jobs (periodic or aperiodic). Schedule application-level services and system-level services in user space. Schedule driver-level jobs and background jobs in kernel space. The central processing unit (CPU) of the computing system wakes up periodically to complete the scheduled job. Frequent CPU wakeups increase overall energy consumption. The CPU remains awake to perform background activities for applications and services even when the display of the computing system is turned off. In battery powered systems, such as smartphones, this consumes battery power.Summary of the inventionMethods and apparatus of various embodiments provide apparatus and methods for implementing wake lock aware scheduling on a computing device. Various embodiments may include a wake lock analyzer that receives a wake lock request, acquires wake lock information for a wake lock event associated with the wake lock request, and sends a prompt containing a wake lock time parameter. In various embodiments, the wake lock information may include a wake lock time parameter. The wake lock time parameter may include information identifying and/or implementing a calculation of the duration of the wake lock, including one of a wake lock duration, a wake lock duration estimate, a wake lock start time, and/or a wake lock end time. Multiple. Some embodiments may further include receiving, by the scheduler, a prompt, determining, by the scheduler, whether the first ready job is executable during the wake lock event, and transmitting, by the scheduler, in response to determining that the first ready job is executable during the wake lock event A request to schedule a first ready job to execute during a wake lock event.In some embodiments, the request to permit scheduling of the first ready job to execute during the wake lock event may include an estimate of processor usage for the first ready job. Some embodiments may further include a wake lock analyzer that receives a request to permit scheduling of the first ready job to execute during the wake lock event, and determines if the workload containing the first ready job exceeds the total processing Usage threshold. Some embodiments may further include a wake lock analyzer that transmits a request for permitting scheduling of the first ready job for execution during the wake lock event in response to determining that the workload exceeds the total processor usage threshold Refuse. Some embodiments may further include a wake lock analyzer that sends a request to permit scheduling of the first ready job to execute during the wake lock event in response to determining that the workload does not exceed the total processor usage threshold Approval.Some embodiments may further include a scheduler that receives a rejection of the request to permit scheduling of the first ready job to execute during the wake lock event, and determines whether the second ready job is executable during the wake lock event. Some embodiments may further include a scheduler responsive to determining that the second ready job is executable during the wake lock event to send a request to permit scheduling of the second ready job to execute during the wake lock event. 
Some embodiments may further include a scheduler that receives approval for a request to permit scheduling of a second ready job to execute during a wake lock event, and schedules a second ready job to execute during the wake lock event.Some embodiments may further include a scheduler that determines whether the first ready job exceeds a processor usage threshold, and responsive to determining that the first ready job is executable during the wake lock event and in response to determining that the first ready job is not A request to permit scheduling of the first ready job to execute during the wake lock event is sent beyond the processor usage threshold. Various embodiments may further include a scheduler that determines whether the second ready job is executable during a wake lock event.Some embodiments may further include a wake lock analyzer that determines whether the wake lock information includes a wake lock time parameter, and calculates a wake lock duration for the wake lock event in response to determining that the wake lock information does not include the wake lock time parameter Time estimate. Some embodiments may further include a wake lock analyzer that stores the wake lock duration estimate in a wake lock information data structure having a corresponding wake lock ID.In some embodiments, calculating the wake lock duration estimate for the wake lock event may include calculating a wake lock duration estimate for the wake lock event using the plurality of wake lock durations of the wake lock event. In some embodiments, the plurality of wake lock durations may include a plurality of wake lock duration estimates, a plurality of wake lock duration observations, or one of a plurality of wake lock estimates and wake lock observations.Various embodiments may include a wake lock sensing system having a wake lock analyzer communicatively coupled to the scheduler. 
The wake lock analyzer and scheduler can be configured to perform the operations of one or more of the embodiment methods outlined above.Various embodiments may include a wake lock sensing system having means for performing the functions of one or more of the above-described embodiment methods.Various embodiments may comprise a non-transitory processor readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform the embodiments outlined above One or more operations in the method.DRAWINGSThe accompanying drawings, which are incorporated in the claims of the claims1 is a block diagram showing components of a computing device suitable for implementing an embodiment.2 is a component block diagram illustrating an example multi-core processor suitable for implementing an embodiment.3A-3C are component block diagrams illustrating three examples of a wakelock sensing system, in accordance with various embodiments.4 is a block diagram illustrating an example of a wakelock information table, in accordance with an embodiment.5 is a symbolic diagram illustrating an example wakelock lock duration estimate, in accordance with an embodiment.6 is a symbolic diagram illustrating an example of wake-up lock unperceived scheduling, in accordance with an embodiment.7 is a symbolic diagram illustrating an example of wake lock aware scheduling, in accordance with an embodiment.8 is a process flow diagram illustrating a method for wake lock duration estimation, in accordance with an embodiment.9 is a process flow diagram illustrating a method for waking up lock aware scheduling, in accordance with an embodiment.10 is a process flow diagram illustrating a method for wake lock aware scheduling, in accordance with an embodiment.11 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.12 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.13 is a component block diagram illustrating an example server suitable for use with the various embodiments.Detailed waysVarious embodiments will be described in detail with reference to the drawings. Wherever possible, the same reference numerals are used throughout the drawings to the The specific examples and embodiments are for illustrative purposes and are not intended to limit the scope of the appended claims.The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any or all of the following: cellular telephone, smart phone, personal or mobile multimedia player, personal data assistant ( PDA), laptops, tablets, deformable laptops/tablets (two-in-one computers), smart laptops, ultrabooks, netbooks, palmtop computers, wireless email receivers, cellular phones with multimedia Internet capabilities , a mobile game console, and similar personal electronic devices including memory and programmable processors. 
The term "computing device" may further refer to a stationary computing device, including personal computers, desktop computers, all-in-one computers, workstations, supercomputers, mainframe computers, embedded computers, servers, home theater computers, and game consoles.Various embodiments include methods and systems and apparatus for implementing such methods for systematically adjusting and executing system wide coordinated job scheduling in a computing device to implement a longer/deeper processor sleep phase based on wakelock activity to obtain Better energy efficiency. Embodiments may include calculating a duration estimate of the wakelock activity, providing the scheduler with an indication of an upcoming/existing wakelock activity time, and scheduling the upcoming/existing wakelock activity as long as the workload will remain below Threshold.Component-oriented users (called "activities" in Android systems) and background services within each application often acquire wake-up locks. The wake-up lock keeps the central processing unit (CPU) awake for a certain time window to be able to perform a specific task. A clear timeout value can be used to obtain a wakelock. For example, a wake lock can be obtained to play a 60 second YouTube video, or to execute some specific, well-defined program code segments. The task-based nature of other wake-up locks makes it possible to estimate the average duration of some wake-up locks based on history.To reduce the number of CPU wakeups used to obtain the wakelock to perform various tasks, jobs can be scheduled from different parts of the system to be opportunistically piggybacked on the upcoming/existing wakelock time window. The job scheduler at different levels can be configured to adjust the scheduling of the job according to the prompts regarding the upcoming/existing wake lock window. This allows the CPU to sleep for a longer duration and reduce the number of sleep to active and active to sleep transitions.The duration and/or duration estimate of the wake lock may be calculated based on offline and/or runtime analysis using suitable measurements, such as average, median, exponentially weighted moving averages, and the like. In various systems, duration and/or duration estimates associated with information identifying and specifying other parameters of the wake lock may be stored. For example, a duration and/or duration estimate can be added to the global table of the wake lock and associated with the wake lock identifier (ID) in the Android system.A wake-up lock analyzer can be added to the system to calculate duration and/or duration estimates to provide different levels of schedulers with hints related to the duration and/or duration estimate of the wake-up lock to determine if the scheduled work is exceeded Total processor usage threshold and approve/reject work schedules. The wake lock analyzer can be a stand-alone component, an integrated component of a power manager, a software program implemented by a CPU, or a combination of a software program implemented by a CPU and dedicated hardware.The application can request a wake lock from the power manager. The request to acquire the wake lock may specify a wake lock identifier and/or a timeout value indicating when to release the wake lock, thereby causing the CPU to transition from active to sleep. The wake lock analyzer can detect, receive, and/or intercept the wake lock request and pass the wake lock prompt to one or more schedulers. 
The wake lock hint may specify the actual or estimated duration of the upcoming/existing process, job or task to the receiving scheduler, which may include a start time and/or an end time or may be used to define the start and/or end time of the wake lock. The duration of the process/job/task.The scheduler can maintain a list of ready jobs waiting to execute during an upcoming/existing wake lock or waiting to acquire a wake lock to execute. The scheduler may select a ready job that may be performed during the duration specified in the wake lock prompt to schedule execution during the associated wake lock. The selection of the ready job may be implemented in response to determining that the CPU usage will be less than the processor usage threshold during the wake lock, such as 50% or less of the CPU capacity.Each scheduler can negotiate with the wake lock analyzer to schedule a corresponding selected ready job for the scheduler. Each scheduler can send the estimated CPU usage of the selected ready job to the wakelock analyzer. The wakelock analyzer can determine whether the combination of total CPU usage, estimated CPU usage of selected ready jobs from various schedulers, and CPU usage of scheduled jobs for wakelocks will exceed the total processor usage threshold. (For example, 75% or more of CPU capacity). In response to determining that the total CPU usage will be less than the total processor usage threshold, the wake lock analyzer may signal the approval of the scheduling of the selected ready job to the respective schedulers.In response to determining that the total CPU usage will exceed the total processor usage threshold, the wake lock analyzer may signal rejection of all selected ready jobs to the various schedulers. Based on the rejected signal, the scheduler can select a ready job with a lower estimated CPU usage and continue to negotiate with the wake lock analyzer until an approval for the selected ready job is received.In response to determining that the total CPU usage will exceed the total processor usage threshold, the wakelock analyzer may select certain ready jobs that may execute during the wakelock without exceeding the total processor usage threshold and will be ready for the other selections. The approval and rejection of the job signals each scheduler. Based on the approved signal, the scheduler can schedule an approved ready job to execute during the upcoming/existing wake lock.FIG. 1 illustrates a computing device 10 suitable for use with the various embodiments. Computing device 10 may include a system on a chip (SoC) 12 having a processor 14, memory 16, communication interface 18, and storage device memory interface 20. Computing device 10 can further include a communication component 22, such as a wired or wireless modem, storage device memory 24, and an antenna 26 for establishing a wireless communication link. Processor 14 can include any of a variety of processing devices, such as a number of processor cores.The term "system on a chip" (SoC) is used herein to refer to a group of interconnected electronic circuits that typically, but not exclusively, include processing devices, memory, and communication interfaces. Processing devices may include various different types of processors 14 and processor cores, such as general purpose processors, central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), accelerated processing units (APUs) , auxiliary processors, single core processors, and multi-core processors. 
Processing devices may further embody other hardware and hardware combinations, such as field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), other programmable logic devices, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and Time base. The integrated circuit can be configured such that components of the integrated circuit reside on a single piece of semiconductor material, such as silicon.The SoC 12 may include one or more processors 14. Computing device 10 may include more than one SOC 12, thereby increasing the number of processors 14 and processor cores. Computing device 10 may also include a processor 14 that is not associated with SoC 12. Individual processor 14 may be a multi-core processor, as described below with respect to FIG. Processor 14 may each be configured the same or different than other processors 14 of computing device 10 for a particular purpose. One or more of processor 14 and processor cores having the same or different configurations may be grouped together. The processor 14 or group of processor cores may be referred to as a multi-processor cluster.The memory 16 of the SoC 12 may be a volatile or non-volatile memory configured to store data and processor executable code for access by the processor 14. Computing device 10 and/or SoC 12 may include one or more memories 16 that are configured for various purposes. The one or more memories 16 may include volatile memory, such as random access memory (RAM) or main memory, or a cache. These memories 16 may be configured to temporarily hold a limited amount of data received from a data sensor or subsystem, loaded from non-volatile memory from the non-volatile memory to the memory 16 to predict future access based on various factors. Data and/or processor executable code instructions, and/or intermediate processing data and/or processor generated by processor 14 and temporarily stored for future fast access without being stored in non-volatile memory Execute code instructions.The memory 16 can be configured to at least temporarily store data and processor executables that are loaded from another memory device, such as another memory 16 or storage device memory 24, to the memory 16 for access by one or more of the processors 14 Code. The data or processor executable code loaded into memory 16 may be loaded in response to execution of functions by processor 14. Loading data or processor executable code into memory 16 in response to execution of a function may result from an unsuccessful or missed memory access request to memory 16, since the requested data or processor executable code is not located in memory 16 in. In response to a miss, a memory access request may be made to another memory 16 or storage device memory 24 to load the requested data or processor executable code from other memory 16 or storage device memory 24 to memory device 16. Loading data or processor executable code into memory 16 in response to execution of a function may result from a memory access request to another memory 16 or storage device memory 24, and the data or processor executable code may be loaded into memory 16, For subsequent access.The storage device memory interface 20 and the storage device memory 24 can work together to allow the computing device 10 to store data and processor executable code on a non-volatile storage device medium. 
The storage device memory 24 can be configured to be very similar to the embodiment of the memory 16, wherein the storage device memory 24 can store data or processor executable code for one or more accesses in the processor 14. The storage device memory 24 is non-volatile and can retain information after the power to the computing device 10 has been turned off. The information stored on memory 24 is available to computing device 10 when the power is turned back on and computing device 10 is restarted. The storage device memory interface 20 can control access to the storage device memory 24 and allow the processor 14 to read data from and write data to the storage device memory 24.Some or all of the components of computing device 10 may be arranged and/or combined differently while still providing the necessary functionality. Moreover, computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of computing device 10.FIG. 2 illustrates a multi-core processor 14 suitable for implementing embodiments. Multi-core processor 14 may have multiple homogeneous or heterogeneous processor cores 200, 201, 202, 203. Processor cores 200, 201, 202, 203 may be isomorphic in that processor cores 200, 201, 202, 203 of a single processor 14 may be configured for the same purpose and have the same or similar performance characteristics. For example, processor 14 can be a general purpose processor, and processor cores 200, 201, 202, 203 can be isomorphic general purpose processor cores. Alternatively, processor 14 may be a graphics processing unit or a digital signal processor, and processor cores 200, 201, 202, 203 may each be a homogeneous graphics processing core or a digital signal processor core. For ease of reference, the terms "processor" and "processor core" are used interchangeably herein.Processor cores 200, 201, 202, 203 may be heterogeneous in that processor cores 200, 201, 202, 203 of a single processor 14 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores can include different instruction set architectures, pipelines, operating frequencies, and the like. An example of such a heterogeneous processor core may include a so-called "big.LITTLE" architecture in which a slower, lower power processor core can be coupled to a stronger and higher power processor core. In a similar embodiment, SoC 12 may include several homogeneous or heterogeneous processors 14.In the example illustrated in FIG. 2, multi-core processor 14 includes four processor cores 200, 201, 202, 203 (ie, processor core 0, processor core 1, processor core 2, and processor core 3). . For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 illustrated in FIG. However, the four processor cores 200, 201, 202, 203 illustrated in FIG. 2 and described herein are provided by way of example only and are in no way intended to limit the various embodiments to four core processor systems. Computing device 10, SoC 12, or multi-core processor 14 may include fewer or more than four processor cores 200, 201, 202, 203 illustrated and described herein, individually or in combination.3A through 3C illustrate example embodiments of wake lock aware scheduling systems 300a through 300c. 
Example embodiments of wake lock sense scheduling systems 300a through 300c may be included on a computing device (such as computing device 10 in FIG. 1). Each of the example embodiments of the wake lock aware scheduling system 300a through 300c can include a wake lock analyzer 304, a service job scheduler 306, and a driver job/kernel level background job scheduler 308. The wake lock analyzer 304 can be communicatively coupled to the service job scheduler 306 and the driver job/kernel level background job scheduler 308. In various embodiments, wake lock analyzer 304 may be implemented as software executing on general purpose hardware (such as processor 14 in FIGS. 1 and 2) or on dedicated hardware such as power manager 302. In various embodiments, wakelock analyzer 304 may be implemented as a hardware component that is integrated as part of general purpose hardware (such as processor 14 in FIGS. 1 and 2) or integrated as dedicated to power manager 302. The part of the hardware. In various embodiments, wake lock analyzer 304 may be implemented in a programmable processor that executes software components of software configured to manage wakelocks, such as power manager 302. Alternatively, wake lock analyzer 304 may be implemented in general purpose hardware/circuitry (such as processor 14 in Figures 1 and 2) or in dedicated hardware/circuitry. In various embodiments, wake lock analyzer 304 can be implemented as a dedicated hardware component and can be communicatively coupled to processor 14 and power manager 302.Wake lock analyzer 304 can be configured to receive/detect/intercept wake lock requests from an application and compile individual wake locks using wake lock information containing requests and/or observations of processor 14 activity during a wake lock event. Wake up lock information. The wake lock information, which may include a wake lock request and compiled by the wake lock analyzer 304, may include a wake lock identifier (ID), a user ID, a process ID, and a wake lock time parameter. The wake lock time parameter may include information identifying and/or implementing a calculation of the duration of the wake lock, including one of a wake lock duration, a wake lock duration estimate, a wake lock start time, and/or a wake lock end time. Multiple. The wake lock identifier (ID), user ID, process ID, and wake lock time parameters may also be observed by the wake lock analyzer 304 from the processor 14 during the wake lock event. The wake lock analyzer 304 may also calculate and compile the wake lock duration estimate from the observations of the processor 14 activity during the wake lock event and correlate the wake lock duration estimate with the wake lock information compiled from the wake lock request. Wake lock analyzer 304 may store the wake lock information in a data structure that is configured to correlate wake lock information for a specified wake lock identified by its wake lock ID.The wake lock analyzer 304 can be configured to identify the wake lock request and send the prompt to the service job scheduler 306 and/or the driver job/kernel level background job scheduler 308. The prompt may include a prompt wake-up lock time parameter for the requested wake-up lock. The wake lock time parameter may include information identifying and/or implementing a calculation of the duration of the wake lock, including one of a wake lock duration, a wake lock duration estimate, a wake lock start time, and/or a wake lock end time. Multiple. 
The requested wake lock can be an upcoming or existing wake lock. The service job scheduler 306 and/or the driver job/kernel level background job scheduler 308 can use the hint information to select a ready job that can be executed during the wake lock event of the requested wake lock. The service job scheduler 306 and/or the driver job/kernel level background job scheduler 308 can request permission from the wake lock analyzer 304 to schedule the selected ready job. The grant request may include a processor usage indicator and/or a ready job identifier. The processor usage indicator may be an indicator for scheduling all of the requested ready jobs, the requested set of ready jobs, or the processor usage of the individual requested ready jobs.The wake lock analyzer 304 may determine whether to approve and/or reject the request to permit scheduling of the selected ready job. The wake lock analyzer 304 can remain at or below the total processor usage based on whether all of the requested ready jobs, the requested set of ready jobs, or the individual requested ready jobs can be added to the total workload of the processor 14 The threshold grants and/or rejects the request to permit scheduling of the selected ready job. Wake lock analyzer 304 may send approvals and/or denials for all requested ready jobs, groups of requested ready jobs, or individual requested ready jobs.The service job scheduler 306 and/or the driver job/kernel level background job scheduler 308 may reconfigure the selection of some or all of the ready jobs in response to permitting the rejection of some or all of the selected ready jobs. A repeat request permits the process of scheduling a selected ready job using a reconfigured ready job. A request to permit scheduling of a selected ready job may be referred to as negotiation. The service job scheduler 306 and/or the driver job/kernel level background job scheduler 308 can schedule some or all of the selected ready jobs in response to and based on approval of some or all of the selected ready jobs for the permission schedule.FIG. 4 illustrates an example of a wake lock information table 400 in accordance with an embodiment. As described herein, the wake lock analyzer may store wake lock information in a data structure that is configured to correlate wake lock information for a specified wake lock identified by its wake lock ID. A non-limiting example of such a data structure can include a wake lock information table 400. The wake lock information table 400 can include a number of data fields, represented as columns 402 through 410, which store wake lock information of a specified type, including a wake lock ID column 402, a user ID column 404, a process ID column 406, a wake lock. The actual duration column 408 and the wake lock duration estimate column 410. The wake lock information table 400 can include a number of data records, represented as rows 412 through 416, which are associated with wake lock information specifying columns 402 through 410 of the wake lock, which is awakened by the wake lock ID column 402. The lock ID is specified.The wake lock information table 400 can be populated with wake lock information derived from the wake lock request and/or observations of processor activity during the wake lock event. In some embodiments, wakelock ID column 402, user ID column 404, process ID column 406, wakelock actual duration column 408 for rows 412 through 416 may be populated with wakelock ID, user ID, process ID, and wakeup. 
In some embodiments, the wake lock actual duration column 408 can be populated with a specified wake lock duration that is included in the wake lock information as a wake lock time parameter. In some embodiments, the wake lock actual duration column 408 can be populated with a calculated wake lock duration derived from wake lock time parameters, such as the start time and/or end time of the wake lock event, and from observations of processor activity during the wake lock event.

The wake lock duration estimate column 410 may be populated with a wake lock duration estimate calculated from the durations of wake lock events associated with the wake lock ID. In various embodiments, the wake lock duration estimate may be calculated when a wake lock time parameter is not available for the wake lock ID. The wake lock duration estimate may be calculated based on observations of processor activity during wake lock events associated with the wake lock ID. In various embodiments, the wake lock duration estimate column 410 may be updated based at least in part on subsequent wake lock events associated with the wake lock ID. The wake lock duration estimate may be calculated across a plurality of wake lock events associated with the wake lock ID, and the wake lock duration estimate column 410 may be updated with the result of the calculation, using some or all of the current wake lock duration observations/estimates and previous wake lock duration observations/estimates. Various calculations can be used to compute the wake lock duration estimate from multiple wake lock duration observations/estimates, including the mean, the median, exponentially weighted moving averages, and the like. In various embodiments, the wake lock duration estimate may be based on runtime analysis. In various embodiments, the wake lock duration estimate may be based on offline analysis.

FIG. 5 illustrates an example wake lock duration estimate in accordance with an embodiment. Each wake lock event 500a, 500b, 500c associated with the same wake lock ID may have different timing characteristics. The timing characteristics of the individual wake lock events 500a, 500b, 500c can be used to calculate the wake lock duration estimate 508 for the wake lock ID. Each wake lock event 500a, 500b, 500c may include a start time 502a, 502b, 502c, a duration 504a, 504b, 504c, and an end time 506a, 506b, 506c. The wake lock analyzer may observe the start times 502a, 502b, 502c by receiving/detecting/intercepting the wake lock request, by receiving/detecting/intercepting a wake lock acquire signal indicating the grant of the wake lock, and/or by observing a change of the processor state from "sleep" to "active." Similarly, the wake lock analyzer may observe the end times 506a, 506b, 506c by receiving/detecting/intercepting a wake lock release signal indicating revocation of the wake lock and/or by observing a change of the processor state from "active" to "sleep." The durations 504a, 504b, 504c may be measured or calculated by the wake lock analyzer as the elapsed time between the observed start times 502a, 502b, 502c and end times 506a, 506b, 506c.
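As one concrete possibility, the exponentially weighted moving average mentioned above can blend a previous estimate with each newly observed duration. The sketch below is illustrative only: the fixed smoothing factor `alpha`, the function name, and the choice of seeding the estimate with the first observed duration are assumptions, not requirements of the description.

```python
def update_duration_estimate(previous_estimate, observed_duration, alpha=0.25):
    """Exponentially weighted moving average of wake lock durations.

    previous_estimate: prior wake lock duration estimate (seconds), or None
                       if no estimate has been computed yet.
    observed_duration: elapsed time between an observed start time (e.g.,
                       502a) and end time (e.g., 506a) of a wake lock event.
    """
    if previous_estimate is None:
        return observed_duration  # seed with the first observation
    return alpha * observed_duration + (1.0 - alpha) * previous_estimate

# Example: durations like 504a-504c from three events sharing a wake lock ID.
estimate = None
for duration in (12.0, 4.0, 8.0):
    estimate = update_duration_estimate(estimate, duration)
# The final estimate (9.5 here) lies between the shortest and longest
# observed durations, much as estimate 508 does in FIG. 5.
```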
The example illustrated in FIG. 5 comparatively shows that the duration 504a of the wake lock event 500a may be the longest of the durations 504a, 504b, 504c; that the duration 504b of the wake lock event 500b may be the shortest of the durations 504a, 504b, 504c; and that the duration 504c of the wake lock event 500c may be between the longest wake lock duration 504a and the shortest wake lock duration 504b. The example also illustrates that, although possible, the wake lock duration estimate 508 need not be equal to any of the durations 504a, 504b, 504c. The example in FIG. 5 illustrates that the wake lock duration estimate 508 can be longer than the duration 504b and shorter than the durations 504a, 504c. As discussed herein, various calculations may be used, and the wake lock duration estimate 508 may be calculated using some or all of the durations 504a, 504b, 504c, the start times 502a, 502b, 502c, and/or the end times 506a, 506b, 506c. In various embodiments, the wake lock duration estimate 508 may be calculated using a previous wake lock duration estimate and some or all of the durations 504a, 504b, 504c, the start times 502a, 502b, 502c, and/or the end times 506a, 506b, 506c. The wake lock duration estimate 508 may change with additional wake lock events. The wake lock duration estimate 508 can be used to control the scheduling of ready jobs by the service job scheduler and the driver job/kernel level background job scheduler.

FIGS. 6 and 7 illustrate a comparative example of wake lock unaware scheduling and wake lock aware scheduling, in accordance with an embodiment. In the examples illustrated in FIGS. 6 and 7, the same wake lock event series 600 is illustrated. In both instances, the wake lock event series 600 includes a first wake lock event 670 and a second wake lock event 672. The first wake lock event 670 can have a wake lock ID associated with an actual wake lock duration, and the second wake lock event 672 can have a wake lock ID associated with a wake lock duration estimate. The first wake lock event 670 can be illustrated by a start time 602, an actual duration 604, and an actual end time 606. The second wake lock event 672 can be illustrated by a start time 608, an estimated duration 610, and an estimated end time 612.

The example illustrated in FIG. 6 may further include a wake lock unaware service job schedule 620, a wake lock unaware driver job/kernel level background job schedule 630, and a processor state series 640. The wake lock unaware service job schedule 620 and the wake lock unaware driver job/kernel level background job schedule 630 may include jobs 622, 624, 626, 632, 634 that may be scheduled without regard to the duration of the wake lock events 670, 672. In this example, jobs 622, 626, and 632 can be scheduled outside of the wake lock events 670, 672.

The processor state series 640 can include individual processor states 680 through 688. The processor states 680 through 688 may include "active" states 680, 684, 688 and "sleep" states 682, 686. Each processor state 680 through 688 can be illustrated by transition edges 642, 646, 648, 652, 654, 658 and state durations 644, 650, 656, 660, 662. The active states 680, 684, 688 may be triggered by the wake lock events 670, 672 and/or the scheduled jobs 622 through 634.
The example in FIG. 6 illustrates that the actual end time 606 of the wake lock event 670 may not coincide with the transition edge 646 from the active state 680 to the sleep state 682; that the estimated end time 612 of the wake lock event 672 may not coincide with the transition edge 652 from the active state 684 to the sleep state 686; and that there may be no wake lock event coinciding with the active state 688.

The example illustrated in FIG. 7 may further include a wake lock aware service job schedule 700, a wake lock aware driver job/kernel level background job schedule 702, and a processor state series 704. The wake lock aware service job schedule 700 and the wake lock aware driver job/kernel level background job schedule 702 may include the same jobs 622, 624, 626, 632, 634 as illustrated in FIG. 6. However, in the example illustrated in FIG. 7, due to wake lock aware scheduling, the jobs 622, 624, 626, 632, 634 may be scheduled to align with the duration of the wake lock events 670, 672. In this example, the jobs 622, 624, 626, 632, 634 may be scheduled within the wake lock events 670, 672.

The processor state series 704 can include various processor states 684, 720 through 724. The processor states 684, 720 through 724 may include "active" states 684, 720 and "sleep" states 722, 724. Each processor state 684, 720 through 724 may be illustrated by transition edges 642, 648, 652, 710 and state durations 650, 708, 712, 714. The active states 684, 720 may be triggered by the wake lock events 670, 672 and/or the scheduled jobs 622 through 634. The example in FIG. 7 illustrates that the actual end time 606 of the wake lock event 670 may coincide with the transition edge 710 from the active state 720 to the sleep state 722; that the estimated end time 612 of the wake lock event 672 may not coincide with the transition edge 652 from the active state 684 to the sleep state 724; and that there may be no active state that does not coincide with a wake lock event.

A comparison of the examples illustrated in FIGS. 6 and 7 shows that, with wake lock aware scheduling, the active state durations 650, 708 can be cumulatively less than the active state durations 644, 650, 656 of wake lock unaware scheduling. Correspondingly, the durations of the sleep states 722, 724 may cumulatively exceed the durations of the sleep states 682, 686. The number of transition edges 642, 648 from the sleep state to the active state may be cumulatively less than the number of transition edges 642, 648, 654 from the sleep state to the active state in wake lock unaware scheduling. Thus, wake lock aware scheduling of the same jobs 622, 624, 626, 632, 634 can reduce the time spent by the processor in the active state, increase the time spent by the processor in the sleep state, and reduce the number of processor transitions from the sleep state to the active state, providing power benefits over wake lock unaware scheduling. The power benefit may be achieved even when the estimated end time 612 of the wake lock event 672 does not coincide with the transition edge 652 from the active state 684 to the sleep state 724, since the job 626 may be executed during the wake lock event 672, thereby avoiding the need for the active state 688.

FIG. 8 illustrates a method 800 of wake lock duration estimation, in accordance with an embodiment.
Method 800 may be implemented in a computing device: in software executing in a processor (e.g., processor 14 in FIGS. 1, 2, and 3C), for example software of the power manager 302 and the wake lock analyzer 304 in FIGS. 3A through 3C; in general purpose hardware; in dedicated hardware (e.g., the power manager 302 and the wake lock analyzer 304 in FIGS. 3A through 3C); or in a combination of a processor and dedicated hardware, for example a processor executing software within a wake lock aware scheduling system that includes other individual components. In order to cover the alternative configurations enabled in the various embodiments, the hardware implementing method 800 is referred to herein as a "processing device."

In block 802, the processing device may detect/receive/intercept a wake lock request from an application. The processing device can be configured to monitor the communication path of the wake lock request or can be located in the communication path of the wake lock request.

In block 804, the processing device may obtain wake lock information for the wake lock associated with the wake lock request. The wake lock information may include a wake lock ID, a user ID, a process ID, and/or an actual duration. The processing device may acquire the wake lock information by retrieving it from the wake lock request or from a memory device used in implementing the wake lock for the processor, such as a register, cache, or buffer. In the event that at least the wake lock ID is retrieved, the processing device may retrieve other wake lock information associated with the wake lock ID from a wake lock information data structure (e.g., the wake lock information table 400 in FIG. 4).

In determination block 806, the processing device can determine whether the wake lock information includes a wake lock time parameter. In other words, in determination block 806 the processing device may determine whether the wake lock information includes information identifying and/or enabling calculation of the duration of the wake lock, such as a wake lock actual duration, or both a wake lock start time and a wake lock end time.

In response to determining that the wake lock information does not include a wake lock time parameter (i.e., determination block 806 = "No"), the processing device may measure or calculate the wake lock duration in block 808. The processing device may measure or calculate the wake lock duration from received signaling of the grant and release of the wake lock, from observations of processor activity corresponding to the wake lock request, and/or from a single wake lock time parameter that includes one or the other of the wake lock start time and the wake lock end time. The processing device may start and end a measurement of the wake lock duration based on the grant and release signaling of the wake lock, transitions of the processor between the active state and the sleep state corresponding to the wake lock request, and/or the wake lock start time or the wake lock end time. The processing device may likewise calculate the wake lock duration based on the times of the grant and release signaling of the wake lock, the transitions of the processor between the active state and the sleep state corresponding to the wake lock request, and/or the wake lock start time or the wake lock end time.
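One way to realize the fallbacks of blocks 806 through 816 is to prefer explicit time parameters and fall back to observed grant/release timestamps. The sketch below is an illustration only; the dictionary keys and the helper name are assumptions, and timestamps are taken to be in seconds.

```python
def wake_lock_duration(info):
    """Return the wake lock duration per blocks 806-816, or None.

    info: dict that may carry 'duration', 'start_time', and/or 'end_time'
    (the wake lock time parameters), plus observed 'grant_ts'/'release_ts'
    timestamps from wake lock signaling or processor state transitions.
    """
    if info.get("duration") is not None:       # determination block 806 = "Yes"
        return info["duration"]
    start, end = info.get("start_time"), info.get("end_time")
    if start is not None and end is not None:  # optional block 816
        return end - start
    # Block 808: fall back to observed grant/release signaling.
    grant, release = info.get("grant_ts"), info.get("release_ts")
    if grant is not None and release is not None:
        return release - grant
    return None  # duration not yet determinable

print(wake_lock_duration({"start_time": 100.0, "end_time": 107.5}))  # 7.5
```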
In block 810, the processing device may store the measured or calculated wake lock duration. The processing device may store the wake lock duration in a general purpose or dedicated, volatile or non-volatile memory (e.g., memory 16, 24 in FIG. 1) for subsequent retrieval, e.g., for calculating a wake lock duration estimate based on multiple wake lock durations.

In block 812, the processing device may calculate the wake lock duration estimate using a plurality of stored wake lock durations. In various embodiments, the plurality of stored wake lock durations may be embodied by a previous wake lock duration estimate, and the calculation of the wake lock duration estimate may include using the previous wake lock duration estimate and one or more recently stored wake lock durations. In various embodiments, calculating the wake lock duration estimate may include calculating using various techniques, such as calculating a mean, a median, an exponentially weighted moving average, and the like.

In block 814, the processing device may store the wake lock duration estimate in the wake lock information data structure with the corresponding wake lock ID (i.e., the ID corresponding to the wake lock event). For example, the processing device may store the wake lock duration estimate in memory by associating the estimate with wake lock information including the associated wake lock ID and a user ID and/or process ID. In some embodiments, the operations in blocks 802 through 814 may be performed as offline and/or runtime analysis to generate wake lock duration estimates for associated wake lock IDs.

In response to determining that the wake lock information does include a wake lock time parameter (i.e., determination block 806 = "Yes"), the processing device may optionally calculate the wake lock duration from the wake lock time parameters, such as the wake lock start time and the wake lock end time, in optional block 816. In the event that the actual wake lock duration is not available to the processing device, the processing device may use the wake lock start time and the wake lock end time to determine the actual wake lock duration.

In block 818, the processing device can store the actual wake lock duration. In various embodiments, the processing device may store the actual wake lock duration in a memory (e.g., memory 16, 24 in FIG. 1) for use during a wake lock event associated with the wake lock request. The actual wake lock duration can be stored in the wake lock information data structure with the corresponding wake lock ID. For example, the processing device can store the actual wake lock duration by associating the duration with corresponding wake lock information including the associated wake lock ID, user ID, and/or process ID.

FIG. 9 illustrates a method 900 of wake lock aware scheduling, in accordance with an embodiment. Method 900 may be implemented in a computing device: in software executing in a processor (e.g., processor 14 in FIGS. 1, 2, and 3C), for example software of the power manager 302 and the wake lock analyzer 304 in FIGS. 3A through 3C; in general purpose hardware; in dedicated hardware (e.g., the power manager 302 and the wake lock analyzer 304 in FIGS. 3A through 3C); or in a combination of a processor and dedicated hardware, for example a processor executing software within a wake lock aware scheduling system that includes other individual components.
In order to cover the alternative configurations enabled in the various embodiments, the hardware implementing method 900 is referred to herein as a "processing device."

In block 902, the processing device may detect/receive/intercept a wake lock request from an application. The processing device can be configured to monitor the communication path of the wake lock request or can be located in the communication path of the wake lock request.

In block 904, the processing device may obtain wake lock information for the wake lock associated with the wake lock request. The wake lock information may include one or more of a wake lock ID, a user ID, a process ID, and/or a wake lock time parameter (e.g., a wake lock duration, a wake lock duration estimate, a wake lock start time, and/or a wake lock end time). The processing device may acquire the wake lock information by retrieving it from the wake lock request or from a memory device used in implementing the wake lock for the processor, such as a register, cache, or buffer. In the event that at least the wake lock ID is retrieved, the processing device may retrieve other wake lock information associated with the wake lock ID, including the wake lock duration estimate, from a wake lock information data structure (e.g., the wake lock information table 400 in FIG. 4).

In block 906, the processing device may send a hint to one or more schedulers (e.g., the service job scheduler 306 and/or the driver job/kernel level background job scheduler 308 in FIGS. 3A-3C). The hint may include a hinted wake lock time parameter, which may include a wake lock time parameter and/or a wake lock duration estimate of the requested wake lock. The requested wake lock can be an upcoming or existing wake lock.

In block 908, the processing device may receive a request from a scheduler to permit scheduling of ready jobs. A ready job may be a job that is waiting to be executed during a wake lock event. Upon receiving the hint, the scheduler may select ready jobs to execute and send a request to permit scheduling of the selected ready jobs, as described further with respect to the method 1000 illustrated in FIG. 10. The request to permit scheduling of the ready jobs may include a processor usage estimate for each, a group, or all of the selected ready jobs, and/or an identifier for each, a group, or all of the selected ready jobs.

In determination block 910, the processing device may determine whether the total workload including the selected ready jobs exceeds a total processor usage threshold.

In block 912, in response to determining that the total workload does not exceed the total processor usage threshold (i.e., determination block 910 = "No"), the processing device may send an approval of the request to permit scheduling of the selected ready jobs to the scheduler.

In optional block 914, in response to determining that the total workload does exceed the total processor usage threshold (i.e., determination block 910 = "Yes"), the processing device may select a combination of ready jobs that reduces the total workload to below the total processor usage threshold. The processing device may use the information from the request to permit scheduling of the selected ready jobs to select the combination of ready jobs.

In optional block 916, the processing device may send to the scheduler an approval of permission to schedule the combination of ready jobs selected by the processing device.

In block 918, the processing device may send a rejection of the request to permit scheduling of the ready jobs to the scheduler. In various embodiments, the rejection of the request to permit scheduling of the ready jobs may cover all of the ready jobs selected by the scheduler, or only the ready jobs not selected by the processing device for the combination of ready jobs.
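The approval logic of blocks 910 through 918 can be summarized as an admission check against the total processor usage threshold. The sketch below is a hedged illustration, not the patented method as such: the threshold value, the greedy smallest-first fallback selection, and the return shape are all assumptions.

```python
def decide_permission(current_workload, requests, usage_threshold=0.8):
    """Blocks 910-918: approve, trim, or reject requested ready jobs.

    requests: list of (job_id, estimated_processor_usage) tuples.
    Returns (approved_ids, rejected_ids).
    """
    total = current_workload + sum(usage for _, usage in requests)
    if total <= usage_threshold:                       # block 910 = "No"
        return [job_id for job_id, _ in requests], []  # block 912

    # Optional block 914: greedily keep jobs that still fit under the
    # threshold; everything left over is rejected (block 918).
    approved, rejected = [], []
    budget = usage_threshold - current_workload
    for job_id, usage in sorted(requests, key=lambda r: r[1]):
        if usage <= budget:
            approved.append(job_id)
            budget -= usage
        else:
            rejected.append(job_id)
    return approved, rejected  # blocks 916 and 918

# Example negotiation: 60% existing workload, three requested ready jobs.
print(decide_permission(0.6, [("sync", 0.1), ("log-upload", 0.3), ("gc", 0.05)]))
# (['gc', 'sync'], ['log-upload'])
```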
FIG. 10 illustrates a method 1000 of wake lock aware scheduling, in accordance with an embodiment. Method 1000 may be implemented in a computing device: in software executing in a processor (e.g., processor 14 in FIGS. 1, 2, and 3C), for example software of the service job scheduler 306 and the driver job/kernel level background job scheduler 308 in FIGS. 3A through 3C; in general purpose hardware; in dedicated hardware (e.g., the service job scheduler 306 and the driver job/kernel level background job scheduler 308 in FIGS. 3A through 3C); or in a combination of a processor and dedicated hardware, for example a processor executing software within a wake lock aware scheduling system that includes other individual components. In order to cover the alternative configurations enabled in the various embodiments, the hardware implementing method 1000 is referred to herein as a "processing device."

In block 1002, the processing device can receive a hint from the wake lock analyzer. The hint may include a hinted wake lock time parameter, which may include a wake lock time parameter and/or a wake lock duration estimate of the requested wake lock. The requested wake lock can be an upcoming or existing wake lock.

In block 1004, the processing device may select ready jobs that may be executed during the wake lock event associated with the requested wake lock. The processing device may compare an estimated duration for executing a ready job with the hinted wake lock time parameter received with the hint to determine whether the ready job can complete execution within the duration of the wake lock event associated with the wake lock request. The processing device may select ready jobs that may complete execution within the duration of the wake lock event associated with the wake lock request.

In determination block 1006, the processing device can determine whether the selected ready jobs cumulatively exceed a processor usage threshold. The processing device can compare the estimated processor usage for executing the ready jobs with the processor usage threshold to determine whether the ready jobs cumulatively exceed the processor usage threshold.

In response to determining that the selected ready jobs cumulatively exceed the processor usage threshold (i.e., determination block 1006 = "Yes"), the processing device may again, in block 1004, select ready jobs that may be executed during the wake lock event associated with the requested wake lock. By repeatedly performing the operations in block 1004, the processing device can track the estimated total processor usage of the selected ready jobs and select ready jobs that may be executed during the wake lock event associated with the requested wake lock while also reducing the estimated total processor usage relative to the previously selected ready jobs. The later selection of ready jobs may include ready jobs that were previously selected to execute during the wake lock event associated with the requested wake lock.
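Blocks 1004 and 1006 amount to filtering ready jobs by whether they fit inside the hinted wake lock window and then checking their cumulative usage. A minimal sketch follows; the job tuple layout, the threshold value, and the strategy of dropping the heaviest job on each retry are illustrative assumptions, not prescribed by the description.

```python
def select_ready_jobs(ready_jobs, hinted_duration, usage_threshold=0.8):
    """Blocks 1004-1006: pick ready jobs that fit the wake lock window.

    ready_jobs: list of (job_id, est_duration, est_processor_usage).
    hinted_duration: wake lock duration or estimate from the hint (block 1002).
    """
    # Block 1004: keep jobs expected to finish within the wake lock event.
    selected = [job for job in ready_jobs if job[1] <= hinted_duration]

    # Determination block 1006: re-select until the cumulative usage fits,
    # here by trimming the heaviest job each pass.
    while selected and sum(job[2] for job in selected) > usage_threshold:
        selected.remove(max(selected, key=lambda job: job[2]))
    return selected  # block 1008: request permission for these jobs

jobs = [("sync", 2.0, 0.2), ("backup", 30.0, 0.5), ("report", 5.0, 0.7)]
print(select_ready_jobs(jobs, hinted_duration=8.0))
# [('sync', 2.0, 0.2)] -- 'backup' exceeds the window, 'report' is trimmed
```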
In block 1008, in response to determining that the selected ready jobs do not cumulatively exceed the processor usage threshold (i.e., determination block 1006 = "No"), the processing device may send a request to permit scheduling of the selected ready jobs to the wake lock analyzer. As described above with reference to the method 900 illustrated in FIG. 9, the wake lock analyzer may determine whether to respond to the request to permit scheduling of the selected ready jobs with full or partial approval and/or rejection.

In block 1010, the processing device may receive a response from the wake lock analyzer to the request to permit scheduling of the selected ready jobs. The response may include approvals and/or rejections of permission identifying individual ones, groups, or all of the selected ready jobs.

In determination block 1012, the processing device may determine whether the response to the request to permit scheduling of the selected ready jobs includes an approval and/or a rejection of the request. In various embodiments, the processing device may determine that the response includes an approval, a rejection, or both an approval and a rejection of the request to permit scheduling of the selected ready jobs.

In block 1014, in response to determining that the response to the request to permit scheduling of the selected ready jobs includes an approval of the request (i.e., determination block 1012 = "Yes"), the processing device may schedule the approved ready jobs to execute during the wake lock event associated with the wake lock request.

In block 1004, in response to determining that the response to the request to permit scheduling of the selected ready jobs includes a rejection of the request (i.e., determination block 1012 = "No"), the processing device may select a different set of ready jobs that may be executed during the wake lock event associated with the requested wake lock.

In response to determining that the response to the request to permit scheduling of the selected ready jobs includes both an approval and a rejection of the request (i.e., determination block 1012 = "Yes" and "No"), the processing device may schedule the approved ready jobs in block 1014 to execute during the wake lock event associated with the wake lock request, and may select in block 1004 a different set of ready jobs that may be executed during the wake lock event associated with the requested wake lock.

In various embodiments, the method 800 illustrated in FIG. 8, the method 900 illustrated in FIG. 9, and the method 1000 illustrated in FIG. 10 may be performed simultaneously in a coordinated manner.

Various embodiments, including but not limited to the embodiments described above with reference to FIGS. 1 through 10, may be implemented in a variety of computing systems, including mobile computing devices, an example of which suitable for use with the various embodiments is illustrated in FIG. 11. The mobile computing device 1100 can include a processor 1102 coupled to a touch screen controller 1104 and an internal memory 1106.
The processor 1102 can be one or more multi-core integrated circuits designated for general or proprietary processing tasks. The internal memory 1106 can be volatile or non-volatile memory, and can also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types include, but are not limited to, DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded DRAM. The touch screen controller 1104 and the processor 1102 can also be coupled to a touch screen panel 1112, such as a resistive-sensing touch screen, a capacitive-sensing touch screen, an infrared-sensing touch screen, and the like. Additionally, the display of the computing device 1100 need not have touch screen capability.

The mobile computing device 1100 can have one or more radio signal transceivers 1108 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennae 1110, coupled to each other and/or to the processor 1102, for sending and receiving communications. The transceivers 1108 and antennae 1110 can be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 1100 can include a cellular network wireless modem chip 1116 that enables communication via a cellular network and is coupled to the processor.

The mobile computing device 1100 can include a peripheral device connection interface 1118 coupled to the processor 1102. The peripheral device connection interface 1118 can be singularly configured to accept one type of connection, or can be configured to accept various types of physical and communication connections, common or proprietary, such as Universal Serial Bus (USB), FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 1118 may also be coupled to a similarly configured peripheral device connection port (not shown).

The mobile computing device 1100 can also include speakers 1114 for providing audio outputs. The mobile computing device 1100 can also include a housing 1120, constructed of plastic, metal, or a combination of materials, for containing all or some of the components described herein. The mobile computing device 1100 can include a power source 1122 coupled to the processor 1102, such as a disposable or rechargeable battery. The rechargeable battery can also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 1100. The mobile computing device 1100 can also include a physical button 1124 for receiving user inputs. The mobile computing device 1100 can also include a power button 1126 for turning the mobile computing device 1100 on and off.

Various embodiments, including but not limited to the embodiments described above with reference to FIGS. 1 through 10, may be implemented in a variety of computing systems, including a notebook computer 1200, an example of which is illustrated in FIG. 12. Many notebook computers include a touchpad touch surface 1217 that serves as the computer's pointing device, and thus can receive drag, scroll, and swipe gestures similar to those implemented on computing devices equipped with a touch screen display and described above. The notebook computer 1200 will typically include a processor 1211 coupled to volatile memory 1212 and a large-capacity non-volatile memory, such as a disk drive 1213 of flash memory.
Additionally, the computer 1200 can have one or more antennae 1208 for sending and receiving electromagnetic radiation, which may be connected to a wireless data link, and/or a cellular telephone transceiver 1216 coupled to the processor 1211. The computer 1200 can also include a floppy disc drive 1214 and a compact disc (CD) drive 1215 coupled to the processor 1211. In a notebook configuration, the computer housing includes the touchpad 1217, the keyboard 1218, and the display 1219, all coupled to the processor 1211. Other configurations of the computing device, as are well known, can include a computer mouse or trackball coupled to the processor (e.g., via a USB input), which can also be used in conjunction with the various embodiments.

Various embodiments, including but not limited to the embodiments described above with reference to FIGS. 1 through 10, may also be implemented in fixed computing systems, such as any of a variety of commercially available servers. An example server 1300 is illustrated in FIG. 13. Such a server 1300 typically includes one or more multi-core processor assemblies 1301 coupled to volatile memory 1302 and a large-capacity non-volatile memory, such as a disk drive 1304. As illustrated in FIG. 13, multi-core processor assemblies 1301 can be added to the server 1300 by inserting them into the racks of the assembly. The server 1300 can also include a floppy disc drive, compact disc (CD), or digital versatile disc (DVD) disc drive 1306 coupled to the processor 1301. The server 1300 can also include a network access port 1303 coupled to the multi-core processor assemblies 1301 for establishing network interface connections with a network 1305, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, or any other type of cellular data network).

Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high-level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or various other programming languages. Program code or programs stored on a computer-readable storage medium, as used in this application, may refer to machine language code (such as object code) whose format is understandable by a processor.

The foregoing method descriptions and process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as "subsequent," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an," or "the," is not to be construed as limiting the element to the singular.

The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
The described functionality may be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.

The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.

In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
A method and apparatus for a driver layout is described. The layout includes a first number of gate lines arranged along a first axis and an equal second number of gate lines arranged along a second axis, such that the first set of gate lines is orthogonal to the second set of gate lines. The layout includes a total of N discrete transistors. |
What is claimed is: 1. A layout for an even number of transistors N, comprising: a first half of the transistors (N/2) having gates oriented along a first axis; a second half of the transistors (N/2) having gates oriented along a second axis orthogonal to the gates of the first half of the transistors; a plurality of legs, the legs forming the gates of the N transistors; and a common non-diffused area shared by at least two intersections of the legs. 2. The layout of claim 1, wherein the transistor layout is bilaterally symmetric along both the X and Y axes. 3. The layout of claim 1, wherein the plurality of legs are arranged in a bilaterally symmetric format. 4. The layout of claim 3, further comprising: diffused areas forming sources and drains; and an area at which any of the plurality of legs cross, the area being a non-diffused area. 5. The layout of claim 3, wherein the plurality of legs form a tic-tac-toe pattern. 6. The layout of claim 5, wherein the tic-tac-toe pattern defines square areas between the legs; and the source and drain areas alternate in the square areas. 7. The layout of claim 5, wherein the tic-tac-toe pattern is repeated to form a larger layout. 8. The layout of claim 5, wherein at least four intersections of the plurality of legs forming the tic-tac-toe pattern share a common non-diffused area. 9. The layout of claim 1, used for an integrated circuit, wherein the gate orientation reduces skew effects due to mask alignment and gate orientation. 10. A symmetric transistor layout comprising: an even number of transistor legs, laid out in an intersecting pattern, forming a bilaterally symmetric base; a plurality of source areas and drain areas defined by rectangles bordered by two or more transistor legs; undiffused areas surrounding each intersection of the legs, a common undiffused area shared by at least two intersections of the legs; and a plurality of transistors defined by a portion of a leg forming a gate and the source and drain areas on either side of the leg forming a source and a drain. 11. The symmetric transistor layout of claim 10, wherein the plurality of transistors is an even number of transistors. 12. The symmetric transistor layout of claim 11, wherein a first half of the transistors are oriented along a first axis and a second half of the transistors are oriented along a second axis orthogonal to the first axis. 13. The symmetric transistor layout of claim 10, wherein the legs form a tic-tac-toe pattern. 14. The layout of claim 13, wherein at least four intersections of the legs forming the tic-tac-toe pattern share a common undiffused area. 15. A layout for an even number of transistors, comprising: a bilaterally symmetric base of transistor gates; a plurality of source areas and drain areas adjacent to the transistor gates; and undiffused areas surrounding each intersection of the transistor gates, a common undiffused area shared by at least two intersections of the gates. 16. The layout of claim 15, wherein half the transistor gates are oriented along a first axis and a second half of the transistor gates are oriented along a second axis orthogonal to the first axis. 17. The layout of claim 15, wherein the bilaterally symmetric base of transistor gates includes a plurality of legs, each leg defining one or more transistor gates. 18. The layout of claim 17, wherein the plurality of legs form a tic-tac-toe pattern, wherein the source and drain areas alternate in quadrilateral areas defined by the plurality of legs. 19.
The layout of claim 18, wherein the tic-tac-toe pattern may be repeated to form a larger layout. 20. The layout of claim 18, wherein at least four intersections of the plurality of legs forming the tic-tac-toe pattern share a common undiffused area. |
This application claims the benefit of Provisional Application Ser. No. 60/151,813, filed Aug. 30, 1999.

FIELD OF THE INVENTION

The present invention relates to integrated circuits, and more specifically, to integrated circuit layout design.

BACKGROUND

As the frequency of VLSI circuits increases, the need to control skew in critical circuits becomes increasingly important. Two major process-related components of skew are optical astigmatism and angle of implantation. Both of these effects are sensitive to gate orientation.

Optical astigmatism can cause vertical and/or horizontal lines to be imaged onto a silicon wafer less accurately than normal. The accuracy of these critical dimensions (CDs) is fundamental, but obviously some variance must be tolerated. Variance in the width and/or length of the intended transistor channel dimensions ultimately affects the strength, β (Eq. 1.4), i.e., the current carrying capability of the device (Eqs. 1.2 & 1.3). This effect is becoming ever more dominant as CDs continue to approach photolithographical limits.

The second source of transistor driving strength modulation, albeit less dominant, is a result of variance in the angle of implantation. This causes a modulation of the device threshold voltage, Vt, resulting in a change in the effective driving strength of the device.

In the prior art, several methods have been used to control skew. Two of these are: use of long-channeled transistors, and guaranteeing the same gate orientation of all critical circuits.

The use of long-channel transistors minimizes the effects of poly CD variance, reducing the percentage change in Leff (Eq. 1.6) caused by Δl. However, in order to achieve the same effective driving strength for the driver in question, the effective width, Weff (Eq. 1.5), must be increased so that the β of the device is equal to that of the minimum channel device. Long-channel drivers inherently consume more die area. For example, a 20% increase in Leff requires a 20% increase in Weff, which translates to a 20% or more increase in silicon area required.

FIG. 1A illustrates a driver circuit that may be implemented with the various circuits described below. FIG. 1B illustrates one layout of the driver of FIG. 1A having a vertical orientation with parallel transistors. FIG. 1C illustrates an alternative layout with parallel transistors having a horizontal orientation. The driver may alternatively be implemented as a single large device, as shown in FIG. 1D. The device example shown has a W/L ratio of 12. FIG. 1E shows the horizontal embodiment of the single-legged device. A vertical implementation may be used in the alternative.

Guaranteeing the same gate orientation for all critical transistors is another method of controlling skew. However, maintaining the same gate orientation is not always practical. For example, I/O cells are normally placed radially to form the I/O ring of a design, as shown in FIG. 2. As can be seen, the same I/O library element is placed on both the top/bottom and left/right sides of a die. Thus, the same gate orientation cannot be maintained.

Therefore, an improved method of controlling skew would be advantageous.

SUMMARY OF THE INVENTION

A method and apparatus for a driver layout is described. The layout includes a first number of gate lines arranged along a first axis and an equal second number of gate lines arranged along a second axis, such that the first set of gate lines is orthogonal to the second set of gate lines.
The layout includes a total of N discrete transistors.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:

FIG. 1A is a circuit diagram of a driver circuit. FIGS. 1B-1E are circuit diagrams and layouts of prior art transistors. FIG. 2 is a layout of a prior art I/O ring design. FIGS. 3A-C are one embodiment of layouts of circuits according to one embodiment of the present invention. FIG. 4 is a layout illustrating optical astigmatism. FIGS. 5A-8C illustrate one embodiment of step-by-step manufacturing of the driver circuit of FIG. 3. FIG. 9 illustrates one embodiment of a diffusion plate that may be used to create the diffusion areas shown in FIG. 7A.

DETAILED DESCRIPTION

A circuit layout to minimize gate orientation related skew effects is described. This layout, for a driver with N gates, orients N/2 gates horizontally and N/2 vertically to reduce skew in integrated circuits. This is fundamentally different from selecting a specific gate orientation for skew sensitive circuits on a die. For one embodiment, this driver can be referred to as a T4 driver, for tic-tac-toe Transistor layout.

The T4 driver minimizes skew by reducing the overall range of drain current, Ids, resulting from optical astigmatism variances. All other process parameters being equal, i.e., με/tox constant, the skew of driver strength can be directly controlled by minimizing the range of β.

For simplicity's sake, the following set of equations discusses vertical and horizontal astigmatism effects separately. If it can be shown that βcurrent is in the middle of possible values for both single- and multi-legged devices oriented either vertically or horizontally, then the T4 layout provides a circuit less sensitive to gate orientation skew effects. The T4 driver minimizes skew due to optical astigmatism, defined in the background section, by reducing the minimum and maximum ranges of the transistor. The discussion below, for simplicity, addresses the structure of an N-type metal oxide semiconductor (NMOS).

The basic MOS transistor equations for Ids, the drain to source current for a transistor, are Eqs. 1.2 and 1.3 (a reconstruction is given below), where Vgs is the gate to source voltage, Vt is the device threshold voltage, Vds is the drain to source voltage, and β is the transistor gain factor (Eq. 1.4), in which k is the process dependent factor, μ is the effective surface mobility of carriers (electrons or holes), ε is the permittivity of the gate insulator, tox is the thickness of the gate insulator, Weff is the effective width of the channel, and Leff is the effective length of the channel, such that

Weff = W ± Δw (1.5)

Leff = L ± Δl, (1.6)

where Δw is the diffusion critical dimension variance and Δl is the poly critical dimension variance.

Based on these equations, the transistor gain factor can be calculated for various circuit types. For one embodiment, for this calculation it can be assumed that the process dependent factor is a constant, k. The value of β can thus be evaluated for the single-legged circuit described in FIG. 1D, for the twelve-parallel-legged circuit shown in FIG. 1B, and for the circuit described below in FIG. 3, in which half of the transistors are horizontally oriented while the other half of the transistors are vertically oriented.
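The numbered equations themselves did not survive in this copy of the text. The following LaTeX block is a reconstruction of the standard square-law MOSFET equations consistent with the definitions above; the exact forms as they appeared in the original filing, including the cutoff case labeled 1.1 here, are an assumption.

```latex
% Reconstruction of Eqs. 1.1-1.4 (assumed standard square-law forms).
\begin{align}
I_{ds} &= 0, & V_{gs} &\le V_t \tag{1.1}\\
I_{ds} &= \beta\!\left[(V_{gs}-V_t)\,V_{ds}-\tfrac{1}{2}V_{ds}^{2}\right],
  & 0 &< V_{ds} < V_{gs}-V_t \tag{1.2}\\
I_{ds} &= \tfrac{\beta}{2}\,(V_{gs}-V_t)^{2},
  & 0 &< V_{gs}-V_t \le V_{ds} \tag{1.3}\\
\beta &= k\,\frac{W_{\mathrm{eff}}}{L_{\mathrm{eff}}},
  \qquad k=\frac{\mu\,\varepsilon}{t_{ox}} \tag{1.4}
\end{align}
```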
Four specific examples are described below with respect to FIG. 4, illustrating two types of astigmatism, vertical and horizontal. Optical astigmatism can cause vertical and/or horizontal lines to be imaged onto a silicon wafer less accurately than normal.

Case I(a) is vertical astigmatism, along axes x and y, with Δx > 0 and Δy = 0. For this example, for simplicity, the process dependent factors are assumed to be constant and are not shown. In this case, βv12t = βv1t < βcurrent < βh1t < βh12t. Similarly, it can be proven that for Case I(b), where Δx < 0 and Δy = 0, βh12t < βh1t < βcurrent < βv1t = βv12t. Thus it appears that βcurrent, having an equal number of transistors oriented horizontally and vertically, is less sensitive to vertical astigmatism than either of the two prior art methods.

Similarly, for Case II(a), horizontal astigmatism, where Δx = 0 and Δy > 0, βh12t = βh1t < βcurrent < βv1t < βv12t. Likewise, it can be proven that for Case II(b), where Δx = 0 and Δy < 0, βv12t < βv1t < βcurrent < βh1t = βh12t. Thus it appears that βcurrent, having an equal number of transistors oriented horizontally and vertically, is less sensitive to horizontal astigmatism than either of the two prior art methods.

Astigmatism may have both horizontal and vertical aspects. Since βcurrent is less sensitive to horizontal astigmatism, and βcurrent is less sensitive to vertical astigmatism, βcurrent is therefore less sensitive to a combined horizontal and vertical astigmatism.

For one embodiment, this structure, which is a fundamental building block, can be stepped and repeated in both the X and Y directions to create stronger drivers. The basic twelve-transistor structure shown in FIG. 3A below can be permutated by removing an even number of transistor legs to create other structures. For example, T9-T12 may be removed to create an O-ring device, as shown in FIG. 3B. A single pair of legs may also be used to generate four transistors, as shown in FIG. 3C. Any even number of transistors may be set in this structure, such that half of the transistors are orthogonal to the other half of the transistors. By using such a layout of transistors, skew effects are minimized.

Common library elements which cannot be placed so as to guarantee the same orientation for a specific gate, e.g., I/O cells for bond-wire designs, can use this layout method to eliminate gate orientation skew.

The T4 driver can also reduce the modulations in threshold voltage, Vt, resulting from implant angle variations, which can arise between orthogonally oriented transistors. It can be seen in both the saturated and non-saturated current equations that variations in Vt will cause variations in driver current. These Vt variations also skew the behavior of the driver, and minimizing them is beneficial to controlling overall circuit skew. Keeping the gate orientation of N/2 transistors orthogonal to the other half of the transistors forces all T4 driver configurations to experience the same set of variations. In much the same way as with optical astigmatism, this minimizes the magnitude of the Vt variance.
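To make the β comparison above concrete, β can be evaluated numerically for the three layouts under an orientation-dependent CD error. The sketch below is a hedged illustration only: the mapping of the astigmatism error onto per-orientation Δw and Δl, and the unit device dimensions, are assumptions chosen to reproduce one of the orderings stated in the text, not values from the patent.

```python
def beta(w, l, dw, dl, k=1.0):
    """Transistor gain factor, beta = k * Weff / Leff (Eq. 1.4)."""
    return k * (w + dw) / (l + dl)

# Assumed CD errors for one astigmatism case: horizontally oriented gates
# see the error in their channel length, vertically oriented gates in
# their channel width (this mapping is illustrative).
W, L, err = 1.0, 1.0, 0.1
beta_h12t = 12 * beta(W, L, 0.0, -err)                             # 12 horizontal legs
beta_v12t = 12 * beta(W, L, -err, 0.0)                             # 12 vertical legs
beta_t4   = 6 * beta(W, L, 0.0, -err) + 6 * beta(W, L, -err, 0.0)  # T4: 6 + 6

print(beta_v12t, beta_t4, beta_h12t)
# approx. 10.8, 12.07, 13.33 -- the T4 value lies between the two
# single-orientation extremes, mirroring an ordering given in the text.
```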
FIG. 3A illustrates an exemplary layout of a T4 driver. The driver includes twelve transistors T1 to T12 390 arranged symmetrically along four legs 310-325. The legs 310-325 are arranged in a bilaterally symmetric format. The legs 310-325 form the gates of the transistors T1 to T12. For one embodiment, the legs 310-325 are polysilicon. Alternatively, the legs 310-325 may be metal, or another conductive material. The legs 310-325 are placed on a substrate (not shown). The substrate includes source sections 330 and drain sections 340 in an alternating pattern. Thus, for example, all corners and the center section may be source sections 330, while the other sections are drain sections 340. Around each crossing of the legs 310-325 is a non-diffused area 350. The interconnections between the sources are not shown. For one embodiment, the sources may be tied together and the drains may be tied together, using metal layers.

Thus, for example, one transistor, T1 390, is circled, including a portion of leg 310 and adjacent source 330 and drain 340 areas. Transistor T1 390 shares a source with transistor T8, and a drain with transistor T2. The gate area of the transistor T1 390 is defined by the edge of the structure and the non-diffused area 350. FIGS. 3B and 3C illustrate permutations of this design with fewer numbers of transistors. Similarly, additional transistors may be added to the system, while balancing the number of transistors.

FIG. 4 is a layout illustrating optical astigmatism. For simplicity, the system is described as being horizontally oriented, such that the diffusion area extends horizontally. The first figure shows a vertical astigmatism, where Δy > 0 and Δx = 0. The second figure shows horizontal astigmatism, where Δx > 0 and Δy = 0. Of course, astigmatism may involve both an x and a y component, but this is not shown in FIG. 4.

FIGS. 5A-C illustrate top, side, and perspective views of the substrate on which a transistor layout according to the present invention may be implemented. For one embodiment, the substrate is a silicon substrate. Alternatively, ceramics, sapphire, or other materials may be used for the substrate.

FIGS. 6A-C illustrate top, side, and perspective views of the substrate after a first layer of a conductor 510 has been deposited. This conductor 510 forms the gate for the transistor. For one embodiment, the stage shown in FIGS. 6A-C is achieved by a two-step process, initially depositing a layer of conductor 510 and then etching away part of the material 510. For one embodiment, a layer of silicon dioxide (SiO2) 515 is deposited on the substrate prior to the conductor 510 deposition. For another embodiment, another material may be used in place of the silicon dioxide. This SiO2 layer 515 is removed with the conductive layer 510, leaving a layer of SiO2 515 underneath the conductor layer 510. For one embodiment, the conductor 510 is a metal layer. For another embodiment, the conductor 510 is a polysilicon layer. Alternative materials may be used.

FIGS. 7A-C illustrate top, side, and perspective views of the substrate after a diffusion step. The diffusion step creates the source and drain regions 520. For one embodiment, the step further dopes the gate 510. The diffusion step creates non-diffused areas 525, centered around the intersections of gates 510. FIG. 9 illustrates a diffusion plate 910 that may be used to create the diffusion areas shown in FIG. 7C. With this step a complete transistor is formed, with the gate area 515 surrounded on either side by a source 530 and a drain 540 contact.

FIGS. 8A-C illustrate top, side, and perspective views of the substrate after contact windows are established. The contact windows 530, 540 permit the transistor to be hooked up to other devices. This figure does not show the interconnections between the source and drain contacts of each transistor.
However, by interconnecting the source and drain areas appropriately, various circuits may be created.In this way, a symmetric set of drivers is manufactured, with transistors in both directions.In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
A method of forming a microelectronic device comprises forming a microelectronic device structure assembly comprising memory cells, digit lines coupled to the memory cells, word lines coupled to the memory cells, and isolation material overlying the memory cells, the digit lines, and the word lines. An additional microelectronic device structure assembly comprising control logic devices and additional isolation material overlying the control logic devices is formed. The additional isolation material of the additional microelectronic device structure assembly is bonded to the isolation material of the microelectronic device structure assembly to attach the additional microelectronic device structure assembly to the microelectronic device structure assembly. The memory cells are electrically connected to at least some of the control logic devices after bonding the additional isolation material to the isolation material. Microelectronic devices, electronic systems, and additional methods are also described. |
CLAIMS What is claimed is: 1. A method of forming a microelectronic device, comprising: forming a microelectronic device structure assembly comprising memory cells, digit lines coupled to the memory cells, word lines coupled to the memory cells, and isolation material overlying the memory cells, the digit lines, and the word lines; forming an additional microelectronic device structure assembly comprising control logic devices and additional isolation material overlying the control logic devices; bonding the additional isolation material of the additional microelectronic device structure assembly to the isolation material of the microelectronic device structure assembly to attach the additional microelectronic device structure assembly to the microelectronic device structure assembly; and electrically connecting the memory cells to at least some of the control logic devices after bonding the additional isolation material to the isolation material. 2. The method of claim 1, wherein forming a microelectronic device structure assembly comprises: forming a first microelectronic device structure comprising a first base semiconductor structure, the digit lines, the word lines, and access devices of the memory cells coupled to the digit lines and the word lines; forming contact structures coupled to the digit lines within digit line exit regions neighboring the access devices in a first horizontal direction; forming additional contact structures coupled to the word lines within word line exit regions neighboring the access devices in a second horizontal direction; forming storage node devices of the memory cells over and in electrical communication with the access devices of the memory cells; and forming routing structures over the storage node devices of the memory cells, at least some of the routing structures in electrical communication with the storage node devices. 3. The method of claim 2, further comprising: forming further contact structures within socket regions prior to forming the storage node devices; and coupling the at least some of the routing structures to at least some of the further contact structures. 4. The method of claim 3, further comprising forming capacitors within the socket regions, at least some of the capacitors coupled to one or more of the further contact structures. 5. The method of claim 2, further comprising: bonding a second microelectronic device structure over the routing structures to form a first assembly comprising the first microelectronic device structure, the contact structures, the additional contact structures, the memory cells, the routing structures, and the second microelectronic device structure; vertically inverting the first assembly; removing a section of the first base semiconductor structure after vertically inverting the first assembly to expose portions of the contact structures, the additional contact structures, and filled trenches in the first base semiconductor structure; forming sacrificial structures on the exposed portions of the contact structures and the additional contact structures; and forming the isolation material over the memory cells and the sacrificial structures. 6.
The method of claim 5, wherein forming an additional microelectronic device structure assembly comprises: forming a third microelectronic device structure comprising a second base semiconductor structure and the control logic devices at least partially overlying the second base semiconductor structure; bonding a fourth microelectronic device structure over the control logic devices to form a second assembly comprising the third microelectronic device structure and the fourth microelectronic device structure; vertically inverting the second assembly; removing a section of the second base semiconductor structure after vertically inverting the second assembly to expose additional filled trenches in the second base semiconductor structure; and forming the additional isolation material over the control logic devices. 7. The method of claim 6, further comprising: removing a portion of the fourth microelectronic device structure of the additional microelectronic device structure assembly after attaching the additional microelectronic device structure assembly to the microelectronic device structure assembly; forming contact openings vertically extending through a remaining portion of the additional microelectronic device structure assembly and the isolation material of the microelectronic device structure assembly to expose the sacrificial structures; selectively removing the sacrificial structures, after forming the contact openings, to form void spaces in communication with the contact openings; and filling the contact openings and the void spaces with conductive material to form additional contact structures. 8. The method of any one of claims 1 through 7, further comprising selecting the isolation material of the microelectronic device structure assembly and the additional isolation material of the additional microelectronic device structure assembly to each comprise a dielectric oxide material. 9. The method of any one of claims 1 through 7, further comprising: forming routing structures over the control logic devices and in electrical communication with the control logic devices and the memory cells; and forming pad structures over and in electrical communication with the routing structures. 10. The method of claim 9, wherein: forming routing structures over the control logic devices comprises: forming tungsten routing structures over the control logic devices and in electrical communication with the control logic devices and the memory cells; and forming copper routing structures over and in electrical communication with the tungsten routing structures; and forming pad structures comprises forming aluminum pad structures over and in electrical communication with the copper routing structures. 11.
A method of forming a microelectronic device, comprising: forming a first semiconductor wafer comprising access devices within array regions, digit lines coupled to the access devices and terminating within digit line exit regions neighboring the array regions, and word lines coupled to the access devices and terminating within word line exit regions neighboring the array regions; forming digit line contact structures extending through and in contact with the digit lines within the digit line exit regions; forming word line contact structures extending through and in contact with the word lines within the word line exit regions; forming capacitors over and in electrical communication with the access devices to form memory cells within the array regions; forming a second semiconductor wafer comprising control logic devices; attaching the second semiconductor wafer to the first semiconductor wafer such that at least some of the control logic devices of the second semiconductor wafer are positioned within the array regions of the first semiconductor wafer; forming additional contact structures over the digit line contact structures and the word line contact structures, some of the additional contact structures in contact with the digit line contact structures, some other of the additional contact structures in contact with the word line contact structures; and
forming routing structures over the control logic devices and the additional contact structures, the routing structures in electrical communication with the control logic devices and the memory cells. 12. The method of claim 11, further comprising: forming further contact structures within socket regions of the first semiconductor wafer prior to attaching the second semiconductor wafer to the first semiconductor wafer, the socket regions horizontally offset from the digit line exit regions and the word line exit regions; and forming yet some other of the additional contact structures over and in contact with the further contact structures. 13. The method of claim 12, further comprising forming additional capacitors within the socket regions of the first semiconductor wafer and in electrical communication with at least some of the further contact structures, at least some of the additional capacitors in electrical communication with at least some of the control logic devices of the second semiconductor wafer after forming the routing structures. 14. The method of claim 11, wherein attaching the second semiconductor wafer to the first semiconductor wafer comprises: vertically inverting the second semiconductor wafer; physically contacting a first dielectric oxide material of the first semiconductor wafer with a second dielectric oxide material of the second semiconductor wafer after vertically inverting the second semiconductor wafer; and annealing the first dielectric oxide material and the second dielectric oxide material after physically contacting the first dielectric oxide material with the second dielectric oxide material to form oxide-oxide bonds between the first dielectric oxide material and the second dielectric oxide material. 15. The method of any one of claims 11 through 14, wherein: forming digit line contact structures comprises forming the digit line contact structures to physically contact the digit lines and a semiconductor material of the first semiconductor wafer underlying the digit lines; and forming word line contact structures comprises forming the word line contact structures to physically contact the word lines and the semiconductor material of the first semiconductor wafer. 16. The method of claim 15, further comprising, before attaching the second semiconductor wafer to the first semiconductor wafer: vertically inverting the first semiconductor wafer after forming the digit line contact structures and the word line contact structures; removing a portion of the semiconductor material to expose surfaces of the digit line contact structures and the word line contact structures; forming sacrificial dielectric structures on the exposed surfaces of the digit line contact structures and the word line contact structures; and forming a dielectric oxide material over the sacrificial dielectric structures and remaining portions of the semiconductor material. 17.
The method of claim 16, wherein forming additional contact structures over the digit line contact structures and the word line contact structures comprises: forming contact openings vertically extending through additional dielectric oxide material of the second semiconductor wafer and the dielectric oxide material overlying the sacrificial dielectric structures to expose the sacrificial dielectric structures; exhuming the sacrificial dielectric structures through the contact openings to form open volumes, the open volumes re-exposing the surfaces of the digit line contact structures and the word line contact structures; and filling the contact openings and the open volumes with conductive material to form the additional contact structures.18. A microelectronic device, comprising: array regions individually comprising: memory cells comprising access devices and storage node devices; digit lines coupled to the access devices and extending in a first direction; word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and control logic devices over and in electrical communication with the memory cells; digit line exit regions horizontally alternating with the array regions in the first direction and individually comprising: portions of the digit lines extending beyond the array regions adjacent thereto; digit line contact structures extending through at least some of the portions of the digit lines; contact structures on the digit line contact structures and individually comprising: a lower region; and an upper region integral and continuous with the lower region and having smaller horizontal dimensions than the lower region; and routing structures coupled to the contact structures; word line exit regions horizontally alternating with the array regions in the second direction and individually comprising: portions of the word lines extending beyond the array regions adjacent thereto; word line contact structures extending through at least some of the portions of the word lines; additional contact structures on the word line contact structures and individually comprising: an additional lower region; and an additional upper region integral and continuous with the additional lower region and having smaller horizontal dimensions than the additional lower region; and additional routing structures coupled to the additional contact structures.19. The microelectronic device of claim 18, further comprising socket regions horizontally offset from the array regions, the digit line exit regions, and the word line exit regions, the socket regions individually comprising deep contact structure assemblies coupling the memory cells to at least some of the control logic devices. 20. The microelectronic device of claim 19, wherein the socket regions further comprise additional control logic devices having different configurations and operational functions than the control logic devices. 21. The microelectronic device of claim 20, wherein the socket regions further comprise capacitors coupled to one or more of at least some of the control logic devices and at least some of the additional control logic devices. 22. 
The microelectronic device of any one of claims 18 through 21, wherein the control logic devices within each array region of the array regions comprise: sense amplifier devices within multiple sense amplifier regions positioned proximate corners of the array region diagonally opposing one another; and sub-word line driver devices within multiple sub-word line driver regions positioned proximate additional corners of the array region diagonally opposing one another. 23. The microelectronic device of claim 22, wherein, for each sense amplifier region of the multiple sense amplifier regions within the array region: some of the sense amplifier devices within the sense amplifier region are coupled to some of the digit lines extending through the array region; and some other of the sense amplifier devices within the sense amplifier region are coupled to some of the digit lines extending through an additional one of the array regions neighboring the array region.24. The microelectronic device of claim 23, wherein: the some of the sense amplifier devices are coupled to the some of the digit lines extending through the array region by way of some of the digit line contact structures, some of the contact structures, and some of the routing structures within one of the digit line exit regions horizontally interposed between the array region and the additional one of the array regions; and the some other of the sense amplifier devices are coupled to the some of the digit lines horizontally extending through the additional one of the array regions by way of some other of the digit line contact structures, some other of the contact structures, and some other of the routing structures within the one of the digit line exit regions. 25. The microelectronic device of claim 22, wherein, for each sub-word line driver region of the multiple sub-word line driver regions within the array region: some of the sub-word line driver devices within the sub-word line driver region are coupled to some of the word lines extending through the array region; and some other of the sub-word line driver devices within the sub-word line driver region are coupled to some of the word lines extending through an additional one of the array regions neighboring the array region. 26. The microelectronic device of claim 25, wherein: the some of the sub-word line driver devices are coupled to the some of the word lines extending through the array region by way of some of the word line contact structures, some of the additional contact structures, and some of the additional routing structures within one of the word line exit regions horizontally interposed between the array region and the additional one of the array regions; and the some other of the sub-word line driver devices are coupled to the some of the word lines extending through the additional one of the array regions by way of some other of the word line contact structures, some other of the additional contact structures, and some other of the additional routing structures within the one of the word line exit regions. 27. An electronic system, comprising: an input device; an output device;
a processor device operably connected to the input device and the output device; and a memory device operably connected to the processor device and comprising: memory array regions each comprising dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic circuitry overlying and in electrical communication with the DRAM cells; a digit line contact region between two of the memory array regions neighboring one another in a first direction, the digit line contact region comprising: end portions of some of the digit lines extending past horizontal boundaries of the two of the memory array regions; digit line contacts coupled to and extending completely through the end portions of the some of the digit lines; contact structures on the digit line contacts and individually comprising a lower region and an upper region integral and continuous with the lower region, the upper region having smaller horizontal dimensions than the lower region; and routing structures over and coupled to the contact structures; and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction, the word line contact region comprising: end portions of some of the word lines extending past horizontal boundaries of the two other of the memory array regions; word line contacts coupled to and extending completely through the end portions of the some of the word lines; additional contact structures on the word line contacts and individually comprising an additional lower region and an additional upper region integral and continuous with the additional lower region, the additional upper region having smaller horizontal dimensions than the additional lower region; and additional routing structures over and coupled to the additional contact structures. |
METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS PRIORITY CLAIM This application claims the benefit of the filing date of United States Patent Application Serial No. 17/364,377, filed June 30, 2021, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS,” which is related to U.S. Patent Application Serial No. 17/364,281, filed June 30, 2021, listing Fatma Arzum Simsek-Ege, Kunal R. Parekh, Terrence B. McDaniel, and Beau D. Barry as inventors, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS,” to U.S. Patent Application Serial No. 17/364,335, filed June 30, 2021, listing Fatma Arzum Simsek-Ege, Kunal R. Parekh, and Beau D. Barry as inventors, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS,” to U.S. Patent Application Serial No. 17/364,429, filed June 30, 2021, listing Fatma Arzum Simsek-Ege as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS,” to U.S. Patent Application Serial No. 17/364,476, filed June 30, 2021, listing Fatma Arzum Simsek-Ege and Kunal R. Parekh as inventors, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS,” and to U.S. Patent Application Serial No. 17/364,379, filed June 30, 2021, listing Fatma Arzum Simsek-Ege as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS.” The disclosure of each of the foregoing documents is hereby incorporated herein in its entirety by reference. TECHNICAL FIELD The disclosure, in various embodiments, relates generally to the field of microelectronic device design and fabrication. More specifically, the disclosure relates to methods of forming microelectronic devices and memory devices, and to related microelectronic devices, memory devices, and electronic systems.
BACKGROUND Microelectronic device designers often desire to increase the level of integration or density of features within a microelectronic device by reducing the dimensions of the individual features and by reducing the separation distance between neighboring features. In addition, microelectronic device designers often desire to design architectures that are not only compact, but also offer performance advantages, as well as simplified, easier, and less expensive fabrication. One example of a microelectronic device is a memory device. Memory devices are generally provided as internal integrated circuits in computers or other electronic devices. There are many types of memory devices including, but not limited to, volatile memory devices. One type of volatile memory device is a dynamic random access memory (DRAM) device. A DRAM device may include a memory array including DRAM cells arranged in rows extending in a first horizontal direction and columns extending in a second horizontal direction. In one design configuration, an individual DRAM cell includes an access device (e.g., a transistor) and a storage node device (e.g., a capacitor) electrically connected to the access device. The DRAM cells of a DRAM device are electrically accessible through digit lines and word lines arranged along the rows and columns of the memory array and in electrical communication with control logic devices within a base control logic structure of the DRAM device. Control logic devices within a base control logic structure underlying a memory array of a DRAM device have been used to control operations on the DRAM cells of the DRAM device. Control logic devices of the base control logic structure can be provided in electrical communication with digit lines and word lines coupled to the DRAM cells by way of routing and contact structures. Unfortunately, processing conditions (e.g., temperatures, pressures, materials) for the formation of the memory array over the base control logic structure can limit the configurations and performance of the control logic devices within the base control logic structure. In addition, the quantities, dimensions, and arrangements of the different control logic devices employed within the base control logic structure can also undesirably impede reductions to the size (e.g., horizontal footprint) of a memory device, and/or improvements in the performance (e.g., faster memory cell ON/OFF speed, lower threshold switching voltage requirements, faster data transfer rates, lower power consumption) of the DRAM device.
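As a point of reference for the array organization discussed above, the following minimal sketch (illustrative only; the DramArray class and its method names are hypothetical and do not model any circuitry disclosed herein) shows the addressing scheme in which a word line selects a row of DRAM cells and a digit line carries the selected cell's bit for its column:

    # Minimal illustrative model of DRAM array addressing. Sensing, refresh,
    # and control logic are intentionally omitted.
    class DramArray:
        def __init__(self, rows, cols):
            # One storage node (capacitor state, 0 or 1) per access device.
            self.cells = [[0] * cols for _ in range(rows)]

        def write(self, word_line, digit_line, bit):
            # Asserting the word line turns on the access devices in that row;
            # the digit line drives the storage node in its column.
            self.cells[word_line][digit_line] = bit

        def read(self, word_line, digit_line):
            # Asserting the word line couples the storage node to the digit
            # line, where sense circuitry would resolve the stored bit.
            return self.cells[word_line][digit_line]

    array = DramArray(rows=4, cols=4)
    array.write(word_line=2, digit_line=1, bit=1)
    assert array.read(word_line=2, digit_line=1) == 1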
SUMMARY In some embodiments, a method of forming a microelectronic device comprises forming a microelectronic device structure assembly comprising memory cells, digit lines coupled to the memory cells, word lines coupled to the memory cells, and isolation material overlying the memory cells, the digit lines, and the word lines. An additional microelectronic device structure assembly comprising control logic devices and additional isolation material overlying the control logic devices is formed. The additional isolation material of the additional microelectronic device structure assembly is bonded to the isolation material of the microelectronic device structure assembly to attach the additional microelectronic device structure assembly to the microelectronic device structure assembly. The memory cells are electrically connected to at least some of the control logic devices after bonding the additional isolation material to the isolation material. In additional embodiments, a method of forming a microelectronic device comprises forming a first semiconductor wafer comprising access devices within array regions, digit lines coupled to the access devices and terminating within digit line exit regions neighboring the array regions, and word lines coupled to the access devices and terminating within word line exit regions neighboring the array regions. Digit line contact structures extending through and in contact with the digit lines within the digit line exit regions are formed. Word line contact structures extending through and in contact with the word lines within the word line exit regions are formed. Capacitors are formed over and in electrical communication with the access devices to form memory cells within the array regions. A second semiconductor wafer comprising control logic devices is formed. The second semiconductor wafer is attached to the first semiconductor wafer such that at least some of the control logic devices of the second semiconductor wafer are positioned within the array regions of the first semiconductor wafer. Additional contact structures are formed over the digit line contact structures and the word line contact structures. Some of the additional contact structures are in contact with the digit line contact structures. Some other of the additional contact structures are in contact with the word line contact structures. Routing structures are formed over the control logic devices and the additional contact structures. The routing structures are in electrical communication with the control logic devices and the memory cells. In further embodiments, a microelectronic device comprises array regions, digit line exit regions, and word line exit regions. The array regions individually comprise memory cells, digit lines, word lines, and control logic devices. The memory cells comprise access
devices and storage node devices. The digit lines are coupled to the access devices and extend in a first direction. The word lines are coupled to the access devices and extend in a second direction orthogonal to the first direction. The control logic devices are over and in electrical communication with the memory cells. The digit line exit regions horizontally alternate with the array regions in the first direction. The digit line exit regions individually comprise portions of the digit lines extending beyond the array regions adjacent thereto, digit line contact structures extending through at least some of the portions of the digit lines, contact structures on the digit line contact structures, and routing structures coupled to the contact structures. The contact structures individually comprise a lower region, and an upper region integral and continuous with the lower region and having smaller horizontal dimensions than the lower region. The word line exit regions horizontally alternate with the array regions in the second direction. The word line exit regions individually comprise portions of the word lines extending beyond the array regions adjacent thereto, word line contact structures extending through at least some of the portions of the word lines, additional contact structures on the word line contact structures, and additional routing structures coupled to the additional contact structures. The additional contact structures individually comprise an additional lower region, and an additional upper region integral and continuous with the additional lower region and having smaller horizontal dimensions than the additional lower region. In yet further embodiments, an electronic system comprises an input device, an output device, a processor device operably connected to the input device and the output device, and a memory device operably connected to the processor device. The memory device comprises memory array regions, a digit line contact region between two of the memory array regions neighboring one another in a first direction, and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction. The memory array regions each comprise dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic circuitry overlying and in electrical communication with the DRAM cells. The digit line contact region comprises end portions of some of the digit lines extending past horizontal boundaries of the two of the memory array regions; digit line contacts coupled to and extending completely through the end portions of the some of the digit lines; contact structures on the digit line contacts and individually comprising a lower region and an upper region integral and continuous with the lower region, the upper region having smaller horizontal dimensions than the lower region; and routing structures over and coupled to the
contact structures. The word line contact region comprises end portions of some of the word lines extending past horizontal boundaries of the two other of the memory array regions; word line contacts coupled to and extending completely through the end portions of the some of the word lines; additional contact structures on the word line contacts and individually comprising an additional lower region and an additional upper region integral and continuous with the additional lower region, the additional upper region having smaller horizontal dimensions than the additional lower region; and additional routing structures over and coupled to the additional contact structures. BRIEF DESCRIPTION OF THE DRAWINGS FIG.1 is a simplified plan view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with embodiments of the disclosure. FIGS.2A through 2D are simplified, partial longitudinal cross-sectional views of an array region (FIG.2A), a digit line exit region (FIG.2B), a word line exit region (FIG.2C), and a socket region (FIG.2D) of the microelectronic device structure shown in FIG.1 at the processing stage of FIG.1. FIGS.3A through 3D are simplified, partial longitudinal cross-sectional views of the array region (FIG.3A), the digit line exit region (FIG.3B), the word line exit region (FIG.3C), and the socket region (FIG.3D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.2A through 2D. FIGS.4A through 4D are simplified, partial longitudinal cross-sectional views of the array region (FIG.4A), the digit line exit region (FIG.4B), the word line exit region (FIG.4C), and the socket region (FIG.4D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.3A through 3D. FIGS.5A through 5D are simplified, partial longitudinal cross-sectional views of the array region (FIG.5A), the digit line exit region (FIG.5B), the word line exit region (FIG.5C), and the socket region (FIG.5D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.4A through 4D.
FIGS.6A through 6D are simplified, partial longitudinal cross-sectional views of the array region (FIG.6A), the digit line exit region (FIG.6B), the word line exit region (FIG. 6C), and the socket region (FIG.6D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.5A through 5D. FIGS.7A through 7D are simplified, partial longitudinal cross-sectional views of an array region (FIG.7A), a digit line exit region (FIG.7B), a word line exit region (FIG.7C), and a socket region (FIG.7D) of an additional microelectronic device structure, at another processing stage of the method of forming the microelectronic device. FIGS.8A through 8D are simplified, partial longitudinal cross-sectional views of the array region (FIG.8A), the digit line exit region (FIG.8B), the word line exit region (FIG. 8C), and the socket region (FIG.8D) shown in FIGS.7A through 7D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.7A through 7D. FIGS.9A through 9D are simplified, partial longitudinal cross-sectional views of the array region (FIG.9A), the digit line exit region (FIG.9B), the word line exit region (FIG. 9C), and the socket region (FIG.9D) shown in FIGS.7A through 7D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.8A through 8D. FIGS.10A through 10D are simplified, partial longitudinal cross-sectional views of the array region (FIG.10A), the digit line exit region (FIG.10B), the word line exit region (FIG.10C), and the socket region (FIG.10D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.6A through 6D and the processing stage of FIGS.9A through 9D. FIGS.11A through 11D are simplified, partial longitudinal cross-sectional views of the array region (FIG.11A), the digit line exit region (FIG.11B), the word line exit region (FIG.11C), and the socket region (FIG.11D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.10A through 10D. FIGS.12A through 12D are simplified, partial longitudinal cross-sectional views of the array region (FIG.12A), the digit line exit region (FIG.12B), the word line exit region (FIG.12C), and the socket region (FIG.12D) shown in FIGS.2A through 2D, respectively,
at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.11A through 11D. FIGS.13A through 13D are simplified, partial longitudinal cross-sectional views of the array region (FIG.13A), the digit line exit region (FIG.13B), the word line exit region (FIG.13C), and the socket region (FIG.13D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.12A through 12D. FIGS.14A through 14D are simplified, partial longitudinal cross-sectional views of the array region (FIG.14A), the digit line exit region (FIG.14B), the word line exit region (FIG.14C), and the socket region (FIG.14D) shown in FIGS.2A through 2D, respectively, at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS.13A through 13D. FIG.15 is a simplified plan view of a microelectronic device, in accordance with an embodiment of the disclosure. FIG.16 is a schematic block diagram of an electronic system, in accordance with an embodiment of the disclosure. MODE(S) FOR CARRYING OUT THE INVENTION The following description provides specific details, such as material compositions, shapes, and sizes, in order to provide a thorough description of embodiments of the disclosure. However, a person of ordinary skill in the art would understand that the embodiments of the disclosure may be practiced without employing these specific details. Indeed, the embodiments of the disclosure may be practiced in conjunction with conventional microelectronic device fabrication techniques employed in the industry. In addition, the description provided below does not form a complete process flow for manufacturing a microelectronic device (e.g., a memory device). The structures described below do not form a complete microelectronic device. Only those process acts and structures necessary to understand the embodiments of the disclosure are described in detail below. Additional acts to form a complete microelectronic device from the structures may be performed by conventional fabrication techniques. Drawings presented herein are for illustrative purposes only, and are not meant to be actual views of any particular material, component, structure, device, or system. Variations from the shapes depicted in the drawings as a result, for example, of manufacturing techniques
and/or tolerances, are to be expected. Thus, embodiments described herein are not to be construed as being limited to the particular shapes or regions as illustrated, but include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as box-shaped may have rough and/or nonlinear features, and a region illustrated or described as round may include some rough and/or linear features. Moreover, sharp angles that are illustrated may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the precise shape of a region and do not limit the scope of the present claims. The drawings are not necessarily to scale. Additionally, elements common between figures may retain the same numerical designation. As used herein, a “memory device” means and includes microelectronic devices exhibiting memory functionality, but not necessarily limited to memory functionality. Stated another way, and by way of non-limiting example only, the term “memory device” includes not only conventional memory (e.g., conventional volatile memory; conventional non-volatile memory), but also includes an application specific integrated circuit (ASIC) (e.g., a system on a chip (SoC)), a microelectronic device combining logic and memory, and a graphics processing unit (GPU) incorporating memory. As used herein, the term “configured” refers to a size, shape, material composition, orientation, and arrangement of one or more of at least one structure and at least one apparatus facilitating operation of one or more of the structure and the apparatus in a pre-determined way. As used herein, the terms “vertical,” “longitudinal,” “horizontal,” and “lateral” are in reference to a major plane of a structure and are not necessarily defined by earth’s gravitational field. A “horizontal” or “lateral” direction is a direction that is substantially parallel to the major plane of the structure, while a “vertical” or “longitudinal” direction is a direction that is substantially perpendicular to the major plane of the structure. The major plane of the structure is defined by a surface of the structure having a relatively large area compared to other surfaces of the structure. With reference to the figures, a “horizontal” or “lateral” direction may be perpendicular to an indicated “Z” axis, and may be parallel to an indicated “X” axis and/or parallel to an indicated “Y” axis; and a “vertical” or “longitudinal” direction may be parallel to an indicated “Z” axis, may be perpendicular to an indicated “X” axis, and may be perpendicular to an indicated “Y” axis.
As used herein, features (e.g., regions, structures, devices) described as “neighboring” one another means and includes features of the disclosed identity (or identities) that are located most proximate (e.g., closest to) one another. Additional features (e.g., additional regions, additional structures, additional devices) not matching the disclosed identity (or identities) of the “neighboring” features may be disposed between the “neighboring” features. Put another way, the “neighboring” features may be positioned directly adjacent one another, such that no other feature intervenes between the “neighboring” features; or the “neighboring” features may be positioned indirectly adjacent one another, such that at least one feature having an identity other than that associated with at least one of the “neighboring” features is positioned between the “neighboring” features. Accordingly, features described as “vertically neighboring” one another means and includes features of the disclosed identity (or identities) that are located most vertically proximate (e.g., vertically closest to) one another. Moreover, features described as “horizontally neighboring” one another means and includes features of the disclosed identity (or identities) that are located most horizontally proximate (e.g., horizontally closest to) one another. As used herein, spatially relative terms, such as “beneath,” “below,” “lower,” “bottom,” “above,” “upper,” “top,” “front,” “rear,” “left,” “right,” and the like, may be used for ease of description to describe one element’s or feature’s relationship to another element(s) or feature(s) as illustrated in the figures. Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures. For example, if materials in the figures are inverted, elements described as “below” or “beneath” or “under” or “on bottom of” other elements or features would then be oriented “above” or “on top of” the other elements or features. Thus, the term “below” can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to one of ordinary skill in the art. The materials may be otherwise oriented (e.g., rotated 90 degrees, inverted, flipped) and the spatially relative descriptors used herein interpreted accordingly. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, “and/or” includes any and all combinations of one or more of the associated listed items.
As used herein, the phrase “coupled to” refers to structures operatively connected with each other, such as electrically connected through a direct Ohmic connection or through an indirect connection (e.g., by way of another structure). As used herein, the term “substantially” in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0 percent met, at least 95.0 percent met, at least 99.0 percent met, at least 99.9 percent met, or even 100.0 percent met. As used herein, “about” or “approximately” in reference to a numerical value for a particular parameter is inclusive of the numerical value and a degree of variance from the numerical value that one of ordinary skill in the art would understand is within acceptable tolerances for the particular parameter. For example, “about” or “approximately” in reference to a numerical value may include additional numerical values within a range of from 90.0 percent to 110.0 percent of the numerical value, such as within a range of from 95.0 percent to 105.0 percent of the numerical value, within a range of from 97.5 percent to 102.5 percent of the numerical value, within a range of from 99.0 percent to 101.0 percent of the numerical value, within a range of from 99.5 percent to 100.5 percent of the numerical value, or within a range of from 99.9 percent to 100.1 percent of the numerical value. As used herein, “conductive material” means and includes electrically conductive material such as one or more of a metal (e.g., tungsten (W), titanium (Ti), molybdenum (Mo), niobium (Nb), vanadium (V), hafnium (Hf), tantalum (Ta), chromium (Cr), zirconium (Zr), iron (Fe), ruthenium (Ru), osmium (Os), cobalt (Co), rhodium (Rh), iridium (Ir), nickel (Ni), palladium (Pd), platinum (Pt), copper (Cu), silver (Ag), gold (Au), aluminum (Al)), an alloy (e.g., a Co-based alloy, an Fe-based alloy, an Ni-based alloy, an Fe- and Ni-based alloy, a Co- and Ni-based alloy, an Fe- and Co-based alloy, a Co- and Ni- and Fe-based alloy, an Al-based alloy, a Cu-based alloy, a magnesium (Mg)-based alloy, a Ti-based alloy, a steel, a low-carbon steel, a stainless steel), a conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide), and a conductively-doped semiconductor material (e.g., conductively-doped polysilicon, conductively-doped germanium (Ge), conductively-doped silicon germanium
(SiGe)). In addition, a “conductive structure” means and includes a structure formed of and including conductive material. As used herein, “insulative material” means and includes electrically insulative material, such as one or more of at least one dielectric oxide material (e.g., one or more of a silicon oxide (SiOx), phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, an aluminum oxide (AlOx), a hafnium oxide (HfOx), a niobium oxide (NbOx), a titanium oxide (TiOx), a zirconium oxide (ZrOx), a tantalum oxide (TaOx), and a magnesium oxide (MgOx)), at least one dielectric nitride material (e.g., a silicon nitride (SiNy)), at least one dielectric oxynitride material (e.g., a silicon oxynitride (SiOxNy)), at least one dielectric oxycarbide material (e.g., silicon oxycarbide (SiOxCy)), at least one hydrogenated dielectric oxycarbide material (e.g., hydrogenated silicon oxycarbide (SiCxOyHz)), and at least one dielectric carboxynitride material (e.g., a silicon carboxynitride (SiOxCzNy)). Formulae including one or more of “x,” “y,” and “z” herein (e.g., SiOx, AlOx, HfOx, NbOx, TiOx, SiNy, SiOxNy, SiOxCy, SiCxOyHz, SiOxCzNy) represent a material that contains an average ratio of “x” atoms of one element, “y” atoms of another element, and “z” atoms of an additional element (if any) for every one atom of another element (e.g., Si, Al, Hf, Nb, Ti). As the formulae are representative of relative atomic ratios and not strict chemical structure, an insulative material may comprise one or more stoichiometric compounds and/or one or more non-stoichiometric compounds, and values of “x,” “y,” and “z” (if any) may be integers or may be non-integers. As used herein, the term “non-stoichiometric compound” means and includes a chemical compound with an elemental composition that cannot be represented by a ratio of well-defined natural numbers and is in violation of the law of definite proportions. In addition, an “insulative structure” means and includes a structure formed of and including insulative material. As used herein, the term “homogeneous” means relative amounts of elements included in a feature (e.g., a material, a structure) do not vary throughout different portions (e.g., different horizontal portions, different vertical portions) of the feature. Conversely, as used herein, the term “heterogeneous” means relative amounts of elements included in a feature (e.g., a material, a structure) vary throughout different portions of the feature. If a feature is heterogeneous, amounts of one or more elements included in the feature may vary stepwise (e.g., change abruptly), or may vary continuously (e.g., change progressively, such as linearly, parabolically) throughout different portions of the feature. The feature may, for example, be formed of and include a stack of at least two different materials.
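The x/y/z ratio convention defined above lends itself to a brief worked example (illustrative only; the average_ratio helper and the atom counts are hypothetical):

    # A formula such as SiOx records the average number of oxygen atoms per
    # silicon atom, so a non-integer value of x describes a non-stoichiometric
    # compound.
    def average_ratio(atoms, element, reference):
        return atoms[element] / atoms[reference]

    film = {"Si": 1000, "O": 1930}       # hypothetical atom counts in a film
    x = average_ratio(film, "O", "Si")   # x = 1.93
    print(f"SiO{x:.2f}")                 # "SiO1.93": non-integer x, hence
                                         # non-stoichiometric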
Unless the context indicates otherwise, the materials described herein may be formed by any suitable technique including, but not limited to, spin coating, blanket coating, chemical vapor deposition (CVD), plasma enhanced CVD (PECVD), atomic layer deposition (ALD), plasma enhanced ALD (PEALD), physical vapor deposition (PVD) (e.g., sputtering), or epitaxial growth. Depending on the specific material to be formed, the technique for depositing or growing the material may be selected by a person of ordinary skill in the art. In addition, unless the context indicates otherwise, removal of materials described herein may be accomplished by any suitable technique including, but not limited to, etching (e.g., dry etching, wet etching, vapor etching), ion milling, abrasive planarization (e.g., chemical-mechanical planarization (CMP)), or other known methods. FIGS.1 through 15 are various views (described in further detail below) illustrating different processing stages of a method of forming a microelectronic device (e.g., a memory device, such as a DRAM device), in accordance with embodiments of the disclosure. With the description provided below, it will be readily apparent to one of ordinary skill in the art that the methods and structures described herein may be used to form various devices and electronic systems. In other words, the methods of the disclosure may be used whenever it is desired to form a microelectronic device. FIG.1 shows a simplified plan view of a first microelectronic device structure 100 (e.g., a first wafer) at an early processing stage of a method of forming a microelectronic device (e.g., a memory device, such as a DRAM device), in accordance with embodiments of the disclosure. As shown in FIG.1, the first microelectronic device structure 100 may be formed to include array regions 102, digit line exit regions 104 (also referred to as “digit line contact socket regions”) interposed between pairs of the array regions 102 horizontally neighboring one another in a first horizontal direction (e.g., the Y-direction), word line exit regions 106 (also referred to as “word line contact socket regions”) interposed between additional pairs of the array regions 102 horizontally neighboring one another in a second horizontal direction (e.g., the X-direction) orthogonal to the first horizontal direction, and one or more socket regions 108 (also referred to as “back end of line (BEOL) contact socket regions”) horizontally neighboring some of the array regions 102 in one or more of the first horizontal direction and the second horizontal direction. The array regions 102,
the digit line exit regions 104, the word line exit regions 106, and the socket regions 108 are each described in further detail below. The array regions 102 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have arrays of memory cells (e.g., arrays of DRAM cells) subsequently formed within horizontal boundaries thereof, as described in further detail below. In addition, the array regions 102 may also be configured and positioned to have desirable arrangements of control logic devices subsequently formed within horizontal boundaries thereof, as also described in further detail below. The control logic devices to be formed within the horizontal boundaries of the array regions 102 may be formed to be vertically offset (e.g., in the Z-direction) from the memory cells to be formed within the horizontal boundaries of the array regions 102. The first microelectronic device structure 100 may be formed to include a desired quantity of the array regions 102. For clarity and ease of understanding of the drawings and related description, FIG.1 depicts the first microelectronic device structure 100 as being formed to include four (4) array regions 102: a first array region 102A, a second array region 102B, a third array region 102C, and a fourth array region 102D. As shown in FIG. 1, the second array region 102B may horizontally neighbor the first array region 102A in the Y-direction, and may horizontally neighbor the fourth array region 102D in the X-direction; the third array region 102C may horizontally neighbor the first array region 102A in the X-direction, and may horizontally neighbor the fourth array region 102D in the Y-direction; and the fourth array region 102D may horizontally neighbor the third array region 102C in the Y-direction, and may horizontally neighbor the second array region 102B in the X-direction. In additional embodiments, the first microelectronic device structure 100 is formed to include a different number of array regions 102. For example, the first microelectronic device structure 100 may be formed to include greater than four (4) array regions 102, such as greater than or equal to eight (8) array regions 102, greater than or equal to sixteen (16) array regions 102, greater than or equal to thirty-two (32) array regions 102, greater than or equal to sixty-four (64) array regions 102, greater than or equal to one hundred twenty-eight (128) array regions 102, greater than or equal to two hundred fifty-six (256) array regions 102, greater than or equal to five hundred twelve (512) array regions 102, or greater than or equal to one thousand twenty-four (1024) array regions 102.
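The horizontal neighbor relationships among the four depicted array regions 102A through 102D can likewise be summarized in a short sketch (illustrative only; the coordinate assignments and the neighbor_direction helper are hypothetical):

    # Regions sharing a column neighbor one another in the Y-direction;
    # regions sharing a row neighbor one another in the X-direction. The
    # first coordinate indexes the X-direction (column), the second the
    # Y-direction (row).
    regions = {"102A": (0, 0), "102B": (0, 1), "102C": (1, 0), "102D": (1, 1)}

    def neighbor_direction(a, b):
        (col_a, row_a), (col_b, row_b) = regions[a], regions[b]
        if col_a == col_b and abs(row_a - row_b) == 1:
            return "Y"  # same column, adjacent rows
        if row_a == row_b and abs(col_a - col_b) == 1:
            return "X"  # same row, adjacent columns
        return None     # diagonal regions are not horizontal neighbors

    assert neighbor_direction("102A", "102B") == "Y"
    assert neighbor_direction("102B", "102D") == "X"
    assert neighbor_direction("102C", "102D") == "Y"
    assert neighbor_direction("102A", "102D") is None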
In addition, the first microelectronic device structure 100 may be formed to include a desired distribution of the array regions 102. As shown in FIG.1, in some embodiments, the first microelectronic device structure 100 is formed to include rows 103 of the array regions 102 extending in the X-direction, and columns 105 of the array regions 102 extending in the Y-direction. The rows 103 of the array regions 102 may, for example, include a first row including the first array region 102A and the third array region 102C, and a second row including the second array region 102B and the fourth array region 102D. The columns 105 of the array regions 102 may, for example, include a first column including the first array region 102A and the second array region 102B, and a second column including the third array region 102C and the fourth array region 102D. With continued reference to FIG.1, the digit line exit regions 104 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have at least some subsequently formed digit lines (e.g., bit lines, data lines) horizontally terminate therein. For an individual digit line exit region 104, at least some subsequently formed digit lines operatively associated with the array regions 102 flanking (e.g., at opposing boundaries in the Y-direction) the digit line exit region 104 may have ends within the horizontal boundaries of the digit line exit region 104. In addition, the digit line exit regions 104 may also be configured and positioned to include contact structures and routing structures within the horizontal boundaries thereof that are operatively associated with at least some of the subsequently formed digit lines. As described in further detail below, some of the contact structures to be formed within the digit line exit regions 104 may couple the subsequently formed digit lines to control logic circuitry of control logic devices (e.g., sense amplifier (SA) devices) to subsequently be formed within the array regions 102. As shown in FIG. 1, in some embodiments, the digit line exit regions 104 horizontally extend in the X-direction, and are horizontally interposed between horizontally neighboring rows of the array regions 102 in the Y-direction. The digit line exit regions 104 may, for example, horizontally alternate with the rows of the array regions 102 in the Y-direction. An individual digit line exit region 104 may be divided into multiple subregions. For example, as shown in FIG.1, an individual digit line exit region 104 may include first digit line exit subregions 104A and second digit line exit subregions 104B. In some embodiments, the first digit line exit subregions 104A horizontally alternate with the second digit line exit subregions 104B in the X-direction. A pair (e.g., two (2)) of
horizontally neighboring array regions 102 within an individual column of the array regions 102 may include one (1) of the first digit line exit subregions 104A and one (1) of the second digit line exit subregions 104B positioned horizontally therebetween in the Y-direction. By way of non-limiting example, the first array region 102A and the second array region 102B of a first column of the array regions 102 may include one (1) of the first digit line exit subregions 104A and one (1) of the second digit line exit subregions 104B positioned therebetween in the Y-direction. The one (1) of the first digit line exit subregions 104A and the one (1) of the second digit line exit subregions 104B may be at least partially (e.g., substantially) confined within horizontal boundaries in the X-direction of the first array region 102A and the second array region 102B. As described in further detail below, an individual first digit line exit subregion 104A may be configured and positioned to facilitate electrical connections between a group of digit lines (e.g., odd digit lines or even digit lines) and a group of control logic devices (e.g., odd SA devices or even SA devices) operatively associated with a portion (e.g., a half portion in the X-direction) of one (1) array region 102 (e.g., the first array region 102A) of a pair of horizontally neighboring array regions 102, and to also facilitate electrical connections between a group of additional digit lines (e.g., additional odd digit lines or additional even digit lines) and a group of additional control logic devices (e.g., additional odd SA devices or additional even SA devices) operatively associated with a corresponding portion (e.g., a corresponding half portion in the X-direction) of an additional array region 102 (e.g., the second array region 102B) of the pair of horizontally neighboring array regions 102. In addition, as also described in further detail below, an individual second digit line exit subregion 104B may be configured and positioned to facilitate electrical connections between a group of further digit lines and a group of further control logic devices operatively associated with another portion (e.g., another half portion in the X-direction) of the one (1) array region 102 (e.g., the first array region 102A), and to also facilitate electrical connections between a group of yet further digit lines and a group of yet further control logic devices operatively associated with a corresponding another portion (e.g., a corresponding another half portion in the X-direction) of the additional array region 102 (e.g., the second array region 102B). Still referring to FIG.1, the word line exit regions 106 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to have at least some subsequently formed word
lines (e.g., access lines) horizontally terminate therein. For an individual word line exit region 106, at least some subsequently formed word lines operatively associated with the array regions 102 flanking (e.g., at opposing boundaries in the X-direction) the word line exit region 106 may have ends within the horizontal boundaries of the word line exit region 106. In addition, the word line exit regions 106 may also be configured and positioned to include contact structures and routing structures within the horizontal boundaries thereof that are operatively associated with the subsequently formed word lines. As described in further detail below, some of the contact structures to be formed within the word line exit regions 106 may couple the subsequently formed word lines to control logic circuitry of additional control logic devices (e.g., sub-word line driver (SWD) devices) to subsequently be formed within the array regions 102. As shown in FIG.1, in some embodiments, the word line exit regions 106 horizontally extend in the Y-direction, and are horizontally interposed between horizontally neighboring columns of the array regions 102 in the X-direction. The word line exit regions 106 may, for example, horizontally alternate with the columns of the array regions 102 in the X-direction. An individual word line exit region 106 may be divided into multiple subregions. For example, as shown in FIG.1, an individual word line exit region 106 may include first word line exit subregions 106A and second word line exit subregions 106B. In some embodiments, the first word line exit subregions 106A horizontally alternate with the second word line exit subregions 106B in the Y-direction. A pair (e.g., two (2)) of horizontally neighboring array regions 102 within an individual row of the array regions 102 may include one (1) of the first word line exit subregions 106A and one (1) of the second word line exit subregions 106B positioned horizontally therebetween in the X-direction. By way of non-limiting example, the first array region 102A and the third array region 102C of a first row of the array regions 102 may include one (1) of the first word line exit subregions 106A and one (1) of the second word line exit subregions 106B positioned therebetween in the X-direction. The one (1) of the first word line exit subregions 106A and the one (1) of the second word line exit subregions 106B may be at least partially (e.g., substantially) confined within horizontal boundaries in the Y-direction of the first array region 102A and the third array region 102C. As described in further detail below, an individual first word line exit subregion 106A may be configured and positioned to facilitate electrical connections between a group of word lines (e.g., odd word lines or even word lines) and a group of control logic devices
(e.g., odd SWD devices or even SWD devices) operatively associated with a portion (e.g., a half portion in the Y-direction) of one (1) array region 102 (e.g., the first array region 102A) of a pair of horizontally neighboring array regions 102, and to also facilitate electrical connections between a group of additional word lines (e.g., additional odd word lines or additional even word lines) and a group of additional control logic devices (e.g., additional odd SWD devices or additional even SWD devices) operatively associated with a corresponding portion (e.g., a corresponding half portion in the Y-direction) of a further array region 102 (e.g., the third array region 102C) of the pair of horizontally neighboring array regions 102. In addition, as also described in further detail below, an individual second word line exit subregion 106B may be configured and positioned to facilitate electrical connections between a group of further word lines and a group of further control logic devices operatively associated with another portion (e.g., another half portion in the Y-direction) of the one (1) array region 102 (e.g., the first array region 102A), and to also facilitate electrical connections between a group of yet further word lines and a group of yet further control logic devices operatively associated with a corresponding another portion (e.g., a corresponding another half portion in the Y-direction) of the further array region 102 (e.g., the third array region 102C). With continued reference to FIG.1, the socket regions 108 of the first microelectronic device structure 100 may comprise horizontal areas of the first microelectronic device structure 100 configured and positioned to facilitate electrical connections (e.g., by way of contact structures and routing structures formed within horizontal boundaries thereof) between subsequently formed control logic circuitry and additional subsequently formed structures (e.g., BEOL structures), as described in further detail below. The socket regions 108 may horizontally neighbor one or more peripheral horizontal boundaries (e.g., in the Y-direction, in the X-direction) of one or more groups of the array regions 102. For clarity and ease of understanding of the drawings and related description, FIG.1 depicts the first microelectronic device structure 100 as being formed to include one (1) socket region 108 horizontally neighboring a shared horizontal boundary of the second array region 102B and the fourth array region 102D. However, the first microelectronic device structure 100 may be formed to include one or more of a different quantity and a different horizontal position of socket region(s) 108. As a non-limiting example, the socket region 108 may horizontally neighbor a shared horizontal boundary of a different group of the array regions 102 (e.g., a shared horizontal boundary of the third
array region 102C and the fourth array region 102D, a shared horizontal boundary of the first array region 102A and the third array region 102C, a shared horizontal boundary of the first array region 102A and the second array region 102B). As another non-limiting example, the first microelectronic device structure 100 may be formed to include multiple (e.g., a plurality of, more than one) socket regions 108 horizontally neighboring different groups of the array regions 102 than one another. In some embodiments, multiple socket regions 108 collectively substantially horizontally surround (e.g., substantially horizontally circumscribe) the array regions 102. FIGS.2A through 2D illustrate simplified, partial longitudinal cross-sectional views of different regions of the first microelectronic device structure 100 previously described with reference to FIG.1. FIG.2A illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the Y-direction (so as to depict an XZ-plane) of one of the array regions 102 (e.g., the first array region 102A) of the first microelectronic device structure 100 shown in FIG.1. FIG.2B illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the Y-direction (so as to depict an XZ-plane) of one of the digit line exit regions 104 of the first microelectronic device structure 100 shown in FIG.1. FIG.2C illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the X-direction (so as to depict a YZ-plane) of one of the word line exit regions 106 of the first microelectronic device structure 100 shown in FIG.1. FIG.2D illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the X-direction (so as to depict a YZ-plane) of one of the socket regions 108 of the first microelectronic device structure 100 shown in FIG.1. Referring collectively to FIGS.2A through 2D, the first microelectronic device structure 100 may be formed to include a first base semiconductor structure 110, filled trenches 112, and a first isolation material 114. The filled trenches 112 vertically extend (e.g., in the Z-direction) into the first base semiconductor structure 110. The first isolation material 114 covers and surrounds surfaces of the first base semiconductor structure 110. The first base semiconductor structure 110 comprises a base material or construction upon which additional features (e.g., materials, structures, devices) of the first microelectronic device structure 100 are formed. The first base semiconductor structure 110 may comprise a semiconductor structure (e.g., a semiconductor wafer), or a base semiconductor material on a supporting structure. For example, the first base semiconductor structure 110 may comprise a conventional silicon substrate (e.g., a
conventional silicon wafer), or another bulk substrate comprising a semiconductor material. In some embodiments, the first base semiconductor structure 110 comprises a silicon wafer. The first base semiconductor structure 110 may include one or more layers, structures, and/or regions formed therein and/or thereon. The filled trenches 112 may comprise trenches (e.g., openings, vias, apertures) within the first base semiconductor structure 110 that are at least partially (e.g., substantially) filled with the first isolation material 114. The filled trenches 112 may, for example, be employed as shallow trench isolation (STI) structures within the first base semiconductor structure 110. The filled trenches 112 may be formed to vertically extend partially (e.g., less than completely) through the first base semiconductor structure 110. Each of the filled trenches 112 may be formed to exhibit substantially the same dimensions and shape as each other of the filled trenches 112, or at least one of the filled trenches 112 may be formed to exhibit one or more of different dimensions and a different shape than at least one other of the filled trenches 112. As a non-limiting example, each of the filled trenches 112 may be formed to exhibit substantially the same vertical dimension(s) and substantially the same vertical cross-sectional shape(s) as each other of the filled trenches 112; or at least one of the filled trenches 112 may be formed to exhibit one or more of different vertical dimension(s) and different vertical cross-sectional shape(s) than at least one other of the filled trenches 112. In some embodiments, the filled trenches 112 are all formed to vertically extend to and terminate at substantially the same depth within the first base semiconductor structure 110. In additional embodiments, at least one of the filled trenches 112 is formed to vertically extend to and terminate at a relatively deeper depth within the first base semiconductor structure 110 than at least one other of the filled trenches 112. As another non-limiting example, each of the filled trenches 112 may be formed to exhibit substantially the same horizontal dimension(s) and substantially the same horizontal cross-sectional shape(s) as each other of the filled trenches 112; or at least one of the filled trenches 112 may be formed to exhibit one or more of different horizontal dimension(s) (e.g., relatively larger horizontal dimension(s), relatively smaller horizontal dimension(s)) and different horizontal cross-sectional shape(s) than at least one other of the filled trenches 112. In some embodiments, at least one of the filled trenches 112 is formed to have one or more different horizontal dimensions (e.g., in the X-direction and/or in the Y-direction) than at least one other of the filled trenches 112.
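Before continuing with the cross-sectional description, the region layout described above with reference to FIG.1 can be recapped with a short, non-limiting sketch. The sketch is illustrative only and forms no part of the disclosure: Python is used purely for illustration, the 2 × 2 group of array regions 102 mirrors FIG.1, each exit region is drawn as a single cell (in FIG.1 an individual exit region is further divided into alternating first and second subregions, e.g., 104A/104B in the X-direction and 106A/106B in the Y-direction), and the cells at the intersections of the exit regions are left unlabeled.

```python
# Illustrative only: coarse tiling of the FIG.1 floorplan described above.
def build_floorplan(array_rows: int, array_cols: int) -> list[list[str]]:
    grid: list[list[str]] = []
    for r in range(array_rows):
        row: list[str] = []
        for c in range(array_cols):
            row.append("102")        # array region
            if c < array_cols - 1:
                row.append("106")    # word line exit region between columns 105
        grid.append(row)
        if r < array_rows - 1:
            # digit line exit regions 104 alternate with the rows 103 of the
            # array regions 102 in the Y-direction; intersections are unlabeled
            grid.append(["104" if cell == "102" else "..." for cell in row])
    return grid

for line in build_floorplan(2, 2):   # the 2 x 2 arrangement depicted in FIG.1
    print(" ".join(f"{cell:>3}" for cell in line))
```

Printed for the 2 × 2 case, the grid reads 102 106 102 / 104 ... 104 / 102 106 102, matching the interposition of the digit line exit regions 104 and the word line exit regions 106 between horizontally neighboring rows and columns of the array regions 102.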
The first isolation material 114 may be formed of and include at least one insulative material. By way of non-limiting example, the first isolation material 114 may be formed of and include one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon. In some embodiments, the first isolation material 114 is formed of and includes SiOx(e.g., SiO2). The first isolation material 114 may be substantially homogeneous, or the first isolation material 114 may be heterogeneous. In some embodiments, the first isolation material 114 is substantially homogeneous. In additional embodiments, the first isolation material 114 is heterogeneous. The first isolation material 114 may, for example, be formed of and include a stack of at least two different dielectric materials. Referring next to FIGS.3A through 3D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.3A), the digit line exit region 104 (FIG.3B), the word line exit region 106 (FIG.3C), and the socket region 108 (FIG.3D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.1 and 2A through 2D. As collectively depicted in FIGS. 3A through 3D, access devices 116 (FIG.3A) (e.g., access transistors) may be formed within the array region 102 (FIG.3A). In addition, digit lines 118 (FIGS.3A and 3B) (e.g., data lines, bit lines) may be formed to be coupled to the access devices 116 (FIG.3A) and to horizontally extend in the Y-direction through the array region 102 (FIG.3A). At least some of the digit lines 118 (FIGS.3A and 3B) may terminate (e.g., end) within the digit line exit region 104 (FIG.3B). Furthermore, word lines 120 (e.g., access lines) may be formed to be coupled to the access devices 116 (FIG.3A) and to horizontally extend in the X-direction through the array region 102 (FIG.3A). At least some of the word lines 120 (FIGS.3A and 3C) may terminate within the word line exit region 106 (FIG.3C). Referring to FIG.3A, the access devices 116 formed within the array region 102 may be employed as components of memory cells (e.g., DRAM cells) to be formed within the array region 102. By way of non-limiting example, each access device 116 may individually be formed to include a channel region comprising a portion of the first base semiconductor structure 110; a source region and a drain region each individually
comprising one or more of at least one conductively doped portion of the first base semiconductor structure 110 and/or at least one conductive structure formed in, on, or over the first base semiconductor structure 110; and at least one gate structure comprising a portion of at least one of the word lines 120. Each access device 116 may also include a gate dielectric material (e.g., a dielectric oxide material) formed to be interposed between the channel region thereof and the gate structure thereof. The digit lines 118 may exhibit horizontally elongate shapes extending in parallel in the Y-direction; and the word lines 120 may exhibit horizontally elongate shapes extending in parallel in the X-direction orthogonal to the Y-direction. As used herein, the term “parallel” means substantially parallel. The digit lines 118 and the word lines 120 may each individually be formed of and include conductive material. By way of non-limiting example, the digit lines 118 and the word lines 120 may each individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the digit lines 118 and the word lines 120 are each individually formed of and include one or more of W, Ru, Mo, and titanium nitride (TiNy). Each of the digit lines 118 and each of the word lines 120 may individually be substantially homogeneous, or one or more of the digit lines 118 and/or one or more of the word lines 120 may individually be substantially heterogeneous. In some embodiments, each of the digit lines 118 and each of the word lines 120 are formed to be substantially homogeneous. Still referring to FIG.3A, within the array region 102, additional features (e.g., structures, materials) are also formed on, over, and/or between the access devices 116, the digit lines 118, and the word lines 120. For example, as shown in FIG.3A, first contact structures 122 (e.g., digit line contact structures, also referred to as so-called “bitcon” structures) may be formed to vertically extend between and couple the access devices 116 to the digit lines 118; second contact structures 124 (e.g., cell contact structures, also referred to as so-called “cellcon” structures) may be formed in contact with the access devices 116 and may be configured and positioned to couple the access devices 116 to subsequently formed storage node devices (e.g., capacitors); dielectric cap structures 126 may be formed on or over the digit lines 118; and additional dielectric cap structures 128 may be formed on or over the word lines 120. In addition, dielectric structures (e.g., dielectric spacers, such as low-k dielectric spacers formed of and including one or more
low-k dielectric materials) may be formed to intervene (e.g., horizontally intervene) between and isolate the second contact structures 124 and digit lines 118; and further dielectric structures (e.g., gate dielectric structures, such as gate dielectric oxide structures) may be formed to intervene (e.g., horizontally intervene) between and isolate the first contact structures 122 and the word lines 120. The first contact structures 122 and the second contact structures 124 may individually be formed of and include at least one conductive material. In some embodiments, the first contact structures 122 and the second contact structures 124 are individually formed of and include one or more of at least one metal (e.g., W), at least one alloy, at least one conductive metal silicide (e.g., one or more of titanium silicide (TiSix), cobalt silicide (CoSix), tungsten silicide (WSix), tantalum silicide (TaSix), molybdenum silicide (MoSix), and nickel silicide (NiSix)), and at least one conductive metal nitride (e.g., one or more of TiNy, tungsten nitride (WNy), tantalum nitride (TaNy), cobalt nitride (CoNy), molybdenum nitride (MoNy), and nickel nitride (NiNy)). In addition, the dielectric cap structures 126 and the additional dielectric cap structures 128 may individually be formed of and include at least one insulative material. In some embodiments, the dielectric cap structures 126 and the additional dielectric cap structures 128 are individually formed of and include a dielectric nitride material (e.g., SiNy, such as Si3N4). Referring to FIG.3B, within the digit line exit region 104, at least some of the digit lines 118 may horizontally terminate (e.g., end) in the Y-direction. Each of the digit lines 118 horizontally extending through the array region 102 (FIG.3A) and horizontally terminating within the digit line exit region 104 may be formed to terminate at substantially the same horizontal position in the Y-direction; or at least one of the digit lines 118 horizontally terminating within the digit line exit region 104 may be formed to terminate at a different horizontal position in the Y-direction within the digit line exit region 104 than at least one other of the digit lines 118 horizontally terminating within the digit line exit region 104. In some embodiments, at least some digit lines 118 horizontally neighboring one another in the X-direction have terminal ends (e.g., terminal surfaces) horizontally offset from one another in the Y-direction. Horizontally offsetting the terminal ends of some of the digit lines 118 from the terminal ends of some other of the digit lines 118 within the digit line exit region 104 may, for example, promote or facilitate desirable contact structure arrangements within the digit line exit region 104.
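The pitch-relaxation rationale for offsetting the terminal ends of the digit lines 118 can be illustrated with a short, hedged sketch. The numbers below are hypothetical and not taken from the disclosure; the sketch merely shows that staggering the ends of horizontally neighboring digit lines 118 into two groups doubles the spacing available to contact structures within the digit line exit region 104.

```python
# Hypothetical arithmetic only; values are not from the disclosure.
DIGIT_LINE_PITCH_NM = 40   # assumed digit line pitch in the X-direction
STAGGER_GROUPS = 2         # e.g., ends of alternating lines offset in the Y-direction

# If every digit line terminated at the same Y position, contacts in the exit
# region would have to land on the full digit line pitch.
aligned_contact_pitch = DIGIT_LINE_PITCH_NM

# Offsetting the terminal ends into two groups places only every other contact
# in a given row, relaxing the contact-to-contact spacing within each row.
staggered_contact_pitch = DIGIT_LINE_PITCH_NM * STAGGER_GROUPS

print(f"aligned ends:   {aligned_contact_pitch} nm between neighboring contacts")
print(f"staggered ends: {staggered_contact_pitch} nm between contacts in each row")
```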
As shown in FIG.3B, within the digit line exit region 104, dummy word lines 121 may, optionally, be formed vertically below the digit lines 118. If formed, the dummy word lines 121 may be formed at substantially the same vertical position (e.g., vertical elevation) within the first microelectronic device structure 100 (e.g., within the first base semiconductor structure 110 thereof) as the word lines 120 (FIGS.3A and 3C), and may be formed to horizontally extend orthogonal to the digit lines 118 (e.g., in the X-direction). A material composition of the dummy word lines 121 may be substantially the same as a material composition of the word lines 120 (FIGS.3A and 3C). If formed, the dummy word lines 121 may be electrically isolated from one another and other components (e.g., the word lines 120 (FIGS.3A and 3C), the digit lines 118) of the first microelectronic device structure 100. The dummy word lines 121 (if any) within the digit line exit region 104 may not be employed as part of data paths during use and operation of a microelectronic device formed through the methods of the disclosure. In additional embodiments, the dummy word lines 121 are absent (e.g., omitted) from the digit line exit region 104. Referring next to FIG.3C, within the word line exit region 106, at least some of the word lines 120 may horizontally terminate (e.g., end) in the X-direction. Each of the word lines 120 horizontally extending through the array region 102 (FIG.3A) and horizontally terminating within the word line exit region 106 may be formed to terminate at substantially the same horizontal position in the X-direction; or at least one of the word lines 120 horizontally terminating within the word line exit region 106 may be formed to terminate at a different horizontal position in the X-direction within the word line exit region 106 than at least one other of the word lines 120 horizontally terminating within the word line exit region 106. In some embodiments, at least some word lines 120 horizontally neighboring one another in the Y-direction have terminal ends (e.g., terminal surfaces) horizontally offset from one another in the X-direction. Horizontally offsetting the terminal ends of some of the word lines 120 from the terminal ends of some other of the word lines 120 within the word line exit region 106 may, for example, promote or facilitate desirable contact structure arrangements within the word line exit region 106. As shown in FIG.3C, within the word line exit region 106, dummy digit lines 119 may, optionally, be formed vertically above the word lines 120. If formed, the dummy digit lines 119 may be formed at substantially the same vertical position (e.g., vertical elevation) within the first microelectronic device structure 100 (e.g., within the second isolation material 130 thereof) as the digit lines 118 (FIGS.3A and 3B), and may be formed
to horizontally extend orthogonal to the word lines 120 (e.g., in the Y-direction). A material composition of the dummy digit lines 119 may be substantially the same as a material composition of the digit lines 118 (FIGS.3A and 3B). If formed, the dummy digit lines 119 may be electrically isolated from one another and the other components (e.g., the digit lines 118 (FIGS.3A and 3B), the word lines 120) of the first microelectronic device structure 100. The dummy digit lines 119 (if any) within the word line exit region 106 may not be employed as part of data paths during use and operation of a microelectronic device formed through the methods of the disclosure. In additional embodiments, the dummy digit lines 119 are absent (e.g., omitted) from the word line exit region 106. Referring collectively to FIGS.3A through 3D, the second isolation material 130 may be formed on or over portions of at least the first base semiconductor structure 110, the access devices 116 (FIG.3A), the digit lines 118 (FIGS.3A and 3B), the word lines 120 (FIGS.3A and 3C), the second contact structures 124, and the first isolation material 114. The second isolation material 130 may be formed of and include at least one insulative material. A material composition of the second isolation material 130 may be substantially the same as a material composition of the first isolation material 114, or the material composition of the second isolation material 130 may be different than the material composition of the first isolation material 114. In some embodiments, the second isolation material 130 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The second isolation material 130 may be substantially homogeneous, or the second isolation material 130 may be heterogeneous. In some embodiments, the second isolation material 130 is substantially homogeneous. In additional embodiments, the second isolation material 130 is heterogeneous. The second isolation material 130 may, for example, be formed of and include a stack of at least two different dielectric materials. Referring next to FIGS.4A through 4D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.4A), the digit line exit region 104 (FIG.4B), the word line exit region 106 (FIG.4C), and the socket region 108 (FIG.4D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.3A through 3D. As collectively depicted in FIGS.4A through 4D, third contact structures 132 may be formed within each of the digit line exit region 104 (FIG.4B), the word line exit region 106 (FIG.4C), and the socket region 108 (FIG.4D). The third contact structures 132 may be formed to vertically extend (e.g., in the
Z-direction) to and contact the first base semiconductor structure 110. In addition, as described in further detail below, some of the third contact structures 132 may be formed to contact portions of the digit lines 118 (FIG.4B) within the digit line exit region 104 (FIG.4B), and some other of the third contact structures 132 may be formed to contact portions of the word lines 120 (FIG.4C) within the word line exit region 106 (FIG.4C). Referring to FIG.4B, within the digit line exit region 104, a first group 132A of the third contact structures 132 may be formed to contact at least some of the digit lines 118 horizontally extending (e.g., in the Y-direction) into the digit line exit region 104. Each third contact structure 132 of the first group 132A of third contact structures 132 may be considered to be a digit line contact structure (e.g., a so-called “edge of array” digit line contact structure). As shown in FIG.4B, each third contact structure 132 of the first group 132A of third contact structures 132 may be formed to physically contact and vertically extend completely through an individual digit line 118. For example, within the digit line exit region 104, each third contact structure 132 of the first group 132A may be formed to physically contact and vertically extend through each of the second isolation material 130, one of the digit lines 118, and the first isolation material 114. Accordingly, each third contact structure 132 of the first group 132A may be formed to be coupled to one of the digit lines 118. In some embodiments, outer sidewalls of each third contact structure 132 of the first group 132A of the third contact structures 132 physically contact inner sidewalls of an individual digit line 118. In addition, each third contact structure 132 of the first group 132A may be formed to vertically terminate on or within the first base semiconductor structure 110, such as on or within a portion of the first base semiconductor structure 110 vertically underlying one of the filled trenches 112 within the digit line exit region 104. Referring next to FIG.4C, within the word line exit region 106, a second group 132B of the third contact structures 132 may be formed to contact at least some of the word lines 120 horizontally extending (e.g., in the X-direction) into the word line exit region 106. Each third contact structure 132 of the second group 132B of third contact structures 132 may be considered to be a word line contact structure (e.g., a so-called “edge of array” word line contact structure). As shown in FIG.4C, each third contact structure 132 of the second group 132B of third contact structures 132 may be formed to physically contact and vertically extend completely through an individual word line 120. For example, within the word line exit region 106, each third contact structure 132 of the second group 132B may
be formed to physically contact and vertically extend through each of the second isolation material 130, one of the word lines 120, and the first isolation material 114. Accordingly, each third contact structure 132 of the second group 132B may be formed to be coupled to one of the word lines 120. In some embodiments, outer sidewalls of each third contact structure 132 of the second group 132B of the third contact structures 132 physically contact inner sidewalls of an individual word line 120. In addition, each third contact structure 132 of the second group 132B may be formed to vertically terminate on or within the first base semiconductor structure 110, such as on or within a portion of the first base semiconductor structure 110 vertically underlying one of the filled trenches 112 within the word line exit region 106. Referring next to FIG.4D, within the socket region 108, a third group 132C of the third contact structures 132 may be formed to vertically extend to portions of the first base semiconductor structure 110 located within the socket region 108. Each third contact structure 132 of the third group 132C of third contact structures 132 may be considered to be a deep contact structure (e.g., a deep contact structure to be electrically connected to one or more BEOL structures to subsequently be formed). Within the socket region 108, each third contact structure 132 of the third group 132C may be formed to physically contact and vertically extend through each of the second isolation material 130 and the first isolation material 114; and may vertically terminate on or within the first base semiconductor structure 110, such as on or within a portion of the first base semiconductor structure 110 vertically underlying one of the filled trenches 112 within the socket region 108. Collectively referring again to FIGS.4A through 4D, the third contact structures 132, including the first group 132A (FIG.4B), the second group 132B (FIG.4C), and the third group 132C (FIG.4D) thereof, may be formed of and include conductive material. By way of non-limiting example, the third contact structures 132 may each individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the third contact structures 132 are each individually formed of and include W. Each of the third contact structures 132 may be substantially homogeneous, or one or more of the third contact structures 132 may individually be heterogeneous. In some embodiments, each of the third contact structures 132 is substantially homogeneous. In additional embodiments,
each of the third contact structures 132 is heterogeneous. Each third contact structure 132 may, for example, be formed of and include a stack of at least two different conductive materials. Referring next to FIGS.5A through 5D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.5A), the digit line exit region 104 (FIG.5B), the word line exit region 106 (FIG.5C), and the socket region 108 (FIG.5D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.4A through 4D. As collectively depicted in FIGS.5A through 5D, at least one first routing tier 134 including first routing structures 136 may be formed over the access devices 116 (FIG.5A); storage node devices 138 (e.g., capacitors) may be formed over and in electrical communication with at least some of the first routing structures 136 within the array region 102 (FIG.5A); fourth contact structures 140 may be formed over and in electrical communication with at least some of the third contact structures 132 within the socket region 108 (FIG.5D); and a second routing tier 142 including second routing structures 144 may be formed over the storage node devices 138 and the fourth contact structures 140. With continued collective reference to FIGS.5A through 5D, the first routing structures 136 of the first routing tier 134 may be employed to facilitate electrical communication between additional features (e.g., structures, materials, devices) coupled thereto. The first routing structures 136 may each individually be formed of and include conductive material. By way of non-limiting example, the first routing structures 136 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the first routing structures 136 are formed of and include W. Referring to FIG.5A, within the array region 102, at least some of the first routing structures 136 may be formed and configured to couple the access devices 116 (e.g., access transistors) to the storage node devices 138 (e.g., capacitors) to form memory cells 146 (e.g., DRAM cells) within the array region 102. Each memory cell 146 may individually include one of the access devices 116; one of the storage node devices 138; one of the second contact structures 124 interposed between the access device 116 and the storage node device 138; and one of the first routing structures 136 interposed between the second
contact structure 124 and the storage node device 138. At least some of the first routing structures 136 within the array region 102 may, for example, be configured and employed as redistribution material (RDM) structures (also referred to as “redistribution layer” (RDL) structures) to effectively shift (e.g., stagger, adjust, modify) lateral positions of semiconductor pillars of the access devices 116 to accommodate a desired arrangement (e.g., a hexagonal close packed arrangement) of the storage node devices 138 vertically over and in electrical communication with the access devices 116. While FIGS.5A through 5D show the formation of a single (e.g., only one) first routing tier 134 including first routing structures 136, multiple (e.g., more than one) first routing tiers 134 each individually including a desired arrangement (e.g., pattern) of first routing structures 136 may be formed. By way of non-limiting example, two or more (e.g., three or more) of the first routing tiers 134 may be formed, wherein different first routing tiers 134 are vertically offset from one another and each individually include a desired arrangement of first routing structures 136 therein. At least some of the first routing structures 136 within at least one of the first routing tiers 134 may be coupled to at least some of the first routing structures 136 within at least one other of the first routing tiers 134 by way of conductive interconnect structures. Referring again to FIG.5A, within the array region 102, the storage node devices 138 may individually be formed and configured to store a charge representative of a programmable logic state of the memory cell 146 including the storage node device 138. In some embodiments, the storage node devices 138 comprise capacitors. During use and operation, a charged capacitor may represent a first logic state, such as a logic 1; and an uncharged capacitor may represent a second logic state, such as a logic 0. Each of the storage node devices 138 may, for example, be formed to include a first electrode (e.g., a bottom electrode), a second electrode (e.g., a top electrode), and a dielectric material between the first electrode and the second electrode. Referring next to FIG.5D, within the socket region 108, at least some of the fourth contact structures 140 may be formed to be coupled to at least some of the third contact structures 132. The fourth contact structures 140 may individually be formed of and include conductive material. By way of non-limiting example, the fourth contact structures 140 may each individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive
metal oxide). In some embodiments, each of the fourth contact structures 140 is formed of and includes W. Each of the fourth contact structures 140 may be substantially homogeneous, or one or more of the fourth contact structures 140 may individually be heterogeneous. In some embodiments, each of the fourth contact structures 140 is substantially homogeneous. In additional embodiments, each of the fourth contact structures 140 is heterogeneous. Each fourth contact structure 140 may, for example, be formed of and include a stack of at least two different conductive materials. As shown in FIG.5D, within the socket region 108, one or more groups of storage node devices 138 (e.g., capacitors) may, optionally, also be formed. If formed within the socket region 108, the storage node devices 138 may be coupled to at least some of the second routing structures 144 positioned within the socket region 108. If formed, the storage node devices 138 may be employed to enhance the performance of a microelectronic device formed through the methods of the disclosure. The storage node devices 138 may, for example, subsequently (e.g., following completion of additional processing stages of the method of forming the microelectronic device) be coupled to and employed to power additional devices (e.g., control logic devices, access devices) of the microelectronic device. In some embodiments, the storage node devices 138 are subsequently coupled to and employed to power control logic devices comprising complementary metal-oxide-semiconductor (CMOS) circuitry. As described in further detail below, the control logic devices may be components of an additional, separately-formed microelectronic device structure (e.g., a third microelectronic device structure) that is subsequently attached to the first microelectronic device structure 100 to facilitate the formation of a microelectronic device of the disclosure. The storage node devices 138 formed within the socket region 108 may be coupled (e.g., by way of one or more of the second routing structures 144, one or more of the fourth contact structures 140, one or more of the third contact structures 132, one or more additional routing structures, and one or more additional contact structures) to BEOL structures to subsequently be formed, as also described in further detail below. Referring collectively to FIGS.5A through 5D, the second routing structures 144 of the second routing tier 142 may be employed to facilitate electrical communication between additional features (e.g., structures, materials, devices) coupled thereto. In some embodiments, one or more of the second routing structures 144 are formed to horizontally extend between and couple at least some of the storage node devices 138 (and, hence, the
memory cells 146) (FIG.5A) within the array region 102 (FIG.5A) to one or more of the fourth contact structures 140 (FIG.5D) within the socket region 108 (FIG.5D). In additional embodiments, one or more of the second routing structures 144 are formed to horizontally extend between and couple at least some of the storage node devices 138 (FIG. 5D) within the socket region 108 (FIG.5D) to one or more of the fourth contact structures 140 (FIG.5D) within the socket region 108 (FIG.5D). The second routing structures 144 may each be formed of and include conductive material. By way of non-limiting example, the second routing structures 144 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, each of the second routing structures 144 of the second routing tier 142 is formed of and includes W. With continued reference to FIGS.5A through 5D, a third isolation material 148 may be formed on or over portions of at least the second isolation material 130, the first routing structures 136, the storage node devices 138 (FIGS.5A and 5D), the fourth contact structures 140 (FIG.5D), and the second routing structures 144. The third isolation material 148 may be formed of and include at least one insulative material. A material composition of the third isolation material 148 may be substantially the same as a material composition of the second isolation material 130, or the material composition of the third isolation material 148 may be different than the material composition of the second isolation material 130. In some embodiments, the third isolation material 148 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The third isolation material 148 may be substantially homogeneous, or the third isolation material 148 may be heterogeneous. In some embodiments, the third isolation material 148 is substantially homogeneous. In additional embodiments, the third isolation material 148 is heterogeneous. The third isolation material 148 may, for example, be formed of and include a stack of at least two different dielectric materials. As shown in FIGS.5A through 5D, an upper surface of the third isolation material 148 may be formed to be substantially planar and to vertically overlie upper surfaces of the second routing structures 144. Referring next to FIGS.6A through 6D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.6A), the digit line exit region 104 (FIG.6B), the word line exit region 106 (FIG.6C), and the socket region 108 (FIG.6D) at a processing stage of the
method of forming the microelectronic device following the processing stage previously described with reference to FIGS.5A through 5D. As collectively depicted in FIGS.6A through 6D, a second microelectronic device structure 150 (e.g., a second wafer) including a base structure 152 and a fourth isolation material 154 may be attached to the third isolation material 148 to form a first microelectronic device structure assembly 156. The first microelectronic device structure assembly 156 may then be vertically inverted (e.g., flipped upside down in the Z-direction), and an upper portion of the first base semiconductor structure 110 (FIGS.5A through 5D) may be removed to expose (e.g., uncover) the first isolation material 114 within the filled trenches 112 (FIGS.5A through 5D) and form a first semiconductor tier 158 including first semiconductor structures 160 separated from one another by remaining portions of the first isolation material 114. Thereafter, sacrificial structures 162 (e.g., sacrificial pad structures) may be formed to physically contact at least some of the third contact structures 132, and a fifth isolation material 164 may be formed on or over surfaces of the sacrificial structures 162, the first semiconductor structures 160, and the first isolation material 114. The base structure 152 of the second microelectronic device structure 150 comprises a base material or construction upon which additional features (e.g., materials, structures, devices) of the second microelectronic device structure 150 are formed. In some embodiments, the base structure 152 comprises a wafer. The base structure 152 may be formed of and include one or more of semiconductor material (e.g., one or more of a silicon material, such as monocrystalline silicon or polycrystalline silicon (also referred to herein as “polysilicon”); silicon-germanium; germanium; gallium arsenide; gallium nitride; gallium phosphide; indium phosphide; indium gallium nitride; and aluminum gallium nitride), a base semiconductor material on a supporting structure, glass material (e.g., one or more of borosilicate glass (BSG), phosphosilicate glass (PSG), fluorosilicate glass (FSG), borophosphosilicate glass (BPSG), aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass), and ceramic material (e.g., one or more of poly-aluminum nitride (p-AlN), silicon on poly-aluminum nitride (SOPAN), aluminum nitride (AlN), aluminum oxide (e.g., sapphire; α-Al2O3), and silicon carbide). By way of non-limiting example, the base structure 152 may comprise a semiconductor wafer (e.g., a silicon wafer), a glass wafer, or a ceramic wafer. The base structure 152 may include one or more layers, structures, and/or regions formed therein and/or thereon.
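For orientation, the bond-and-thin sequence just described (with bonding and sacrificial-structure details elaborated below) can be summarized as an ordered flow. The enumeration below is a non-limiting reading of the description, not a statement of required order beyond what the text itself recites, and Python is used purely as a compact listing format.

```python
# Non-limiting summary of the flow described above for forming the first
# microelectronic device structure assembly 156 (details, including bonding
# temperatures, are given in the surrounding description).
PROCESS_FLOW = (
    "vertically invert the second microelectronic device structure 150",
    "place fourth isolation material 154 against third isolation material 148",
    "anneal to form oxide-to-oxide bonds -> assembly 156",
    "vertically invert the first microelectronic device structure assembly 156",
    "thin the first base semiconductor structure 110 (e.g., CMP) to expose the "
    "first isolation material 114 within the filled trenches 112",
    "form sacrificial structures 162 on at least some third contact structures 132",
    "form fifth isolation material 164 over the resulting surfaces",
)

for number, step in enumerate(PROCESS_FLOW, start=1):
    print(f"{number}. {step}")
```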
The fourth isolation material 154 of the second microelectronic device structure 150 may be formed of and include at least one insulative material. A material composition of the fourth isolation material 154 may be substantially the same as a material composition of the third isolation material 148; or the material composition of the fourth isolation material 154 may be different than the material composition of the third isolation material 148. In some embodiments, the fourth isolation material 154 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The fourth isolation material 154 may be substantially homogeneous, or the fourth isolation material 154 may be heterogeneous. In some embodiments, the fourth isolation material 154 is substantially homogeneous. In additional embodiments, the fourth isolation material 154 is heterogeneous. The fourth isolation material 154 may, for example, be formed of and include a stack of at least two different dielectric materials. To attach the second microelectronic device structure 150 to the third isolation material 148, the second microelectronic device structure 150 may be vertically inverted (e.g., flipped upside down in the Z-direction), the fourth isolation material 154 thereof may be provided in physical contact with the third isolation material 148, and the fourth isolation material 154 and the third isolation material 148 may be exposed to annealing conditions to form bonds (e.g., oxide-to-oxide bonds) between the fourth isolation material 154 and the third isolation material 148. By way of non-limiting example, the fourth isolation material 154 and the third isolation material 148 may be exposed to a temperature greater than or equal to about 400°C (e.g., within a range of from about 400°C to about 800°C, greater than about 800°C) to form oxide-to-oxide bonds between the third isolation material 148 and the fourth isolation material 154. In some embodiments, the third isolation material 148 and the fourth isolation material 154 are exposed to at least one temperature greater than about 800°C to form oxide-to-oxide bonds between the third isolation material 148 and the fourth isolation material 154. As shown in FIGS.6A through 6D, bonding the fourth isolation material 154 to the third isolation material 148 may form a first connected isolation structure 166. In FIGS. 6A through 6D, the fourth isolation material 154 and the third isolation material 148 of the first connected isolation structure 166 are distinguished from one another by way of a dashed line. However, the fourth isolation material 154 and the third isolation material 148 may be integral and continuous with one another. Put another way, the first connected isolation structure 166 may be a substantially monolithic structure including the fourth
isolation material 154 as a first region thereof, and the third isolation material 148 as a second region thereof. For the first connected isolation structure 166, the fourth isolation material 154 thereof may be attached to the third isolation material 148 thereof without a bond line. Still collectively referring to FIGS.6A through 6D, the upper portion of the first base semiconductor structure 110 (FIGS.5A through 5D) vertically overlying the filled trenches 112 (FIGS.5A through 5D) following the vertical inversion of the first microelectronic device structure assembly 156 may be removed using at least one conventional wafer thinning process (e.g., a conventional chemical-mechanical planarization (CMP) process; a conventional etching process, such as a conventional dry etching process, or a conventional wet etching process). The first semiconductor structures 160 may be formed to exhibit a desired vertical height (e.g., in the Z-direction) through the material removal process. The material removal process may also remove portions (e.g., upper portions following the vertical inversion of the first microelectronic device structure assembly 156) of the first isolation material 114. In addition, within the digit line exit region 104 (FIG.6B), the word line exit region 106 (FIG.6C), and the socket region 108 (FIG.6D), the material removal process may partially expose the third contact structures 132. The material removal process may also remove portions (e.g., upper portions following the vertical inversion of the first microelectronic device structure assembly 156) of the third contact structures 132. Referring to FIGS.6B through 6D, the sacrificial structures 162 may be formed to have desirable geometric configurations (e.g., shapes, dimensions) and horizontal positions (e.g., in the X-direction and in the Y-direction). The geometric configurations, horizontal positions, and horizontal spacing of the sacrificial structures 162 at least partially depend on the geometric configurations, horizontal positions, and horizontal spacing of the third contact structures 132. Individual sacrificial structures 162 may be formed to at least partially horizontally overlap individual third contact structures 132. In some embodiments, each sacrificial structure 162 is formed to substantially cover an upper surface of the third contact structure 132 in physical contact therewith. Individual sacrificial structures 162 may be formed to have horizontal dimensions (e.g., in the X-direction and in the Y-direction) greater than or equal to corresponding horizontal dimensions of individual third contact structures 132 in physical contact therewith.
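A minimal sketch of the coverage rule just stated follows. It is illustrative only; the dimensions are hypothetical and the helper name is an invention of this example rather than anything from the disclosure. It simply encodes the condition that a sacrificial structure 162 has X- and Y-dimensions greater than or equal to those of the third contact structure 132 it physically contacts.

```python
# Hypothetical helper; dimensions in arbitrary units, not from the disclosure.
def sacrificial_pad_covers_contact(pad_x: float, pad_y: float,
                                   contact_x: float, contact_y: float) -> bool:
    """True when the sacrificial structure 162 is at least as large as the
    third contact structure 132 in both the X-direction and the Y-direction."""
    return pad_x >= contact_x and pad_y >= contact_y

print(sacrificial_pad_covers_contact(60.0, 60.0, 50.0, 50.0))  # True: pad covers
print(sacrificial_pad_covers_contact(45.0, 60.0, 50.0, 50.0))  # False: undersized in X
```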
The sacrificial structures 162 may be formed of and include at least one material (e.g., at least one dielectric material) that may be selectively removed relative to the fifth isolation material 164, the first isolation material 114, and the third contact structures 132. For example, the sacrificial structures 162 may be selectively etchable relative to the fifth isolation material 164 during common (e.g., collective, mutual) exposure to a first etchant, and the fifth isolation material 164 may be selectively etchable relative to the sacrificial structures 162 during common exposure to a second, different etchant. As used herein, a material is “selectively etchable” relative to another material if the material exhibits an etch rate that is at least about five times (5x) greater than the etch rate of another material, such as about ten times (10x) greater, about twenty times (20x) greater, or about forty times (40x) greater. A material composition of the sacrificial structures 162 is different than the material compositions of the fifth isolation material 164, the first isolation material 114, and the third contact structures 132. As a non-limiting example, the sacrificial structures 162 may comprise at least one insulative material having a different material composition than insulative material(s) of the fifth isolation material 164 and the first isolation material 114. In some embodiments, the sacrificial structures 162 are formed of and include one or more of at least one dielectric nitride material (e.g., SiNy, such as Si3N4), and at least one dielectric oxynitride material (e.g., SiOxNy). The sacrificial structures 162 may individually be substantially homogeneous, or the sacrificial structures 162 may individually be heterogeneous. Referring collectively to FIGS.6A through 6D, the fifth isolation material 164 formed to cover surfaces of the first semiconductor structures 160 (FIG.6A), the sacrificial structures 162 (FIGS.6B through 6D), and the first isolation material 114 (FIGS.6A through 6D) may be formed of and include at least one insulative material. A material composition of the fifth isolation material 164 may be substantially the same as a material composition of the first isolation material 114, or the material composition of the fifth isolation material 164 may be different than the material composition of the first isolation material 114. In some embodiments, the fifth isolation material 164 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The fifth isolation material 164 may be substantially homogeneous, or the fifth isolation material 164 may be heterogeneous. In some embodiments, the fifth isolation material 164 is substantially homogeneous. In additional embodiments, the fifth isolation material 164 is heterogeneous. The fifth isolation material 164 may, for example, be formed of and include a stack of at least
two different dielectric materials. As shown in FIGS.6A through 6D, an upper surface of the fifth isolation material 164 may be formed to be substantially planar and to vertically overlie upper surfaces of the sacrificial structures 162 (FIGS.6B through 6D). Referring next to FIGS.7A through 7D, illustrated are simplified, partial longitudinal cross-sectional views of different regions of a third microelectronic device structure 167 (e.g., a third wafer) formed separate from the first microelectronic device structure assembly 156 (FIGS.6A through 6D). The third microelectronic device structure 167 may be formed to have an arrangement of different regions (e.g., array regions, digit line exit regions, word line exit regions, socket regions) corresponding to (e.g., substantially the same as) the arrangement of different regions (e.g., the array regions 102, the digit line exit regions 104, the word line exit regions 106, the socket regions 108) previously described with reference to FIGS.1 through 6D. FIG.7A illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the Y-direction (so as to depict an XZ-plane) of an array region 102' of the third microelectronic device structure 167. FIG.7B illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the Y-direction (so as to depict an XZ-plane) of a digit line exit region 104' of the third microelectronic device structure 167. FIG.7C illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the X-direction (so as to depict a YZ-plane) of a word line exit region 106' of the third microelectronic device structure 167. FIG.7D illustrates a simplified, partial longitudinal cross-sectional view from the perspective of the X-direction (so as to depict a YZ-plane) of a socket region 108' of the third microelectronic device structure 167. As shown in FIGS.7A through 7D, the third microelectronic device structure 167 may be formed to include a second base semiconductor structure 168, additional filled trenches 170, transistors 172 (FIGS.7A and 7D), a sixth isolation material 174, fourth contact structures 184 (FIGS.7A and 7D), fifth contact structures 186 (FIGS.7A and 7D), and at least one third routing tier 188 (FIGS.7A and 7D) including third routing structures 190 (FIGS.7A and 7D). The additional filled trenches 170 vertically extend (e.g., in the Z-direction) into the second base semiconductor structure 168. The transistors 172 at least partially vertically overlie the second base semiconductor structure 168 and the additional filled trenches 170. The fourth contact structures 184 and fifth contact structures 186 contact the transistors 172. Some of the third routing structures 190 contact some of the fourth contact structures 184, and some other of the third routing structures 190 contact
some of the fifth contact structures 186. The sixth isolation material 174 may substantially cover and surround the second base semiconductor structure 168, the transistors 172, the fourth contact structures 184, the fifth contact structures 186, and the third routing structures 190. The second base semiconductor structure 168 comprises a base material or construction upon which additional features (e.g., materials, structures, devices) of the third microelectronic device structure 167 are formed. The second base semiconductor structure 168 may comprise a semiconductor structure (e.g., a semiconductor wafer), or a base semiconductor material on a supporting structure. For example, the second base semiconductor structure 168 may comprise a conventional silicon substrate (e.g., a conventional silicon wafer), or another bulk substrate comprising a semiconductor material. In some embodiments, the second base semiconductor structure 168 comprises a silicon wafer. The second base semiconductor structure 168 may include one or more layers, structures, and/or regions formed therein and/or thereon. The additional filled trenches 170 may comprise trenches (e.g., openings, vias, apertures) within the second base semiconductor structure 168 that are at least partially (e.g., substantially) filled with the sixth isolation material 174. The additional filled trenches 170 may, for example, be employed as STI structures within the second base semiconductor structure 168. The additional filled trenches 170 may be formed to vertically extend partially (e.g., less than completely) through the second base semiconductor structure 168. Each of the additional filled trenches 170 may be formed to exhibit substantially the same dimensions and shape as each other of the additional filled trenches 170, or at least one of the additional filled trenches 170 may be formed to exhibit one or more of different dimensions and a different shape than at least one other of the additional filled trenches 170. As a non-limiting example, each of the additional filled trenches 170 may be formed to exhibit substantially the same vertical dimension(s) and substantially the same vertical cross-sectional shape(s) as each other of the additional filled trenches 170; or at least one of the additional filled trenches 170 may be formed to exhibit one or more of different vertical dimension(s) and different vertical cross-sectional shape(s) than at least one other of the additional filled trenches 170. In some embodiments, the additional filled trenches 170 are all formed to vertically extend to and terminate at substantially the same depth within the second base semiconductor structure 168. In additional embodiments, at least one of the additional filled trenches 170 is formed to
vertically extend to and terminate at a relatively deeper depth within the second base semiconductor structure 168 than at least one other of the additional filled trenches 170. As another non-limiting example, each of the additional filled trenches 170 may be formed to exhibit substantially the same horizontal dimension(s) and substantially the same horizontal cross-sectional shape(s) as each other of the additional filled trenches 170; or at least one of the additional filled trenches 170 may be formed to exhibit one or more of different horizontal dimension(s) (e.g., relatively larger horizontal dimension(s), relatively smaller horizontal dimension(s)) and different horizontal cross-sectional shape(s) than at least one other of the additional filled trenches 170. In some embodiments, at least one of the additional filled trenches 170 is formed to have one or more different horizontal dimensions (e.g., in the X-direction and/or in the Y-direction) than at least one other of the additional filled trenches 170. Referring collectively to FIGS.7A and 7D, the transistors 172 may individually be formed to include conductively doped regions 176, a channel region 178, a gate structure 180, and a gate dielectric material 182. For an individual transistor 172, the conductively doped regions 176 may be formed within portions (e.g., relatively elevated portions) of the second base semiconductor structure 168 horizontally neighboring at least one of the additional filled trenches 170; the channel region 178 may be within the second base semiconductor structure 168 and may be horizontally interposed between the conductively doped regions 176 thereof; the gate structure 180 may vertically overlie the channel region 178; and the gate dielectric material 182 (e.g., a dielectric oxide) may be vertically interposed (e.g., in the Z-direction) between the gate structure 180 and the channel region 178. The conductively doped regions 176 of an individual transistor 172 may include a source region 176A and a drain region 176B. Referring collectively to FIGS.7A and 7D, for an individual transistor 172, the conductively doped regions 176 thereof may comprise semiconductor material of the second base semiconductor structure 168 doped with one or more desired conductivity-enhancing dopants. In some embodiments, the conductively doped regions 176 of the transistor 172 comprise semiconductor material (e.g., silicon) doped with at least one N-type dopant (e.g., one or more of phosphorus, arsenic, antimony, and bismuth). In some of such embodiments, the channel region 178 of the transistor 172 comprises the semiconductor
material doped with at least one P-type dopant (e.g., one or more of boron, aluminum, and gallium). In some other of such embodiments, the channel region 178 of the transistor 172 comprises substantially undoped semiconductor material (e.g., substantially undoped silicon). In additional embodiments, for an individual transistor 172, the conductively doped regions 176 thereof comprise semiconductor material (e.g., silicon) doped with at least one P-type dopant (e.g., one or more of boron, aluminum, and gallium). In some of such additional embodiments, the channel region 178 of the transistor 172 comprises the semiconductor material doped with at least one N-type dopant (e.g., one or more of phosphorus, arsenic, antimony, and bismuth). In some other of such additional embodiments, the channel region 178 of the transistor 172 comprises substantially undoped semiconductor material (e.g., substantially undoped silicon). Still referring collectively to FIGS.7A and 7D, the gate structures 180 (e.g., gate electrodes) may individually horizontally extend (e.g., in the X-direction) between and be employed by multiple transistors 172. The gate structures 180 may be formed of and include conductive material. The gate structures 180 may individually be substantially homogeneous, or the gate structures 180 may individually be heterogeneous. In some embodiments, the gate structures 180 are each substantially homogeneous. In additional embodiments, the gate structures 180 are each heterogeneous. Individual gate structures 180 may, for example, be formed of and include a stack of at least two different conductive materials. Still referring to FIGS.7A and 7D, the fourth contact structures 184 may individually be formed to vertically extend between and couple the gate structures 180 (and, hence, the transistors 172) to one or more of the third routing structures 190 of the third routing tier 188. The fourth contact structures 184 may individually be formed of and include conductive material. By way of non-limiting example, the fourth contact structures 184 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the fourth contact structures 184 are formed of and include W. In additional embodiments, the fourth contact structures 184 are formed of and include Cu. As also shown in FIGS.7A and 7D, the fifth contact structures 186 may be formed to vertically extend between and couple the conductively doped regions 176 (e.g., the source region 176A, the drain region 176B) of the transistors 172 to some of the third
routing structures 190 of the third routing tier 188. The fifth contact structures 186 may individually be formed of and include conductive material. By way of non-limiting example, the fifth contact structures 186 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). A material composition of the fifth contact structures 186 may be substantially the same as a material composition of the fourth contact structures 184, or the material composition of one or more of the fifth contact structures 186 may be different than the material composition of one or more of the fourth contact structures 184. In some embodiments, the fifth contact structures 186 are formed of and include W. In additional embodiments, the fifth contact structures 186 are formed of and include Cu. Referring collectively to FIGS.7A through 7D, the third routing structures 190 of the third routing tier 188 may be formed of and include conductive material. By way of non-limiting example, the third routing structures 190 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the third routing structures 190 are formed of and include W. In additional embodiments, the third routing structures 190 are formed of and include Cu. At least some of the third routing structures 190 may be employed as local routing structures of a microelectronic device (e.g., a memory device, such as a DRAM device). While FIGS.7A through 7D show the formation of a single (e.g., only one) third routing tier 188 including third routing structures 190, multiple (e.g., more than one) third routing tiers 188 each individually including a desired arrangement (e.g., pattern) of third routing structures 190 may be formed. By way of non-limiting example, two or more (e.g., three or more) of the third routing tiers 188 may be formed, wherein different third routing tiers 188 are vertically offset from one another and each individually include a desired arrangement of third routing structures 190 therein. At least some of the third routing structures 190 within at least one of the third routing tiers 188 may be coupled to at least some of the third routing structures 190 within at least one other of the third routing tiers 188 by way of conductive interconnect structures. With continued collective reference to FIGS.7A through 7D, the transistors 172, the third routing structures 190, the fourth contact structures 184, and the fifth contact structures 186
may form control logic circuitry of various control logic devices 191 (FIG.7A) configured to control various operations of various features (e.g., the memory cells 146 (FIG.6A)) of a microelectronic device (e.g., a memory device, such as a DRAM device) to be formed through the methods of the disclosure. In some embodiments, the control logic devices 191 comprise CMOS circuitry. As a non-limiting example, the control logic devices 191 may include one or more (e.g., each) of charge pumps (e.g., VCCP charge pumps, VNEGWL charge pumps, DVC2 charge pumps), delay-locked loop (DLL) circuitry (e.g., ring oscillators), Vdd regulators, drivers (e.g., main word line drivers, sub word line drivers (SWD)), page buffers, decoders (e.g., local deck decoders, column decoders, row decoders), sense amplifiers (e.g., equalization (EQ) amplifiers, isolation (ISO) amplifiers, NMOS sense amplifiers (NSAs), PMOS sense amplifiers (PSAs)), repair circuitry (e.g., column repair circuitry, row repair circuitry), I/O devices (e.g., local I/O devices), memory test devices, array multiplexers (MUX), error checking and correction (ECC) devices, self-refresh/wear leveling devices, and other chip/deck control circuitry. Different regions (e.g., the array region 102' (FIG.7A), the socket region 108' (FIG.7D)) may have different control logic devices 191 formed within horizontal boundaries thereof. With returned collective reference to FIGS.7A through 7D, the sixth isolation material 174 covering and surrounding the second base semiconductor structure 168, the transistors 172 (FIGS.7A and 7D), the gate structures 180 (FIGS.7A and 7D), the fourth contact structures 184 (FIGS.7A and 7D), the fifth contact structures 186 (FIGS.7A and 7D), and the third routing structures 190 (FIGS.7A and 7D) may be formed of and include at least one insulative material. A material composition of the sixth isolation material 174 may be substantially the same as a material composition of the fifth isolation material 164 (FIGS.6A through 6D) of the first microelectronic device structure assembly 156 (FIGS.6A through 6D), or the material composition of the sixth isolation material 174 may be different than the material composition of the fifth isolation material 164 (FIGS.6A through 6D). In some embodiments, the sixth isolation material 174 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The sixth isolation material 174 may be substantially homogeneous, or the sixth isolation material 174 may be heterogeneous. In some embodiments, the sixth isolation material 174 is substantially homogeneous. In additional embodiments, the sixth isolation material 174 is heterogeneous. The sixth isolation material 174 may, for example, be formed of and include a stack of at least two different dielectric materials. As shown in FIGS.7A through 7D, an upper surface
of the sixth isolation material 174 may be formed to be substantially planar and to vertically overlie upper surfaces of the third routing structures 190 (FIGS.7A and 7D). Referring next to FIGS.8A through 8D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102' (FIG.8A), the digit line exit region 104' (FIG.8B), the word line exit region 106' (FIG.8C), and the socket region 108' (FIG.8D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.7A through 7D. As collectively depicted in FIGS.8A through 8D, a fourth microelectronic device structure 192 (e.g., a fourth wafer) including an additional base structure 194 and a seventh isolation material 196 may be attached to the sixth isolation material 174 of the third microelectronic device structure 167 to form a second microelectronic device structure assembly 198. The additional base structure 194 of the fourth microelectronic device structure 192 comprises a base material or construction upon which additional features (e.g., materials, structures, devices) of the fourth microelectronic device structure 192 are formed. In some embodiments, the additional base structure 194 comprises a wafer. The additional base structure 194 may be formed of and include one or more of semiconductor material (e.g., one or more of a silicon material, such as monocrystalline silicon or polycrystalline silicon (also referred to herein as “polysilicon”); silicon-germanium; germanium; gallium arsenide; gallium nitride; gallium phosphide; indium phosphide; indium gallium nitride; and aluminum gallium nitride), a base semiconductor material on a supporting structure, glass material (e.g., one or more of BSG, PSG, FSG, BPSG, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass), and ceramic material (e.g., one or more of p-AlN, SOPAN, AlN, aluminum oxide (e.g., sapphire; α-Al2O3), and silicon carbide). By way of non-limiting example, the additional base structure 194 may comprise a semiconductor wafer (e.g., a silicon wafer), a glass wafer, or a ceramic wafer. The additional base structure 194 may include one or more layers, structures, and/or regions formed therein and/or thereon. The seventh isolation material 196 of the fourth microelectronic device structure 192 may be formed of and include at least one insulative material. A material composition of the seventh isolation material 196 may be substantially the same as a material composition of the sixth isolation material 174 of the third microelectronic device structure 167; or the material composition of the seventh isolation material 196 may be different than
the material composition of the sixth isolation material 174. In some embodiments, the seventh isolation material 196 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The seventh isolation material 196 may be substantially homogeneous, or the seventh isolation material 196 may be heterogeneous. In some embodiments, the seventh isolation material 196 is substantially homogeneous. In additional embodiments, the seventh isolation material 196 is heterogeneous. The seventh isolation material 196 may, for example, be formed of and include a stack of at least two different dielectric materials. To attach the fourth microelectronic device structure 192 to the sixth isolation material 174 of the third microelectronic device structure 167, the fourth microelectronic device structure 192 may be vertically inverted (e.g., flipped upside down in the Z-direction), the seventh isolation material 196 thereof may be provided in physical contact with the sixth isolation material 174, and the seventh isolation material 196 and the sixth isolation material 174 may be exposed to annealing conditions to form bonds (e.g., oxide-to-oxide bonds) between the seventh isolation material 196 and the sixth isolation material 174. By way of non-limiting example, the seventh isolation material 196 and the sixth isolation material 174 may be exposed to a temperature greater than or equal to about 400°C (e.g., within a range of from about 400°C to about 800°C, greater than about 800°C) to form oxide-to-oxide bonds between the sixth isolation material 174 and the seventh isolation material 196. In some embodiments, the sixth isolation material 174 and the seventh isolation material 196 are exposed to at least one temperature greater than about 800°C to form oxide-to-oxide bonds between the sixth isolation material 174 and the seventh isolation material 196. As shown in FIGS.8A through 8D, bonding the seventh isolation material 196 to the sixth isolation material 174 may form a second connected isolation structure 200. In FIGS.8A through 8D, the seventh isolation material 196 and the sixth isolation material 174 of the second connected isolation structure 200 are distinguished from one another by way of a dashed line. However, the seventh isolation material 196 and the sixth isolation material 174 may be integral and continuous with one another. Put another way, the second connected isolation structure 200 may be a substantially monolithic structure including the seventh isolation material 196 as a first region thereof, and the sixth isolation material 174 as a second region thereof. For the second connected isolation structure 200, the seventh isolation material 196 thereof may be attached to the sixth isolation material 174 thereof without a bond line.
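At a process-control level, the bonding step just described reduces to a simple temperature-window test: oxide-to-oxide bonds form at anneal temperatures of at least about 400°C, and some embodiments anneal above about 800°C. The following Python sketch is offered purely as an illustration and is not part of the disclosure; it merely encodes those disclosed thresholds, and every function and field name in it is hypothetical.

```python
# Illustrative only: encodes the anneal thresholds disclosed above for
# oxide-to-oxide bonding. All names are hypothetical; the values come
# from the description (>= ~400 C to bond; > ~800 C in some embodiments).
from dataclasses import dataclass

MIN_BOND_ANNEAL_C = 400.0   # lower bound disclosed for bond formation
HIGH_TEMP_BOND_C = 800.0    # some embodiments anneal above this value

@dataclass
class AnnealStep:
    temperature_c: float

def forms_oxide_bond(step: AnnealStep) -> bool:
    """True if the anneal falls within the disclosed bonding window."""
    return step.temperature_c >= MIN_BOND_ANNEAL_C

def is_high_temperature_embodiment(step: AnnealStep) -> bool:
    """True for the greater-than-about-800 C embodiments."""
    return step.temperature_c > HIGH_TEMP_BOND_C

# Example: a 650 C anneal bonds, but is not a high-temperature embodiment.
step = AnnealStep(temperature_c=650.0)
assert forms_oxide_bond(step) and not is_high_temperature_embodiment(step)
```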
Referring next to FIGS.9A through 9D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102' (FIG.9A), the digit line exit region 104' (FIG.9B), the word line exit region 106' (FIG.9C), and the socket region 108' (FIG.9D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.8A through 8D. As collectively depicted in FIGS.9A through 9D, the second microelectronic device structure assembly 198 may be vertically inverted (e.g., flipped upside down in the Z-direction), and an upper portion of the second base semiconductor structure 168 (FIGS.8A through 8D) may be removed to expose (e.g., uncover) the sixth isolation material 174 within the additional filled trenches 170 (FIGS. 8A through 8D) and form a second semiconductor tier 202 (FIGS.9A and 9D) including second semiconductor structures 204 separated from one another by remaining portions of the sixth isolation material 174. Thereafter, an eighth isolation material 206 may be formed on or over surfaces of the second semiconductor structures 204 and the sixth isolation material 174. The upper portion of the second base semiconductor structure 168 (FIGS.8A through 8D) vertically overlying the additional filled trenches 170 (FIGS.8A through 8D) following the vertical inversion of the second microelectronic device structure assembly 198 may be removed using at least one conventional wafer thinning process (e.g., a conventional CMP process; a conventional etching process, such as a conventional dry etching process, or a conventional wet etching process). The second semiconductor structures 204 may be formed to exhibit a desired vertical height (e.g., in the Z-direction) through the material removal process. The material removal process may also remove portions (e.g., upper portions following the vertical inversion of the second microelectronic device structure assembly 198) of the sixth isolation material 174. Referring collectively to FIGS.9A through 9D, the eighth isolation material 206 formed to cover the second semiconductor structures 204 (FIGS.9A and 9D) and the sixth isolation material 174 may be formed of and include at least one insulative material. A material composition of the eighth isolation material 206 may be substantially the same as a material composition of the sixth isolation material 174, or the material composition of the eighth isolation material 206 may be different than the material composition of the sixth isolation material 174. In some embodiments, the eighth isolation material 206 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The eighth isolation
material 206 may be substantially homogeneous, or the eighth isolation material 206 may be heterogeneous. In some embodiments, the eighth isolation material 206 is substantially homogeneous. In additional embodiments, the eighth isolation material 206 is heterogeneous. The eighth isolation material 206 may, for example, be formed of and include a stack of at least two different dielectric materials. As shown in FIGS.9A through 9D, an upper surface of the eighth isolation material 206 may be formed to be substantially planar. Referring next to FIGS.10A through 10D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.10A), the digit line exit region 104 (FIG.10B), the word line exit region 106 (FIG.10C), and the socket region 108 (FIG.10D) previously described with reference to FIGS.6A through 6D at a processing stage of the method of forming the microelectronic device following the processing stages previously described with reference to FIGS.6A through 6D and FIGS.9A through 9D. As depicted in FIGS.10A through 10D, following the processing stage previously described with reference to FIGS.9A through 9D, the second microelectronic device structure assembly 198 may be vertically inverted (e.g., flipped upside down in the Z-direction) and the eighth isolation material 206 thereof may be attached (e.g., bonded, such as through oxide-oxide bonding) to the fifth isolation material 164 of the first microelectronic device structure assembly 156 to form a third microelectronic device structure assembly 208. Attaching (e.g., bonding) the eighth isolation material 206 of the second microelectronic device structure assembly 198 to the fifth isolation material 164 of the first microelectronic device structure assembly 156 may form a third connected isolation structure 210 of the third microelectronic device structure assembly 208. Following the attachment of the eighth isolation material 206 to the fifth isolation material 164, the additional base structure 194 (FIGS.9A through 9D) of the second microelectronic device structure assembly 198 may be removed. As depicted in FIGS.10A through 10D, the second microelectronic device structure assembly 198 may be attached to the first microelectronic device structure assembly 156 such that array regions 102' (FIG.9A), digit line exit regions 104' (FIG.9B), word line exit regions 106' (FIG.9C), and socket regions 108' (FIG.9D) of the second microelectronic device structure assembly 198 horizontally overlap (e.g., are substantially horizontally aligned with) array regions 102 (FIG.6A), digit line exit regions 104 (FIG.6B), word line exit regions 106 (FIG.6C), and socket regions 108 (FIG.6D) of the first microelectronic
device structure assembly 156, respectively. Thus, in FIGS.10A through 10D, the array region 102 (FIG.10A), the digit line exit region 104 (FIG.10B), the word line exit region 106 (FIG.10C), and the socket region 108 (FIG.10D) respectively include features of the array region 102' (FIG.9A), the digit line exit region 104' (FIG.9B), the word line exit region 106' (FIG.9C), and the socket region 108' (FIG.9D) of the second microelectronic device structure assembly 198 following the processing stage previously described with reference to FIGS.9A through 9D. While the different regions shown in FIGS.10A through 10D were previously described as different regions of the first microelectronic device structure 100 (FIGS.1 and 2A through 2D) and of the first microelectronic device structure assembly 156 (FIGS.6A through 6D) formed by processing the first microelectronic device structure 100 according to the methods of the disclosure, it will be understood that these regions become regions of a microelectronic device of the disclosure formed using the first microelectronic device structure assembly 156 and the second microelectronic device structure assembly 198, as described in further detail below. Thus, these different regions are not limited to the features (e.g., structures, materials, devices) and/or portions of features of the first microelectronic device structure 100 and the first microelectronic device structure assembly 156. Instead, these regions evolve through the methods of the disclosure to encompass and include additional features (e.g., additional structures, additional materials, additional devices), portions of additional features, and/or modified features. To form the third microelectronic device structure assembly 208, the eighth isolation material 206 of the second microelectronic device structure assembly 198 may be provided in physical contact with the fifth isolation material 164 of the first microelectronic device structure assembly 156, and then the eighth isolation material 206 and the fifth isolation material 164 may be exposed to annealing conditions to form bonds (e.g., oxide-to-oxide bonds) between the eighth isolation material 206 and the fifth isolation material 164. By way of non-limiting example, the eighth isolation material 206 and the fifth isolation material 164 may be exposed to a temperature greater than or equal to about 400°C (e.g., within a range of from about 400°C to about 800°C, greater than about 800°C) to form oxide-to-oxide bonds between the eighth isolation material 206 and the fifth isolation material 164. In some embodiments, the eighth isolation material 206 and the fifth isolation material 164 are exposed to at least one temperature greater than about 800°C to form oxide-to-oxide bonds between the eighth isolation material 206 and the fifth isolation material 164.
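As described above, attaching the second microelectronic device structure assembly 198 to the first microelectronic device structure assembly 156 horizontally aligns regions of like kind (array over array, digit line exit over digit line exit, and so on). The short sketch below, offered only as an illustration and not as part of the disclosure, models that correspondence as a plan-view grid check; the grid coordinates, labels, and function name are hypothetical.

```python
# Illustrative only: models the region correspondence described above,
# in which each region of the upper (second) assembly horizontally
# overlaps a region of the same kind in the lower (first) assembly.
from typing import Dict, Tuple

Grid = Dict[Tuple[int, int], str]  # (x, y) plan-view position -> region kind

first_assembly: Grid = {
    (0, 0): "array", (1, 0): "digit_line_exit",
    (0, 1): "word_line_exit", (1, 1): "socket",
}
second_assembly: Grid = {
    (0, 0): "array", (1, 0): "digit_line_exit",
    (0, 1): "word_line_exit", (1, 1): "socket",
}

def regions_aligned(lower: Grid, upper: Grid) -> bool:
    """Check that every region of the upper assembly overlies a region
    of the same kind in the lower assembly."""
    return all(lower.get(pos) == kind for pos, kind in upper.items())

assert regions_aligned(first_assembly, second_assembly)
```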
In FIGS.10A through 10D, the eighth isolation material 206 and the fifth isolation material 164 of the third connected isolation structure 210 are distinguished from one another by way of a dashed line. However, the eighth isolation material 206 and the fifth isolation material 164 may be integral and continuous with one another. Put another way, the third connected isolation structure 210 may be a substantially monolithic structure including the eighth isolation material 206 as a first region thereof, and the fifth isolation material 164 as a second region thereof. For the third connected isolation structure 210, the eighth isolation material 206 thereof may be attached to the fifth isolation material 164 thereof without a bond line. In additional embodiments, the third microelectronic device structure 167 previously described with reference to FIGS.7A through 7D is attached to the first microelectronic device structure assembly 156 without forming the second microelectronic device structure assembly 198 through the processing acts previously described with reference to FIGS.8A through 9D. For example, the third microelectronic device structure 167 (FIGS.7A through 7D) may be vertically inverted, the sixth isolation material 174 thereof may be provided on the fifth isolation material 164 of the first microelectronic device structure assembly 156, and then the sixth isolation material 174 and the fifth isolation material 164 may be subjected to annealing conditions (e.g., a temperature greater than or equal to about 400°C, such as within a range of from about 400°C to about 800°C, or greater than about 800°C) to form bonds (e.g., oxide-to-oxide bonds) between the sixth isolation material 174 and the fifth isolation material 164. Thereafter, an upper portion of the second base semiconductor structure 168 (FIGS.7A through 7D) vertically overlying the additional filled trenches 170 (FIGS.7A through 7D) may be removed in a manner similar to that previously described with reference to FIGS.9A through 9D to form the second semiconductor tier 202 (FIGS.9A and 9D) including the second semiconductor structures 204 (FIGS.9A and 9D). In such embodiments, the orientations of the control logic devices 191 (FIGS.10A and 10D) (including the control logic circuitry thereof, such as the transistors 172, the third routing structures 190, the fourth contact structures 184, and the fifth contact structures 186) are vertically inverted relative to the orientations depicted in FIGS.10A and 10D. For example, in such embodiments, the gate structures 180 of the transistors 172 are positioned relatively more vertically proximate the access devices 116 than in the configuration described above with reference to FIGS.10A through 10D. Attaching and acting upon the third microelectronic device structure 167 (FIGS.7A
through 7D) in this manner (e.g., as an alternative to the processing acts and stages previously described with reference to FIGS.8A through 10D) may, for example, be employed when a different routing and interconnect scheme for a microelectronic device of the disclosure is desirable. Referring next to FIGS.11A through 11D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.11A), the digit line exit region 104 (FIG.11B), the word line exit region 106 (FIG.11C), and the socket region 108 (FIG.11D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.10A through 10D. As collectively depicted in FIGS.11A through 11D, contact openings 212 (FIGS.11B through 11D) may be formed to vertically extend to and expose (e.g., uncover) portions of the sacrificial structures 162 (FIGS.11B through 11D). As shown in FIGS.11B through 11D, the contact openings 212 may be formed to vertically extend through portions of the second connected isolation structure 200, the sixth isolation material 174, and the third connected isolation structure 210 vertically overlying the sacrificial structures 162. The contact openings 212 may be formed to terminate at or below uppermost vertical boundaries (e.g., uppermost surfaces) of the sacrificial structures 162. Lowermost vertical boundaries of the contact openings 212 may vertically overlie lowermost vertical boundaries (e.g., lowermost surfaces) of the sacrificial structures 162. As described in further detail below, the contact openings 212 may be employed to remove (e.g., exhume) the sacrificial structures 162 and facilitate the formation of additional contact structures (e.g., sixth contact structures) in contact (e.g., physical contact, electrical contact) with the third contact structures 132 (FIGS.11B through 11D). The formation of the contact openings 212, as well as the subsequent formation of the additional contact structures using the sacrificial structures 162 and the contact openings 212, may reduce contact misalignment risks and/or alleviate the need for relatively complex contact alignment operations and systems as compared to conventional methods of forming contact structures in contact with other contact structures. Referring collectively to FIGS.11B through 11D, a geometric configuration (e.g., shape, dimensions), horizontal position (e.g., in the X-direction and in the Y-direction), and horizontal spacing of each of the contact openings 212 at least partially depends on the geometric configurations, horizontal positions, and horizontal spacing of the sacrificial
structures 162. The contact openings 212 may be formed to at least partially (e.g., substantially) horizontally overlap (e.g., in the X-direction and in the Y-direction) the sacrificial structures 162. In addition, the contact openings 212 may each be formed to have horizontal dimensions (e.g., in the X-direction and in the Y-direction) less than or equal to (e.g., less than) corresponding horizontal dimensions of the sacrificial structure 162 exposed thereby. In some embodiments, the contact openings 212 are formed to exhibit substantially the same geometric configurations (e.g., substantially the same shapes and substantially the same dimensions) as one another. For example, each of the contact openings 212 may be formed to exhibit a substantially circular horizontal cross-sectional shape, and may have substantially the same horizontal dimensions (e.g., diameter) as each other of the contact openings 212. In additional embodiments, one or more of the contact openings 212 is formed to exhibit a different geometric configuration (e.g., a different shape, such as a non-circular horizontal cross-sectional shape; and/or one or more different horizontal dimensions) than one or more other of the contact openings 212. The contact openings 212 may be formed by subjecting portions of the second connected isolation structure 200, the sixth isolation material 174, and the third connected isolation structure 210 vertically overlying the sacrificial structures 162 to one or more conventional material removal processes (e.g., one or more conventional etching processes, such as one or more conventional anisotropic dry etching processes), which are not described in detail herein. Referring next to FIGS.12A through 12D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.12A), the digit line exit region 104 (FIG.12B), the word line exit region 106 (FIG.12C), and the socket region 108 (FIG.12D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.11A through 11D. As collectively depicted in FIGS.12A through 12D, the sacrificial structures 162 (FIGS.12B through 12D) may be selectively removed (e.g., selectively etched and exhumed) through the contact openings 212 to form void spaces 214 (e.g., open volumes). The void spaces 214 may have geometric configurations (e.g., shapes, dimensions) and positions corresponding to (e.g., substantially the same as) the geometric configurations and positions of the sacrificial structures 162 (FIGS.12B through 12D). In addition, the void spaces 214 may expose (e.g., uncover)
portions of the third contact structures 132 (FIGS.11B through 11D) (e.g., upper surfaces of the third contact structures 132) previously covered by the sacrificial structures 162 (FIGS.12B through 12D). To form the void spaces 214, the third microelectronic device structure assembly 208 may be exposed to at least one chemical species (e.g., at least one etchant) that selectively removes (e.g., selectively etches) the sacrificial structures 162 (FIGS.12B through 12D) relative to the second connected isolation structure 200, the sixth isolation material 174, the third connected isolation structure 210, the third contact structures 132, and the first isolation material 114. The chemical species may, for example, etch the sacrificial structures 162 (FIGS.12B through 12D) at a rate that is at least about five times (5x) greater (e.g., at least about ten times (10x) greater, at least about twenty times (20x) greater, at least about forty times (40x) greater) than rate(s) at which the chemical species etches the second connected isolation structure 200, the sixth isolation material 174, the third connected isolation structure 210, the third contact structures 132, and the first isolation material 114. By way of non-limiting example, if the second connected isolation structure 200, the sixth isolation material 174, the third connected isolation structure 210, and the first isolation material 114 are formed of and include a dielectric oxide material (e.g., SiOx, such as SiO2) and the sacrificial structures 162 (FIGS.12B through 12D) are formed of and include a dielectric nitride material (e.g., SiNy, such as Si3N4), the third microelectronic device structure assembly 208 may be treated with phosphoric acid (H3PO4) to selectively remove the sacrificial structures 162 (FIGS.12B through 12D) through the contact openings 212 and form the void spaces 214. Referring next to FIGS.13A through 13D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.13A), the digit line exit region 104 (FIG.13B), the word line exit region 106 (FIG.13C), and the socket region 108 (FIG.13D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.12A through 12D. As collectively depicted in FIGS.13A through 13D, sixth contact structures 216 (FIGS.13B through 13D) may be formed within the contact openings 212 (FIGS.12B through 12D) and the void spaces 214 (FIGS.12B through 12D). The sixth contact structures 216 may substantially fill the contact openings 212 (FIGS.12B through 12D) and the void spaces 214 (FIGS.12B through 12D).
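The selective removal described above hinges on an etch-rate ratio: the etchant must remove the sacrificial material at least about five times (and in some embodiments ten, twenty, or forty times) faster than it removes the surrounding materials that must survive. The following sketch is illustrative only; the numeric etch rates are invented for the example and are not taken from the disclosure.

```python
# Illustrative only: the worst-case selectivity criterion described
# above. Etch-rate values below are hypothetical placeholders.
def selectivity(sacrificial_rate: float, surrounding_rates: list[float]) -> float:
    """Worst-case selectivity: sacrificial etch rate divided by the
    rate of the fastest-etching material that must survive."""
    return sacrificial_rate / max(surrounding_rates)

# Hypothetical hot-H3PO4 rates (nm/min): fast on the Si3N4 sacrificial
# structures, slow on the oxide isolation and the conductive contacts.
si3n4_rate = 50.0
survivor_rates = [0.8, 0.5, 0.2]  # oxide isolation, more oxide, contacts

assert selectivity(si3n4_rate, survivor_rates) >= 5.0  # disclosed minimum
```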
Individual sixth contact structures 216 may be formed to contact (e.g., physically contact, electrically contact) individual third contact structures 132. Referring collectively to FIGS.13B through 13D, each of the sixth contact structures 216 may be formed to include a first region 216A (e.g., an upper region) and a second region 216B (e.g., a lower region). For an individual sixth contact structure 216, the first region 216A thereof may be positioned within boundaries (e.g., vertical boundaries, horizontal boundaries) of the contact opening 212 (FIGS.12B through 12D) in which the sixth contact structure 216 is formed, and the second region 216B thereof may be positioned within boundaries (e.g., vertical boundaries, horizontal boundaries) of the void space 214 (FIGS.12B through 12D) in which the sixth contact structure 216 is formed. A geometric configuration (e.g., shape, dimensions) of the first region 216A may be substantially the same as a geometric configuration of the contact opening 212 (FIGS.12B through 12D); and a geometric configuration of the second region 216B may be substantially the same as a geometric configuration of the void space 214 (FIGS.12B through 12D). As shown in FIGS.13B through 13D, for an individual sixth contact structure 216, the second region 216B may vertically underlie (e.g., in the Z-direction) the first region 216A, and the second region 216B may horizontally extend (e.g., in the X-direction, in the Y-direction) beyond horizontal boundaries of the first region 216A. For each of the sixth contact structures 216, the first region 216A thereof may be integral and continuous with the second region 216B thereof. Put another way, each sixth contact structure 216 may be formed to be a substantially monolithic structure including the first region 216A and the second region 216B. The sixth contact structures 216 (including the first regions 216A and the second regions 216B thereof) may be formed of and include conductive material. By way of non-limiting example, the sixth contact structures 216 may each individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the sixth contact structures 216 are each individually formed of and include W. Each of the sixth contact structures 216 may be substantially homogeneous, or one or more of the sixth contact structures 216 may individually be heterogeneous. In some embodiments, each of the sixth contact structures 216 is substantially homogeneous. In additional embodiments, each of the sixth contact structures 216 is heterogeneous. Each sixth contact structure 216
may, for example, be formed of and include a stack of at least two different conductive materials. Referring next to FIGS.14A through 14D, illustrated are simplified, partial longitudinal cross-sectional views, from the directional perspectives previously described, of the array region 102 (FIG.14A), the digit line exit region 104 (FIG.14B), the word line exit region 106 (FIG.14C), and the socket region 108 (FIG.14D) at a processing stage of the method of forming the microelectronic device following the processing stage previously described with reference to FIGS.13A through 13D. As collectively depicted in FIGS.14A through 14D, BEOL structures may be formed over the third routing tier 188. For example, at least one fourth routing tier 218 including fourth routing structures 220 may be formed over the third routing tier 188; at least one fifth routing tier 222 including fifth routing structures 224 may be formed over the fourth routing tier 218; and at least one sixth routing tier 226 including sixth routing structures 228 may be formed over the fifth routing tier 222. One or more of the fourth routing structures 220 of the fourth routing tier 218 may be coupled to one or more of the third routing structures 190 of the third routing tier 188 and/or one or more of the sixth contact structures 216 by way of seventh contact structures 230. In addition, one or more of the fifth routing structures 224 of the fifth routing tier 222 may be coupled to one or more of the fourth routing structures 220 of the fourth routing tier 218 by way of eighth contact structures 232 (FIGS.14A and 14C). Furthermore, one or more of the sixth routing structures 228 (e.g., one or more conductive pad structures) of the sixth routing tier 226 may be coupled to one or more of the fifth routing structures 224 of the fifth routing tier 222 by way of ninth contact structures 234 (FIG.14D). In additional embodiments, at least some (e.g., all) of the ninth contact structures 234 (FIG.14D) are omitted (e.g., are not formed), and one or more of the sixth routing structures 228 of the sixth routing tier 226 are formed to directly physically contact one or more of the fifth routing structures 224 of the fifth routing tier 222. Referring to FIG.14D, in some embodiments, at least some of the fourth routing structures 220, the fifth routing structures 224, and the sixth routing structures 228 are formed to be in electrical communication with at least some of the second routing structures 144 coupled to the memory cells 146 (FIG.14A) within the array region 102 (FIG.14A) by way of at least one deep contact assembly extending between the at least some of the fourth routing structures 220 and at least some of the second routing structures 144 within the socket region 108. As shown in FIG.14D, the deep contact assembly may
include some of the contact structures (e.g., at least one of the ninth contact structures 234 (if any), at least one of the eighth contact structures 232, at least one of the seventh contact structures 230, at least one of the sixth contact structures 216, at least one of the third contact structures 132, and at least one of the fourth contact structures 140) located within the socket region 108, as well as the routing structures within the socket region 108 coupled to those contact structures. The fourth routing structures 220, the fifth routing structures 224, the sixth routing structures 228, the seventh contact structures 230, the eighth contact structures 232 (FIGS.14A and 14D), and the ninth contact structures 234 (FIG.14D) (if any) may each be formed of and include conductive material. By way of non-limiting example, the fourth routing structures 220, the fifth routing structures 224, the sixth routing structures 228, the seventh contact structures 230, the eighth contact structures 232 (FIGS.14A and 14D), and the ninth contact structures 234 (FIG.14D) may individually be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). In some embodiments, the fourth routing structures 220 are each formed of and include W; the fifth routing structures 224 are each formed of and include Cu; the sixth routing structures 228 are formed of and include Al; and the seventh contact structures 230, the eighth contact structures 232 (FIGS.14A and 14D), and the ninth contact structures 234 (FIG.14D) are each formed of and include W. Still referring collectively to FIGS.14A through 14D, a ninth isolation material 236 may be formed on or over portions of at least the fourth routing structures 220, the fifth routing structures 224, the sixth routing structures 228, the seventh contact structures 230, the eighth contact structures 232 (FIGS.14A and 14D), and the ninth contact structures 234 (FIG.14D) (if any). The ninth isolation material 236 may be formed of and include at least one insulative material. In some embodiments, the ninth isolation material 236 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The ninth isolation material 236 may be substantially homogeneous, or the ninth isolation material 236 may be heterogeneous. In some embodiments, the ninth isolation material 236 is substantially homogeneous. In additional embodiments, the ninth isolation material 236 is heterogeneous. The ninth isolation material 236 may, for example, be formed of and include a stack of at least two different dielectric materials. In addition, one or more openings may be formed within the ninth isolation material 236 (e.g., within a portion of the ninth isolation material
236 within the socket region 108 (FIG.14D)) to expose (and, hence, facilitate access to) one or more portions of one or more of the sixth routing structures 228 (e.g., one or more conductive pad structures) of the sixth routing tier 226. As shown in FIGS.14A through 14D, the method described above with reference to FIGS.1 through 14D may effectuate the formation of a microelectronic device 238 (e.g., a memory device, such as a DRAM device) including the features (e.g., structures, materials, devices) previously described herein. In some embodiments, at least some of the fourth routing structures 220, the fifth routing structures 224, and the sixth routing structures 228 are employed as global routing structures for the microelectronic device 238. The fourth routing structures 220, the fifth routing structures 224, and the sixth routing structures 228 may, for example, be configured to receive global signals from an external bus, and to relay the global signals to other features (e.g., structures, devices) of the microelectronic device 238. Thus, in accordance with embodiments of the disclosure, a method of forming a microelectronic device comprises forming a microelectronic device structure assembly comprising memory cells, digit lines coupled to the memory cells, word lines coupled to the memory cells, and isolation material overlying the memory cells, the digit lines, and the word lines. An additional microelectronic device structure assembly comprising control logic devices and additional isolation material overlying the control logic devices is formed. The additional isolation material of the additional microelectronic device structure assembly is bonded to the isolation material of the microelectronic device structure assembly to attach the additional microelectronic device structure assembly to the microelectronic device structure assembly. The memory cells are electrically connected to at least some of the control logic devices after bonding the additional isolation material to the isolation material. Referring next to FIG.15, depicted is a simplified plan view of the microelectronic device 238 illustrating an arrangement of different control logic sections (described in further detail below) within individual different regions (e.g., the array regions 102, such as the first array region 102A, the second array region 102B, the third array region 102C, and the fourth array region 102D; the socket regions 108) of the microelectronic device 238, as well as routing arrangements to different control logic devices (e.g., corresponding to the control logic devices 191 (FIG.14A)) within the different control logic sections, in accordance with embodiments of the disclosure. The different control logic devices of the different control logic sections may be positioned vertically above (e.g., in the Z-direction)
the memory cells 146 (FIG.14A) of the microelectronic device 238. At least some of the different control logic devices may be coupled to the memory cells 146 (FIG.14A) in the manner previously described with reference to FIGS.14A through 14D. For clarity and ease of understanding the description, not all features (e.g., structures, materials, devices) of the microelectronic device 238 previously described with reference to FIGS.14A through 14D are illustrated in FIG.15. As shown in FIG.15, within a horizontal area of each array region 102, the microelectronic device 238 may be formed to include a desired arrangement of sense amplifier (SA) sections 240 and sub-word line driver (SWD) sections 242. The SA sections 240 may include SA devices coupled to the digit lines 118 of the microelectronic device 238, as described in further detail below. The digit lines 118 may vertically underlie (e.g., in the Z-direction) the SA devices of the SA sections 240 within the microelectronic device 238. The SWD sections 242 may include SWD devices coupled to the word lines 120 of the microelectronic device 238, as also described in further detail below. The word lines 120 may vertically underlie (e.g., in the Z-direction) the SWD devices of the SWD sections 242 within the microelectronic device 238. The SA sections 240 within a horizontal area of an individual array region 102 (e.g., the first array region 102A, the second array region 102B, the third array region 102C, or the fourth array region 102D) may include a first SA section 240A and a second SA section 240B. For an individual array region 102, the first SA section 240A and the second SA section 240B may be positioned at or proximate opposite corners (e.g., diagonally opposite corners) of the array region 102 from one another. For example, as shown in FIG.15, for an individual array region 102, the first SA section 240A may be positioned at or proximate a first corner 246A of the array region 102, and the second SA section 240B may be positioned at or proximate a second corner 246B of the array region 102 located diagonally opposite (e.g., kitty-corner) the first corner 246A. For each SA section 240 (e.g., the first SA section 240A, the second SA section 240B) within an individual array region 102, the SA devices of the SA section 240 may be coupled to a group of the digit lines 118 horizontally extending (e.g., in the Y-direction) through the array region 102 by way of digit line routing and contact structures 248. The digit line routing and contact structures 248 may, for example, correspond to some of the routing structures (e.g., some of the fourth routing structures 220 (FIGS.14A and 14B)) and some of the contact structures (e.g., some of the seventh contact structures 230 (FIGS.
14A and 14B); some of the sixth contact structures 216 (FIGS.14A and 14B); some of the first group 132A (FIG.14B) of the third contact structures 132 (FIG.14B)) previously described herein. The SA devices of the SA sections 240 of array regions 102 horizontally neighboring one another in the Y-direction (e.g., the first array region 102A and the second array region 102B; the third array region 102C and the fourth array region 102D) may be coupled to different groups of digit lines 118 than one another. For example, each of the SA sections 240 (e.g., each of the first SA section 240A and the second SA section 240B) of the first array region 102A may include so-called “even” SA devices coupled to even digit lines 118B of the microelectronic device 238 by way of the digit line routing and contact structures 248 associated with the SA sections 240; and each of the SA sections 240 (e.g., each of the first SA section 240A and the second SA section 240B) of the second array region 102B may include so-called “odd” SA devices coupled to odd digit lines 118A of the microelectronic device 238 by way of the digit line routing and contact structures 248 associated with the SA sections 240; or vice versa. The even digit lines 118B of the microelectronic device 238 may horizontally alternate with the odd digit lines 118A of the microelectronic device 238 in the X-direction. The SA devices of each of the SA sections 240 of the first array region 102A may not be coupled to any odd digit lines 118A; and the SA devices of each of the SA sections 240 of the second array region 102B may not be coupled to any even digit lines 118B; or vice versa. Similarly, each of the SA sections 240 (e.g., each of the first SA section 240A and the second SA section 240B) of the third array region 102C horizontally neighboring the first array region 102A in the X-direction may include additional even SA devices coupled to additional even digit lines 118B of the microelectronic device 238 by way of the digit line routing and contact structures 248 associated with the SA sections 240; and each of the SA sections 240 (e.g., each of the first SA section 240A and the second SA section 240B) of the fourth array region 102D horizontally neighboring the second array region 102B in the X-direction may include additional odd SA devices coupled to additional odd digit lines 118A of the microelectronic device 238 by way of the digit line routing and contact structures 248 associated with the SA sections 240; or vice versa. As shown in FIG.15, the SA devices (e.g., odd SA devices or even SA devices) within an individual SA section 240 of an individual array region 102 may be coupled to digit lines (e.g., odd digit lines 118A or even digit lines 118B) horizontally extending
through the array region 102, and may also be coupled to additional digit lines (e.g., additional odd digit lines 118A or additional even digit lines 118B) horizontally extending through another array region 102 horizontally neighboring the array region 102 in the Y-direction. For example, some odd SA devices within the first SA section 240A of the second array region 102B may be coupled to odd digit lines 118A horizontally extending through the second array region 102B by way of some digit line routing and contact structures 248 extending to and through the first digit line exit subregion 104A horizontally neighboring the second array region 102B in the Y-direction; and some additional odd SA devices within the first SA section 240A of the second array region 102B may be coupled to additional odd digit lines 118A horizontally extending through the first array region 102A by way of some additional digit line routing and contact structures 248 extending to and through the first digit line exit subregion 104A. As another example, some even SA devices within the second SA section 240B of the first array region 102A may be coupled to even digit lines 118B horizontally extending through the first array region 102A by way of some digit line routing and contact structures 248 extending to and through the second digit line exit subregion 104B horizontally neighboring the first array region 102A in the Y-direction; and some additional even SA devices within the second SA section 240B of the first array region 102A may be coupled to additional even digit lines 118B horizontally extending through the second array region 102B by way of some additional digit line routing and contact structures 248 extending to and through the second digit line exit subregion 104B. With maintained reference to FIG.15, the SWD sections 242 within a horizontal area of an individual array region 102 (e.g., the first array region 102A, the second array region 102B, the third array region 102C, or the fourth array region 102D) may include a first SWD section 242A and a second SWD section 242B. For an individual array region 102, the first SWD section 242A and the second SWD section 242B may be positioned at or proximate different corners than the first SA section 240A and the second SA section 240B. In addition, the corner of the array region 102 associated with the first SWD section 242A may oppose (e.g., diagonally oppose) the corner of the array region 102 associated with the second SWD section 242B. For example, as shown in FIG.15, for an individual array region 102, the first SWD section 242A may be positioned at or proximate a third corner 246C of the array region 102, and the second SWD section 242B may be positioned at or
proximate a fourth corner 246D of the array region 102 located diagonally opposite (e.g., kitty-corner) the third corner 246C. For each SWD section 242 (e.g., the first SWD section 242A, the second SWD section 242B) within an individual array region 102, the SWD devices of the SWD section 242 may be coupled to a group of the word lines 120 horizontally extending (e.g., in the X-direction) through the array region 102 by way of word line routing and contact structures 250. The word line routing and contact structures 250 may, for example, correspond to some of the routing structures (e.g., some of the fourth routing structures 220 (FIGS.14A and 14C)) and some of the contact structures (e.g., some of the seventh contact structures 230 (FIGS.14A and 14C); some of the sixth contact structures 216 (FIG.14C); some of the second group 132B (FIG.14C) of the third contact structures 132 (FIG.14C)) previously described herein. The SWD devices of the SWD sections 242 of array regions 102 horizontally neighboring one another in the X-direction (e.g., the first array region 102A and the third array region 102C; the second array region 102B and the fourth array region 102D) may be coupled to different groups of word lines 120 than one another. For example, each of the SWD sections 242 (e.g., each of the first SWD section 242A and the second SWD section 242B) of the first array region 102A may include so-called “even” SWD devices coupled to even word lines 120B of the microelectronic device 238 by way of the word line routing and contact structures 250 associated with the SWD sections 242; and each of the SWD sections 242 (e.g., each of the first SWD section 242A and the second SWD section 242B) of the third array region 102C may include so-called “odd” SWD devices coupled to odd word lines 120A of the microelectronic device 238 by way of the word line routing and contact structures 250 associated with the SWD sections 242; or vice versa. The even word lines 120B of the microelectronic device 238 may horizontally alternate with the odd word lines 120A of the microelectronic device 238 in the Y-direction. The SWD devices of each of the SWD sections 242 of the first array region 102A may not be coupled to any odd word lines 120A; and the SWD devices of each of the SWD sections 242 of the third array region 102C may not be coupled to any even word lines 120B; or vice versa. Similarly, each of the SWD sections 242 (e.g., each of the first SWD section 242A and the second SWD section 242B) of the second array region 102B horizontally neighboring the first array region 102A in the Y-direction may include additional even SWD devices coupled to additional even word lines 120B of the microelectronic device 238 by way of the word line
routing and contact structures 250 associated with the SWD sections 242; and each of the SWD sections 242 (e.g., each of the first SWD section 242A and the second SWD section 242B) of the fourth array region 102D horizontally neighboring the third array region 102C in the Y-direction may include additional odd SWD devices coupled to additional odd word lines 120A of the microelectronic device 238 by way of the word line routing and contact structures 250 associated with the SWD sections 242; or vice versa. As shown in FIG.15, the SWD devices (e.g., odd SWD devices or even SWD devices) within an individual SWD section 242 of an individual array region 102 may be coupled to word lines (e.g., odd word lines 120A or even word lines 120B) horizontally extending through the array region 102, and may also be coupled to additional word lines (e.g., additional odd word lines 120A or additional even word lines 120B) horizontally extending through another array region 102 horizontally neighboring the array region 102 in the X-direction. For example, some odd SWD devices within the first SWD section 242A of the third array region 102C may be coupled to odd word lines 120A horizontally extending through the third array region 102C by way of some word line routing and contact structures 250 extending to and through the second word line exit subregion 106B horizontally neighboring the third array region 102C in the X-direction; and some additional odd SWD devices within the first SWD section 242A of the third array region 102C may be coupled to additional odd word lines 120A horizontally extending through the first array region 102A by way of some additional word line routing and contact structures 250 extending to and through the second word line exit subregion 106B. As another example, some even SWD devices within the second SWD section 242B of the first array region 102A may be coupled to even word lines 120B horizontally extending through the first array region 102A by way of some word line routing and contact structures 250 extending to and through the first word line exit subregion 106A horizontally neighboring the first array region 102A in the X-direction; and some additional even SWD devices within the second SWD section 242B of the first array region 102A may be coupled to additional even word lines 120B horizontally extending through the third array region 102C by way of some additional word line routing and contact structures 250 extending to and through the first word line exit subregion 106A. With maintained reference to FIG.15, within a horizontal area of each array region 102, the microelectronic device 238 may include additional control logic sections individually including additional control logic devices (e.g., control logic devices other
than SA devices and SWD devices). For example, for each array region 102, additional control logic sections 252 may be positioned horizontally between (e.g., at relatively more horizontally central positions within the array region 102) the SA sections 240 and the SWD sections 242. The additional control logic sections 252 may include, but are not limited to, column decoder device sections including column decoder devices, and main word line (MWD) sections including MWD devices. Still referring to FIG.15, within a horizontal area of each socket region 108, the microelectronic device 238 may include further control logic sections 254 individually including further control logic devices (e.g., control logic devices in addition to those located within the horizontal areas of the array regions 102). For example, for each socket region 108, one or more further control logic sections 254 may be positioned horizontally between deep contact structure assemblies (e.g., vertically extending from one or more of the fifth routing structures 224 (FIG.14D) to one or more of the second routing structures 144 (FIG.14D)) within the socket region 108 and the array regions 102 horizontally neighboring the socket region 108. At least some of the further control logic devices within the further control logic sections 254 may have different configurations and different operational functions than the control logic devices located within the horizontal areas of the array regions 102. By way of non-limiting example, the further control logic sections 254 may include bank logic sections including bank logic devices. Thus, in accordance with embodiments of the disclosure, a method of forming a microelectronic device comprises forming a first semiconductor wafer comprising access devices within array regions, digit lines coupled to the access devices and terminating within digit line exit regions neighboring the array regions, and word lines coupled to the access devices and terminating within word line exit regions neighboring the array regions. Digit line contact structures extending through and in contact with the digit lines within the digit line exit regions are formed. Word line contact structures extending through and in contact with the word lines within the word line exit regions are formed. Capacitors are formed over and in electrical communication with the access devices to form memory cells within the array regions. A second semiconductor wafer comprising control logic devices is formed. The second semiconductor wafer is attached to the first semiconductor wafer such that at least some of the control logic devices of the second semiconductor wafer are positioned within the array regions of the first semiconductor wafer. Additional contact structures are formed over the digit line contact structures and the word line contact structures. Some of the additional
contact structures are in contact with the digit line contact structures. Some other of the additional contact structures are in contact with the word line contact structures. Routing structures are formed over the control logic devices and the additional contact structures. The routing structures are in electrical communication with the control logic devices and the memory cells. Furthermore, in accordance with embodiments of the disclosure, a microelectronic device comprises array regions, digit line exit regions, and word line exit regions. The array regions individually comprise memory cells, digit lines, word lines, and control logic devices. The memory cells comprise access devices and storage node devices. The digit lines are coupled to the access devices and extend in a first direction. The word lines are coupled to the access devices and extend in a second direction orthogonal to the first direction. The control logic devices are over and in electrical communication with the memory cells. The digit line exit regions horizontally alternate with the array regions in the first direction. The digit line exit regions individually comprise portions of the digit lines extending beyond the array regions adjacent thereto, digit line contact structures extending through at least some of the portions of the digit lines, contact structures on the digit line contact structures, and routing structures coupled to the contact structures. The contact structures individually comprise a lower region, and an upper region integral and continuous with the lower region and having smaller horizontal dimensions than the lower region. The word line exit regions horizontally alternate with the array regions in the second direction. The word line exit regions individually comprise portions of the word lines extending beyond the array regions adjacent thereto, word line contact structures extending through at least some of the portions of the word lines, additional contact structures on the word line contact structures, and additional routing structures coupled to the additional contact structures. The additional contact structures individually comprise an additional lower region, and an additional upper region integral and continuous with the additional lower region and having smaller horizontal dimensions than the additional lower region. Microelectronic devices (e.g., the microelectronic device 238 (FIGS.14A through 14D)) in accordance with embodiments of the disclosure may be used in embodiments of electronic systems of the disclosure. For example, FIG.16 is a block diagram illustrating an electronic system 300 according to embodiments of the disclosure. The electronic system 300 may comprise, for example, a computer or computer hardware component, a server or other networking hardware component, a cellular telephone, a digital camera, a personal digital
assistant (PDA), portable media (e.g., music) player, a Wi-Fi or cellular-enabled tablet such as, for example, an iPAD® or SURFACE® tablet, an electronic book, a navigation device, etc. The electronic system 300 includes at least one memory device 302. The memory device 302 may comprise, for example, a microelectronic device (e.g., the microelectronic device 238 (FIGS.14A through 14D)) previously described herein. The electronic system 300 may further include at least one electronic signal processor device 304 (often referred to as a “microprocessor”). The electronic signal processor device 304 may, optionally, comprise a microelectronic device (e.g., the microelectronic device 238 (FIGS.14A through 14D)) previously described herein. While the memory device 302 and the electronic signal processor device 304 are depicted as two (2) separate devices in FIG.16, in additional embodiments, a single (e.g., only one) memory/processor device having the functionalities of the memory device 302 and the electronic signal processor device 304 is included in the electronic system 300. In such embodiments, the memory/processor device may include a microelectronic device (e.g., the microelectronic device 238 (FIGS.14A through 14D)) previously described herein. The electronic system 300 may further include one or more input devices 306 for inputting information into the electronic system 300 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 300 may further include one or more output devices 308 for outputting information (e.g., visual or audio output) to a user such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 306 and the output device 308 comprise a single touchscreen device that can be used both to input information to the electronic system 300 and to output visual information to a user. The input device 306 and the output device 308 may communicate electrically with one or more of the memory device 302 and the electronic signal processor device 304. Thus, in accordance with embodiments of the disclosure, an electronic system comprises an input device, an output device, a processor device operably connected to the input device and the output device, and a memory device operably connected to the processor device. The memory device comprises memory array regions, a digit line contact region between two of the memory array regions neighboring one another in a first direction, and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction. The memory array regions each comprise dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic circuitry overlying and
in electrical communication with the DRAM cells. The digit line contact region comprises end portions of some of the digit lines extending past horizontal boundaries of the two of the memory array regions; digit line contacts coupled to and extending completely through the end portions of the some of the digit lines; contact structures on the digit line contacts and individually comprising a lower region and an upper region integral and continuous with the lower region, the upper region having smaller horizontal dimensions than the lower region; and routing structures over and coupled to the contact structures. The word line contact region comprises end portions of some of the word lines extending past horizontal boundaries of the two other of the memory array regions; word line contacts coupled to and extending completely through the end portions of the some of the word lines; additional contact structures on the word line contacts and individually comprising an additional lower region and an additional upper region integral and continuous with the additional lower region, the additional upper region having smaller horizontal dimensions than the additional lower region; and additional routing structures over and coupled to the additional contact structures. The structures, devices, and methods of the disclosure advantageously facilitate one or more of improved microelectronic device performance, reduced costs (e.g., manufacturing costs, material costs), increased miniaturization of components, and greater packaging density as compared to conventional structures, conventional devices, and conventional methods. The structures, devices, and methods of the disclosure may also improve scalability, efficiency, and simplicity as compared to conventional structures, conventional devices, and conventional methods. Additional, non-limiting example embodiments of the disclosure are set forth below. Embodiment 1: A method of forming a microelectronic device, comprising: forming a microelectronic device structure assembly comprising memory cells, digit lines coupled to the memory cells, word lines coupled to the memory cells, and isolation material overlying the memory cells, the digit lines, and the word lines; forming an additional microelectronic device structure assembly comprising control logic devices and additional isolation material overlying the control logic devices; bonding the additional isolation material of the additional microelectronic device structure assembly to the isolation material of the microelectronic device structure assembly to attach the additional microelectronic device structure assembly to the microelectronic device structure assembly; and electrically connecting the memory cells to
at least some of the control logic devices after bonding the additional isolation material to the isolation material. Embodiment 2: The method of Embodiment 1, wherein forming a microelectronic device structure assembly comprises: forming a first microelectronic device structure comprising a first base semiconductor structure, the digit lines, the word lines, and access devices of the memory cells coupled to the digit lines and the word lines; forming contact structures coupled to the digit lines within digit line exit regions neighboring the access devices in a first horizontal direction; forming additional contact structures coupled to the word lines within word line exit regions neighboring the access devices in a second horizontal direction; forming storage node devices of the memory cells over and in electrical communication with the access devices of the memory cells; and forming routing structures over the storage node devices of the memory cells, at least some of the routing structures in electrical communication with the storage node devices. Embodiment 3: The method of Embodiment 2, further comprising: forming further contact structures within socket regions prior to forming storage node devices; and coupling the at least some of the routing structures to at least some of the further contact structures. Embodiment 4: The method of Embodiment 3, further comprising forming capacitors within the socket regions, at least some of the capacitors coupled to one or more of the further contact structures. Embodiment 5: The method of any one of Embodiments 2 through 4, further comprising: bonding a second microelectronic device structure over the routing structures to form a first assembly comprising the first microelectronic device structure, the contact structures, the additional contact structures, the memory cells, the routing structures, and the second microelectronic device structure; vertically inverting the first assembly; removing a section of the first base semiconductor structure after vertically inverting the first assembly to expose portions of the contact structures, the additional contact structures, and filled trenches in the first base semiconductor structure; forming sacrificial structures on the exposed portions of the contact structures and the additional contact structures; and forming the isolation material over the memory cells and the sacrificial structures. Embodiment 6: The method of Embodiment 5, wherein forming an additional microelectronic device structure assembly comprises: forming a third microelectronic device structure comprising a second base semiconductor structure and the control logic devices at least partially overlying the second base semiconductor structure; bonding a fourth
microelectronic device structure over the control logic devices to form a second assembly comprising the third microelectronic device structure and the fourth microelectronic device structure; vertically inverting the second assembly; removing a section of the second base semiconductor structure after vertically inverting the second assembly to expose additional filled trenches in the second base semiconductor structure; and forming the additional isolation material over the control logic devices. Embodiment 7: The method of Embodiment 6, further comprising: removing a portion of the fourth microelectronic device structure of the additional microelectronic device structure assembly after attaching the additional microelectronic device structure assembly to the microelectronic device structure assembly; forming contact openings vertically extending through a remaining portion of the additional microelectronic device structure assembly and the isolation material of the microelectronic device structure assembly to expose the sacrificial structures; selectively removing the sacrificial structures, after forming the contact openings, to form void spaces in communication with the contact openings; and filling the contact openings and the void spaces with conductive material to form additional contact structures. Embodiment 8: The method of any one of Embodiments 1 through 7, further comprising selecting the isolation material of the microelectronic device structure assembly and the additional isolation material of the additional microelectronic device structure assembly to each comprise a dielectric oxide material. Embodiment 9: The method of any one of Embodiments 1 through 8, further comprising: forming routing structures over the control logic devices and in electrical communication with the control logic devices and the memory cells; and forming pad structures over and in electrical communication with the routing structures. Embodiment 10: The method of Embodiment 9, wherein: forming routing structures over the control logic devices comprises: forming tungsten routing structures over the control logic devices and in electrical communication with the control logic devices and the memory cells; and forming copper routing structures over and in electrical communication with the tungsten routing structures; and forming pad structures comprises forming aluminum pad structures over and in electrical communication with the copper routing structures. Embodiment 11: A method of forming a microelectronic device, comprising: forming a first semiconductor wafer comprising access devices within array regions, digit lines coupled to the access devices and terminating within digit line exit regions neighboring the array regions, and word lines coupled to the access devices and terminating within word
line exit regions neighboring the array regions; forming digit line contact structures extending through and in contact with the digit lines within the digit line exit regions; forming word line contact structures extending through and in contact with the word lines within the word line exit regions; forming capacitors over and in electrical communication with the access devices to form memory cells within the array regions; forming a second semiconductor wafer comprising control logic devices; attaching the second semiconductor wafer to the first semiconductor wafer such that at least some of the control logic devices of the second semiconductor wafer are positioned within the array regions of the first semiconductor wafer; forming additional contact structures over the digit line contact structures and the word line contact structures, some of the additional contact structures in contact with the digit line contact structures, some other of the additional contact structures in contact with the word line contact structures; and forming routing structures over the control logic devices and the additional contact structures, the routing structures in electrical communication with the control logic devices and the memory cells. Embodiment 12: The method of Embodiment 11, further comprising: forming further contact structures within socket regions of the first semiconductor wafer prior to attaching the second semiconductor wafer to the first semiconductor wafer, the socket regions horizontally offset from the digit line exit regions and the word line exit regions; and forming yet some other of the additional contact structures over and in contact with the further contact structures. Embodiment 13: The method of Embodiment 12, further comprising forming additional capacitors within the socket regions of the first semiconductor wafer and in electrical communication with at least some of the further contact structures, at least some of the additional capacitors in electrical communication with at least some of the control logic devices of the second semiconductor wafer after forming the routing structures. Embodiment 14: The method of any one of Embodiments 11 through 13, wherein attaching the second semiconductor wafer to the first semiconductor wafer comprises: vertically inverting the second semiconductor wafer; physically contacting a first dielectric oxide material of the first semiconductor wafer with a second dielectric oxide material of the second semiconductor wafer after vertically inverting the second semiconductor wafer; and annealing the first dielectric oxide material and the second dielectric oxide material after physically contacting the first dielectric oxide material with the second dielectric oxide
material to form oxide-oxide bonds between the first dielectric oxide material and the second dielectric oxide material. Embodiment 15: The method of any one of Embodiments 11 through 14, wherein: forming digit line contact structures comprises forming the digit line contact structures to physically contact the digit lines and a semiconductor material of the first semiconductor wafer underlying the digit lines; and forming word line contact structures comprises forming the word line contact structures to physically contact the word lines and the semiconductor material of the first semiconductor wafer. Embodiment 16: The method of Embodiment 15, further comprising, before attaching the second semiconductor wafer to the first semiconductor wafer: vertically inverting the first semiconductor wafer after forming the digit line contact structures and the word line contact structures; removing a portion of the semiconductor material to expose surfaces of the digit line contact structures and the word line contact structures; forming sacrificial dielectric structures on the exposed surfaces of the digit line contact structures and the word line contact structures; and forming a dielectric oxide material over the sacrificial dielectric structures and remaining portions of the semiconductor material. Embodiment 17: The method of Embodiment 16, wherein forming additional contact structures over the digit line contact structures and the word line contact structures comprises: forming contact openings vertically extending through additional dielectric oxide material of the second semiconductor wafer and the dielectric oxide material overlying the sacrificial dielectric structures to expose the sacrificial dielectric structures; exhuming the sacrificial dielectric structures through the contact openings to form open volumes, the open volumes re-exposing the surfaces of the digit line contact structures and the word line contact structures; and filling the contact openings and the open volumes with conductive material to form the additional contact structures. Embodiment 18: A microelectronic device, comprising: array regions individually comprising: memory cells comprising access devices and storage node devices; digit lines coupled to the access devices and extending in a first direction; word lines coupled to the access devices and extending in a second direction orthogonal to the first direction; and control logic devices over and in electrical communication with the memory cells; digit line exit regions horizontally alternating with the array regions in the first direction and individually comprising: portions of the digit lines extending beyond the array regions adjacent thereto; digit line contact structures extending through at least some of the portions of
the digit lines; contact structures on the digit line contact structures and individually comprising: a lower region; and an upper region integral and continuous with the lower region and having smaller horizontal dimensions than the lower region; and routing structures coupled to the contact structures; word line exit regions horizontally alternating with the array regions in the second direction and individually comprising: portions of the word lines extending beyond the array regions adjacent thereto; word line contact structures extending through at least some of the portions of the word lines; additional contact structures on the word line contact structures and individually comprising: an additional lower region; and an additional upper region integral and continuous with the additional lower region and having smaller horizontal dimensions than the additional lower region; and additional routing structures coupled to the additional contact structures. Embodiment 19: The microelectronic device of Embodiment 18, further comprising socket regions horizontally offset from the array regions, the digit line exit regions, and the word line exit regions, the socket regions individually comprising deep contact structure assemblies coupling the memory cells to at least some of the control logic devices. Embodiment 20: The microelectronic device of Embodiment 19, wherein the socket regions further comprise additional control logic devices having different configurations and operational functions than the control logic devices. Embodiment 21: The microelectronic device of Embodiment 20, wherein the socket regions further comprise capacitors coupled to one or more of at least some of the control logic devices and at least some of the additional control logic devices. Embodiment 22: The microelectronic device of any one of Embodiments 18 through 21, wherein the control logic devices within each array region of the array regions comprise: sense amplifier devices within multiple sense amplifier regions positioned proximate corners of the array region diagonally opposing one another; and sub-word line driver devices within multiple sub-word line driver regions positioned proximate additional corners of the array region diagonally opposing one another. Embodiment 23: The microelectronic device of Embodiment 22, wherein, for each sense amplifier region of the multiple sense amplifier regions within the array region: some of the sense amplifier devices within the sense amplifier region are coupled to some of the digit lines extending through the array region; and some other of the sense amplifier devices within the sense amplifier region are coupled to some of the digit lines extending through an additional one of the array regions neighboring the array region.
Embodiment 24: The microelectronic device of Embodiment 23, wherein: the some of the sense amplifier devices are coupled to the some of the digit lines extending through the array region by way of some of the digit line contact structures, some of the contact structures, and some of the routing structures within one of the digit line exit regions horizontally interposed between the array region and the additional one of the array regions; and the some other of the sense amplifier devices are coupled to the some of the digit lines horizontally extending through the additional one of the array regions by way of some other of the digit line contact structures, some other of the contact structures, and some other of the routing structures within the one of the digit line exit regions. Embodiment 25: The microelectronic device of Embodiment 22, wherein, for each sub-word line driver region of the multiple sub-word line driver regions within the array region: some of the sub-word line driver devices within the sub-word line driver region are coupled to some of the word lines extending through the array region; and some other of the sub-word line driver devices within the sub-word line driver region are coupled to some of the word lines extending through an additional one of the array regions neighboring the array region. Embodiment 26: The microelectronic device of Embodiment 25, wherein: the some of the sub-word line driver devices are coupled to the some of the word lines extending through the array region by way of some of the word line contact structures, some of the additional contact structures, and some of the additional routing structures within one of the word line exit regions horizontally interposed between the array region and the additional one of the array regions; and the some other of the sub-word line driver devices are coupled to the some of the word lines extending through the additional one of the array regions by way of some other of the word line contact structures, some other of the additional contact structures, and some other of the additional routing structures within the one of the word line exit regions. Embodiment 27: An electronic system, comprising: an input device; an output device; a processor device operably connected to the input device and the output device; and a memory device operably connected to the processor device and comprising: memory array regions each comprising dynamic random access memory (DRAM) cells, digit lines coupled to the DRAM cells, word lines coupled to the DRAM cells, and control logic circuitry overlying and in electrical communication with the DRAM cells; a digit line contact region between two of the memory array regions neighboring one another in a first direction, the
digit line contact region comprising: end portions of some of the digit lines extending past horizontal boundaries of the two of the memory array regions; digit line contacts coupled to and extending completely through the end portions of the some of the digit lines; contact structures on the digit line contacts and individually comprising a lower region and an upper region integral and continuous with the lower region, the upper region having smaller horizontal dimensions than the lower region; and routing structures over and coupled to the contact structures; and a word line contact region between two other of the memory array regions neighboring one another in a second direction perpendicular to the first direction, the word line contact region comprising: end portions of some of the word lines extending past horizontal boundaries of the two other of the memory array regions; word line contacts coupled to and extending completely through the end portions of the some of the word lines; additional contact structures on the word line contacts and individually comprising an additional lower region and an additional upper region integral and continuous with the additional lower region, the additional upper region having smaller horizontal dimensions than the additional lower region; and additional routing structures over and coupled to the additional contact structures. While the disclosure is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, the disclosure is not limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the following appended claims and their legal equivalents. For example, elements and features disclosed in relation to one embodiment may be combined with elements and features disclosed in relation to other embodiments of the disclosure. |
A system and method for adaptive thermal and performance management in electronic devices are disclosed. A particular embodiment includes: providing a processor with a plurality of selectable performance levels and a sensor in an electronic device; receiving sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow; determining a device context from the sensor information; and dynamically modifying the performance level of the processor by implementing one of a plurality of selectable performance levels of the processor based on the device context. |
CLAIMS
What is claimed is:
1. An electronic device comprising:
a processor with a plurality of selectable performance levels;
a sensor; and
an adaptive thermal and performance management subsystem in data communication with the processor and the sensor, the adaptive thermal and performance management subsystem to:
receive sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow;
determine a device context from the sensor information; and
dynamically modify the performance level of the processor by implementing one of the plurality of selectable performance levels of the processor based on the device context.
2. The electronic device of claim 1 wherein the adaptive thermal and performance management subsystem is further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
3. The electronic device of claim 1 wherein the sensor information includes information for determining if the electronic device is inserted into a dock with an active airflow.
4. The electronic device of claim 1 wherein the adaptive thermal and performance management subsystem is to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.
5. The electronic device of claim 1 wherein the adaptive thermal and performance management subsystem is to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
6. The electronic device of claim 1 wherein the adaptive thermal and performance management subsystem is to dynamically monitor the current temperature of the electronic device.
7. The electronic device of claim 1 wherein the adaptive thermal and performance management subsystem is to dynamically modify the performance level of the processor by changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
8. The electronic device of claim 1 wherein the sensor information includes orientation data associated with orientation of the electronic device, wherein the device context is further based on the orientation of the electronic device.
9. A method comprising:
providing a processor with a plurality of selectable performance levels and a sensor in an electronic device;
receiving sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow;
determining a device context from the sensor information; and
dynamically modifying the performance level of the processor by implementing one of a plurality of selectable performance levels of the processor based on the device context.
10. The method of claim 9 including selecting a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
11. The method of claim 9 wherein the sensor information includes information for determining if the electronic device is inserted into a dock with an active airflow.
12. The method of claim 9 including dynamically increasing the performance level of the processor if the electronic device is positioned proximately to an active airflow.
13. A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to:
receive sensor information from a sensor, the sensor information including information for determining if an electronic device is positioned proximately to an active airflow;
determine a device context from the sensor information; and
dynamically modify the performance level of a processor having a plurality of selectable performance levels by implementing one of a plurality of selectable performance levels of the processor based on the device context.
14. The machine-useable storage medium of claim 13 wherein the instructions further cause the machine to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.
15. The machine-useable storage medium of claim 13 wherein the instructions further cause the machine to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
16. An electronic system comprising:
an electronic device docking mechanism with an active airflow producing element; and
an electronic device for insertion into the electronic device docking mechanism, the electronic device including:
a processor with a plurality of selectable performance levels;
a sensor; and
an adaptive thermal and performance management subsystem in data communication with the processor and the sensor, the adaptive thermal and performance management subsystem to:
receive sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned in the electronic device docking mechanism;
determine a device context from the sensor information; and
dynamically modify the performance level of the processor by implementing one of the plurality of selectable performance levels of the processor based on the device context.
17. The electronic system of claim 16 wherein the adaptive thermal and performance management subsystem is further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
18. The electronic system of claim 16 wherein the sensor information includes information for determining if the electronic device is positioned proximately to an active airflow.
19. The electronic system of claim 16 wherein the adaptive thermal and performance management subsystem is to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.
20. The electronic system of claim 16 wherein the adaptive thermal and performance management subsystem is to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
21. An apparatus comprising:
a data processing means with a plurality of selectable performance levels;
a sensing means; and
an adaptive thermal and performance management means in data communication with the data processing means and the sensing means, the adaptive thermal and performance management means to:
receive sensing information from the sensing means, the sensing information including information for determining if the apparatus is positioned proximately to an active airflow;
determine a device context from the sensing information; and
dynamically modify the performance level of the data processing means by implementing one of the plurality of selectable performance levels of the data processing means based on the device context.
22. The apparatus of claim 21 wherein the adaptive thermal and performance management means is to dynamically decrease the performance level of the data processing means if the apparatus is not positioned proximately to an active airflow.
23. The apparatus of claim 21 wherein the adaptive thermal and performance management means is to dynamically monitor the current temperature of the apparatus.
24. The apparatus of claim 21 wherein the adaptive thermal and performance management means is to dynamically modify the performance level of the data processing means by changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
25. The apparatus of claim 21 wherein the sensing information includes orientation data associated with orientation of the apparatus, wherein the device context is further based on the orientation of the apparatus. |
SYSTEM AND METHOD FOR ADAPTIVE THERMAL AND PERFORMANCE MANAGEMENT IN ELECTRONIC DEVICES

TECHNICAL FIELD
This patent application relates to electronic systems, mobile devices, and computer-implemented software, according to various example embodiments, and more specifically to a system and method for adaptive thermal and performance management in electronic devices.

BACKGROUND
Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a result, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple hardware threads, multiple cores, multiple devices, and/or complete systems on individual integrated circuits. Additionally, as the density of integrated circuits has grown, the power requirements for computing systems (from embedded systems to servers) have also escalated. Furthermore, software inefficiencies, and their requirements of hardware, have also caused an increase in computing device energy consumption. As a result, there is a vital need for energy efficiency and conservation associated with integrated circuits. These needs will increase as servers, desktop computers, notebooks, Ultrabooks™, tablets, mobile phones, processors, embedded systems, etc. become even more prevalent (from inclusion in the typical computer, mobile devices, wearables, automobiles, and televisions to biotechnology). Currently, electronic devices including mobile computing systems or mobile devices are designed to operate in the lowest thermal envelope of any of the device's subsystems. As a result, the electronic device must operate at a minimal performance level to maintain a safe thermal operating condition. Existing systems cannot dynamically scale up performance based on adaptive thermal management of the electronic device.

BRIEF DESCRIPTION OF THE DRAWINGS
The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which:
FIG. 1 is a block diagram of an apparatus, according to an example embodiment;
FIG. 2 is a diagram depicting states of a state machine, according to an example embodiment;
FIG. 3 is a block diagram of logic utilized to provide contextual information to power management logic of a device, according to an example embodiment;
FIG. 4 is a flow diagram of a method including logic flow, according to an example embodiment;
FIG. 5 is a graph illustrating an example of performance scalability for the same workload on a given processor under different usage scenarios;
FIG. 6 illustrates an example embodiment of a dock configured as an attachable mobile base or stationary dock with active cooling;
FIG. 7 illustrates a high level architecture of a DPTF (Dynamic Platform Thermal Framework) integrated with adaptive performance, according to an example embodiment;
FIG. 8 is a processing flow chart illustrating an example embodiment of a method as described herein; and
FIG. 9 shows a diagrammatic representation of a machine in the example form of a mobile computing and/or communication system within which a set of instructions when executed and/or processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein.

DETAILED DESCRIPTION
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments.
It will be evident, however, to one of ordinary skill in the art that the various embodiments may be practiced without these specific details. In the various embodiments described herein, a system and method for adaptive thermal and performance management in electronic devices are disclosed. The disclosed embodiments can be used with a wide variety of electronic devices, such as mobile computing platforms, mobile devices, mobile systems, portable devices, wearables, all-in-one desktop devices, portable all-in-one devices (pAIO), laptop computers, handheld computers, touch screen systems, and other electronic devices typically configured to fit in a docking device or with a docking interface. In the various example embodiments, an electronic device may include a mobile system, which may refer to one or more of a laptop computer, a tablet computer, a 2-in-1 device that generally combines the functionality of a laptop with the usability/portability of a tablet, a smartphone, a wearable device (such as a bracelet, ring, headset, etc.), or other mobile device. In some embodiments, the mobile system may include more than one of the aforementioned devices, e.g., multiple devices that are coupled and that may create an improved user experience. Many of these electronic devices can be used in different modes of operation, e.g., vertical, angular, and lay-flat modes. The various embodiments described herein are useful for any type of electronic device or system that can be used with a docking system. The details of various example embodiments are provided below. Power Management (PM) in systems, such as mobile systems, is a continuous and evolving process. Efficient management of platform resources while maximizing battery life is the goal of an effective PM policy. A device context of an electronic device (e.g., a portable device such as a mobile device) may be a representation of external factors that influence the efficiency of the device. The external factors may be related to the device's thermal capability, e.g., the ability of the device to dissipate heat. Device context may describe the orientation of the device, presence or absence of physical contact of the device with a user, presence or absence of proximate air flow that causes heat removal from the device (e.g., by convection), and other factors. For example, context may be affected if a device is in contact with a human (e.g., a user grasping a tablet at the edges or a device placed on a user's lap, etc.). In this case, the thermal management of the device can be configured based on the context to accommodate the human contact. For example, the temperature of the skin of the device (Tskin) can be maintained at an increased limit when there is no human contact and maintained at a lower limit if there is human contact. In this manner, the thermal management of the device can improve user comfort when using the device. Device context may be inferred from measurements, and may be evaluated to determine operating parameters of the device in order to achieve greater device efficiency, greater performance, and/or extended battery life. In various embodiments, a power management (PM) policy may be utilized to enhance an ability of a data processor in an electronic device to scale performance based on thermal constraints associated with a device context or usage. Device context may be determined through observation of platform states of components, sensors, and usage parameters.
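By way of a non-limiting illustration only (no such listing appears in the embodiments themselves), the following minimal C sketch shows one way the context-to-policy mapping described above could be expressed: a sensed context (human contact, active airflow) and a current temperature are mapped onto one of the four policies recited in the claims (active cooling, passive, adaptive performance, critical), together with a performance level and a Tskin limit. All identifiers, thresholds, and temperature values in the sketch are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical snapshot of sensor information used to infer device context.
 * Field names are illustrative; the embodiments describe SAR, orientation,
 * and dock/airflow sensing only at a conceptual level. */
struct sensor_info {
    bool human_contact;   /* e.g., reported by a SAR sensor */
    bool active_airflow;  /* e.g., inserted into a dock with a fan */
    int  orientation_deg; /* device orientation */
};

/* The four policies recited in the claims. */
enum policy {
    POLICY_ACTIVE_COOLING,
    POLICY_PASSIVE,
    POLICY_ADAPTIVE_PERFORMANCE,
    POLICY_CRITICAL
};

struct operating_point {
    enum policy policy;
    int perf_level;    /* index into the selectable performance levels */
    int tskin_limit_c; /* skin temperature (Tskin) limit, degrees Celsius */
};

/* Map an inferred device context onto a policy, performance level, and
 * Tskin limit. All thresholds below are invented for illustration. */
static struct operating_point select_operating_point(const struct sensor_info *s, int temp_c)
{
    struct operating_point op;
    if (temp_c >= 95) {             /* emergency: throttle everything */
        op.policy = POLICY_CRITICAL;
        op.perf_level = 0;
        op.tskin_limit_c = 41;
    } else if (s->active_airflow) { /* dock fan removes heat: scale up */
        op.policy = POLICY_ACTIVE_COOLING;
        op.perf_level = 3;
        op.tskin_limit_c = 48;
    } else if (s->human_contact) {  /* protect user comfort */
        op.policy = POLICY_PASSIVE;
        op.perf_level = 1;
        op.tskin_limit_c = 41;
    } else {                        /* no contact: raise the Tskin limit */
        op.policy = POLICY_ADAPTIVE_PERFORMANCE;
        op.perf_level = 2;
        op.tskin_limit_c = 45;
    }
    return op;
}

int main(void)
{
    struct sensor_info s = { .human_contact = false, .active_airflow = true, .orientation_deg = 0 };
    struct operating_point op = select_operating_point(&s, 62);
    printf("policy=%d perf_level=%d tskin_limit=%d C\n", (int)op.policy, op.perf_level, op.tskin_limit_c);
    return 0;
}

In this sketch, detected active airflow (e.g., insertion into a cooled dock) permits a higher performance level and a relaxed Tskin limit, while detected human contact lowers both, mirroring the behavior described above and recited in the claims.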
Various example embodiments are directed to contextual power management in electronic devices such as mobile devices or systems. The details of various example embodiments are provided below in connection with the accompanying figures. FIG. 1 is a block diagram of an apparatus, according to an example embodiment. While referred to hereinafter as a device 102 for purposes of simplicity and illustration, it should be understood that the device 102 may include any suitable name, label, configuration and/or form factor and still fall within the described embodiments. The device 102 may include a system having a compact form factor arranged to support a plurality of computing components. As described herein, the device 102 may include any mobile computing device that can be carried or worn by a person (e.g., a user). In different embodiments, the electronic device 102 may be a laptop, a tablet, a 2-in-1 device, a smartphone, a phone/tablet, a wearable device (such as a bracelet, necklace, earring, ring, earpiece, glasses, head-mounted device, etc.), or one or more other mobile-style devices. While described herein as being within this list, one of ordinary skill in the art will understand that embodiments are not limited in this respect. The device 102 may include one or more data processor circuits 106 (also referred to as processor logic(s), processor core(s), etc.) (e.g., processor 106A, processor 106B, and processors through processor 106n, where n is a total count of processors), memory/storage 108, logic 110, operating systems (OS's) 112 (e.g., OS 112A and OS 112B, and OS's through OS 112m, where m is a total count of OS's), transceiver(s) 114 that can include radio(s) 116 and antenna(s) 118, sensor and input/output (I/O) control logic (SICL) 120, sensor(s) 122, power source/regulation 124, and power management logic 126. Although the electronic device 102 shown in FIG. 1 has a limited number of elements in a certain topology, it may be appreciated that the device 102 may include more or fewer elements in alternate topologies as desired for a given implementation. In various embodiments, device 102 may include the processor circuit 106. The processor circuit 106 can be any of various commercially available processors, including without limitation AMD® Athlon®, Duron®, and Opteron® processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Core (2) Quad®, Core i3®, Core i5®, Core i7®, Itanium®, Pentium®, Xeon®, and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multi-processor architectures may also be employed as the processor circuit 106. As shown in FIG. 1, in some embodiments, device 102 may include two processor circuits 106A and 106B, or may include any number of processor circuits. In other embodiments, the processor circuits 106A, 106B, ... 106n may include separate cores of a multi-core processor 106. The example embodiments described herein are not limited in this respect. In some embodiments, the one or more processor circuits 106A, 106B may include a first processor circuit 106A arranged to execute a first operating system 112A and a second processor circuit 106B arranged to execute a second operating system 112B (and potentially any number (n) of additional operating systems that are to be executed on additional processor circuits).
In various embodiments, the logic 110 may be operative to automatically select one of the first processor circuit 106A and first operating system 112A or the second processor circuit 106B and second operating system 112B based on the one or more characteristics of the peripheral device 104 (e.g., context), as described in more detail below. The first processor circuit 106A may operate at a first frequency and the second processor circuit 106B may operate at a second frequency that is less than the first frequency in some embodiments. For example, the first processor circuit 106A may include a core capable of executing an operating system 112A, such as an Android® operating system, iOS operating system, OS X operating system, Linux operating system, Windows® operating system, or any other suitable operating system. Processor circuit 106B may include a low power, low frequency processor circuit such as a microcontroller (MCU) or the like. Processor circuit 106B may be operative to execute a boot OS, real-time OS (RTOS), run-time OS, or limited functionality OS 112B that is designed for a specific purpose, application, or device. The example embodiments described herein are not limited in this respect. In various embodiments, device 102 may include a memory unit 108. The memory unit 108 may store, among other types of information, logic 110 and OS 112A, OS 112B, etc. The memory unit 108 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. While shown as being included with memory 108 in FIG. 1, it should be understood that logic 110 and/or OS 112A, 112B may be located elsewhere within device 102 and still fall within the described embodiments. In some embodiments, device 102 may include logic 110. Examples of logic 110 may include but are not limited to executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. Embodiments may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. In some embodiments, at least a portion of logic 110 can be implemented in hardware. Other embodiments are described and claimed. Device 102 may include a power source and/or power regulation (PSPR) 124 in various embodiments. In some embodiments, PSPR 124 may include a battery such as a lithium ion battery or the like. In some embodiments, PSPR 124 also may include one or more voltage regulators to regulate the voltage supplied by the power source. PSPR 124 may be operative to provide power to one or more of the components of device 102.
The example embodiments described herein are not limited in this respect. In various embodiments, device 102 may include sensor and I/O control logic (SICL) 120. SICL 120 may include a plurality of input/output (I/O) pins or ports in some embodiments, and may also include logic to interface with one or more sensors 122. Sensors 122 may include accelerometers, gyroscopes, inclinometers, global positioning system (GPS) receivers, infrared, RADAR, LIDAR, biometric, thermal, environmental, proximity, barometric, humidity, and pressure sensors, and may include one or more specific absorption rate (SAR) sensors. For example, the SICL 120 may be operative to interface with one or more peripheral I/O devices as well as with one or more sensors and to report sensor and I/O information to the processor 106. In various embodiments, the SICL 120 may be operative to enable (or arranged to support) plug and play operation between the device 102 and a plurality of other devices. In operation, one or more of the sensors, e.g., a SAR sensor within the SICL 120, may be operable to detect that a human being (e.g., a user of the device 102) is in physical contact with the device 102. Physical contact of the device 102 with the human being may imply that adjustments should be made in the thermal management of the device 102 to improve user comfort. For example, responsive to an indication of physical contact of the human being with the device 102, operating parameters, e.g., the operating speed of one or more of the processors 106, may be adjusted (e.g., reduced) to improve user comfort and to maintain a viable operating temperature of the device 102. Additionally, SICL 120 may include logic (e.g., software logic, hardware logic, or a combination of both) to dynamically configure the device 102 to interface with one of a number of peripheral devices 104. A pin-out of device 102 is not hardwired in many embodiments and instead can be programmed. This dynamic programmability may be based on a discovery protocol that determines the pin-out of the peripheral device 104 interface upon being coupled to the peripheral device 104. For example, one or more pins may be set to corresponding discovery information for a plurality of other available pins in the interface. Once the device 102 retrieves this information, the device 102 may program capabilities of pins on the device 102 for further interface compatibility with the peripheral device 104. In other embodiments, each pin on the device 102 may check for a live link to determine which pins are available for interfacing. Because this is a dynamic configuration, the device pins may change functionality and/or operational state depending on the type of peripheral device 104 interface available. The functionality of a given pin may even change while maintaining a plugged-in state with a single peripheral device 104 in some embodiments. In other embodiments, the peripheral device 104 can be configured to add additional cooling capabilities to device 102. Other embodiments are described and claimed.
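As a concrete, hypothetical illustration of the discovery-driven pin programming described above (the embodiments describe the protocol only at a conceptual level, and none of the identifiers below come from them), the following C sketch reads a peripheral identifier from dedicated discovery pins and reprograms the remaining pins from a capability table:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical functions a programmable interface pin might take on. */
enum pin_func {
    PIN_UNUSED, PIN_GPIO, PIN_UART_TX, PIN_UART_RX,
    PIN_I2C_SDA, PIN_I2C_SCL, PIN_FAN_CTRL
};

#define NUM_PINS 8

/* Stand-in for sampling the dedicated discovery pins that an attached
 * peripheral drives with its identifier; real hardware access is stubbed. */
static uint8_t read_discovery_pins(void)
{
    return 0x02; /* pretend a cooled dock was detected */
}

/* Illustrative capability tables keyed by the discovered peripheral ID. */
static const enum pin_func pinmaps[][NUM_PINS] = {
    [0x01] = { PIN_UART_TX, PIN_UART_RX, PIN_GPIO, PIN_GPIO,
               PIN_UNUSED, PIN_UNUSED, PIN_UNUSED, PIN_UNUSED },
    [0x02] = { PIN_I2C_SDA, PIN_I2C_SCL, PIN_FAN_CTRL, PIN_GPIO,
               PIN_UNUSED, PIN_UNUSED, PIN_UNUSED, PIN_UNUSED },
};

/* Stand-in for programming one pin of the device's pin multiplexer. */
static void program_pin(int pin, enum pin_func f)
{
    printf("pin %d -> function %d\n", pin, (int)f);
}

/* On attach: discover the peripheral, then reprogram the pin-out to match. */
static void configure_interface(void)
{
    uint8_t id = read_discovery_pins();
    if (id >= sizeof pinmaps / sizeof pinmaps[0])
        return; /* unknown peripheral: leave the pin-out unchanged */
    for (int pin = 0; pin < NUM_PINS; pin++)
        program_pin(pin, pinmaps[id][pin]);
}

int main(void)
{
    configure_interface();
    return 0;
}

A table-driven mapping like this is one plausible way a pin could change functionality when a different peripheral device 104 is attached; an actual implementation would replace the stubs with real pin-multiplexer register accesses.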
A virtual wireless adapter typically connects to a software-based wireless access point, sometimes referred to as a "SoftAP." For instance, a virtual wireless adapter may allow ad hoc communications between peer devices, such as a smart phone and a desktop computer or notebook computer. Various embodiments may use a single physical wireless adapter implemented as multiple virtual wireless adapters, multiple physical wireless adapters, multiple physical wireless adapters each implemented as multiple virtual wireless adapters, or some combination thereof. The example embodiments described herein are not limited in this respect.

The wireless transceivers 114 may include or implement various communication techniques to allow the device 102 to communicate with other electronic devices. For instance, the wireless transceivers 114 may implement various types of standard communication elements designed to be interoperable with a network, such as one or more communications interfaces, network interfaces, network interface cards (NIC), radios, wireless transmitters/receivers (transceivers), wired and/or wireless communication media, physical connectors, and so forth.

By way of example, and not limitation, communication media includes wired communications media and wireless communications media. Examples of wired communications media may include a wire, cable, metal leads, printed circuit boards (PCB), backplanes, switch fabrics, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, a propagated signal, and so forth. Examples of wireless communications media may include acoustic, radio-frequency (RF) spectrum, infrared and other parts of the spectrum, and other wireless media.

In various embodiments, the device 102 may implement different types of wireless transceivers 114. Each of the wireless transceivers 114 may implement or utilize a same or different set of communication parameters to communicate information between various electronic devices. In one embodiment, for example, each of the wireless transceivers 114 may implement or utilize a different set of communication parameters to communicate information between device 102 and any number of other devices. Some examples of communication parameters may include without limitation a communication protocol, a communication standard, a radio-frequency (RF) band, a radio, a transmitter/receiver (transceiver), a radio processor, a baseband processor, a network scanning threshold parameter, a radio-frequency channel parameter, an access point parameter, a rate selection parameter, a frame size parameter, an aggregation size parameter, a packet retry limit parameter, a protocol parameter, a radio parameter, modulation and coding scheme (MCS), acknowledgement parameter, media access control (MAC) layer parameter, physical (PHY) layer parameter, and any other communication parameters affecting operations for the wireless transceivers 114. The example embodiments described herein are not limited in this respect.

In various embodiments, the wireless transceivers 114 may implement different communication parameters offering varying bandwidths, communications speeds, or transmission ranges.
For instance, a first wireless transceiver may include a short-range interface implementing suitable communication parameters for shorter range communication of information, while a second wireless transceiver may include a long-range interface implementing suitable communication parameters for longer range communication of information.

In various embodiments, the terms "short-range" and "long-range" may be relative terms referring to associated communications ranges (or distances) for associated wireless transceivers 114 as compared to each other rather than an objective standard. In one embodiment, for example, the term "short-range" may refer to a communications range or distance for the first wireless transceiver that is shorter than a communications range or distance for another wireless transceiver 114 implemented for the device 102, such as a second wireless transceiver. Similarly, the term "long-range" may refer to a communications range or distance for the second wireless transceiver that is longer than a communications range or distance for another wireless transceiver 114 implemented for the device 102, such as the first wireless transceiver. The example embodiments described herein are not limited in this respect.

In various embodiments, the terms "short-range" and "long-range" may be relative terms referring to associated communications ranges (or distances) for associated wireless transceivers 114 as compared to an objective measure, such as provided by a communications standard, protocol or interface. In one embodiment, for example, the term "short-range" may refer to a communications range or distance for the first wireless transceiver that is shorter than 300 meters or some other defined distance. Similarly, the term "long-range" may refer to a communications range or distance for the second wireless transceiver that is longer than 300 meters or some other defined distance. The example embodiments described herein are not limited in this respect.

In one embodiment, for example, the wireless transceiver 114 may include a radio designed to communicate information over a wireless personal area network (WPAN) or a wireless local area network (WLAN). The wireless transceiver 114 may be arranged to provide data communications functionality in accordance with different types of lower range wireless network systems or protocols. Examples of suitable WPAN systems offering lower range data communication services may include a Bluetooth system as defined by the Bluetooth Special Interest Group, an infra-red (IR) system, an Institute of Electrical and Electronics Engineers (IEEE) 802.15 system, a DASH7 system, wireless universal serial bus (USB), wireless high-definition (HD), an ultra-wideband (UWB) system, and similar systems. Examples of suitable WLAN systems offering lower range data communications services may include the IEEE 802.xx series of protocols, such as the IEEE 802.11a/b/g/n series of standard protocols and variants (also referred to as "WiFi"). It may be appreciated that other wireless techniques may be implemented. The example embodiments described herein are not limited in this respect.

In one embodiment, for example, the wireless transceiver 114 may include a radio designed to communicate information over a wireless local area network (WLAN), a wireless metropolitan area network (WMAN), a wireless wide area network (WWAN), or a cellular radiotelephone system.
Another wireless transceiver may be arranged to provide data communications functionality in accordance with different types of longer range wireless network systems or protocols. Examples of suitable wireless network systems offering longer range data communication services may include the IEEE 802.xx series of protocols, such as the IEEE 802.11a/b/g/n series of standard protocols and variants, the IEEE 802.16 series of standard protocols and variants, the IEEE 802.20 series of standard protocols and variants (also referred to as "Mobile Broadband Wireless Access"), and so forth. Alternatively, the wireless transceiver 114 may include a radio designed to communicate information across data networking links provided by one or more cellular radiotelephone systems. Examples of cellular radiotelephone systems offering data communications services may include GSM with General Packet Radio Service (GPRS) systems (GSM/GPRS), CDMA/1xRTT systems, Enhanced Data Rates for Global Evolution (EDGE) systems, Evolution Data Only or Evolution Data Optimized (EV-DO) systems, Evolution For Data and Voice (EV-DV) systems, High Speed Downlink Packet Access (HSDPA) systems, High Speed Uplink Packet Access (HSUPA) systems, and similar systems. It may be appreciated that other wireless techniques may be implemented. The example embodiments described herein are not limited in this respect.

Although not shown, device 102 may further include one or more device resources commonly implemented for electronic devices, such as various computing and communications platform hardware and software components typically implemented by a personal electronic device. Some examples of device resources may include without limitation a co-processor, a graphics processing unit (GPU), a chipset/platform control logic, an input/output (I/O) device, computer-readable media, network interfaces, portable power supplies (e.g., a battery), application programs, system programs, and so forth. The example embodiments described herein are not limited in this respect.

In the illustrated example embodiment shown in FIG. 1, the processor(s) 106 may be communicatively coupled to one or more of the memory 108, logic 110, power source 112, transceiver 114, radio 116, antenna 118 and/or SICL 120. The memory unit 108 may store the logic 110 arranged for execution by the processor 106 to enable processing capabilities. The logic 110 may generally provide features to enable any of the functionality described herein. Other embodiments are described and claimed.

The peripheral device 104 may include, for example, an I/O peripheral device designed to interact with device 102. In some embodiments, the I/O devices may include but are not limited to a display, a speaker, a microphone, a projector, a camera, a keyboard, one or more additional input devices (such as a touchpad, touchscreen), and one or more sensors (such as an accelerometer, gyroscope, global positioning system (GPS) logic, infrared motion detector, etc.). Although the peripheral device 104 shown in FIG. 1 has a number of elements in a certain topology, it may be appreciated that the peripheral device 104 may include more or fewer elements in alternate topologies as desired for a given implementation. For example, any number, type, or arrangement of an I/O device, including devices not shown in FIG.
1, could be used and still fall within the described and claimed embodiments.

The one or more I/O devices may be arranged to provide functionality to the peripheral device 104 and/or the device 102 including but not limited to capturing images, exchanging information, capturing or reproducing multimedia information, receiving user feedback, or any other suitable functionality. Non-limiting examples of input/output devices include a camera, QR reader/writer, bar code reader, buttons, switches, input/output ports such as a universal serial bus (USB) port, touch-sensitive sensors, pressure sensors, a touch-sensitive digital display, and the like. The example embodiments described herein are not limited in this respect.

The peripheral device 104 may include one or more displays in some embodiments. The displays may include any digital display device suitable for an electronic device. For instance, the displays may be implemented by a liquid crystal display (LCD) such as a touch-sensitive, color, thin-film transistor (TFT) LCD, a plasma display, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a cathode ray tube (CRT) display, or other type of suitable visual interface for displaying content to a user of the device 102 when used in connection with the device 102. The displays may further include some form of a backlight or brightness emitter as desired for a given implementation.

In various embodiments, the displays may include touch-sensitive or touchscreen displays. A touchscreen may include an electronic visual display that is operative to detect the presence and location of a touch within the display area or touch interface. In some embodiments, the display may be sensitive or responsive to contact of the display with a finger or hand. In other embodiments, the display may be operative to sense other passive objects, such as a stylus or electronic pen. In various embodiments, displays may enable a user to interact directly with what is displayed, rather than indirectly with a pointer controlled by a mouse or touchpad. Other embodiments are described and claimed.

While not limited in this respect, in some embodiments the device 102 may include one or more of a wearable device, a control device, a display device, an audio/video (A/V) device, or a toy device such as a remote control car or a robot device. For example, the device may include a smartwatch device, a TV remote control device, a smart speaker, etc. One of ordinary skill in the art will understand that any suitable device could be arranged as a peripheral device 104 to accommodate device 102 and, as such, the embodiments are not limited to the examples described herein. In some embodiments, the peripheral device 104 may include a dumb device. More particularly, the device itself may not include components as shown in FIG. 1 as forming part of device 102. For example, peripheral device 104 may not include its own processor, memory, power source, transceiver, etc. Instead, the peripheral device 104 may rely on a device like device 102 for power and processing capabilities. In this manner, any number of peripheral devices could be produced inexpensively and each could be powered and provided with computing capabilities by a common device. Other embodiments are described and claimed.

While not shown herein, in some embodiments, the device 102 may include an independent power supply (e.g.,
separate and distinct from the power supply of the device) that may power one or more of the components of the peripheral device 104 and/or one or more components of the device 102. Other embodiments are described and claimed.

In example embodiments, device 102 may include power management logic 126. The power management logic 126 may include hardware circuitry, software application(s), firmware code, or a combination of the above types of logic. Power management logic 126 may determine a power state of the device 102, including the processor 106, and potentially other components within device 102. Power management logic 126 can also be configured to scale the power and performance of the device up and down within a specified range. This capability is provided in currently available processor systems. Power management logic 126 may utilize many different inputs to determine a context of the device 102. Context of the device 102 may be considered in determining a power management policy, e.g., what power state, condition, or level (e.g., wake/sleep state) each component is to be in at any given time. Power management logic 126 may utilize sensor input from sensor(s) 122 and SICL 120, and/or activity levels from a user or from other applications running on an OS such as OS 112A, among other inputs to determine the context of the device 102. Additionally, power management logic 126 may utilize environmental sensor input from sensor(s) 122 to measure the ambient temperature, humidity, altitude/elevation, GPS location, time of day, etc. This input can be used to further refine the context used by the power management logic 126 to configure the operation of the device 102. Moreover, the context can be further refined to include user identity information and related profile information to configure the operation of the device 102 based on the particular preferences of a specific user. Thus, the power management logic 126 can use the context to configure the operation of the device 102, for example, to operate at a particular processing performance level and/or at a particular Tskin temperature based on the preferences of a specific user of the device 102.

In example embodiments, device context-based power and performance management (PM) is described. Device context may refer to a combination of device position (e.g., vertical, horizontal, horizontal with air flow, vertical with air flow), device contact state (e.g., whether the device is free from human contact or in physical contact with a human), and docking state (whether or not the device is docked, and whether active cooling is being provided). Different device contexts may influence device platform thermal constraints. The PM policy may react to the device context changes and may manage power and thermal state decisions, e.g., to enhance or otherwise modify processor performance or to modify the Tskin of the device. Additionally, a device context-based PM infrastructure may monitor performance and power consumption of software applications and may offer suggestions (e.g., via a user interface) to an end user, e.g., to improve performance or battery life of the platform. Additionally, the user interface may provide to the user an indication of whether the performance or the battery life of the platform is improved responsive to user-executed changes to the device context, e.g., change in device orientation, contact with a human being, proximity to active cooling, flow rate of a coolant proximate to the device, etc.
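By way of illustration only, the mapping from the device context just described to power and thermal settings might be sketched as follows. The names (DeviceContext, power_policy) and all numeric limits are hypothetical assumptions chosen for illustration, not values taken from this disclosure.

# Hypothetical sketch: deriving a PM policy from a device context that
# combines orientation, human contact state, and docking state.
from dataclasses import dataclass

@dataclass
class DeviceContext:
    orientation: str        # "horizontal" or "vertical"
    human_contact: bool     # inferred from SAR and/or motion sensors
    docked: bool
    active_cooling: bool    # dock provides forced airflow

def power_policy(ctx: DeviceContext) -> dict:
    """Map a device context to illustrative power and Tskin limits."""
    if ctx.docked and ctx.active_cooling:
        # Forced airflow permits a larger thermal envelope.
        return {"power_limit_w": 15.0, "tskin_limit_c": 48.0}
    if ctx.human_contact:
        # Keep the skin temperature comfortable while the device is held.
        return {"power_limit_w": 5.0, "tskin_limit_c": 41.0}
    if ctx.orientation == "vertical":
        # Convection along the vertical surfaces improves heat transfer.
        return {"power_limit_w": 9.0, "tskin_limit_c": 45.0}
    return {"power_limit_w": 7.0, "tskin_limit_c": 43.0}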
As one example of the indication mentioned above, the user interface may include a color indicator that may correspond to a thermal characteristic of the device, e.g., operating temperature of a processor of the device. The device 102 platform may have a thermal dissipation limit (TDP), e.g., a high temperature operating characteristic of the device 102 that is typically higher than a temperature of an exterior (skin) of the device 102 (Tskin). By relaxing the Tskin to exceed determined constraints, higher performance may be extracted from the system. By utilizing device contextual information to determine whether to relax Tskin, the PM policy may improve the performance of the system.

Modern data processors may have significant capabilities for performance scaling. For instance, many processors can deliver significantly higher performance than their nominal performance level if the thermal constraints of the platform can be increased from a configured TDP (cTDP). The graph shown in FIG. 5 illustrates an example of performance scalability for the same workload on a given processor under different usage scenarios. Some workloads scale very well with an increase in platform cTDP. These workloads can benefit substantially from active cooling docks, different system orientations, etc. Contextual PM leverages the performance scalability of a processor and may result in significant performance gains, depending on the context of the device.

In example embodiments, a current platform PM policy may be determined as follows: PM policy = function (device orientation, device contact state, device dock state). Variables that are input to determine the PM policy may include, but are not limited to: device orientation, device contact state, and device dock state. Each of device orientation, device contact state, and device dock state is further described below.

Device Orientation: In example embodiments, an orientation of the device may be utilized to help determine the PM policy. Device orientation with respect to an orientation standard (e.g., horizontal, vertical, gravitational vertical, or other orientation standard) may be inferred based on position sensors such as accelerometers, inclinometers, gyroscopes, etc. For example, the position sensors can provide information as to whether the device is substantially in a horizontal orientation or a vertical orientation with respect to an orientation standard. For instance, the device may be positioned horizontally on, e.g., a table or desk with no air flow to cool the device. In some embodiments, the device may be positioned horizontally or angled with respect to horizontal, permitting airflow that enables some heat transfer to surrounding air. An illustrative sketch of inferring orientation from sensor data follows below.

In a vertical orientation, the device is positioned vertically due to, e.g., a user holding the device, or the device being leaned against a wall, or the device being coupled to a dock, or by another means of positioning the device. The vertical orientation permits surrounding hot air to rise due to convection, e.g., between a core heat sink and a top-plate, and may enable heat to transfer from the processor to an exterior skin of the device. In a vertical position with active cooling, the device may be attached to a dock or base with active cooling capability.
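By way of illustration only, coarse device orientation might be inferred from an accelerometer's gravity vector as sketched below. The axis convention, the 30-degree threshold, and the function name are hypothetical assumptions.

# Hypothetical sketch: classifying orientation from acceleration (in g units).
import math

def classify_orientation(ax: float, ay: float, az: float) -> str:
    """Classify the device as "horizontal" or "vertical" from gravity."""
    g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    # Angle between the display normal (z-axis) and the gravity vector.
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, abs(az) / g))))
    return "horizontal" if tilt_deg < 30.0 else "vertical"

print(classify_orientation(0.05, 0.10, 0.99))  # lying flat -> "horizontal"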
Active cooling, as noted above, refers to, e.g., one or more fans, or a mechanism to circulate a heat conductive agent (e.g., gas or liquid) that can remove heat from the device, etc., to reduce Tskin of the platform.

Device Contact State: Physical contact with a human being (e.g., user) may be inferred through analysis of data from one or more sensors such as a Specific Absorption Rate (SAR) sensor, accelerometer, gyro sensor, touch sensor, etc. The SAR sensor can output data indicating whether an object that is in close proximity to the sensor is human skin, or wood, air, etc. Some SAR sensors operate according to capacitive proximity measurement. An advantage of knowing whether or not a human is proximate is that when the human is not holding the device, Tskin can be allowed to scale higher, permitting increased performance of the device. In other cases, device performance and Tskin can be managed to achieve a comfortable temperature level for a user if a user is in contact with the device.

Because a SAR sensor typically has a granularity of a few centimeters of accuracy, it may not be reliable to use as a sole sensor by which to determine whether the device is in close proximity to, or physical contact with, a human being. Alternatively, a plurality of sensor measurements may be recorded from another sensor (e.g., an inclinometer that determines orientation with respect to an orientation standard) over a defined time period. Because it is very difficult for humans to hold an item still (e.g., at a fixed orientation) without moving or touching a device display screen, measurements recorded over time may be analyzed to determine whether the user is holding the device. For example, the measurements recorded may be data that includes a plurality of orientation measurements received from the inclinometer, each orientation measurement taken at a distinct time over a defined time period. The measurements may be recorded in serial fashion, e.g., periodically over the defined time period. The logic may determine whether the apparatus is in physical contact with the user based at least in part on a comparison of a standard deviation of the sensor measurements to a threshold value, e.g., a threshold standard deviation. A sketch of this variance test is provided below.

Alternatively, by receiving data from several sensors (e.g., two or more of SAR, accelerometer, inclinometer, gyroscope, etc.), physical proximity to a human may be able to be inferred with greater accuracy than from SAR data alone. For instance, data from each of two or more sensors may be evaluated, e.g., statistically using standard deviation of the respective data from each of the sensors, to infer whether there is physical contact between the device and a human (e.g., the user).

In some embodiments, sensor data may be received from a "virtual sensor," e.g., data received from several sensors (e.g., "fused sensors"), which in combination emulates another sensor. For example, sensor data received from an accelerometer, a gyroscope, and a compass may be analyzed statistically. In one embodiment, data received (e.g., periodically) from each of the fused sensors during a defined time period may be analyzed to determine a corresponding standard deviation.
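By way of illustration only, the single-sensor variance test described above might be sketched as follows, before turning to the combined fused-sensor threshold discussed next. The threshold value and the function name are hypothetical assumptions.

# Hypothetical sketch: inferring human contact from the variability of
# periodic inclinometer readings over a defined time period.
import statistics

def is_held_by_user(orientation_samples, threshold_std_dev=0.5):
    """Return True if readings vary enough to suggest a human is holding the
    device (a resting device barely moves; a hand-held device jitters)."""
    if len(orientation_samples) < 2:
        return False
    return statistics.stdev(orientation_samples) > threshold_std_dev

print(is_held_by_user([10.00, 10.01, 9.99, 10.00]))  # resting -> False
print(is_held_by_user([10.0, 11.2, 9.1, 10.7]))      # held -> True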
Returning to the fused-sensor approach, the standard deviations of each of the fused sensors, as determined by statistical analysis, may be combined and compared with an overall threshold (e.g., an overall standard deviation threshold) to determine whether the device is being held by a human being.

Device Dock State: Diverse instantiations of docks and peripheral devices may be available to a system to deliver an enhanced user experience. By virtue of placing the electronic device in a dock or coupling to a docking interface, the cTDP of the platform may change and with it, the scope to deliver a contextual performance boost. Information on the type of dock (if any) attached may be obtainable from an embedded controller (EC) on the platform. A dock with active cooling refers to a dock that has fans or other mechanisms to actively cool the platform by removing heat from the system. On docks with fans for active cooling, contextual PM policy may be able to adjust air speed (e.g., fan speed) dynamically, depending on platform requirements and dock capabilities. Two examples of fan control policies are described below.

In one example of fan control policies, fan speed of a fan on the dock can be increased linearly with performance requirements of a system. Fan speed can also be controlled non-linearly, or using a smart controller with feedback. If the system is performing at close to full utilization, then fan speed can be ramped up to the maximum levels to support cooling of the platform. Fan speed of the dock can be adjusted depending on ambient noise. For example, if the system is in an office environment with minimum ambient noise, fan speed may be lowered to be less audible to the end user. In circumstances where the ambient noise is significant, fan speed can be ramped up to improve cooling of the platform. It is to be noted that the dock's effectiveness is dependent to a large degree on the system design and thermal characteristics of the (detachable) electronic device. For example, if a battery is placed at a center of the system to absorb heat generated by a system on a chip (SoC), thermal conductivity at edges of the device may be improved as compared with other battery placements. Docks with active cooling may be designed in conjunction with system design to enhance platform cooling for the specific device. Undocked refers to a system that is not currently placed in any dock nor coupled to a docking interface and is in a stand-alone mode.

In another example of fan control policies, a hybrid outer plate of the computing system platform can be provided in which plastic is used around edges of the device and metal is used in the interior portions, so that heat becomes less of a factor around the edges. For example, plastic typically feels cooler to a user than metal does, so the Tskin can be raised in the portions of the device where there is plastic.

FIG. 2 is a diagram depicting states of a state machine, according to an example embodiment. A horizontal/non-human contact state 200 can be determined by system sensors, such as a SAR sensor and/or accelerometer data that provide information to PM logic. A horizontal/human contact state 202 can be determined by system sensors such as SAR, gyroscope, and/or accelerometer, providing information to the PM logic. A vertical/human contact state 204 can be determined by system sensors such as SAR, gyroscope, and/or accelerometer, providing information to the PM logic.
A vertical/no active cooling (non-human contact) docked state 206 can be determined by a docking event that provides information to the PM logic. A vertical/active cooling docked state 208 can be determined by a docking event that provides information to the PM logic. Data provided by sensors may be received and transmitted through sensor and I/O (input/output) control logic, e.g., the SICL 120 of FIG. 1.

FIG. 3 is a block diagram of logic utilized to provide contextual information to power management logic of a device, according to an example embodiment. Device sensors 330 may send sensor information input to logic 300, e.g., through a SICL (e.g., SICL 120 of FIG. 1), which in example embodiments may include firmware logic 306 that can include an embedded controller 302 that may be configured through a basic input/output system (BIOS) 304. The sensor information is then sent to a driver layer 308 that may include sensor drivers 310, human interface device (HID)/Advanced Configuration and Power Interface (ACPI) drivers 312, and/or a power management driver 314. The driver layer 308 may notify an applications layer 316 registered for the input with the data from the sensors 330. Contextual PM application logic 318 may receive the sensor data to manage a state machine whose state depends on device context. The contextual PM application logic 318 may provide, to a PM framework 320, power state recommendations, instructions, or commands for one or more components of the device, based on the device context that includes one or more parameters, e.g., whether the device is in physical contact with a human (e.g., user), orientation of the device, proximity to external air flow or other heat removing mechanisms, speed of external air flow, etc. The PM framework 320 may adjust power supplied to the one or more components of the device based on the recommendations, instructions, or commands received from the PM application logic 318. Additionally, a contextual PM user interface 322 may be user-visible to present recommendations to a user from, e.g., the PM application logic 318, and to allow a user to manually change the PM policy. Additionally, the contextual PM user interface 322 may be operable to provide to the user an indication as to the effectiveness in heat removal from the device based on changes implemented by the user or the system.

FIG. 4 is a flow diagram of a method including logic flow, according to an example embodiment. Logic flow 400 may be managed by processing logic (such as the logic 300 described above), which can be one or more of hardware, software, and firmware logic. At block 402, the processing logic reads a current performance power limit. Continuing to block 404, the processing logic retrieves sensor data, e.g., from one or more context sensors such as SAR, gyroscope, inclinometer, accelerometer, etc. Advancing to block 406, the processing logic computes a new power limit of the device based on the sensor data. Proceeding to decision block 408, processing logic determines whether the current power limit is equal to the computed new power limit. If the current power limit is equal to the new power limit, the process returns to block 402.
If the current power limit is not equal to the new power limit, continuing to block 410, the processing logic sets the current power limit to the new power limit and returns to block 402. An illustrative sketch of this loop is provided below.

In some embodiments, a contextual PM application can observe performance and battery life metrics corresponding to the application and system level components and can provide suggestions to end users regarding adjustments to improve performance or battery life. For example, a user may be holding a portable device horizontally and playing a game. The contextual PM application may determine that a horizontal orientation is not optimal for heat dissipation and may suggest to the user (e.g., via a user interface) that the user hold the system more vertically, or that the user place the system on a vertically oriented dock. In some embodiments, the suggestion may be presented in a non-intrusive fashion, e.g., a visual indicator on a taskbar that changes color (from green to yellow or to amber) to indicate that the device may be warmer in its current device context than if the device context were changed, e.g., by change of device orientation, or change of contact with the user, or introduction of active cooling. Such a visual indicator may influence the user to adjust one or more usage parameters (e.g., device orientation, contact with the user, external airflow or other external cooling) to improve device performance and/or battery life.

Included herein is a set of logic flows representative of example methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein are shown and described as a series of acts, those of ordinary skill in the art will understand and appreciate that the methodologies are not limited by the order of acts. Some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from those shown and described herein. For example, those of ordinary skill in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation. A logic flow may be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. The example embodiments disclosed herein are not limited in this respect.

The various elements of the device 102 as previously described with reference to the figures may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
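Referring back to the logic flow 400 of FIG. 4, the loop of blocks 402 through 410 might be sketched as follows. This is a minimal, hypothetical sketch; the platform and sensors objects and their method names stand in for platform-specific reads and writes and are assumptions, not an actual platform API.

# Hypothetical sketch of logic flow 400: read the current power limit,
# compute a new limit from sensor data, and apply it only if it changed.
import time

def run_power_limit_loop(platform, sensors, period_s=1.0):
    while True:
        current_limit = platform.read_current_power_limit()    # block 402
        sensor_data = sensors.read_context_sensors()           # block 404
        new_limit = platform.compute_power_limit(sensor_data)  # block 406
        if new_limit != current_limit:                         # block 408
            platform.set_power_limit(new_limit)                # block 410
        time.sleep(period_s)  # loop back to block 402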
Returning to the elements of the device 102, examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. However, determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

In particular example embodiments described herein, an accessory in the form of a dock with active cooling (e.g., fan or blower) can be provided. As described in more detail below, an adaptive performance capability provided by a contextual PM system as described above can detect the presence of active cooling in a dock and allow the SOC in a system platform of an electronic device inserted into the dock to scale up to a higher processing performance level based on a revised thermal envelope determined by surface cooling of the inserted system. The forced airflow over the exterior of the electronic system inserted into the active cooling dock can compensate for increased thermal dissipation due to the SOC operating at higher performance levels, without violating the skin temperature requirements for the inserted electronic system. As a result, the active cooling can decrease the skin temperature of the inserted electronic system and the temperature of the processor therein.

In an example embodiment, the inserted system can be, for example, a 2-in-1 detachable computing system, which may include, for example, an Intel® Core SOC. Current 2-in-1 computing systems are designed to be thin, closed, fan-less systems, with limited thermal headroom, which constrains the SOC performance to levels significantly below their maximum allowable limits. An example embodiment described herein can provide an accessory device for the 2-in-1 computing system, or other electronic device, which provides an increased thermal envelope allowing the SOC in the 2-in-1 computing system to scale up to higher performance levels. This extra processing performance can be configured and used by the user as needed, without compromising the thin and light 2-in-1 computing experience.

Referring now to FIG. 6, an example embodiment of a dock 610 configured as an attachable mobile base or stationary dock with active cooling is illustrated. As shown in FIG. 6, the dock 610 uses an air-moving device 612 (e.g., a fan), which provides forced airflow over a passively cooled electronic device (e.g., a tablet) 620 removably inserted into an opening in the dock 610. In the configuration shown in FIG. 6, the air-moving device 612 can pull air into an inlet of the dock 610 and direct the airflow laterally across the front and back of the inserted electronic device 620 as shown. The lateral airflow generally denotes the movement of air adjacent and parallel to the larger dimension surfaces of the inserted electronic device 620.
The airflow over the larger dimension surfaces of the inserted electronic device 620 (e.g., the front of the touch panel and back of the chassis) increases the heat transfer rate of the electronic device 620 surfaces by converting natural convection into forced convection heat transfer. As a result, the electronic device 620 can be cooled more efficiently. Baffles can be installed in the dock 610 and used in conjunction with the air-moving device 612 to orient the airflow along the larger dimension surfaces of the inserted electronic device 620.

As provided in most docking systems, the dock 610 can also include an electrical interface 614 into which the electronic device 620 can be plugged while inserted into the dock 610. The electrical interface 614 can be as simple as a mere power interface for charging the electronic device 620 while the device is inserted into the dock 610. In other embodiments, the electrical interface 614 can be a power and data interface to couple the electronic device 620 with electrical devices, ports, controllers, or processors provided in the dock 610. In still other embodiments, the electrical interface 614 can be a wireless data interface to couple the electronic device 620 wirelessly with electrical devices, ports, controllers, or processors provided in the dock 610. In any of these example embodiments, the electrical interface 614 can be used by a dock detection subsystem in the electronic device 620 to detect when the electronic device 620 is inserted in the dock 610. A data interface (if any) provided by the electrical interface 614 can further provide an exchange of data between the dock 610 and the inserted device 620. The exchanged data can include information specifying a type of dock 610, a profile of dock 610, or other information the electronic device 620 can use to determine a context associated with the particular dock 610. As a result, the contextual PM system of an example embodiment can configure the performance level of a processor on the electronic device 620 based on the particular capabilities of the dock 610 into which the electronic device 620 is inserted. These features of an example embodiment are described in more detail below in connection with FIG. 7.

FIG. 7 illustrates a high level architecture 800 of a DPTF (Dynamic Platform Thermal Framework) integrated with adaptive performance, according to an example embodiment. Using the techniques described herein, an adaptive performance solution can be integrated into a DPTF system to enable adaptive thermal and performance management in electronic devices, particularly mobile computing devices. As such, the architecture can be denoted an adaptive thermal and performance management framework. The adaptive thermal and performance management framework of an example embodiment can include logic utilized to provide contextual information to power management logic of an electronic device, according to an example embodiment.

As shown in FIG. 7, device sensors may send sensor information input 841 to logic 830, e.g., through a SICL (e.g., SICL 120 of FIG. 1). In the example embodiment shown in FIG. 7, the logic 830 can be a software layer, which includes a plurality of drivers, such as sensor drivers 838, fan driver 836, and SOC driver 834. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of drivers can be similarly provided as part of logic 830. The sensor driver 838 can receive the sensor information 841 from the device sensors.
The drivers can interface with the DPTF logic 832. In the example embodiment, the DPTF logic 832 or adaptive thermal and performance management subsystem represents a software layer module configured to implement the context-based power and performance management features as described above. In particular, the DPTF logic 832 can receive sensor information 841, determine a corresponding context, and implement an appropriate policy for any of a plurality of DPTF participants. The DPTF participants can be in data communication with DPTF logic 832 via DPTF participant logic blocks 845 and DPTF interface 847, which in example embodiments may include firmware logic that can be configured through a basic input/output system (BIOS) 840. The BIOS 840 can also include a dock detection logic block 843, which can detect the insertion or removal of the electronic device into or from the dock 610 as described above. The DPTF logic 832 may receive the sensor data, dock detection data, SOC state, fan state, and DPTF participant information as described above for use in managing a state machine whose state depends on device context as developed from these various inputs. The DPTF logic 832 can use this determined device context to select from a plurality of policy engines 820 that conform to the current device context.

In an example embodiment, these policy engines can include an active cooling policy, a passive policy, a critical policy, and an adaptive performance policy. In a system without adaptive performance, the system can use the active cooling policy, the passive policy, and the critical policy to successively deal with thermal issues. For example, consider a laptop computer with a built-in cooling fan. If thermals rise, the DPTF logic 832 can first attempt to use the active cooling policy to deal with the thermal issues, because this type of resolution results in the least amount of performance reduction. If the active cooling policy cannot resolve the thermal issues based on detected temperature levels, the DPTF logic 832 can apply the passive policy, which lowers power limits of various system components, like the processor, and therefore can impact system performance. Finally, if the DPTF logic 832 detects a thermal runaway situation, which cannot be addressed by the active cooling and passive policies, the DPTF logic 832 can take drastic actions with the critical policy. These actions can include causing the system to hibernate or to shut completely down. Typically, such a critical situation may occur if the system has been left running unattended in a closed space.

In a system supporting an adaptive performance policy (AP policy) as described herein, the AP policy is responsible for setting revised power and skin temperature (Tskin) limits, based on the device context. Every time the device context changes, the AP policy can be activated to change the system power and thermal parameters relevant to the context. After the new parameters are set, the active cooling policy, the passive policy and the critical policy as described above continue to be used to manage system thermal issues. The device context (e.g., docked with active cooling) can enable the adaptive performance policy to set higher thresholds for Tskin and power/performance levels. The active cooling policy can then maintain the new thresholds by managing the cooling device.
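By way of illustration only, the policy escalation just described might be sketched as follows. The temperature thresholds are hypothetical assumptions; an actual platform would derive them from its configured limits.

# Hypothetical sketch: successive DPTF-style policy escalation. The active
# cooling policy is tried first (least performance cost), then the passive
# policy, and finally the critical policy.
def select_thermal_policy(temp_c,
                          active_limit=70.0,
                          passive_limit=85.0,
                          critical_limit=100.0):
    if temp_c >= critical_limit:
        return "critical"        # hibernate or shut the system down
    if temp_c >= passive_limit:
        return "passive"         # lower component power limits
    if temp_c >= active_limit:
        return "active_cooling"  # e.g., raise the fan speed
    return "none"

print(select_thermal_policy(88.0))  # -> "passive"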
It can be expected that an electronic device inserted into a dock with active cooling can be configured to ramp up (increase) to a higher level of power and/or performance based on the improved ability of the device to dissipate heat. In contrast, it can be expected that the electronic device can be configured to maintain a current level of power and/or performance, or perhaps ramp down (decrease) to a lower level of power and/or performance, based on the decreased ability of the device to dissipate heat. Similarly, the critical policy can be used to cause the electronic device to ramp down to a lower level of power and/or performance more quickly based on a detection of a level of heat nearing a threshold limit. Moreover, the critical policy can be used to shut down systems when components reach critical thresholds.

The adaptive performance policy can cause the DPTF logic 832 to periodically read and set appropriate power and performance values for the subsystems of the electronic device when the device is subject to a changing thermal environment or variable processing demands. The DPTF logic 832 can dynamically respond to the real-time demands of the processing load and the thermal environment in which the electronic device is operating. Thus, the DPTF logic 832 may provide power and/or performance state recommendations, instructions or commands for one or more subsystems or components of the electronic device, based on the device context that includes one or more parameters, e.g., whether the device is in physical contact with a human (e.g., user), orientation of the device, proximity to external air flow or other heat removing mechanisms, speed of external air flow, etc. The DPTF logic 832 may adjust power supplied to or performance levels of the one or more subsystems or components of the electronic device based on the power/performance and Tskin limits set by the adaptive performance policy, relevant to the particular device context. Additionally, an adaptive performance user interface 810 may be provided and made user-visible to present recommendations to a user from, e.g., the DPTF logic 832, and to allow a user to manually change the PM policy. Additionally, the adaptive performance user interface 810 may be operable to provide to the user an indication as to the effectiveness in heat removal from the device based on the changes implemented by the user or the system.

The various embodiments described herein are unique in a variety of ways. In particular, an example embodiment can implement adaptive performance, wherein a software solution can detect the presence of a dock with active cooling and dynamically scale up SOC power and performance levels of electronic devices inserted therein. Users of the inserted electronic devices can benefit from approximately a 30% increase in SOC performance as compared to device performance in a standard mode. For example, on an Intel® Core M-Series processor based 2-in-1 detachable electronic system, which is typically less than 8 mm thick, closed, and fan-less, a significant increase in the thermal envelope can be achieved, while maintaining a safe device skin temperature, when the device is docked with active cooling.
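By way of illustration only, the adaptive performance policy's reaction to a dock event might be expressed as sketched below. The numeric limits are hypothetical assumptions chosen only to show the direction of the adjustment.

# Hypothetical sketch: the AP policy revising power and Tskin limits when
# the dock context changes; the other policies then maintain the new limits.
def apply_adaptive_performance(docked, active_cooling):
    if docked and active_cooling:
        # Forced airflow over the chassis permits a larger thermal envelope.
        return {"soc_power_limit_w": 12.0, "tskin_limit_c": 50.0}
    # Undocked or passively cooled: fall back to a conservative envelope.
    return {"soc_power_limit_w": 6.0, "tskin_limit_c": 43.0}

# A dock detection event (e.g., via dock detection logic block 843) would
# trigger this adjustment.
limits = apply_adaptive_performance(docked=True, active_cooling=True)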
Moreover, the adaptive thermal and performance management features of the various example embodiments described herein can provide, among others, the following additional advantages:

Dynamic and instant SOC performance scalability without any system redesign.

The adaptive thermal and performance management features can be made available as needed by the user, without unduly compromising battery life. In particular, by adapting maximum power limits on an as-needed basis, the user can save on battery life.

No thermal considerations need to be factored into an original electronic system design to still obtain the advantages of the increased SOC TDP as disclosed herein.

The features can be implemented for any fan-less electronic device without any impact or change to the internal design of the device, unlike internal cooling, which requires vents/ways to direct airflow.

It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of alternative usage models can also be employed. Thus, the various embodiments described herein provide systems and methods for adaptive thermal and performance management in electronic devices.

The example embodiments described herein provide a technical solution to a technical problem. The various embodiments improve the functioning of the electronic device by providing systems and methods for adaptive thermal and performance management in an electronic device. The various embodiments also serve to transform the state of various system components based on a dynamically determined system context. Additionally, the various embodiments effect an improvement in a variety of technical fields including the fields of dynamic data processing, thermal regulation, mobile computing, information sharing, and mobile communications.

Referring now to FIG. 8, a processing flow diagram illustrates an example embodiment of a method 1100 as described herein. The method 1100 of an example embodiment includes: providing a processor with a plurality of selectable performance levels and a sensor in an electronic device (processing block 1110); receiving sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow (processing block 1120); determining a device context from the sensor information (processing block 1130); and dynamically modifying the performance level of the processor by implementing one of a plurality of selectable performance levels of the processor based on the device context (processing block 1140).

FIG. 9 shows a diagrammatic representation of a machine in the example form of an electronic device, such as a mobile computing and/or communication system 700 within which a set of instructions when executed and/or processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine may be a personal computer (PC), a laptop computer, a tablet computing system, a Personal Digital Assistant (PDA), a cellular telephone, a smartphone, a web appliance, a set-top box (STB), a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) or activating processing logic that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" can also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions or processing logic to perform any one or more of the methodologies described and/or claimed herein.

The example mobile computing and/or communication system 700 includes a data processor 702 (e.g., a System-on-a-Chip [SoC], general processing core, graphics core, and optionally other processing logic) and a memory 704, which can communicate with each other via a bus or other data transfer system 706. The mobile computing and/or communication system 700 may further include various input/output (I/O) devices and/or interfaces 710, such as a touchscreen display, an audio jack, and optionally a network interface 712. In an example embodiment, the network interface 712 can include one or more radio transceivers configured for compatibility with any one or more standard wireless and/or cellular protocols or access technologies (e.g., 2nd (2G), 2.5G, 3rd (3G), and 4th (4G) generation, and future generation radio access for cellular systems, Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), LTE, CDMA2000, WLAN, Wireless Router (WR) mesh, and the like). Network interface 712 may also be configured for use with various other wired and/or wireless communication protocols, including TCP/IP, UDP, SIP, SMS, RTP, WAP, CDMA, TDMA, UMTS, UWB, WiFi, WiMax, Bluetooth, IEEE 802.11x, and the like. In essence, network interface 712 may include or support virtually any wired and/or wireless communication mechanisms by which information may travel between the mobile computing and/or communication system 700 and another computing or communication system via network 714.

The memory 704 can represent a machine-readable medium on which is stored one or more sets of instructions, software, firmware, or other processing logic (e.g., logic 708) embodying any one or more of the methodologies or functions described and/or claimed herein. The logic 708, or a portion thereof, may also reside, completely or at least partially within the processor 702 during execution thereof by the mobile computing and/or communication system 700. As such, the memory 704 and the processor 702 may also constitute machine-readable media. The logic 708, or a portion thereof, may also be configured as processing logic or logic, at least a portion of which is partially implemented in hardware. The logic 708, or a portion thereof, may further be transmitted or received over a network 714 via the network interface 712. While the machine-readable medium of an example embodiment can be a single medium, the term "machine-readable medium" should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and computing systems) that store the one or more sets of instructions.
The term "machine -readable medium" can also be taken to include any non-transitory medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term "machine-readable medium" can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.With general reference to notations and nomenclature used herein, the description presented herein may be disclosed in terms of program procedures executed on a computer or a network of computers. These procedural descriptions and representations may be used by those of ordinary skill in the art to convey their work to others of ordinary skill in the art.A procedure is generally conceived to be a self-consistent sequence of operations performed on electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. These signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Further, the manipulations performed are often referred to in terms such as adding or comparing, which operations may be executed by one or more machines. Useful machines for performing operations of various embodiments may include general-purpose digital computers or similar devices. Various embodiments also relate to apparatus or systems for performing these operations. This apparatus may be specially constructed for a purpose, or it may include a general-purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The procedures presented herein are not inherently related to a particular computer or other apparatus. 
Various general-purpose machines may be used with programs written in accordance with teachings herein, or it may prove convenient to construct more specialized apparatus to perform methods described herein.

In various embodiments as described herein, example embodiments include at least the following examples.

An electronic device comprising: a processor with a plurality of selectable performance levels; a sensor; and an adaptive thermal and performance management subsystem in data communication with the processor and the sensor, the adaptive thermal and performance management subsystem to: receive sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow; determine a device context from the sensor information; and dynamically modify the performance level of the processor by implementing one of the plurality of selectable performance levels of the processor based on the device context.

The electronic device as claimed above wherein the adaptive thermal and performance management subsystem being further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.

The electronic device as claimed above wherein the sensor information including information for determining if the electronic device is inserted into a dock with an active airflow.

The electronic device as claimed above wherein the adaptive thermal and performance management subsystem to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.

The electronic device as claimed above wherein the adaptive thermal and performance management subsystem to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.

The electronic device as claimed above wherein the adaptive thermal and performance management subsystem to dynamically monitor the current temperature of the electronic device.

The electronic device as claimed above wherein the adaptive thermal and performance management subsystem to dynamically modify the performance level of the processor by changing a power level and changing a thermal envelope by modifying skin temperature (Tskin) limits.

The electronic device as claimed above wherein the sensor information includes orientation data associated with orientation of the electronic device, wherein the device context is further based on the orientation of the electronic device.

A method comprising: providing a processor with a plurality of selectable performance levels and a sensor in an electronic device; receiving sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned proximately to an active airflow; determining a device context from the sensor information; and dynamically modifying the performance level of the processor by implementing one of a plurality of selectable performance levels of the processor based on the device context.

The method as claimed above including selecting a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.

The method as claimed above wherein the sensor information including information for determining if the electronic device is inserted into a dock with an active airflow.

The method as
The method as claimed above including dynamically increasing the performance level of the processor if the electronic device is positioned proximately to an active airflow.
The method as claimed above including dynamically decreasing the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
The method as claimed above including dynamically monitoring the current temperature of the electronic device.
The method as claimed above wherein dynamically modifying the performance level of the processor includes changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
The method as claimed above wherein the sensor information includes orientation data associated with orientation of the electronic device, wherein the device context is further based on the orientation of the electronic device.
A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to: receive sensor information from a sensor, the sensor information including information for determining if an electronic device is positioned proximately to an active airflow; determine a device context from the sensor information; and dynamically modify the performance level of a processor having a plurality of selectable performance levels by implementing one of a plurality of selectable performance levels of the processor based on the device context.
The machine-useable storage medium as claimed above being further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
The machine-useable storage medium as claimed above wherein the sensor information including information for determining if the electronic device is inserted into a dock with an active airflow.
The machine-useable storage medium as claimed above being further configured to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.
The machine-useable storage medium as claimed above being further configured to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
The machine-useable storage medium as claimed above being further configured to dynamically monitor the current temperature of the electronic device.
The machine-useable storage medium as claimed above being further configured to dynamically modify the performance level of the processor by changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
The machine-useable storage medium as claimed above wherein the sensor information includes orientation data associated with orientation of the electronic device, wherein the device context is further based on the orientation of the electronic device.
An electronic system comprising: an electronic device docking mechanism with an active airflow producing element; and an electronic device for insertion into the electronic device docking mechanism, the electronic device including: a processor with a plurality of selectable performance levels; a sensor; and an adaptive thermal and performance management subsystem in data communication with the processor and the sensor, the adaptive thermal and performance management subsystem to: receive sensor information from the sensor, the sensor information including information for determining if the electronic device is positioned in the electronic device docking mechanism; determine a device context from the sensor information; and dynamically modify the performance level of the processor by implementing one of the plurality of selectable performance levels of the processor based on the device context.
The electronic system as claimed above wherein the adaptive thermal and performance management subsystem being further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
The electronic system as claimed above wherein the sensor information including information for determining if the electronic device is positioned proximately to an active airflow.
The electronic system as claimed above wherein the adaptive thermal and performance management subsystem to dynamically increase the performance level of the processor if the electronic device is positioned proximately to an active airflow.
The electronic system as claimed above wherein the adaptive thermal and performance management subsystem to dynamically decrease the performance level of the processor if the electronic device is not positioned proximately to an active airflow.
The electronic system as claimed above wherein the adaptive thermal and performance management subsystem to dynamically monitor the current temperature of the electronic device.
The electronic system as claimed above wherein the adaptive thermal and performance management subsystem to dynamically modify the performance level of the processor by changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
The electronic system as claimed above wherein the sensor information includes orientation data associated with orientation of the electronic device, wherein the device context is further based on the orientation of the electronic device.
An apparatus comprising: a data processing means with a plurality of selectable performance levels; a sensing means; and an adaptive thermal and performance management means in data communication with the data processing means and the sensing means, the adaptive thermal and performance management means to: receive sensing information from the sensing means, the sensing information including information for determining if the apparatus is positioned proximately to an active airflow; determine a device context from the sensing information; and dynamically modify the performance level of the data processing means by implementing one of the plurality of selectable performance levels of the data processing means based on the device context.
The apparatus as claimed above wherein the adaptive thermal and performance management means being further configured to select a policy from the group consisting of: an active cooling policy, a passive policy, an adaptive performance policy, and a critical policy.
The apparatus as claimed above wherein the sensing information including information for determining if the apparatus is inserted into a dock with an active airflow.
The apparatus as claimed above wherein the adaptive thermal and performance management means to dynamically increase the performance level of the data processing means if the apparatus is positioned proximately to an active airflow.
The apparatus as claimed above wherein the adaptive thermal and performance management means to dynamically decrease the performance level of the data processing means if the apparatus is not positioned proximately to an active airflow.
The apparatus as claimed above wherein the adaptive thermal and performance management means to dynamically monitor the current temperature of the apparatus.
The apparatus as claimed above wherein the adaptive thermal and performance management means to dynamically modify the performance level of the data processing means by changing a power level and changing a thermal envelope by modifying skin temperature (TSkin) limits.
The apparatus as claimed above wherein the sensing information includes orientation data associated with orientation of the apparatus, wherein the device context is further based on the orientation of the apparatus.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Methods for packaging semiconductor die assemblies. In one embodiment, a method is directed to packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack over the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies. The method can comprise coupling a thermal transfer structure to the peripheral region of the first die and flowing an underfill material between the second dies. The underfill material is flowed after coupling the thermal transfer structure to the peripheral region of the first die such that the thermal transfer structure limits lateral flow of the underfill material. |
CLAIMS
I/We claim:
1. A method for packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack and attached to the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies, the method comprising: positioning at least a portion of a thermal transfer structure at the peripheral region of the first die, wherein the thermal transfer structure comprises a thermally conductive material; and flowing an underfill material between the second dies after positioning the thermal transfer structure on the peripheral region of the first die, wherein the underfill material has a fillet extending laterally from the stack of second dies, and wherein the lateral extension of the underfill material is limited by the thermal transfer structure.
2. The method of claim 1 wherein: the thermal transfer structure comprises a first portion having a foundation configured to extend at least around a portion of the first die and a shoulder configured to be positioned over the peripheral region of the first die; and positioning at least a portion of the thermal transfer structure on the peripheral region of the first die comprises attaching the foundation to a package support substrate and the shoulder to the peripheral region of the first die with a thermal interface material between the shoulder and the peripheral region of the first die.
3. The method of claim 1 wherein: the thermal transfer structure comprises a first portion having (a) a foundation configured to extend at least around a portion of the first die and be attached to a package support substrate and (b) a shoulder configured to be attached to an upper surface of the peripheral region; positioning at least a portion of the thermal transfer structure on the peripheral region of the first die comprises attaching the foundation to a package support substrate and the shoulder to the peripheral region of the first die with a thermal interface material between the shoulder and the peripheral region of the first die; and the method further comprises attaching a second portion of the thermal transfer structure to the first portion of the thermal transfer structure after flowing the underfill material, wherein first and second portions of the thermal transfer structure form a casing having a cavity in which the first die and the stack of second dies are positioned.
4. The method of claim 3 wherein the first portion of the thermal transfer structure comprises a dam member and the second portion of the thermal transfer member comprises a cover.
5. The method of claim 4 wherein the cover has a top and a sidewall pendent from the top, and the sidewall is attached to the dam member.
6. The method of claim 4 wherein the dam member comprises a ring that surrounds the first die.
7. The method of claim 3 wherein the first portion of the thermal transfer structure comprises a sidewall extending to a height of an uppermost second die and the second portion of the thermal transfer member comprises a top.
8. The method of claim 1 wherein: the thermal transfer structure comprises a sidewall, a top integrally formed with the sidewall, a cavity formed by the sidewall and the top, and an inlet; the sidewall has a foundation configured to surround at least a portion of the first die and a shoulder configured to be positioned over the peripheral region of the first die; positioning the thermal transfer structure on the peripheral region of the first die comprises attaching the foundation to a package support substrate and attaching the shoulder to the peripheral region of the die, wherein the stack of second dies is in the cavity; and flowing the underfill material between the second dies comprises injecting the underfill material into the cavity via the inlet.
9. The method of claim 8 wherein the inlet is a first passageway through a lower area of the sidewall and the outlet is a second passageway through an upper area of the sidewall.
10. The method of claim 8 wherein the inlet is a first passageway through the top and the outlet is a second passageway through the top.
11. The method of claim 1 wherein: the thermal transfer structure comprises an inner casing having a first support and a top extending from the first support; positioning at least a portion of the thermal transfer structure on the peripheral region of the first die comprises attaching the first support to the peripheral region of the die, wherein the stack of second dies is under the top of the casing; and flowing the underfill material between the second dies comprises flowing the underfill material between the stack of second dies and the inner casing.
12. The method of claim 11 wherein the inner casing further comprises a second support and the top has one end attached to the first support and another end attached to the second support, and wherein positioning the thermal transfer structure on the peripheral region of the first die comprises attaching the first and second supports to the peripheral region of the first die.
13. The method of claim 11 wherein the thermal transfer structure further comprises an outer casing having a cavity, and the method further includes attaching the outer casing to a package support substrate and the inner casing such that the inner casing is received within the cavity of the outer casing.
14. The method of claim 1 wherein the thermal transfer structure comprises a metal casing having a cavity, and wherein positioning at least a portion of the thermal transfer structure at the peripheral region of the first die comprises attaching a lower portion of the metal casing to the peripheral region of the first die with a thermal interface material such that the stack of second dies is in the cavity of the metal casing.
15. The method of claim 14 wherein: the metal casing comprises a metal ring and a metal cover; positioning at least a portion of the thermal transfer structure at the peripheral region of the first die comprises attaching the ring to the peripheral region; and the method further comprises attaching the metal cover to the ring after flowing the underfill material such that the stack of second dies is encased within the metal ring and the metal cover.
16. The method of claim 14 wherein: the metal casing comprises a metal sidewall, a metal top that together with the metal sidewall forms the cavity, and an inlet; positioning at least a portion of the thermal transfer structure at the peripheral region of the first die comprises attaching the sidewall to the peripheral region; and flowing an underfill material between the second dies comprises injecting the underfill material into the cavity via the inlet.
17. The method of claim 1, further comprising instilling a dielectric liquid within the casing after flowing the underfill material, wherein the dielectric liquid is thermally conductive.
18. The method of claim 17, further comprising at least partially curing the dielectric liquid within the cavity.
19. A method for packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack over the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies, the method comprising: coupling a thermal transfer structure to the peripheral region of the first die; and flowing an underfill material between the second dies after coupling the thermal transfer structure to the peripheral region of the first die, and wherein the thermal transfer structure limits lateral flow of the underfill material.
20. The method of claim 19 wherein: the thermal transfer structure comprises a metal casing having a sidewall and a top; coupling the thermal transfer structure to the peripheral region of the first die comprises attaching a lower portion of the sidewall to the peripheral region of the first die and attaching the top to an uppermost second die of the stack of second dies; and flowing the underfill material comprises instilling the underfill material between at least the sidewall of the casing and the stack of second dies.
21. The method of claim 19 wherein: the thermal transfer structure comprises a metal casing having a sidewall, a top integrally formed with the sidewall, and an inlet, and the sidewall further comprises a foundation configured to extend at least around a portion of the first die and a shoulder configured to be positioned over the peripheral region of the first die; coupling the thermal transfer structure to the peripheral region of the first die comprises attaching the foundation to a package support substrate and attaching the shoulder to the peripheral region of the first die, wherein a thermal interface material is between the shoulder and the peripheral region of the first die; and flowing the underfill material comprises injecting the underfill material into the casing through the inlet.
22. The method of claim 21 wherein the casing further comprises an outlet, and the method further comprises exhausting matter from the casing via the outlet and plugging the outlet.
23. The method of claim 19 wherein: the thermal transfer structure comprises a metal casing having a base and a separate cover, and wherein the base further comprises a foundation configured to extend at least around a portion of the first die and a shoulder configured to be positioned over the peripheral region of the first die; coupling the thermal transfer structure to the peripheral region of the first die comprises attaching the foundation of the base to a package support substrate and attaching the shoulder of the base to the peripheral region of the first die, wherein a thermal interface material is between the shoulder of the base and the peripheral region of the first die; and flowing the underfill material comprises depositing the underfill material such that it flows between the base and the stack of second dies.
24. The method of claim 23, further comprising attaching the cover to the base after flowing the underfill material.
25. The method of claim 19 wherein: the thermal transfer structure comprises an inner casing having a first support, a second support, and a top extending between the first and second supports such that the inner casing has a cavity; coupling the thermal transfer structure to the peripheral region of the first die comprises attaching the first and second supports to the peripheral region of the first die with a thermal interface material such that the stack of second dies is in the cavity; and flowing the underfill material comprises depositing the underfill material between at least the stack of second dies and the first and second supports.
26. The method of claim 25 wherein the thermal transfer structure further comprises an outer casing, and the method further comprises attaching the outer casing to a package support substrate and the inner casing, wherein the outer casing has a cavity in which the stack of second dies, the first die and the inner casing are positioned.
27. The method of claim 19 wherein: coupling the thermal transfer structure to the peripheral region of the first die further comprises attaching a thermally conductive casing to a package support substrate, and wherein the casing has a cavity in which the first die and the stack of second dies are positioned; and the method further comprises injecting a dielectric liquid into the cavity of the casing after flowing the underfill material, wherein the dielectric liquid is thermally conductive.
28. The method of claim 27, further comprising curing the dielectric liquid within the cavity.
29. A method for packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack and attached to the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies, the method comprising: positioning a dam member on the peripheral region of the first die, wherein the dam member comprises a thermally conductive material; and flowing an underfill material between the second dies after positioning the dam member on the peripheral region of the first die, wherein the dam member has a height that contains a fillet portion of the underfill material.
30. A method for packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack and attached to the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies, the method comprising: positioning a portion of a casing on the peripheral region of the first die, wherein the casing comprises a thermally conductive material and at least partially encloses the stack of second dies; and flowing an underfill material between the second dies after positioning the casing on the peripheral region of the first die.
31. A method for packaging a semiconductor die assembly having a first die and a plurality of second dies arranged in a stack over the first die, wherein the first die has a peripheral region extending laterally outward from the stack of second dies, the method comprising: coupling at least a portion of a thermal transfer structure to a package support substrate such that the first die and the second dies are within a cavity of the thermal transfer structure; and instilling a dielectric liquid in the cavity of the thermal transfer structure, and wherein the dielectric liquid has a high thermal conductivity.
32. The method of claim 31 wherein: the thermal transfer structure has a sidewall, a top, a cavity defined by the sidewall and the top, and an inlet; coupling at least a portion of the thermal transfer structure to the package support substrate comprises mounting the sidewall to the package support substrate; and instilling the dielectric liquid in the cavity comprises injecting the dielectric liquid into the inlet.
33. The method of claim 32 wherein the sidewall and the top are integrally formed together.
34. The method of claim 32 wherein the inlet comprises a passageway through the top.
35. The method of claim 31 wherein: the thermal transfer structure has a sidewall and a top, and the sidewall and top are separate components; coupling at least a portion of the thermal transfer structure to the package support substrate comprises mounting the sidewall to the package support substrate; instilling the dielectric liquid in the cavity comprises flowing the dielectric liquid into the cavity before attaching the top to the sidewall; and the method further comprises attaching the top to the sidewall after instilling the dielectric liquid. |
METHODS OF MANUFACTURING STACKED SEMICONDUCTOR DIE ASSEMBLIES WITH HIGH EFFICIENCY THERMAL PATHS
TECHNICAL FIELD
[0001] The disclosed embodiments relate to semiconductor die assemblies. In particular, the present technology relates to stacked semiconductor die assemblies with highly efficient thermal paths and associated systems and methods.
BACKGROUND
[0002] Packaged semiconductor dies, including memory chips, microprocessor chips, and imager chips, typically include a semiconductor die mounted on a substrate and encased in a plastic protective covering. The die includes functional features, such as memory cells, processor circuits, and imager devices, as well as bond pads electrically connected to the functional features. The bond pads can be electrically connected to terminals outside the protective covering to allow the die to be connected to higher level circuitry.
[0003] Market pressures continually drive semiconductor manufacturers to reduce the size of die packages to fit within the space constraints of electronic devices, while also pressuring them to increase the functional capacity of each package to meet operating parameters. One approach for increasing the processing power of a semiconductor package without substantially increasing the surface area covered by the package (i.e., the package's "footprint") is to vertically stack multiple semiconductor dies on top of one another in a single package. The dies in such vertically-stacked packages can be interconnected by electrically coupling the bond pads of the individual dies with the bond pads of adjacent dies using through-silicon vias (TSVs).
[0004] A challenge associated with vertically-stacked die packages is that the heat from the individual dies is additive and it is difficult to dissipate the aggregated heat generated by the stacked dies. This increases the operating temperatures of the individual dies, the junctions between the dies, and the package as a whole, which can cause the stacked dies to reach temperatures above their maximum operating temperatures (Tmax). The problem is also exacerbated as the density of the dies in the package increases. Moreover, when devices have different types of dies in the die stack, the maximum operating temperature of the device is limited to the die with the lowest maximum operating temperature.
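As a rough illustration of why the additive heat is problematic, the sketch below treats the stack as a one-dimensional chain of thermal resistances with a single heat sink above the top die; every power and resistance value is an assumed placeholder, not a property of any assembly described herein.

    # One-dimensional thermal sketch of a vertically stacked die package.
    # All values are illustrative assumptions, not data from this disclosure.
    die_powers_w = [5.0, 0.6, 0.6, 0.6]  # bottom (logic) die first
    r_interface = 0.8                    # K/W per die-to-die interface, assumed
    t_sink_c = 45.0                      # temperature above the top die, assumed

    n = len(die_powers_w)
    temps = [0.0] * n
    t = t_sink_c
    for k in range(n - 1, -1, -1):                 # walk from the top die down
        heat_crossing = sum(die_powers_w[:k + 1])  # heat of die k plus all dies below it
        t += r_interface * heat_crossing
        temps[k] = t

    for i, tc in enumerate(temps):
        print(f"die {i} (0 = bottom): {tc:.1f} C")
    # The bottom die runs hottest: its heat must cross every interface on
    # the way to the sink, so the temperature rises accumulate downward.

Under these assumed numbers the bottom die lands roughly 19 °C above the sink temperature, which is the kind of accumulation the thermal paths described below are intended to relieve.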
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Figure 1 is a cross-sectional view illustrating a semiconductor die assembly in accordance with embodiments of the present technology.
[0006] Figure 2A is a cross-sectional view and Figure 2B is a top plan view illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the technology.
[0007] Figure 2C is a cross-sectional view and Figure 2D is a top plan view illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the technology.
[0008] Figures 2E and 2F are cross-sectional views illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the technology.
[0009] Figure 3 is a cross-sectional view illustrating a semiconductor die assembly in accordance with embodiments of the present technology.
[0010] Figure 4A is a cross-sectional view and Figure 4B is a top plan view illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the technology.
[0011] Figure 4C is a cross-sectional view illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the present technology.
[0012] Figure 4D is a cross-sectional view and Figure 4E is a top plan view illustrating a method of manufacturing a semiconductor die assembly in accordance with embodiments of the present technology.
[0013] Figure 5A is a cross-sectional view and Figure 5B is a top plan view of a semiconductor die assembly in accordance with embodiments of the present technology.
[0014] Figure 6 is a cross-sectional view of a semiconductor die assembly in accordance with embodiments of the present technology.
[0015] Figure 7 is a cross-sectional view of a semiconductor die assembly in accordance with embodiments of the present technology.
[0016] Figure 8 is a cross-sectional view of a semiconductor die assembly in accordance with embodiments of the present technology.
[0017] Figure 9 is a cross-sectional view of a semiconductor die assembly in accordance with embodiments of the present technology.
[0018] Figure 10 is a schematic view of a system that includes a semiconductor die assembly configured in accordance with embodiments of the present technology.
DETAILED DESCRIPTION
[0019] Specific details of several embodiments of stacked semiconductor die assemblies with highly efficient thermal paths and associated systems and methods are described below. The term "semiconductor die" generally refers to a die having integrated circuits or components, data storage elements, processing components, and/or other features manufactured on semiconductor substrates. For example, semiconductor dies can include integrated circuit memory and/or logic circuitry. Semiconductor dies and/or other features in semiconductor die packages can be said to be in "thermal contact" with one another if the two structures can exchange energy through heat via, for example, conduction, convection and/or radiation. A person skilled in the relevant art will also understand that the technology may have additional embodiments, and that the technology may be practiced without several of the details of the embodiments described below with reference to Figures 1-10.
[0020] As used herein, the terms "vertical," "lateral," "upper" and "lower" can refer to relative directions or positions of features in the semiconductor die assemblies in view of the orientation shown in the Figures.
For example, "upper" or "uppermost" can refer to a feature positioned closer to the top of a page than another feature. These terms, however, should be construed broadly to include semiconductor devices having other orientations, such as inverted or inclined orientations where top/bottom, over/under, above/below, up/down and left/right can be interchanged depending on the orientation.[0021] Figure 1 is a cross-sectional view illustrating a semiconductor die assembly 100 ("assembly 100") in accordance with an embodiment of the present technology. The assembly 100 can include a package support substrate 102, a first semiconductor die 110 mounted to the package support substrate 102, and a plurality of second semiconductor dies 120 arranged in a stack 122 at a stacking area, such as a central region or an off-center region, of the first die 110. The first die 110 can further include a peripheral region 112 laterally outboard of the second dies 120 and a thermal transfer structure (TTS) 130 having a first portion 131 attached to the peripheral region 112 of the first die 110 by an adhesive 133 and a second portion 132 covering, enclosing or otherwise over the stack 122 of second dies 120. The adhesive 133, for example, can be a thermal interface material ("TIM") or another suitable adhesive. For example, TIMs and other adhesives can include silicone-based greases, gels, or adhesives that are doped with conductive materials (e.g., carbon nano-tubes, solder materials, diamond-like carbon (DLC), etc.), as well as phase-change materials. In the embodiment illustrated in Figure 1, the first portion 131 is a base, such as a dam member, that extends at least from the peripheral region 112 of the first die 110 to a height at an intermediate elevation of the stack 122 of second dies 120. The second portion 132 is a cover that is attached to the first portion 131 and the uppermost second die 120 by the adhesive 133. The first portion 131 and second portion 132 together can define a casing made from a metal (e.g., copper or aluminum) or other highly thermally conductive materials, and the first and second portions 131 and 132 together can define a cavity 138 in which the stack 122 of second dies 120 are positioned.[0022] The assembly 100 further includes an underfill material 160 between each of the second dies 120 and between the first die 110 and the bottom second die 120. The underfill material 160 can form a fillet 162 that extends outwardly from the stack 122 of second dies 120 in a region proximate the first die 110. The assembly 100 is expected to provide enhanced thermal dissipation of heat from the first die 110 and the stack 122 of second dies 120. For example, the TTS 130 can be made from a material with a high thermal conductivity to efficiently transfer heat along a first path directly from a large portion of the peripheral region 112 of the first die 110 and along a second path through the second dies 120. The first portion 131 of the TTS 130 is attached to a large percentage of the available area of the peripheral region 112 of the first die 110 because the first portion 131 provides a dam that prevents the fillet 162 of underfill material 160 from covering a significant percentage of the peripheral region 112. 
This enhances the efficiency of the first heat path because, compared to devices where the underfill material is deposited before the first portion 131 is attached to the peripheral region 112 of the first die 110, more surface area of the peripheral region 112 can be covered by the first portion 131 of the TTS 130.
[0023] Several embodiments of the assembly 100 shown in Figure 1 can accordingly provide enhanced thermal properties that lower the operating temperatures of the individual dies 110, 120 in the assembly 100 such that they stay below their designated maximum temperatures (Tmax). This can be very useful when the assembly 100 is arranged as a hybrid memory cube (HMC) because the first die 110 is generally a larger underlying logic die and the second dies 120 are generally memory dies, and logic dies typically operate at a much higher power level than memory dies (e.g., 5.24 W compared to 0.628 W). The logic die HMC configuration generally concentrates a significant amount of heat at the peripheral region 112 of the first die 110. The logic die may also have a higher power density at the peripheral region, resulting in a further concentration of heat and higher temperatures at the peripheral region. As such, by coupling a large percentage of the peripheral region 112 of the first die 110 to the highly conductive first portion 131 of the TTS 130, the heat can be efficiently removed from the peripheral region 112 of the first die.
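To make the power-density contrast concrete, the following sketch computes average heat flux for the logic and memory dies using the power levels cited above; the die footprints are the representative dimensions given later in this description, and the comparison is illustrative rather than limiting.

    # Average power density of the logic die vs. a memory die.
    # Power levels are those cited above; die footprints follow the
    # representative dimensions given later in this description.
    logic_power_w = 5.24
    memory_power_w = 0.628
    logic_area_mm2 = 12.67 * 8.5     # first (logic) die footprint, mm^2
    memory_area_mm2 = 10.7 * 8.6     # second (memory) die footprint, mm^2

    logic_flux = logic_power_w / logic_area_mm2 * 1000.0    # mW/mm^2
    memory_flux = memory_power_w / memory_area_mm2 * 1000.0
    print(f"logic die:  {logic_flux:.1f} mW/mm^2")
    print(f"memory die: {memory_flux:.1f} mW/mm^2")
    # The roughly 7x higher flux on the logic die is consistent with heat
    # concentrating at the peripheral region 112, which the stack 122 does
    # not cover.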
[0024] Figures 2A-2F illustrate aspects of a method of manufacturing the assembly 100 in accordance with embodiments of the present technology. Figure 2A is a cross-sectional view and Figure 2B is a top plan view of a stage of manufacturing the assembly 100. Referring to Figure 2A, the package support substrate 102 is configured to connect the first and second dies 110, 120 to external electrical components of higher-level packaging (not shown). For example, the package support substrate 102 can be an interposer or printed circuit board that includes semiconductor components (e.g., doped silicon wafers or gallium arsenide wafers), non-conductive components (e.g., various ceramic substrates, such as aluminum oxide (Al2O3), aluminum nitride (AlN), etc.), and/or conductive portions (e.g., interconnecting circuitry, TSVs, etc.). In the embodiment illustrated in Figure 2A, the package support substrate 102 is electrically coupled to the first die 110 at a first side 103a of the package support substrate 102 via a first plurality of electrical connectors 104a and to external circuitry (not shown) at a second side 103b of the package support substrate 102 via a second plurality of electrical connectors 104b (collectively referred to as "the electrical connectors 104"). The electrical connectors 104 can be solder balls, conductive bumps and pillars, conductive epoxies, and/or other suitable electrically conductive elements. In various embodiments, the package support substrate 102 can be made from a material with a relatively high thermal conductivity to enhance heat dissipation at the back side of the first semiconductor die 110.
[0025] As shown in Figures 2A and 2B, the first die 110 can have a larger footprint than the stacked second dies 120. The first die 110, therefore, includes a mounting region 111 (Figure 2A) or stacking area where the second dies 120 are attached to the first die 110 and the peripheral region 112 extends laterally outward beyond at least one side of the mounting region 111. The peripheral region 112 is accordingly outboard of the second dies 120 (e.g., beyond the length and/or width of the second dies 120).
[0026] The first and second dies 110, 120 can include various types of semiconductor components and functional features, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, other forms of integrated circuit memory, processing circuits, imaging components, and/or other semiconductor features. In various embodiments, for example, the assembly 100 can be configured as an HMC in which the stacked second dies 120 are DRAM dies or other memory dies that provide data storage and the first die 110 is a high-speed logic die that provides memory control (e.g., DRAM control) within the HMC. In other embodiments, the first and second dies 110 and 120 may include other semiconductor components and/or the semiconductor components of the individual second dies 120 in the stack 122 may differ.
[0027] The first and second dies 110, 120 can be rectangular, circular, and/or other suitable shapes and may have various different dimensions. For example, the individual second dies 120 can each have a length L1 of about 10-11 mm (e.g., 10.7 mm) and a width of about 8-9 mm (e.g., 8.6 mm, 8.7 mm). The first die 110 can have a length L2 of about 12-13 mm (e.g., 12.67 mm) and a width of about 8-9 mm (e.g., 8.5 mm, 8.6 mm, etc.). In other embodiments, the first and second dies 110 and 120 can have other suitable dimensions and/or the individual second dies 120 may have different dimensions from one another.
[0028] The peripheral region 112 (known to those skilled in the art as a "porch" or "shelf") of the first die 110 can be defined by the relative dimensions of the first and second dies 110 and 120 and the position of the stack 122 on a forward-facing surface 114 of the first die 110. In the embodiment illustrated in Figures 2A and 2B, the stack 122 is centered with respect to the length L2 of the first die 110 such that the peripheral region 112 extends laterally beyond two opposite sides of the stack 122. For example, if the length L2 of the first die 110 is about 1.0 mm greater than the length L1 of the second dies 120, the peripheral region 112 will extend about 0.5 mm beyond either side of the centered second dies 120. The stack 122 may also be centered with respect to the width of the first die 110 and, in embodiments where both the width and length of the first die 110 are greater than those of the centered stack 122, the peripheral region 112 may extend around the entire perimeter of the second dies 120. In other embodiments, the stack 122 may be offset with respect to the forward-facing surface 114 (Figure 2A) of the first die 110 and/or the peripheral region 112 of the first die 110 may extend around less than the full perimeter of the stack 122. In further embodiments, the first and second dies 110 and 120 can be circular, and therefore the relative diameters of the first and second dies 110 and 120 define the peripheral region 112.
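Stated as arithmetic, the porch width on each side of a centered stack is simply half the difference in die lengths; the function below restates the example from paragraph [0028], and the second call uses the representative dimensions from paragraph [0027].

    # Porch ("shelf") width per side for a stack centered on the first die.
    def porch_width_mm(first_die_len_mm, second_die_len_mm):
        return (first_die_len_mm - second_die_len_mm) / 2.0

    # Example from paragraph [0028]: a 1.0 mm length difference
    # leaves about 0.5 mm of porch on either side.
    print(porch_width_mm(11.7, 10.7))    # 0.5
    # Representative dimensions from paragraph [0027]:
    print(porch_width_mm(12.67, 10.7))   # ~0.985 mm per side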
[0029] As shown in Figure 2A, the second dies 120 can be electrically coupled to one another in the stack 122 and to the underlying first die 110 by a plurality of electrically conductive elements 124 positioned between adjacent dies 110, 120. Although the stack 122 shown in Figure 1 includes eight second dies 120 electrically coupled together, in other embodiments the stack 122 can include more or less than eight dies (e.g., 2-4 dies, or at least 9 dies, etc.). The electrically conductive elements 124 can have various suitable structures, such as pillars, columns, studs, bumps, and can be made from copper, nickel, solder (e.g., SnAg-based solder), conductor-filled epoxy, and/or other electrically conductive materials. In selected embodiments, for example, the electrically conductive elements 124 can be copper pillars, whereas in other embodiments the electrically conductive elements 124 can include more complex structures, such as bump-on-nitride structures.
[0030] As further shown in Figure 2A, the individual second dies 120 can each include a plurality of TSVs 126 aligned on one or both sides with corresponding electrically conductive elements 124 to provide electrical connections at opposing sides of the second dies 120. Each TSV 126 can include an electrically conductive material (e.g., copper) that passes completely through the individual second dies 120 and an electrically insulative material surrounding the electrically conductive material to electrically isolate the TSVs 126 from the remainder of the second dies 120. Though not shown in Figure 1, the first die 110 can also include a plurality of TSVs 126 to electrically couple the first die 110 to higher level circuitry. Beyond electrical communication, the TSVs 126 and the electrically conductive elements 124 provide thermal conduits through which heat can be transferred away from the first and second dies 110 and 120 (e.g., through the first thermal path). In some embodiments, the dimensions of the electrically conductive elements 124 and/or the TSVs 126 can be increased to enhance heat transfer vertically through the stack 122. For example, the individual electrically conductive elements 124 can each have a diameter of about 15-30 μm or other suitable dimensions to enhance the thermal pathway through the dies 110, 120. In other embodiments, the second dies 120 can be electrically coupled to one another and to the first die 110 using other types of electrical connectors (e.g., wirebonds) that may also provide thermal pathways through the stack 122.
[0031] In various embodiments, the assembly 100 may also include a plurality of thermally conductive elements 128 (shown in broken lines) positioned interstitially between the electrically conductive elements 124. The individual thermally conductive elements 128 can be at least generally similar in structure and composition to that of the electrically conductive elements 124 (e.g., copper pillars). However, the thermally conductive elements 128 are not electrically coupled to the TSVs 126 or other electrically active components of the dies 110 and 120, and therefore do not provide electrical connections between the second dies 120. Instead, the thermally conductive elements 128 are electrically isolated "dumb elements" that increase the overall thermal conductivity through the stack 122 to enhance the heat transfer along a first thermal path. For example, in embodiments where the assembly 100 is configured as an HMC, the addition of the thermally conductive elements 128 between the electrically conductive elements 124 has been shown to decrease the operating temperature of the HMC by several degrees (e.g., about 6-7°C).
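The benefit of the interstitial "dumb elements" can be bounded with a simple parallel-conductance estimate. In the sketch below, the pillar count, stand-off height, and copper conductivity are assumptions chosen only to show the order of magnitude; the diameter is taken from the 15-30 μm range cited above.

    import math

    # Added vertical thermal conductance from interstitial copper pillars.
    # Count, stand-off, and conductivity are illustrative assumptions;
    # the diameter is from the ~15-30 um range cited above.
    k_cu = 400.0        # W/(m*K), approximate bulk copper
    dia_m = 20e-6       # pillar diameter
    standoff_m = 20e-6  # assumed die-to-die stand-off height
    n_pillars = 1000    # assumed number of "dumb" pillars per interface

    area_m2 = math.pi * (dia_m / 2.0) ** 2
    g_pillar = k_cu * area_m2 / standoff_m      # W/K for one pillar
    g_total = n_pillars * g_pillar
    print(f"one pillar: {g_pillar * 1e3:.2f} mW/K")
    print(f"{n_pillars} pillars: {g_total:.1f} W/K "
          f"(~{1.0 / g_total:.2f} K/W added in parallel per interface)")

An added parallel path of a fraction of a kelvin per watt at each interface is consistent in magnitude with the several-degree improvement reported above for HMC configurations.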
[0032] Figure 2C is a cross-sectional view and Figure 2D is a top plan view illustrating a subsequent stage of a method for manufacturing the assembly 100 after the first portion 131 of the TTS 130 (Figure 1) has been attached to the first die 110 and the package support substrate 102. Referring to Figure 2C, this embodiment of the first portion 131 has a foundation 142 (e.g., footing) configured to extend around at least a portion of the first die 110 and a shoulder 144 configured to be positioned over the peripheral region 112 of the first die 110. The first portion 131 can further include a sidewall 146 that extends to a height (H1) relative to the stack 122 of second dies 120. The sidewall 146 is also spaced apart from the stack 122 of second dies 120 by a gap (G) such that the shoulder 144 covers a significant percentage of the peripheral region 112 (e.g., coverage area (C)). The foundation 142 can be attached to the package support substrate 102 by an adhesive 148, and the shoulder 144 can be attached to the peripheral region 112 of the first die 110 by the thermally conductive adhesive 133. The adhesives 133 and 148 can be the same adhesive, or they can be different from each other. The adhesive 133, for example, can be a TIM. As shown in Figure 2D, the first portion 131 can be a ring that surrounds the first die 110 and the second dies 120.
[0033] Figure 2E is a cross-sectional view illustrating another stage of the method of manufacturing the assembly 100 after the underfill material 160 has been deposited between the second dies 120 and between the first die 110 and the bottom second die 120. The underfill material 160 is typically a flowable material that fills the interstitial spaces between the second dies 120, the electrically conductive elements 124, and the thermally conductive elements 128. The first portion 131 of the TTS 130 provides a dam member that inhibits the extent that the fillet 162 covers the peripheral region 112 of the first die 110. For example, instead of the fillet 162 spreading laterally over the peripheral region 112 as in other devices that attach a thermally conductive member to the peripheral region 112 after depositing the underfill material 160, the fillet 162 extends upwardly along a portion of the sidewall 146. The underfill material 160 can be a non-conductive epoxy paste (e.g., XS8448-171 manufactured by Namics Corporation of Niigata, Japan), a capillary underfill, a non-conductive film, a molded underfill, and/or include other suitable electrically-insulative materials. The underfill material 160 can alternatively be a dielectric underfill, such as FP4585 manufactured by Henkel of Dusseldorf, Germany. In some embodiments, the underfill material 160 can be selected based on its thermal conductivity to enhance heat dissipation through the stack 122. The volume of underfill material 160 is selected to adequately fill the interstitial spaces such that an excess portion of the underfill material 160 goes into the gap (G) between the sidewall 146 of the first portion 131 and the stack 122 of second dies 120 to form the fillet 162. The height (H1), gap (G), and coverage area (C) are selected to provide a large coverage area (C) of the peripheral region 112 while also providing sufficient space between the sidewall 146 and the stack 122 of second dies 120 to accommodate the fillet 162 of underfill material 160.
[0034] Figure 2F is a cross-sectional view illustrating the assembly 100 of Figure 1 after the second portion 132 of the TTS 130 has been attached to the first portion 131 to complete the TTS 130. The second portion 132 can have a top 152 attached to the uppermost second die 120 by the adhesive 133, a bottom 154 attached to the first portion 131 by the adhesive 133, and a sidewall 156 pendent from the top 152.
The first portion 131 and second portion 132 together define the cavity 138 which encases the stack 122 of second dies 120. The TTS 130 of the embodiment illustrated in Figure 2F is accordingly a thermally conductive casing that provides enhanced heat transfer to remove heat generated by the first die 110 and the second dies 120. Each of the first portion 131 and the second portion 132 of the TTS 130 can be made from metal, such as copper or aluminum, such that the TTS 130 has a metal base portion and a metal cover.
[0035] Figure 3 is a cross-sectional view of another embodiment of the assembly 100 in accordance with the present technology. In this embodiment, the first portion 131 of the TTS 130 has a sidewall 146 with a height (H2) that extends to at least approximately the same elevation as the top of the uppermost second die 120, and the second portion 132 of the TTS 130 has a bottom 154 attached to the top of the sidewall 146. The second portion 132 accordingly does not have a separate sidewall pendent from the top 152. The second portion 132 can be attached to the first portion 131 by the adhesive 133.
[0036] Figure 4A is a side cross-sectional view and Figure 4B is a top plan view of a semiconductor die assembly 400 at one stage of a manufacturing process in accordance with the present technology. Several features of the assembly 400 are similar to those described above with respect to the assembly 100, and thus like reference numbers refer to like components in Figures 1-4B. Figure 4A shows the assembly 400 after an inner casing 430 has been attached to the first die 110. The inner casing 430 can include a first support 431 with a first interior surface 433, a second support 432 with a second interior surface 434, and a top 435 extending between the first and second supports 431 and 432. The inner casing 430 has a cavity 436 that is closed on the sides with the first and second supports 431 and 432, but open on the other two sides. The first and second supports 431 and 432 can be attached to the peripheral region 112 of the first die 110 with the adhesive 133. The top 435 of the inner casing 430 can also be attached to the top of the second die 120 by the adhesive 133. As shown in Figure 4B, the inner casing 430 can have a footprint similar to the footprint of the first die 110.
[0037] Figure 4C is a side cross-sectional view of the assembly 400 at a subsequent stage of manufacturing after the underfill material 160 has been deposited between the second dies 120 and between the first die 110 and the bottom second die 120. Referring back to Figure 4B, the underfill material can be distributed within the interstitial spaces by flowing the underfill material through the open sides of the inner casing 430 as shown by arrow F. To enhance the flow of underfill material, the assembly 400 can be inclined at an angle such that gravity pulls the underfill material 160 through the interstitial spaces within the cavity 436.
[0038] Figure 4D is a side cross-sectional view and Figure 4E is a top plan view of the assembly 400 at a subsequent stage of manufacturing. Referring to Figure 4D, the assembly 400 further includes an outer casing 440 having a sidewall 442 with an inner surface 444 and a top 446 that together define a cavity 448. As shown in Figure 4E, the inner surface 444 of the sidewall 442 has four sides such that the cavity 448 encloses the first die 110, the stack of second dies 120, and the inner casing 430.
As shown in Figure 4D, the outer casing 440 can be attached to the package support substrate 102 by the adhesive 148 and to the top 435 of the inner casing 430 by the adhesive 133. This embodiment provides a good thermal interface with the peripheral region 112 of the first die 110 as explained above and with the sides of the second dies 120 because the underfill material 160 can have a higher thermal conductivity than a void within the casing.
[0039] Figure 5A is a cross-sectional view and Figure 5B is a top plan view of a semiconductor device assembly 500 ("assembly 500") in accordance with another embodiment of the present technology. Like reference numbers refer to like components throughout Figures 1-5B. The assembly 500 includes a TTS 530 having a top 532, a sidewall 534 integrally formed with the top 532, and a cavity 538 defined by the top 532 and the sidewall 534. The TTS 530 is a single-piece casing formed from a material having a high thermal conductivity, such as copper or aluminum. The sidewall 534 can have an interior surface 535. In one embodiment as shown in Figure 5B, the interior surface 535 can have four sides configured to be spaced apart from the stack 122 of second dies 120 such that a small gap exists between the second dies 120 and the interior surface 535 of the sidewall 534. Referring back to Figure 5A, the sidewall 534 can further include a foundation 536 attached to the package support substrate 102 by the adhesive 148 and a shoulder 537 attached to the peripheral region 112 of the first die 110 by the adhesive 133. The foundation 536 can be a footing that has an inner surface 539 spaced laterally outward from the peripheral region 112 of the first die 110. The TTS 530 can further include an inlet 540a and an outlet 540b. The inlet 540a can be a first passageway extending through a lower portion of the sidewall 534, and the outlet 540b can be a second passageway that extends through an upper portion of the sidewall 534. Referring to Figure 5B, the inlet 540a and the outlet 540b can be laterally offset from each other, or in other embodiments they can be aligned with each other across the cavity 538. In other embodiments, the inlet 540a and outlet 540b can extend through the sidewall at approximately the same elevation. In still other embodiments, the inlet 540a can be positioned relatively higher along the sidewall 534 than the outlet 540b.
[0040] The underfill material 160 is injected (I) into the cavity 538 via the inlet 540a such that the underfill material 160 fills the interstitial spaces between the second dies 120 and between the first die 110 and the bottom second die 120. In one embodiment, the underfill material 160 can be injected into the cavity 538 until the underfill material 160 flows out of the outlet 540b (O). The inlet 540a and outlet 540b can be sealed by filling these passageways with the underfill material 160, or in other embodiments the exterior openings of the inlet 540a and outlet 540b can be capped with another material to seal the cavity 538 within the TTS 530. As a result, the TTS 530 provides a dam member that effectively contains the underfill material 160 while also providing coverage of a large surface area of the peripheral region 112 of the first die 110 by the shoulder 537 of the sidewall 534. Moreover, the underfill material 160 also contacts the sides of the second dies 120 to also enhance the heat transfer laterally away from the second dies 120.
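When the underfill is injected through the inlet 540a until it appears at the outlet 540b, the required dose can be estimated ahead of time. The sketch below does this for placeholder geometry, since no cavity dimensions are specified in this description; every value is an assumption for illustration.

    # Rough underfill dose estimate for injection into the cavity 538.
    # All dimensions are placeholder assumptions, not values from this text.
    die_l_mm, die_w_mm = 10.7, 8.6   # second-die footprint
    n_gaps = 8                        # 7 die-to-die gaps plus the die-to-logic-die gap
    gap_h_mm = 0.02                   # stand-off per gap (~20 um), assumed
    open_fraction = 0.85              # space not occupied by pillars/TSVs, assumed

    interstitial_mm3 = n_gaps * die_l_mm * die_w_mm * gap_h_mm * open_fraction

    ring_gap_mm = 0.3                 # assumed lateral gap between stack and sidewall 534
    wetted_h_mm = 0.5                 # assumed sidewall height wetted by the fill
    perimeter_mm = 2.0 * (die_l_mm + die_w_mm)
    annulus_mm3 = perimeter_mm * ring_gap_mm * wetted_h_mm

    print(f"approximate dose: {interstitial_mm3 + annulus_mm3:.1f} mm^3")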
[0041] Figure 6 is a cross-sectional view of a semiconductor die assembly 600 ("assembly 600") in accordance with another embodiment of the present technology. Like reference numbers refer to like components in Figures 1-6. The assembly 600 can include a TTS 630 having a top 632 and a sidewall 634 having an interior surface 636. The top 632 and the sidewall 634 define a cavity 638 configured to receive the first die 110 and the stack 122 of second dies 120. The top 632 can be attached to the upper second die 120 by the adhesive 133, and the sidewall 634 can be attached to the package support substrate 102 by the adhesive 148. The embodiment of the sidewall 634 shown in Figure 6 does not contact the peripheral region 112 of the first die 110. In other embodiments, the sidewall 634 can have a shoulder adhered to the peripheral region 112 of the first die 110 and a foundation adhered to the package support substrate 102, as shown by the shoulder 537 and foundation 536 of the sidewall 534 in Figure 5A. The TTS 630 can further include an inlet 640a and an outlet 640b. In the illustrated embodiment, the inlet 640a and outlet 640b are passageways that extend through the top 632 of the TTS 630. In other embodiments, the inlet 640a and/or the outlet 640b can be passageways through the sidewall 634. Additionally, the embodiment of the TTS 630 illustrated in Figure 6 is a single-piece casing in which the top 632 is formed integrally with the sidewall 634. In other embodiments, the top 632 can be a separate component that is attached to the sidewall 634 by an adhesive, such as shown and described with respect to Figure 3.
[0042] The assembly 600 further includes a thermally conductive dielectric liquid 670 in the cavity 638. The dielectric liquid 670 can be injected into the cavity 638 (I) via the inlet 640a. The outlet 640b can accordingly provide a vent through which air or other matter can escape (O) from the cavity 638 as the dielectric liquid 670 is injected. The dielectric liquid 670 can be injected as a liquid and remain in the liquid state within the cavity 638, or it can be injected as a liquid and partially cured to a gel-like substance or fully cured to a solid. Suitable thermally conductive dielectric liquids 670 include, for example, paraffin fluid and Dowtherm™ manufactured by the Dow Chemical Company. Suitable Dowtherm™ heat transfer fluids include Dowtherm A™, Dowtherm G™, Dowtherm Q™ and Dowtherm T™, all of which are manufactured by the Dow Chemical Company. The dielectric liquid 670 should have a boiling point greater than the maximum operating temperature of the assembly 600 to avoid generating a gas in the cavity. In some embodiments, the dielectric liquid 670 can be selected to cure to a solid or semi-solid material at ambient temperatures, but undergo a phase change to a liquid state at or near maximum operating temperatures to potentially enhance the heat transfer and provide a steady state operating temperature when maximum operating temperatures are reached.
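The selection rule stated above (a boiling point above the assembly's maximum operating temperature, plus useful thermal conductivity) can be expressed as a simple screen. In the sketch below, the candidate names and all property values are hypothetical placeholders, not vendor data for the fluids mentioned in this description, and the safety margin is an added assumption.

    # Screen candidate dielectric fill liquids per the rule stated above:
    # boiling point must exceed the assembly's maximum operating temperature.
    # Candidates and property values are hypothetical placeholders.
    T_MAX_C = 105.0        # assumed maximum operating temperature
    MARGIN_C = 20.0        # assumed safety margin, not from this text

    candidates = {
        "fluid_A": {"boiling_c": 250.0, "k_w_mk": 0.13},
        "fluid_B": {"boiling_c": 110.0, "k_w_mk": 0.20},  # fails the margin
    }

    for name, p in candidates.items():
        ok = p["boiling_c"] >= T_MAX_C + MARGIN_C
        verdict = "OK" if ok else "reject"
        print(f"{name}: bp {p['boiling_c']} C, k {p['k_w_mk']} W/(m*K) -> {verdict}")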
The underfill material is generally desirable when the dielectric liquid 670 remains in the liquid state to provide structural support for the dies 110, 120. However, the underfill material can be eliminated when the dielectric liquid 670 cures to a sufficiently solid state.[0044] In operation, the dielectric liquid 670 contacts not only the peripheral region 112 of the first die 110, but also the second dies 120 to efficiently transfer heat to the TTS 630. This provides significantly more surface contact between a material with high thermal conductivity and the dies 110 and 120 compared to devices that use an underfill material and/or have voids between the casing and the dies 110 and 120. In some embodiments, the cavity 638 is completely filled to prevent voids within the TTS 630, and the inlet 640a and outlet 640b are capped to seal the cavity 638. The embodiment of the assembly 600 is expected to provide highly efficient heat transfer from the first and second dies 110 and 120.[0045] Figure 7 is a cross-sectional view of another embodiment of the assembly 600 in accordance with the present technology. In this embodiment, the inlet 640a is a passageway extending through a lower portion of the sidewall 634 and the outlet 640b is a passageway extending through the top 632. This embodiment provides bottom up filling of the cavity 638, which is expected to mitigate the possible formation of air pockets within the cavity 638.[0046] Figure 8 is a cross-sectional view illustrating another embodiment of the assembly 600 in accordance with the present technology. In this embodiment, the TTS 630 is a multi- piece casing having a top component 632 and a separate sidewall 634 that are attached to each other by the adhesive 133. The sidewall 634 can be attached to the package support substrate 102 by the adhesive 148, and then the space between the interior surface 636 of the sidewall 634 and the dies 110 and 120 can be filled with the dielectric liquid 670. The top 632 is then attached to the sidewall 634 and the upper second die 120 by the adhesive 133. In many embodiments, the cavity 638 will have a small void caused by the thickness of the adhesives 133. To avoid having an expandable gas within the cavity 638, the top 632 of the TTS 630 can be attached to the sidewall 634 in a vacuum.[0047] Figure 9 is a cross-sectional view of a semiconductor die assembly 900 ("assembly 900") in accordance with another embodiment of the present technology. The embodiment illustrated in Figure 9 is similar to the embodiment of the assembly 100 illustrated in Figure 2F, and therefore like reference numbers refer to like components in Figures 1-9. In the assembly 900, the TTS 130 can further include an inlet 910a and an outlet 910b in the second portion 132 of the TTS 130. The inlet 910a and outlet 910b are passageways that are exposed to the cavity 138 within the TTS 130. The assembly 900 further includes both the underfill material 160 and the dielectric liquid 670 in the cavity 138. The underfill material 160 can be deposited as described above with reference to Figure 2E. The dielectric liquid 670 can be injected into the cavity via the inlet 910a, and air or excess dielectric liquid 670 can pass out of the cavity 138 via the outlet 910b. 
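(Illustrative note, not part of the original disclosure.) The heat-transfer comparison in paragraph [0044] above can be made concrete with a one-dimensional Fourier-conduction estimate, q = k * A * dT / d. The geometry and conductivity values below are assumptions; only the conductivity of air is a well-known figure.

    # Compare heat conducted across an air-filled gap versus the same gap
    # filled with a thermally conductive dielectric or underfill.
    k_air = 0.026    # W/(m*K), air near room temperature
    k_fill = 1.0     # W/(m*K), assumed filled underfill / dielectric liquid
    area_m2 = 1e-4   # 10 mm x 10 mm die surface facing the casing (assumed)
    gap_m = 100e-6   # 100 um gap between die and casing (assumed)
    dT = 20.0        # K temperature drop across the gap (assumed)

    for name, k in (("air void", k_air), ("dielectric fill", k_fill)):
        q = k * area_m2 * dT / gap_m
        print(f"{name}: ~{q:.2f} W conducted")
    # The filled gap conducts roughly k_fill/k_air (~40x here) more heat,
    # which is why eliminating voids within the casing matters.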
After the cavity 138 has been filled with the dielectric liquid 670, the inlet 910a and outlet 910b can be capped or otherwise sealed to seal the cavity 138 from the external environment.[0048] Any one of the stacked semiconductor die assemblies described above with reference to Figures 1-9 can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is system 1000 shown schematically in Figure 10. The system 1000 can include a semiconductor die assembly 1010, a power source 1020, a driver 1030, a processor 1040, and/or other subsystems or components 1050. The semiconductor die assembly 1010 can include features generally similar to those of the stacked semiconductor die assemblies described above, and can therefore include multiple thermal paths with good coverage of the peripheral region 112 of the first die 110 that enhance heat dissipation. The resulting system 1000 can perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions. Accordingly, representative systems 1000 can include, without limitation, hand-held devices (e.g., mobile phones, tablets, digital readers, and digital audio players), computers, and appliances. Components of the system 1000 may be housed in a single unit or distributed over multiple, interconnected units (e.g., through a communications network). The components of the system 1000 can also include remote devices and any of a wide variety of computer readable media.[0049] From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. For example, although many of the embodiments of the semiconductor dies assemblies are described with respect to HMCs, in other embodiments the semiconductor die assemblies can be configured as other memory devices or other types of stacked die assemblies. In addition, the semiconductor die assemblies illustrated in Figures 1-9 include a plurality of first semiconductor dies arranged in a stack on the second semiconductor die. In other embodiments, however, the semiconductor die assemblies can include one first semiconductor die stacked on one or more of the second semiconductor dies. Certain aspects of the new technology described in the context of particular embodiments may also be combined or eliminated in other embodiments. Moreover, although advantages associated with certain embodiments of the new technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein. |
Aspects of a method and system for a configurable antenna in an integrated circuit package are provided. In this regard, a phased array antenna embedded in a multi-layer integrated circuit (IC) package may be utilized for transmitting and/or receiving signals. An IC enabled to transmit and/or receive signals may be bonded to the multi-layer IC package and may communicate a reference signal and/or one or more phase shifted versions of said reference signal to the antenna. One or more phase shifters (fabricated, for example, in a planar transmission line) may be embedded in the multi-layer IC package and may be controlled via an IC bonded to the multi-layer IC package. The phased array antenna may comprise a plurality of antenna elements which may each comprise an interconnection for communicatively coupling to an associated transmitter and/or receiver, a feeder line, a quarter wavelength transformer, and a radiating portion (e.g., a folded dipole). |
1. A method for signal processing, characterized in that the method comprises: transmitting and/or receiving signals through a phased array antenna embedded in a multi-layer integrated circuit package, wherein an integrated circuit is bonded to the multi-layer integrated circuit package through solder balls and thermal epoxy resin; the phased array antenna comprises a plurality of antenna elements, each antenna element comprising an interconnection that communicatively couples a feeder, a quarter-wavelength transformer, and a radiating component to an associated transmitter and/or receiver; and controlling the directionality of the phased array antenna by changing the phase of the signals transmitted and/or received by the phased array antenna, wherein: a reference signal and/or one or more phase-shifted versions of the reference signal are sent to the phased array antenna; and a desired transmission pattern is obtained by controlling the phase adjustment of the signals coupled to the plurality of antenna elements; and wherein each antenna element of the phased array antenna can transmit and/or receive signals that are phase-shifted with respect to the other transmitted and/or received signals. 2. The method of claim 1, wherein one or more phase shifters are embedded in the multi-layer integrated circuit package. 3. The method of claim 2, wherein the method further comprises controlling the phase shift of the one or more phase shifters by one or more control signals from an integrated circuit bonded to the multi-layer integrated circuit package. 4. The method of claim 2, wherein the one or more phase shifters comprise planar transmission lines. 5. A signal processing system, characterized in that the system comprises: a phased array antenna embedded in a multi-layer integrated circuit package and operable to receive and/or transmit signals, wherein an integrated circuit is bonded to the multi-layer integrated circuit package through solder balls and thermal epoxy resin; the phased array antenna comprises a plurality of antenna elements, each antenna element comprising an interconnection that communicatively couples a feeder, a quarter-wavelength transformer, and a radiating component to an associated transmitter and/or receiver; each antenna element of the phased array antenna can transmit and/or receive signals that are phase-shifted with respect to the other transmitted and/or received signals; and the integrated circuit bonded to the multi-layer integrated circuit package is operable to send a reference signal and/or one or more phase-shifted versions of the reference signal to the phased array antenna. 6. The system of claim 5, wherein one or more phase shifters are embedded in the multi-layer integrated circuit package. |
A signal processing method and signal processing systemTechnical fieldThe present invention relates to signal processing, and more particularly, to a method and system for a phased array antenna embedded in an integrated circuit package.Background techniqueMobile communication has changed the way people communicate, and mobile phones have changed from luxury to an indispensable part of people's daily lives. Today, the use of mobile devices is driven by the social environment without being restricted by geography and technology. Although voice communication meets the basic requirements of people's communication, and mobile voice communication has further penetrated into people's daily life, the next stage of mobile communication development is mobile Internet. The mobile Internet will become a common source of daily information, and of course, simple and universal mobile access to these data should be realized.As the number of electronic devices that can support wired or mobile communications increases, it will take a lot of effort to make the power usage efficiency of these devices more efficient. For example, most communication devices are mobile wireless devices, which use batteries to operate. In addition, the transmitting and / or receiving circuits in these mobile wireless devices often account for most of the power consumed in these devices. In addition, in some conventional communication systems, the transmitter and / or receiver are usually power inefficient compared to other modules in the portable communication device. Therefore, these transmitters and / or receivers seriously affect the battery life of these mobile wireless devices.Comparing the various features of the system that will be described later in conjunction with the drawings of the present invention, other limitations and disadvantages of existing and conventional technologies will be apparent to those of ordinary skill in the art.Summary of the inventionThe present invention provides a system and / or method for a phased array antenna in an integrated circuit package, which is fully demonstrated and described in conjunction with at least one drawing, and is more fully elaborated in the claims.According to one aspect, a method for signal processing includes transmitting and / or receiving signals through a phased array antenna embedded in a multi-layer integrated circuit package.Preferably, the integrated circuit for performing the transmission and / or reception is bonded to the multilayer integrated circuit package through one or more solder balls.Preferably, the integrated circuit transmits a reference signal and / or one or more phase shifted forms of the reference signal to the phased array antenna.Preferably, one or more phase shifters are embedded in the multilayer integrated circuit package.Preferably, the method further includes controlling the phase shift of the one or more phase shifters by one or more control signals from the integrated circuit incorporated into the multilayer integrated circuit package.Preferably, the one or more phase shifters include a planar transmission line.Preferably, the phased array antenna includes a plurality of antenna elements, and each antenna element includes a coupler for communicatively coupling with an associated transmitter and / or receiver, feeder, 1/4 wavelength converter, and transmitting component Connected to each other.Preferably, the emitting component is a folded dipole structure.Preferably, the phased array antenna includes a planar transmission 
line.Preferably, the multilayer integrated circuit package includes one or more ferromagnetic and / or ferrimagnetic materials.According to one aspect, a signal processing system is provided, the system including:A phased array antenna embedded in a multi-layer integrated circuit package, which can be used to receive and / or transmit signals.Preferably, the integrated circuit is bonded to the multilayer integrated circuit package through one or more solder balls.Preferably, the integrated circuit transmits a reference signal and / or one or more phase shifted forms of the reference signal to the phased array antenna.Preferably, one or more phase shifters are embedded in the multilayer integrated circuit package.Preferably, the phase shift of the one or more phase shifters is controlled by one or more control signals from integrated circuits incorporated into the multilayer integrated circuit package.Preferably, the one or more phase shifters include a planar transmission line.Preferably, the phased array antenna includes a plurality of antenna elements, and each antenna element includes a coupler for communicatively coupling with an associated transmitter and / or receiver, feeder, 1/4 wavelength converter, and transmitting component Connected to each other.Preferably, the emitting component is a folded dipole structure.Preferably, the phased array antenna includes a planar transmission line.Preferably, the multilayer integrated circuit package includes one or more ferromagnetic and / or ferrimagnetic materials.Various advantages, aspects and innovative features of the present invention, as well as details of the embodiments illustrated therein, will be described in detail in the following description and drawings.BRIEF DESCRIPTIONThe present invention will be further described below with reference to the drawings and embodiments. In the drawings:1A is a block diagram of a phased array antenna in an integrated circuit package according to an embodiment of the present invention;1B is a typical block diagram of a phased array antenna embedded in and / or on an IC package according to an embodiment of the present invention;2A is a lateral cross-sectional view of a multilayer IC package with an embedded phased array antenna according to an embodiment of the present invention;2B is a lateral cross-sectional view of a multilayer IC package with an embedded phased array antenna and phase shifter according to an embodiment of the present invention;3 is a flowchart of typical steps for transmitting signals using a phased array antenna embedded in and / or on an IC package according to an embodiment of the present invention;4 is a typical block diagram of a wireless device according to an embodiment of the present invention.detailed descriptionCertain embodiments of the invention may include methods and systems for phased array antennas embedded in and / or on integrated circuit packages. In this regard, a phased array antenna embedded in and / or on a multilayer integrated circuit package can be used to transmit and / or receive signals. The multilayer integrated circuit package may include one or more metal layers, insulating materials, ferromagnetic materials, and / or ferrimagnetic materials. In a typical embodiment of the present invention, the antenna further includes one or more planar transmission lines. 
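(Illustrative note, not part of the original disclosure.) Before the element-level details that follow, the phase-steering principle used throughout this description can be sketched numerically. For a uniform linear array, the standard per-element phase shift is delta_phi = 2*pi*d*sin(theta)/lambda; the element spacing, steering angle, and linear geometry below are illustrative assumptions, not values from this disclosure.

    import math

    # Per-element phase shifts that steer a 4-element array (cf. Figure 1B)
    # toward an assumed angle theta at the 60 GHz band discussed herein.
    c = 3.0e8
    freq_hz = 60e9
    lam = c / freq_hz            # ~5 mm free-space wavelength at 60 GHz
    d = lam / 2                  # assumed half-wavelength element spacing
    theta = math.radians(20.0)   # desired steering angle (assumed)

    for n in range(4):           # four elements, as in the embodiment shown
        phi = 2 * math.pi * n * d * math.sin(theta) / lam
        print(f"element {n}: phase {math.degrees(phi):6.1f} deg")
    # Driving each element with its shifted copy of the reference signal
    # reinforces radiation toward theta and suppresses other directions.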
The phased array antenna may include multiple antenna elements, and each antenna element may include an associated transmitter and / or receiver, feeder (line feeder), quarter wavelength converter (quarter wavelength converter), An interconnection that is communicatively coupled to a transmitting component (eg, a folded dipole antenna). In various exemplary embodiments of the present invention, an IC that can be used to receive and / or transmit signals can be combined on a multilayer IC package through one or more solder balls. Therefore, the IC can send the reference signal and one or more phase shifted versions of the reference signal to the antenna. In a typical embodiment of the present invention, one or more phase shifters (for example, assembled in a planar transmission line) may be embedded in a multilayer IC package and may be integrated into the IC Logic, circuit and / or code control.FIG. 1A is a schematic block diagram of a configurable antenna embedded in and / or on an integrated circuit package according to an embodiment of the present invention. Referring to FIG. 1, a multilayer integrated circuit (IC) package 102, an associated IC ("chip") 106, a phased array antenna 102, and a solder ball 108 are shown.The IC 106 may include suitable logic, circuits, and / or code for performing one or more functions related to transmitting and / or receiving RF signals. In this regard, the IC 106 may include part or all of the system 420 shown in FIG. 4. In this regard, the IC 106 may use an antenna embedded in the package 104 to transmit and / or receive RF signals. In this regard, the IC 106 may include suitable logic, circuitry, and / or code to drive the antenna 102 with multiple phase shift signals. For example, IC 106 may include a transmitter and / or receiver. Or, the IC 106 may include a phase shift circuit and may be coupled with a separate transmitter and / or receiver (eg, via one or more traces on the PCB). In another embodiment, the phase shift element may be assembled in a package 104, and the package 104 may be used as a "standalone" or "standard" antenna communicatively coupled with various transmitters and / or receivers.The IC 106 can be bump-bonded or flip-chip bonded to the multilayer IC package 104 using the solder balls 108. In this method, wire bonds connecting the IC 106 to the multilayer IC package 104 can be eliminated and reduced, and / or uncontrolled leakage inductance caused by wire bonding can be eliminated. In addition, the use of solder balls 108 and thermal epoxy 206 (Figure 2) can significantly improve the thermal conductivity outside the IC 106. The thermal epoxy resin 221 may be electrically insulating, but thermally conductive, so as to allow for greater thermal mass to conduct heat away from the IC 106 to the multilayer integrated circuit package 104.The solder ball 108 may include a spherical metal ball for providing electrical, thermal, and physical contact between the multilayer IC package 104 and the IC 106. In the manufacture of contact with the solder ball 108, sufficient force can be used to crush the IC to flatten the metal ball slightly, and can be performed at a higher temperature to provide appropriate resistance and physical bonding strength, the solder ball 108 can be used Provide electrical, thermal, and physical contact between the multilayer IC package 104 and the printed circuit board that includes other components of the wireless system 420 shown in FIG. 
4.The multilayer IC package 104 may include one or more metal layers and / or insulating materials (various embodiments may include ferromagnets and / or ferromagnetic regions and / or layers). In this regard, the package 104 can be manufactured in a similar or the same way as the IC. Therefore, these layers can be used to implement circuit elements such as resistors, transistors, capacitors, transmission lines, switches, and antennas. At this point, multiple components of the phased array antenna 102 may be assembled into and / or on the package 104. Therefore, each of the plurality of antenna elements can transmit and / or receive a signal that is phase-shifted with respect to other transmission and / or reception signals.The phased array antenna 102 may include multiple antenna elements, each of which may be a metal or conductor structure for providing RF energy to the transceiver 423 in the system 420. In various embodiments of the invention, the various elements may be rectangular, circular, and / or other shapes. One or more of these elements may be coupled with one or more of the solder balls 108 (via one or more vias and / or one or more metal layers). In this method, the signal can be passed to / from the package 104. In the exemplary embodiment shown, 4 elements corresponding to 4 phases are used. Therefore, four phase shift representations of the reference signal can be received and / or transmitted through the antenna 102. FIG. 1B shows details of a typical 4-element phased array antenna.In operation, logic, circuitry, and / or code in the IC 106 and / or another device coupled to the package 104 (eg, located on the PCB and coupled through one or more solder balls 108) may pass through the phased array antenna Send and / or receive signals. The phase adjustment of the signal coupled with the antenna element can be controlled to obtain a desired transmission mode. In this method, the sensitivity and / or power in the desired direction may be enhanced compared to the sensitivity and / or power in the other direction.FIG. 1B is a typical block diagram of a phased array antenna embedded in and / or on an IC package according to an embodiment of the present invention. Referring to FIG. 1B, the phased array antenna may include four antenna elements 1501 ... 1504 (collectively referred to herein as 150). Each element 150 may include folded dipole emitting elements (folded dipole radiating elements) 1521 ... 1524 (collectively referred to herein as 152), 1/4 wavelength converters 1541 ... 1544 (collectively referred to herein as 154), Feeders 1561 ... 1564 (collectively referred to herein as 156) and interconnections 1581 ... 1584 (collectively referred to herein as 158).The folded dipole radiating element 152 may be a metal and / or conductive material that can receive and / or transmit RF energy through a wireless channel / media. At this point, the folded dipole radiating elements 1521 ... 1524 can be fitted into, for example, a planar transmission line (for example, a microwave transmission belt and / or a strip line). The physical size of the dipole antenna may affect the optimal transmission and / or reception frequency band. Each dipole radiating element 152 may transmit a signal that is phase-shifted with respect to signals transmitted by other dipole radiating elements 152.The quarter-wavelength converters 154 may be the length of a planar transmission line (for example, a microwave transmission line and / or a strip line). 
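(Illustrative aside, not part of the original disclosure.) The description does not state the converter's design equations, but the textbook quarter-wave transformer relations make the sizing concrete, with the effective permittivity below being an assumption:

    Z_T = \sqrt{Z_0 \, Z_L}, \qquad \ell = \frac{\lambda_g}{4} = \frac{c}{4 f \sqrt{\varepsilon_{eff}}}

where Z_0 is the feed-line impedance and Z_L the impedance of the radiating element. At f = 60 GHz with an assumed eps_eff of about 4, lambda_g is about 2.5 mm and the transformer length l is about 0.625 mm, consistent with the roughly 5 mm by 5 mm package mentioned below. For scale, a half-wave dipole at 60 GHz is on the order of lambda_0/2 = 2.5 mm in free space, and shorter in the package dielectric.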
The length and / or width of the quarter-wavelength converter 154 depends on the transmission and / or reception frequency and the impedance of the dipole radiating element 152 and the feed line 156. At this point, the 1/4 wavelength converter 154 can impedance match the feeder 156 to the folded dipole radiating element 152.The feed line 156 may be the length of a planar or coplanar transmission line for coupling to / from the RF signal of the dipole radiating element 152, respectively.The interconnections 158 may be vias and / or one or more metal layers that couple the feeder line 156 on the package 104 to one or more solder balls 108 that couple the package 104 to the IC 106, respectively.In a typical embodiment, the phased array antenna 102 may be designed to transmit and / or receive signals at or near 60 GHz. At this point, the exemplary embodiment can be implemented on a multilayer integrated circuit package of about 5 mm by 5 mm.2A is a lateral cross-sectional view of a multilayer IC package with an embedded phased array antenna according to an embodiment of the present invention. Referring to FIG. 2, a system 200 including an IC 106 and a multi-layer IC package 104 is shown. The multilayer IC package 104 may include an insulating material 203 and metal layers 202, 210. Although only two metal layers are shown, various embodiments of the invention may include any number of metal layers. The phased array antenna 102 can be assembled into the metal layer 202, and one or more devices, such as resistors, capacitors, inductors, transmission lines, phase shifters, etc., can be assembled into the metal layer 210. The IC 106 may be communicatively coupled to the package 104 via the solder ball 108, and the package 104 may be communicatively coupled to the PCB (not shown) via the solder ball 108. One or more surface mount devices 208 can be assembled onto the package 104. The thermal epoxy resin (or similar material) 206 may be pressed between the IC 106 and the package 104.IC 106 can be shown in Figure 1.The solder ball 108 may be as shown in FIG. 1.The surface mount device 208 may include discrete circuit elements, such as resistors, capacitors, inductors, or diodes. The surface mount device 208 can be soldered to the multilayer IC package 104 to provide electrical contact. In various embodiments of the invention, there may or may not be other surface mount devices 208 coupled to the package 104.In a typical embodiment of the present invention, the metal layer 202 may include a deposited metal layer for depicting the phased array antenna 102 shown in FIGS. 1A and 1B. At this point, in shape, the metal layer 202 can be stacked into a planar transmission line (eg, microwave transmission band), and / or can be folded in size to achieve a dipole radiating element 152, 1/4 wavelength converter 154, and feed line 156 .The interconnection 158 may be implemented in the form of one or more through holes that can communicatively couple the phased antenna 102 to one or more solder balls 108.In an exemplary embodiment of the present invention, the metal layer 210 may include a build-up metal layer for describing discrete devices, waveguides, transmission lines, interconnections, and the like. For example, the device 204a may be an inductor assembled to the metal layer 210. In addition, the transmission line 204b may couple the discrete device 208 to the solder ball 108. 
In this method, the signal can be transmitted to / from the antenna element 102 in the metal layer 202.In operation, IC 106 and associated package 104 may be used to send and / or receive RF signals. The IC 106 can be communicatively coupled to a phased array antenna embedded in and / or on the multilayer IC package 104. The directivity of the phased array antenna can be controlled by changing the phase of the signal transmitted from and / or received by the phased array antenna. For example, the signal to be transmitted can be modulated into an RF carrier, and 4 phase shift forms of the RF carrier can be generated. Therefore, various signals may be coupled to the antenna 102 through one or more solder balls 108. At this point, each signal may be coupled to the corresponding folded dipole emissive element 152 through the interconnection 158, feeder 156, and quarter-wavelength converter 154.2B is a lateral cross-sectional view of a multilayer IC package with embedded configurable antenna and phase shifter according to an embodiment of the present invention. Referring to FIG. 2B, a multilayer IC package 104 having an integrated phased array antenna similar to FIG. 2B is shown. However, FIG. 2B differs from FIG. 2A in that the package 104 in FIG. 2B includes a phase shifter 254. At this point, the RF signal coupled to IC 106 from interconnect 1581 may experience a first phase delay, and the RF signal coupled to IC 106 from interconnect 1583 may experience a second phase delay. Although not shown, each signal coupled to the other interconnect 158 will experience a phase delay.3 is a flowchart of typical steps for transmitting signals using a phased array antenna embedded in and / or on an IC package according to an embodiment of the present invention. Referring to FIG. 3, when the baseband signal is ready to be transmitted, the typical step may start at step 302. After step 302, the typical step will proceed to step 304.In step 304, the baseband signal may be modulated into a carrier signal. For example, the baseband signal can be divided into an in-phase signal and a quadrature-phase signal (quadrature) phase, and modulated into a pair of phase-quadrature carrier signals. In this regard, various embodiments of the invention may use carrier waves at or near 60 GHz. Then, the modulated signal may be combined to generate an RF signal. After step 304, the typical step will proceed to step 306.In step 306, the RF signal generated in step 304 may be divided into multiple phases. For example, in the embodiment shown in FIGS. 1A and 1B, the signal may be divided into 4 phases. At this point, the phase adjustment of the signal can be used to control the direction of the phased array antenna. After step 306, the typical step will proceed to step 308.At step 308, the RF signal may be amplified. At this point, a power amplifier can be used to amplify the signal, so that sufficient signal strength can be transmitted through the phased array antenna. After step 308, the typical step will proceed to step 310.At step 310, the amplified signal may be forwarded to the phased array antenna for transmission. At this point, multiple phases can be forwarded to a corresponding number of transmitting elements.Some steps similar to those shown in FIG. 3 can also be used to receive signals through a phased array antenna integrated into the IC package.4 is a typical block diagram of a wireless device according to an embodiment of the present invention. Referring to FIG. 
4, a wireless device 420 that may include an RF receiver 423a, an RF transmitter 423b, a digital baseband processor 429, a processor 425, and a memory 427 is shown. The receiving antenna 421a may be communicatively coupled to the RF receiver 423a. The transmit antenna 421b may be communicatively coupled to the RF transmitter 423b. The wireless device 420 may operate in a system, such as a cellular network and / or digital video broadcasting network.The antennas 421a and 421b may be phased array antennas similar to or the same as the antenna 102 shown in FIG. 1A. At this point, the directivity of the antenna can be controlled by controlling the phase of the signal coupled to the antenna.The RF receiver 423a may include appropriate logic, circuitry, and / or code for processing the received signal. The RF receiver can receive RF signals in multiple frequency bands. For example, the RF receiver 423a can receive signals in an extremely high frequency band (such as 60 GHz). The RF receiver 423a may be used to receive, filter, amplify, down-convert, and / or perform digital-to-analog conversion. The RF receiver 423a can down-convert the received RF. At this point, the RF receiver 423a may perform direct down conversion of the received RF signal to baseband, or may convert the received RF signal to an intermediate frequency (IF). In various embodiments of the present invention, the receiver 423a may perform integral down-conversion, where the in-phase component and the quadrature-phase component may be processed in parallel. The receiver 423a may be used to receive a signal through an antenna 421a, which may be a phased array antenna embedded on and / or in an integrated circuit package as shown in FIGS. 1A, 1B, 2A, and 2B. In various embodiments of the present invention, the wireless device 420 may include multiple receivers 423a, and may support simultaneous reception of signals in multiple frequency bands and / or the same frequency band.The digital baseband processor 429 may include suitable logic, circuitry, and / or code to process and / or manipulate baseband signals. At this point, when there is an RF transmitter 423b, the digital baseband processor may process or manipulate the signal received from the RF receiver 423a and / or the signal to be transmitted to the RF transmitter 423b for transmission to the network. The digital baseband processor 429 may provide control and / or feedback information to the RF receiver 423a and the RF transmitter 423b based on the information from the processed signal. At this point, the baseband processor 429 may provide one or more control signals for configuring the phase shift of received and / or transmitted RF signals. At this point, the phase shift applied to the RF signal can control the directionality of the phased array signal. The digital baseband processor 429 may send the processed information and / or data to the processor 425 and / or the memory 427. In addition, the digital baseband processor 429 may receive information from the processor 425 and / or send the information to the memory 427, process the information and forward the information to the RF transmitter 423b to send to the network.The RF transmitter 423b may include appropriate logic, circuitry, and / or code to process the RF signal in order to transmit the RF signal. The transmitter 423b can be used to transmit a signal through an antenna 421b, which can be assembled in an integrated circuit package of a phased array antenna as shown in FIGS. 1A, 1B, 2A, and 2B. 
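(Illustrative note, not part of the original disclosure.) The transmit flow of Figure 3 described above (modulate baseband I/Q onto a carrier, split the RF signal into phase-shifted copies, amplify, radiate) can be sketched numerically as follows. The sample rate, symbol values, and per-element phases are assumptions for illustration only.

    import numpy as np

    # Minimal numeric sketch of the Figure 3 transmit flow.
    fs = 1e12                 # simulation sample rate (assumed, well above 2*fc)
    fc = 60e9                 # carrier in the 60 GHz band discussed herein
    t = np.arange(0, 2e-9, 1/fs)

    i_bb, q_bb = 1.0, -1.0    # one illustrative quadrature (I/Q) symbol
    rf = i_bb * np.cos(2*np.pi*fc*t) - q_bb * np.sin(2*np.pi*fc*t)

    # Split into four phase-shifted copies, one per antenna element.
    phases = np.deg2rad([0.0, 45.0, 90.0, 135.0])   # illustrative shifts
    elements = [i_bb*np.cos(2*np.pi*fc*t + p) - q_bb*np.sin(2*np.pi*fc*t + p)
                for p in phases]
    # Each entry of `elements` would then be amplified and fed to one
    # folded dipole radiating element through its feeder and transformer.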
The RF transmitter 423b can be used to transmit RF signals in multiple frequency bands. For example, the RF transmitter 423b can be used to transmit signals in an extremely high frequency band (for example, 60 GHz). Each frequency band supported by the RF transmitter 423b may have a corresponding front-end circuit for handling amplification and up-conversion operations. In this regard, when it supports more than one frequency band, the RF transmitter 423b may be referred to as a multi-band transmitter. In another embodiment of the invention, the wireless device 420 may include more than one RF transmitter 423b, where each RF transmitter 423b may be a single-band or multi-band transmitter.In various embodiments of the present invention, the RF transmitter 423b may perform direct up-conversion of baseband signals to RF signals. In some examples, the RF transmitter 423b may be used to perform digital-to-analog conversion on the baseband signal component received from the digital baseband processor 429 before upconversion. In other examples, the RF transmitter 423b may receive baseband signal components in analog form.The processor 425 may include suitable logic, circuitry, and / or code for controlling and / or for data processing operations of the wireless device 420. The processor 425 may be used to control at least a part of the RF receiver 423a, the RF transmitter 423b, the digital baseband processor 429, and / or the memory 427. At this point, the processor 425 may generate at least one signal for controlling operations in the wireless device 420. In this regard, the baseband processor 429 may provide one or more phases for controlling signals transmitted and / or received through the phased array antennas 421a and 421b. The processor 425 may be used to execute applications that can be used by the wireless device 420. For example, the processor 425 can execute applications that can display content and / or interact with it that can be received through cellular transmission signals in the wireless device 420.The memory 427 may include suitable logic, circuitry, and / or code for storing data and / or other information used by the wireless device 420. For example, the memory 427 may be used to store processed data generated by the digital baseband processor 429 and / or processor 425. The memory 427 can be used to store information, such as configuration information that can be used to control the operation of at least one module in the wireless device 420. For example, the memory 427 may include information necessary for controlling the phase of signals received and / or transmitted through the antennas 421a and 421b. At this point, the memory may store control and / or configuration information for configuring one or more phase shifters.The present invention provides a method and system for a resettable antenna in an integrated circuit package. In this regard, a phased array antenna (eg, 102) embedded in a multilayer integrated circuit (IC) package (eg, 104) can be used to transmit and / or receive signals. The multi-layer integrated circuit package may include one or more metal layers (eg, 202 and 210), insulating materials (eg, 203), ferromagnetic and / or ferrimagnetic materials. In a typical embodiment of the present invention, the antenna may include one or more planar transmission lines. 
The phased array antenna may include multiple antenna elements (eg, 150), and each antenna element may include a transmitter and / or receiver, a feeder (eg, 156), a quarter-wavelength converter ( For example, 154), and a transmitting member (for example, folded dipole antenna 152) are communicatively coupled to each other (for example, 158). In various exemplary embodiments of the present invention, an IC (eg, 106) that can be used to receive and / or transmit signals can be incorporated on a multilayer IC package through one or more solder balls (eg, 211). Therefore, the IC can send the reference signal and one or more phase shifted versions of the reference signal to the antenna. In a typical embodiment of the present invention, one or more phase shifters (for example, 256) (for example, assembled in a planar transmission line) can be embedded in a multilayer IC package and can be combined with the multilayer IC package Logic, circuit and / or code control in the integrated IC.Yet another embodiment of the present invention may provide a computer-readable information storage method. After storing the information on it, a computer program containing at least one code segment executable by the instrument can control the instrument to run the above steps for the phased array antenna in the integrated circuit package.Therefore, the present invention can be implemented by hardware, software, or a combination of software and hardware. The invention can be implemented in a centralized manner in at least one computer system, or in a decentralized manner by different parts distributed among several interconnected computer systems. Any computer system or other device that can implement the method is applicable. The combination of common hardware and software can be a general computer system with a computer program installed, and the computer system is controlled by the installation and execution of the program so that it operates according to the method.The present invention can also be implemented by a computer program product. The program contains all the features that can implement the method of the present invention. When it is installed in a computer system, the method of the present invention can be implemented. The computer program in this document refers to: any expression of a set of instructions that can be written in any programming language, code, or symbol. This set of instructions gives the system information processing capabilities to directly implement a specific function, or is underway. After one or two steps, specific functions are realized: a) conversion into other languages, codes or symbols; b) reproduction in different formats.Although the present invention is described through specific embodiments, those skilled in the art should understand that various changes and equivalent substitutions can be made to the present invention without departing from the scope of the present invention. In addition, the present invention can be variously modified for specific situations or materials without departing from the scope of the present invention. Therefore, the present invention is not limited to the disclosed specific embodiments, but should include all embodiments falling within the scope of the claims of the present invention. |
A method for forming a metal-oxide semiconductor field-effect transistor (MOSFET) (200) includes patterning a fin area, a source region, and a drain region on a substrate, forming a fin (310) in the fin area, and forming a mask (320) in the fin area. The method further includes etching the mask (320) to expose a channel area (330) of the MOSFET (200), etching the fin (310) to thin a width of the fin (310) in the channel area (330), forming a gate over the fin (310), and forming contacts to the gate, the source region, and the drain region. |
WHAT IS CLAIMED IS: 1. A method for forming a metal-oxide semiconductor field-effect transistor (MOSFET) (200), comprising: forming a fin (310) on a substrate; forming a mask (320) on the substrate; etching the mask (320) to expose a channel area (330) of the MOSFET (200); thinning a width of the fin (310) in the channel area (330); and forming a gate over the fin (310), the gate extending on each side of the fin (310). 2. The method of claim 1, further comprising: patterning a fin area, a source area, and a drain area. 3. The method of claim 2, further comprising: forming a silicide material on the substrate; and forming a gate contact, a source contact, and a drain contact through the silicide material. 4. The method of claim 1, wherein the forming a mask includes: depositing damascene material over the substrate. 5. The method of claim 4, wherein the forming a gate includes: etching the damascene material to form a gate area, forming a gate dielectric (510) on side surfaces of the fin (310), and depositing gate electrode material (520) to at least partially fill the gate area. 6. The method of claim 1, wherein the thinning a width of the fin (310) includes: removing approximately 100 A to 200 A per side from a width of the fin (310). 7. A metal-oxide semiconductor field-effect transistor (MOSFET) (200), characterized by: a fin (310) having a width of approximately 100 A to 400 A formed on a substrate; a gate dielectric (510) formed on side surfaces of the fin (310) ; and a gate electrode (520) formed covering the fin (310). 8. The MOSFET (200) of claim 7, wherein the gate electrode (520) comprises first and second gate areas formed on first and second respective sides of the fin (310), the first and second gate areas being aligned with each other; and wherein the MOSFET (200) further comprises: a source area; and a drain area. 9. A method for forming a metal-oxide semiconductor field-effect transistor (MOSFET) (200), <Desc/Clms Page number 8> comprising: patterning a fin area, a source region, and a drain region on a substrate; forming a fin (310) in the fin area; forming a mask (320) in the fin area; etching the mask (320) to expose a channel area (330) of the MOSFET (200); etching the fin (310) to thin a width of the fin (310) in the channel area (330); forming a gate over the fin (310) ; and forming contacts (610-630) to the gate, the source region, and the drain region. 10. The method of claim 9, wherein the forming a mask (320) includes: depositing damascene material (320) over the substrate. 11. The method of claim 10, wherein the forming a gate includes: etching the damascene material (320) to form a gate region, forming a gate dielectric (510) on side surfaces of the fin (310), and depositing gate electrode material (520) to at least partially fill the gate region. 12. The method of claim 9, wherein the etching the fin (310) includes: removing approximately 100 A to 200 A per side from the width of the fin (310). |
SELF ALIGNED DAMASCENE GATE FIELD OF THE INVENTION The present invention relates generally to semiconductor devices and, more particularly, to metal-oxide semiconductor field-effect transistor (MOSFET) devices with a self aligned damascene gate and methods of making these devices. BACKGROUND OF THE INVENTION Scaling of device dimensions has been a primary factor driving improvements in integrated circuit performance and reduction in integrated circuit cost. Due to limitations associated with gate-oxide thicknesses and source/drain (S/D) junction depths, scaling of existing bulk MOSFET devices below the 0.1 um process generation may be difficult, if not impossible. New device structures and new materials, thus, are likely to be needed to improve FET performance. Double-gate MOSFETs represent devices that are candidates for succeeding existing planar MOSFETs. In double-gate MOSFETs, the use of two gates to control the channel significantly suppresses short-channel effects. A FinFET is a double-gate structure that includes a channel formed in a vertical fin. Although a double-gate structure, the FinFET is similar to existing planar MOSFETs in layout and fabrication techniques. The FinFET also provides a range of channel lengths, CMOS compatibility, and large packing density compared to other double-gate structures. SUMMARY OF THE INVENTION Implementations consistent with the principles of the invention provide FinFET devices that include a damascene gate formed with a self aligned gate mask and methods for manufacturing these devices. In one aspect consistent with the principles of the invention, a method for forming a metal-oxide semiconductor field-effect transistor (MOSFET) includes patterning a fin area, a source region, and a drain region on a substrate, forming a fin (310) in the fin area, and forming a mask in the fin area. The method further includes etching the mask to expose a channel area of the MOSFET, etching the fin to thin a width of the fin in the channel area, forming a gate over the fin, and forming contacts to the gate, the source region, and the drain region. In another aspect consistent with the principles of the invention, a method for forming a MOSFET includes forming a fin (310) on a substrate; forming a mask on the substrate; etching the mask to expose a channel area of the MOSFET; thinning a width of the fin in the channel area; and forming a gate over the fin, where the gate extends on each side of the fin. In yet another aspect consistent with the principles of the invention, a MOSFET includes a fin (310) having a width of approximately 100 A to 400 A formed on a substrate, a gate dielectric formed on side surfaces of the fin, and a gate electrode formed covering the fin. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings, Fig. 1 illustrates an exemplary process for fabricating a MOSFET in accordance with an implementation consistent with the principles of the invention; Figs. 2A-6C illustrate exemplary top and cross-sectional views of a MOSFET fabricated according to the processing described in Fig. 1; Figs. 7A-7C illustrate a process for forming spacers according to another implementation consistent with the principles of the invention; Figs. 8A-8C illustrate an exemplary process for removing fin sidewall damage; and Fig. 9 illustrates an exemplary process for improving mobility of a FinFET device. DETAILED DESCRIPTION The following detailed description of implementations consistent with the present invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents. Implementations consistent with the principles of the invention provide FinFET devices that include a self aligned damascene gate and methods for manufacturing these devices. Such FinFET devices have certain advantages. For example, only the active area of the fin is at the minimum channel length, which reduces source/drain resistance. The gate is also self aligned to the minimum channel area, which significantly reduces the parasitic source/drain resistance of the device. In traditional FinFET approaches, the narrow channel is usually significantly longer than the gate length in order to account for gate-to-fin overlay tolerance. Also, the gate patterning is done on a planar substrate (e.g., a polished damascene material), which provides increased lithography margin since the depth of focus of aggressive lithography schemes tends to be quite low. Also, critical dimension variation due to changes in resist thickness over topography (i.e., CD swing) can be avoided since the resist coating is on a planarized surface. EXEMPLARY MOSFET Fig. 1 illustrates an exemplary process for fabricating a MOSFET in accordance with an implementation consistent with the principles of the invention. Figs. 2A-6C illustrate exemplary top and cross-sectional views of a MOSFET fabricated according to the processing described with regard to Fig. 1. With reference to Figs. 1 and 2A-2C, processing may begin with semiconductor device 200. As shown in the cross-sectional views in Figs. 2A and 2B, semiconductor device 200 may include a silicon on insulator (SOI) structure that includes a silicon (Si) substrate 210, a buried oxide layer 220, and a silicon layer 230 on the buried oxide layer 220. Buried oxide layer 220 and silicon layer 230 may be formed on substrate 210 in a conventional manner. The thickness of buried oxide layer 220 may range, for example, from about 1,000 A to 10,000 A. The thickness of silicon layer 230 may range, for example, from about 400 A to 1,500 A. The silicon layer may be as thick as possible, since increased thickness leads to an enhanced width of the device (i.e., more current flows along the sidewalls of the fin, and thereby a higher drive current, since in a MOSFET I ∝ W/L). It is usually difficult to use a thick silicon layer in a conventional FinFET approach, since that also leads to a bigger step at the gate lithography step and poor lithography margin. It will be appreciated that silicon layer 230 is used to form the fin. In alternative implementations, substrate 210 and layer 230 may include other semiconductor materials, such as germanium, or combinations of semiconductor materials, such as silicon-germanium. Buried oxide layer 220 may include a silicon oxide or other types of dielectric materials. A silicon nitride, or another 
type of material, may be formed on silicon layer 230 and may function as a bottom antireflective coating (BARC) 240 for subsequent processing, as illustrated in Figs. 2A and 2B. The thickness of BARC layer 240 may range from approximately 150 A to 350 A. A photoresist 250, or the like, may be deposited and patterned to facilitate formation of a large fin area and the source and drain regions (act 110), as shown in Figs. 2A-2C. Photoresist 250 may be deposited to a thickness ranging from about 1,000 A to 4,000 A. Fig. 2C illustrates the top view of semiconductor device 200 of Figs. 2A and 2B. The cross-section in Fig. 2A is taken along line X in Fig. 2C and the cross-section in Fig. 2B is taken along line Y in Fig. 2C. Silicon layer 230 may be etched to form a fin 310 (act 120), as shown in Figs. 3A and 3B. For example, the portion of silicon layer 230 not located under photoresist 250 may be etched, with the etching terminating on buried oxide layer 220. Photoresist 250 may then be removed. The width of fin 310, as shown in Fig. 3B, may range from approximately 500 A to 800 A. A damascene mask may be formed in the area of fin 310 (act 130), as illustrated in Figs. 3A-3C. For example, a damascene material 320, such as silicon oxide, silicon nitride, SiCOH, etc., may be deposited over semiconductor device 200 to a thickness ranging from approximately 800 A to 2,200 A (to enclose fin 310 and BARC 240) and then polished using known techniques, as illustrated in Figs. 3A and 3B. Damascene material 320 may function as a BARC for subsequent processing. Damascene material 320 may then be etched using a gate mask to expose a channel area 330 in the gate opening, as shown in Figs. 3A-3C. The width of channel area 330, as illustrated in Fig. 3C, may range from approximately 300 A to 500 A. The gate mask used to expose channel area 330 may be created using aggressive lithography and patterning techniques known to those skilled in the art. The width of fin 310 may then be reduced (act 140), as illustrated in Figs. 4A-4C. One or more etching techniques may be used to laterally etch fin 310 in channel area 330. For example, a thermal oxidation of Si followed by a dilute HF dip may be used. Other types of etches may alternatively be used. For example, Si may be etched in a downstream F plasma, where the chemical selectivity of the Si etch in F species over oxide is very high, or a lateral Si etch in HBr-based plasma chemistries may be used. The amount of silicon removed may range from approximately 100 A to 200 A per side, as illustrated in Fig. 4B. The resulting width of fin 310 may range from approximately 100 A to 400 A. BARC 240 may remain in implementations consistent with the principles of the invention, as illustrated in Fig. 4B. In other implementations, BARC 240 may be removed. Fig. 4C illustrates a top view of semiconductor device 200 after fin 310 has been thinned in channel area 330. A gate may then be formed (act 150), as illustrated in Figs. 5A-5C. For example, a gate dielectric material 510 may be deposited or thermally grown on the side surfaces of fin 310 using known techniques, as illustrated in Fig. 5B. Gate dielectric material 510 may include conventional dielectric materials, such as an oxide (e.g., silicon dioxide), silicon oxy-nitride, or high dielectric constant (high-K) materials, such as HfO2. In other implementations, a silicon nitride or other materials may be used to form the gate dielectric. 
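(Illustrative note, not part of the original disclosure.) The fin geometry after the lateral thinning etch, and the effective electrical width that motivates a thick silicon layer, can be checked with simple arithmetic. The specific values below are assumptions chosen from within the ranges stated above.

    # Back-of-envelope check of the fin geometry after thinning.
    w_initial_A = 600        # starting fin width, within the 500-800 A range
    etch_per_side_A = 150    # within the 100-200 A per-side range
    w_final_A = w_initial_A - 2 * etch_per_side_A
    print(w_final_A)         # 300 A, inside the stated 100-400 A final range

    # In a double-gate fin, conduction is dominated by the two sidewalls,
    # so the effective width is roughly twice the fin height; a thicker
    # silicon layer thus raises drive current (I is proportional to W/L,
    # as noted earlier). This W_eff ~ 2*h relation is an assumption of the
    # sketch, not a statement from the disclosure.
    h_fin_A = 1000           # silicon thickness, within the 400-1500 A range
    print(2 * h_fin_A)       # ~2000 A effective width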
Gate dielectric material 510 may be formed at a thickness ranging from approximately 10 A to 20 A. A gate electrode material 520 may then be deposited over semiconductor device 200 and polished, as illustrated in Figs. 5A and 5B. Gate electrode material 520 may be polished (e.g., via chemical-mechanical polishing (CMP)) to remove any gate material over damascene material 320, as illustrated in Figs. 5A and 5B. A number of materials may be used for gate electrode material 520. For example, gate electrode material 520 may include a polycrystalline silicon or other types of conductive material, such as germanium or combinations of silicon and germanium, or metals, such as W, WN, TaN, TiN, etc. Gate electrode material 520 may be formed at a thickness ranging from approximately 700 A to 2,100 A, as illustrated in Fig. 5B, which may be approximately equal to the thickness of damascene material 320 (some of which may be lost due to the polishing). Fig. 5C illustrates a top view of semiconductor device 200 after gate electrode 520 is formed. The dotted lines in Fig. 5C represent the thinned portion of fin 310. Gate dielectric layer 510 is not illustrated in Fig. 5C for simplicity. Source, drain, and gate contacts may then be formed (act 160), as illustrated in Figs. 6A-6C. For example, in one implementation, large contact areas may be opened over fin 310 on either side of the gate, as illustrated in Fig. 6A. Source and drain contact areas 610 and 620 may be opened by etching through the extra amount of damascene material 320 left above fin 310 and also removing BARC 240. Gate contact area 630 may also be formed on gate electrode 520. It may be possible for these contact areas 610-630 to be larger than the actual dimensions of fin 310 and the source/drain. Silicidation, such as CoSi2 or NiSi silicidation, can then occur in these openings. The CoSi2 or NiSi silicidation occurs only where there is polysilicon (i.e., gate) or silicon (i.e., source/drain) and whatever fin region (wide fin) is exposed. The unreacted cobalt or nickel (wherever there is no silicon) can be etched away, just as is done in typical self-aligned silicide schemes in use by the industry today. In another implementation, damascene material 320 and BARC 240 may be removed from the top of fin 310 and the source/drain. Then, a sidewall spacer may be formed on the sides of the gate and fin 310. Next, a silicide metal, such as cobalt or nickel, may be deposited to form a self aligned silicide wherever there is silicon or polysilicon exposed at the top (i.e., on the gate and on the exposed fin channel). The resulting semiconductor device 200, therefore, may include a self aligned damascene gate formed on either side of fin 310. Fin 310 is thinned in the channel area, as illustrated by the dotted lines in Fig. 6C. According to another implementation consistent with the principles of the invention, spacers may be formed for the transfer of the damascene gate to make the gate length smaller. Figs. 7A-7C illustrate an exemplary process for forming spacers according to an alternate implementation consistent with the principles of the invention. As illustrated in Figs. 7A-7C, a hardmask 710 may be opened (Fig. 7A), spacers 720 may be formed (Fig. 7B), and the transfer of the damascene gate may be performed in the opening (Fig. 7C). 
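(Illustrative aside, not part of the original disclosure.) The gate-length reduction that the spacers 720 provide, elaborated in the next passage, follows from simple geometry; the dimensions below are illustrative assumptions, not values from this disclosure:

    L_{gate} = L_{printed} - 2\, t_{spacer}, \qquad \text{e.g.}\ 100\ \text{nm} - 2(30\ \text{nm}) = 40\ \text{nm}.

That is, spacers formed on the sidewalls of a lithographically printed opening narrow it by twice the spacer thickness, yielding a gate length smaller than lithography alone could print.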
The spacer formation inside the damascene gate opening may facilitate printing of small spaces (as mentioned above) in order to form small gate length devices. The spacer technique enables the formation of smaller spaces than may be attained by photolithographic shrinking alone. In another implementation, damascene gate shrink techniques may be used, such as the ones described in copending, commonly assigned applications entitled "FINFET GATE FORMATION USING REVERSE TRIM AND OXIDE POLISH" (Serial No. 10/459,589) (Docket No. H1122), filed June 12, 2003, "FINFET GATE FORMATION USING REVERSE TRIM OF DUMMY GATE" (Serial No. 10/320,536) (Docket No. H1121), filed December 17, 2002, and "ETCH STOP LAYER FOR ETCHING FINFET GATE OVER A LARGE TOPOGRAPHY" (Serial No. 10/632,989) (Docket No. H1172), filed August 4, 2003, which are incorporated herein by reference. In yet another implementation, a metal gate electrode may be used instead of the polysilicon damascene process described above. OTHER IMPLEMENTATIONS There is a need in the art to remove damage that may occur to the side surfaces (i.e., sidewalls) of a fin during processing. Figs. 8A-8C illustrate an exemplary process for removing fin sidewall damage. A semiconductor device 800 may include a fin layer 810 and a cover layer 820 formed on a substrate 830, as illustrated in Fig. 8A. Fin layer 810 may include a semiconductor material, such as silicon or germanium, or combinations of semiconductor materials. Cover layer 820 may, for example, include a silicon nitride material or some other type of material capable of protecting fin layer 810 during the fabrication process. Fin layer 810 and cover layer 820 may be etched using a conventional dry etching technique to form fin 840, as illustrated in Fig. 8B. A conventional wet etching technique may then be used to remove fin sidewall damage, as illustrated in Fig. 8C. During the wet etching, the width of fin 840 may be thinned by approximately 20 Å to 40 Å per side. Wet etching of silicon may also result in some buried oxide loss, since it is difficult when wet etching to get good selectivity of silicon to silicon dioxide. There is also a need in the art to improve the mobility of a FinFET device. Fig. 9 illustrates an exemplary process for improving mobility of a FinFET device. A die-attach material may be formed on a package, as illustrated in Fig. 9. The die-attach material may be selected to induce stress (strain) in the FinFET channel. A die may then be attached to the die-attach material, as illustrated in Fig. 9. Tensile stress induced in the silicon FinFET channel may result in enhanced hole mobility, which can help significantly improve PMOS FinFET performance. The die-attach material and process may be such that the residual stress in the silicon layer is tensile. For example, if the package material did not shrink as fast as the silicon layer after the (hot) die attach/solder/bump process, then the silicon layer could be in tensile stress when cooled to lower temperatures. CONCLUSION Implementations consistent with the principles of the invention provide FinFET devices that include a damascene gate formed with a self-aligned gate mask and methods for manufacturing these devices. These FinFET devices have certain advantages. For example, only the active area of the fin is at the minimum channel length, the gate is self-aligned to the minimum channel, and the gate patterning is performed on a planar substrate (e.g.
, a polished damascene material). The foregoing description of exemplary embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, in the above descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of implementations consistent with the present invention. These implementations and other implementations can be practiced, however, without resorting to the details specifically set forth herein. In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure the thrust of the present invention. In practicing the present invention, conventional deposition, photolithographic and etching techniques may be employed, and hence, the details of such techniques have not been set forth herein in detail. While a series of acts has been described with regard to Fig. 1, the order of the acts may be varied in other implementations consistent with the present invention. Moreover, non-dependent acts may be implemented in parallel. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. The scope of the invention is defined by the claims and their equivalents. |
Embodiments of the present disclosure include semiconductor processing methods and systems. One method includes forming a material layer (104-1, 104-2) on a semiconductor substrate by exposing a deposition surface of the substrate to at least a first (REACTANT1, R1) and a second (REACTANT2, R2) reactant sequentially introduced into a reaction chamber (202, 402) having an associated process temperature. The method includes removing residual first reactant (REACTANT1, R1) from the chamber (202, 402) after introduction of the first reactant (REACTANT1, R1), removing residual second reactant (REACTANT2, R2) from the chamber (202, 402) after introduction of the second reactant (REACTANT2, R2), and establishing a temperature differential substantially between an edge of the substrate and a center (105) of the substrate via a purge process. |
What is claimed is: 1. A method for semiconductor processing, comprising: exposing a deposition surface of a semiconductor substrate to at least a first (REACTANT1, R1) and a second (REACTANT2, R2) reactant sequentially introduced into a chamber (202, 402) having an associated process temperature (710); removing residual first reactant (REACTANT1, R1) from the chamber (202, 402) after introduction of the first reactant (REACTANT1, R1) (720); removing residual second reactant (REACTANT2, R2) from the chamber (202, 402) after introduction of the second reactant (REACTANT2, R2) (730); and establishing a temperature differential substantially between an edge of the substrate and a center (105) of the substrate via a purge process (740). 2. The method of claim 1, wherein establishing the temperature differential includes, during the purge process, introducing an amount of purge gas (PURGE1, PURGE2) having a temperature different than the process temperature into the chamber (202, 402). 3. The method of claim 2, wherein the amount of purge gas (PURGE1, PURGE2) has a temperature less than the process temperature, and wherein establishing the temperature differential includes delivering the amount of purge gas (PURGE1, PURGE2) across a deposition surface of the substrate. 4. The method of claim 2, wherein the amount of purge gas (PURGE1, PURGE2) has a temperature greater than the process temperature, and wherein establishing the temperature differential includes delivering a first portion of the amount of purge gas (PURGE1, PURGE2) across a deposition surface of the substrate. 5. The method of claim 4, wherein the method includes delivering the first portion of the amount of purge gas (PURGE1, PURGE2) from a gas source through a number of elongate injectors (235, 230-1, 230-2, 230-3, 435, 430-1, 430-2, 430-3) of an injector assembly (229, 529, 629) such that a temperature of the deposition surface of the substrate decreases as the first portion of the amount of purge gas (PURGE1, PURGE2) moves from the edge of the substrate toward the center (105). 6. The method of claim 4, wherein the method includes delivering a second portion of the amount of purge gas (PURGE1, PURGE2) having a temperature greater than the process temperature from an injector assembly (229, 529, 629) toward an upper surface of the chamber (202, 402). 7. The method of claim 1, wherein the method includes maintaining the chamber (202, 402) at a steady process temperature while exposing the deposition surface of the substrate to the sequentially introduced first (REACTANT1, R1) and the second (REACTANT2, R2) reactants. 8. The method of claim 1, wherein the method includes delivering an amount of purge gas (PURGE1, PURGE2) heated to a temperature greater than the process temperature into the chamber (202, 402) during introduction of at least one of the first reactant (REACTANT1, R1) and the second reactant (REACTANT2, R2) into the chamber (202, 402). 9. The method of claim 1, wherein the method includes forming a material layer (104-1, 104-2) of silicon oxide on the substrate via an atomic layer deposition process. 10.
A method for semiconductor processing, comprising: heating a batch of semiconductor substrates within a chamber (202, 402) to a process temperature; exposing a deposition surface of the substrates to at least a first (REACTANT1, R1) and a second reactant (REACTANT2, R2) sequentially introduced into the chamber (202, 402); performing a first purge process (720) after introduction of the first reactant (REACTANT1, R1) and performing a second purge process (730) after introduction of the second reactant (REACTANT2, R2); and wherein at least one of the first (720) and second (730) purge process includes creating a temperature differential across the deposition surface of a number of the substrates by directing an amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER), heated to a temperature other than the process temperature, across the deposition surface of the number of wafers (207). 11. The method of claim 10, wherein creating the temperature differential includes introducing a first portion of the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER) into the chamber (202, 402) through a first vertical injector (230-1, 230-2, 230-3, 430-1, 430-2, 430-3) configured to direct the first portion through a number of apertures (232) along a length of the first vertical injector (230-1, 230-2, 230-3, 430-1, 430-2, 430-3) toward a center (105) of the number of substrates. 12. The method of claim 11, wherein the at least one of the first (720) and second (730) purge processes includes introducing a second portion of the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER) into the chamber (202, 402) through a second vertical injector (235, 435) configured to direct the second portion through an aperture at an end of the second vertical injector (235, 435) toward an upper surface of the chamber (202, 402). 13. The method of claim 12, wherein the method includes directing the second portion (PURGE1, PURGE2, PURGE/CARRIER) toward the upper surface of the chamber (202, 402) to decrease a deposition rate associated with a number of wafers (207) located closer to the upper surface than a number of wafers (207) located farther from the upper surface. 14. The method of claim 10, wherein the method includes heating the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER) to a temperature greater than the process temperature. 15. The method of claim 10, wherein the method includes: maintaining a first gas line (216-2, 616-2) at the temperature greater than the process temperature, the first gas line (216-2, 616-2) in fluid communication with a first source of purge gas (215-2, 615); and providing a separate gas line (216-1, 616-1) heated to a temperature not greater than the process temperature, the separate gas line (216-1, 616-1) in fluid communication with the first source of purge gas (215-1, 615). 16. The method of claim 15, wherein the method includes coupling the first gas line (216-2, 616-2) and the separate gas line (216-1, 616-1) to a third gas line (220-1, 220-2, 620-1, 620-2) in fluid communication with an injector assembly (229, 629) associated with the chamber (202, 402), the third line (220-1, 220-2, 620-1, 620-2) in fluid communication with at least one of the first (REACTANT1, R1) and the second (REACTANT2, R2) reactant. 17.
The method of claim 16, wherein the method includes: introducing the amount of purge gas (PURGE2, PURGE/CARRIER) into the chamber (202, 402) via the first gas line (216-2, 616-2) and the third gas line (220-1, 220-2, 620-1, 620-2); and introducing at least one of the first (REACTANT1, R1) and the second reactant (REACTANT2, R2) into the chamber (202, 402) via the third gas line (220-1, 220-2, 620-1, 620-2), the at least one of the first (REACTANT1, R1) and the second reactant (REACTANT2, R2) introduced into the chamber (202, 402) at a temperature not greater than the process temperature. 18. The method of claim 15, wherein the method includes using the first source of purge gas (215-2, 615) as a carrier gas source for at least one of the first (REACTANT1, R1) and the second reactant (REACTANT2, R2). 19. The method of claim 10, wherein the method includes: maintaining a first gas line (216-2, 616-2) at the temperature greater than the process temperature, the first gas line (216-2, 616-2) in fluid communication with a first source of purge gas (215-2, 615); and maintaining a second gas line (616-1) at a temperature not greater than the process temperature, the second gas line (616-1) in fluid communication with a second source of purge gas (215-1, 615). 20. A semiconductor processing system, comprising: a chamber (202, 402) maintained at a process temperature during an atomic layer deposition process; a carrier (209) for holding a batch of semiconductor wafers (207, 407) each having a deposition surface to be exposed to at least a first (REACTANT1, R1) and a second reactant (REACTANT2, R2) sequentially introduced into the chamber (202, 402); an injector assembly (229, 529, 629) through which an amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER), heated to a temperature other than the process temperature, is introduced into the chamber (202, 402) during at least one of a first purge process (720) performed after introduction of the first reactant (REACTANT1, R1) and a second purge process (730) performed after introduction of the second reactant (REACTANT2, R2); and an evacuation port (242, 442) through which residual first (REACTANT1, R1) and second reactant (REACTANT2, R2) are removed from the chamber (202, 402) during the first (720) and the second (730) purge process. 21. The system of claim 20, wherein the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER), heated to a temperature other than the process temperature, is introduced into the chamber (202, 402) to establish a temperature differential between an edge of the wafers (207) and a center (105) of the wafers (207). 22. The system of claim 20, wherein the batch of wafers (207) includes a vertical stack of wafers spaced apart in a rotating carrier (209). 23. The system of claim 20, wherein the injector assembly (229, 529, 629) includes a first vertical injector (235, 435) having an aperture (236, 436) at an end through which a first portion of the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER), heated to the temperature other than the process temperature, is delivered toward an upper surface of the chamber (202, 402). 24.
The system of claim 23, wherein the injector assembly (229, 529, 629) includes at least a second vertical injector (230-1, 230-2, 230-3, 430-1, 430-2, 430-3) having a number of apertures (232) along its length through which a second portion of the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER) and at least one of the first (REACTANT1, R1) and second reactant (REACTANT2, R2) are delivered toward the batch. 25. The system of claim 24, wherein the aperture (236, 436) at the end of the first vertical injector (235, 435) is the only aperture (236, 436) of the first vertical injector (235, 435) through which the first portion of the amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER) is delivered. 26. The system of claim 23, wherein the system includes: a first gas line (625) in fluid communication with a gas source (615) and maintained at the temperature other than the process temperature during the deposition process, wherein the first portion of the amount of purge gas (PURGE/CARRIER) is delivered from the gas source (615) to the first vertical injector (235, 435) via the first gas line (625); and a second gas line (620-1, 620-2) in fluid communication with the gas source (615) and having a temperature not greater than the process temperature during the deposition process, wherein the second gas line (620-1, 620-2) is in fluid communication with at least one of a first (210-1, 610-1) and a second (210-2, 610-2) reactant source and is in fluid communication with at least one injector (230-1, 230-2, 430-1, 430-2) of the injector assembly (229, 629). 27. The system of claim 24, wherein a second amount of purge gas (PURGE1, PURGE2, PURGE/CARRIER), heated to a temperature greater than the process temperature, is delivered through the first vertical injector (235, 435) and the at least a second vertical injector (230-1, 230-2, 230-3, 430-1, 430-2, 430-3) after the carrier (209) has been removed from the chamber (202, 402) to bake out the chamber (202, 402) in between deposition processes. 28. The system of claim 20, wherein the atomic layer deposition process is a catalytic process to form a material layer (104-1, 104-2) of silicon oxide on the batch. 29. The system of claim 20, wherein: the process temperature is below 100°C; the first reactant (REACTANT1, R1) is hexachlorodisilane (HCD); the second reactant (REACTANT2, R2) is H2O; and a pyridine catalyst (CATALYST) is introduced into the chamber (202, 402) with at least one of the first (REACTANT1, R1) and the second reactant (REACTANT2, R2). 30.
A semiconductor processing system, comprising: a chamber (202, 402) maintained at a process temperature during an atomic layer deposition process; a carrier (209) for holding a batch of semiconductor wafers (207) each having a deposition surface to be exposed to at least a first (REACTANT1, R1) and a second reactant (REACTANT2, R2) sequentially introduced into the chamber (202, 402); an injector assembly (229, 529, 629) through which an amount of purge gas (PURGE/CARRIER), heated to a temperature greater than the process temperature, is introduced into the chamber (202, 402) during at least one of a first purge process (720) performed after introduction of the first reactant (REACTANT1, R1) and a second purge process (730) performed after introduction of the second reactant (REACTANT2, R2); a first conduit (616-2) in fluid communication with a gas source (615) and maintained at the temperature greater than the process temperature during the deposition process, wherein a first portion of the amount of purge gas (PURGE/CARRIER) is delivered from the gas source (615) to a first vertical injector (235, 435) via the first conduit (616-2); and a second conduit (616-1) in fluid communication with the gas source (615) and having a temperature not greater than the process temperature for delivering carrier gas to at least one of a first (610-1) and a second (610-2) reactant source. |
SEMICONDUCTOR PROCESSING Technical Field [0001] The present disclosure relates generally to semiconductor processing and, more particularly, to semiconductor processing via atomic layer deposition (ALD) and/or chemical vapor deposition (CVD). Background [0002] During semiconductor device fabrication, layers of materials are formed over semiconductor substrates, e.g., wafers. Among the materials which can be included in such layers are tantalum pentoxide, titanium nitride, titanium silicon nitride, tantalum nitride, tantalum silicon nitride, titanium silicide, tantalum silicide, tungsten nitride, aluminum oxide, hafnium oxide, zirconium oxide, silicon nitride, silicon dioxide, elemental tungsten, and elemental titanium. Methods for forming layers of such materials can include chemical vapor deposition (CVD) and atomic layer deposition (ALD). [0003] Chemical vapor deposition includes mixing two or more reactants in a chamber to form a material which subsequently deposits across exposed surfaces of one or more semiconductor substrates. In CVD processes, it can be difficult to control reactions between the reactants provided in the chamber, and various side-reactions can occur which can generate contaminants. Additionally, it can be difficult to form a uniform layer over multiple exposed surfaces of one or more semiconductor substrates with CVD. The deposition of CVD material can be faster in various regions of semiconductor topography than other regions, which can lead to within-wafer (WIW) non-uniformity, e.g., increased WIW uniformity variance in a thickness of the deposited material across various exposed surfaces of semiconductor substrates provided within a CVD reaction chamber. [0004] Atomic layer deposition (ALD) can overcome some of the problems discussed above relative to CVD. ALD processing includes forming thin films of material by repeatedly depositing monoatomic layers. The technique involves individually depositing reactants, e.g., precursors, that react in situ to form a desired film of material across a semiconductor substrate. More specifically, ALD processes involve introduction of a first reactant which reacts with a substrate to form a monolayer across the substrate. The first reactant will often react with the substrate, but not with itself. Accordingly, side-reactions can be reduced or eliminated. Further, the reaction of the reactant with the substrate can be self-limiting, e.g., once a monolayer forms across exposed surfaces of the substrate there is no longer further reaction of the reactant with the substrate. [0005] In ALD processes, after the monolayer is formed, the excess first reactant can be evacuated from the reaction chamber via a purge process, and a second reactant can be subsequently introduced. A purge process can include one or more purge steps in which a purge gas, e.g., an inert gas, is introduced into the reaction chamber and one or more pumping steps preceding and/or following introduction of the purge gas to remove excess reactant, catalyst, purge gas, and/or by-product gases from the chamber. [0006] In ALD processes, the second reactant reacts with the monolayer of material formed from the first reactant to convert such monolayer into a desired material layer over the substrate.
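The pulse/purge sequencing sketched in paragraphs [0004]-[0006] can be summarized in code form. The Python sketch below is ours, not the disclosure's; the `chamber` object and its `pulse` and `purge` methods are hypothetical stand-ins for tool-specific controls.

```python
# Hypothetical sketch of one ALD growth loop as described above.
# `chamber`, `pulse`, and `purge` are assumed names, not a real tool API.

def ald_deposit(chamber, reactant1, reactant2, cycles: int) -> None:
    """Grow a film one self-limiting cycle at a time."""
    for _ in range(cycles):
        chamber.pulse(reactant1)  # first reactant forms a self-limiting monolayer
        chamber.purge()           # evacuate residual first reactant
        chamber.pulse(reactant2)  # second reactant converts the monolayer
        chamber.purge()           # evacuate residual second reactant
```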
The desired material layer can have a relatively uniform thickness across the various surfaces of the substrate, which can be made thicker by evacuating the second reactant from the processing chamber via a purge process and repeating the above-described process until a desired thickness of the desired material layer is formed. [0007] Depending on the reactant system, and with long enough pump and/or purge times, an ALD process can produce very uniform thickness across a wafer regardless of topography and can maintain uniform thickness profiles for each wafer in a batch if the processing temperature is held constant. However, the layer-by-layer ALD processing can have significantly lower throughput as compared to CVD processing techniques. To improve the throughput associated with ALD processes, the purge process can be shortened by using shorter pump and/or purge times between reactant pulses. In some cases, the deposition rate associated with ALD processing can be improved by increasing or decreasing the process temperature. Also, ALD throughput can be improved by processing a plurality of wafers simultaneously in a batch process. [0008] However, performing batch processes, increasing or decreasing the process temperature, and/or shortening pump and/or purge times can lead to an added CVD component associated with an ALD process. An ALD process having an added CVD component refers to a quasi-ALD process which exhibits some CVD process characteristic, such as increased direct reactions between residual reactants and/or other CVD process characteristics, which can increase the WIW uniformity variance associated with the deposition process. For example, performing batch processes, increasing or decreasing the process temperature, and/or shortening the pump and/or purge time, e.g., the time used to evacuate the chamber between ALD reactant pulses, can lead to incomplete removal of the ALD reactants and thereby increase contaminants and/or co-reactions within the chamber. Brief Description of the Drawings [0009] Figure 1A illustrates a thickness profile of a material layer formed on a semiconductor wafer during an ALD process having a CVD component. [0010] Figure 1B illustrates the temperature of a purge gas introduced into a reaction chamber during a purge process according to an embodiment of the present disclosure. [0011] Figure 1C illustrates a thickness profile of a material layer formed on a semiconductor wafer during an ALD process according to an embodiment of the present disclosure. [0012] Figure 2 illustrates a diagram of a semiconductor processing system according to an embodiment of the present disclosure. [0013] Figure 3A is a graph illustrating an example of WIW uniformity variance versus position within a boat for a batch of wafers. [0014] Figure 3B is a graph illustrating an example of purge gas temperature versus height within a reaction chamber for purge gas introduced into the chamber in accordance with an embodiment of the present disclosure. [0015] Figure 4 is an overhead view of a reaction chamber according to an embodiment of the present disclosure. [0016] Figure 5 illustrates a portion of a semiconductor processing system according to an embodiment of the present disclosure. [0017] Figure 6 illustrates a portion of a semiconductor processing system according to an embodiment of the present disclosure.
[0018] Figure 7 is a block diagram of a method for semiconductor processing according to an embodiment of the present disclosure. Detailed Description [0019] Embodiments of the present disclosure include semiconductor processing methods and systems. Various embodiments can improve the throughput of an atomic layer deposition (ALD) process by controlling and/or compensating for one or more chemical vapor deposition (CVD) components associated with the ALD process. [0020] One method includes forming a material layer on a semiconductor substrate by exposing a deposition surface of the substrate to at least a first and a second reactant sequentially introduced into a reaction chamber having an associated process temperature. The method includes removing residual first reactant from the chamber after introduction of the first reactant, removing residual second reactant from the chamber after introduction of the second reactant, and establishing a temperature differential substantially between an edge of the substrate and a center of the substrate via a purge process. [0021] As used herein, the terms "wafer" and "substrate" may include a number of semiconductor-based structures that have an exposed semiconductor surface. Structure can be understood to include silicon, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), doped, and undoped semiconductors. In addition, structure can be understood to include epitaxial layers of silicon supported by a base semiconductor foundation. The base semiconductor foundation is typically the lowest layer of silicon material on a wafer or a silicon layer deposited on another material. [0022] The semiconductor need not be silicon-based. For example, the semiconductor can be silicon-germanium, germanium, or gallium-arsenide. When reference is made to "wafer" and "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in or on the semiconductor structure and/or foundation. [0023] As used herein, "layer" can refer to a layer formed on a substrate using a deposition process such as an atomic layer deposition (ALD), plasma deposition, and/or chemical vapor deposition (CVD) process. The term "layer" is meant to include layers specific to the semiconductor industry, such as "barrier layer", "dielectric layer", and "conductive layer". The term "layer" is also meant to include layers found in technology outside of semiconductor technology, such as coatings on glass. [0024] Figure 1A illustrates a thickness profile of a material layer formed on a semiconductor wafer. The illustration 101-1 of Figure 1A illustrates a thickness profile of a material layer 104-1 formed on a semiconductor wafer during an ALD process having a CVD component. As shown in Figure 1A, the material layer 104-1 has a non-uniform thickness profile, e.g., the material layer 104-1 is thicker at the edges of the wafer than at the center 105. The difference between the maximum and minimum thickness of a material layer, e.g., the thickness variance, can be used as a measure of WIW uniformity. For example, a larger thickness variance of a particular layer indicates the layer has a lesser WIW uniformity than a layer having a smaller thickness variance. In Figure 1A, the thickness variance of layer 104-1 is indicated by ΔA. [0025] In one or more embodiments, the WIW uniformity of a material layer, e.g., layer 104-1, can be determined based on a measured thickness of a wafer at a number of different points of the wafer.
In such embodiments, the WIW uniformity can be defined as the difference between a maximum thickness measurement and a minimum thickness measurement divided by an average of the number of thickness measurements, e.g., (maximum thickness measurement - minimum thickness measurement) / average thickness measurement. As such, WIW uniformity measurements closer to zero indicate a wafer having a more uniform thickness profile. The number of measured points used to determine the WIW uniformity can be 9, 13, 25, or 49 points, among others. In this manner, wafers determined to have a larger measured WIW uniformity can be said to have an increased WIW uniformity variance, e.g., an increased WIW non-uniformity. [0026] In various embodiments of the present disclosure, the material layer, e.g., 104-1, can include, for example, an oxide layer such as Al2O3, TiO2, ZrO2, HfO2, Ta2O5, Nb2O5, CeO2, SiO2, In2O3, or IrO2. The material layer, e.g., 104-1, can also be a composite oxide layer, a nitride layer, a complex nitride layer, a metal layer, or a silicide layer. Embodiments of the present disclosure are not limited to a particular type of material layer, i.e., the above list is not exhaustive. [0027] In the illustrations shown in Figures 1A-1C, the wafers are rotated about their centers 105 during deposition of a material layer, e.g., 104-1. However, embodiments of the present disclosure are not limited to wafers which are rotated during processing. [0028] The non-uniform thickness profile, e.g., edge-thick profile, of material layer 104-1 shown in Figure 1A can be the result of various factors. For example, the edge-thick profile can be the result of reactant gradients associated with the direct reaction between an amount of residual first reactant and a subsequent pulse of a second reactant in a deposition chamber. That is, in an ALD process, a residual amount of the first reactant remaining after a purge process can react with the subsequently introduced second reactant. [0029] In such cases, the associated reaction rate, e.g., deposition rate, decreases as the second reactant moves across the deposition surface, e.g., the concentration of residual first reactant decreases as the residual amount of first reactant reacts with the second reactant as the second reactant moves from the edge of the wafer toward the center 105. The reactant gradient, e.g., the decreasing reaction rate toward the center, can lead to an edge-thick material layer profile such as that shown in Figure 1A. A similar effect can occur when an amount of residual second reactant remains in the chamber after a second purge process. That is, the residual amount of the second reactant after a purge process can react with a subsequently introduced pulse of first reactant. As one of ordinary skill in the art will appreciate, a residual amount of reactant can refer to an amount of an ALD reactant pulse that remains unreacted with, e.g., non-adsorbed to, the deposition surface and/or remains in the chamber after a purge process.
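The metric defined in paragraph [0025] is straightforward to compute directly. The following Python sketch implements (max - min) / mean over a thickness map; it is our illustration, not part of the disclosure, and the 9-point example values are invented.

```python
# Sketch of the WIW uniformity metric defined above: (max - min) / mean,
# computed over a 9-, 13-, 25-, or 49-point thickness map. Values near
# zero indicate a more uniform thickness profile.

def wiw_uniformity(thicknesses: list[float]) -> float:
    if not thicknesses:
        raise ValueError("need at least one thickness measurement")
    mean = sum(thicknesses) / len(thicknesses)
    return (max(thicknesses) - min(thicknesses)) / mean

# Example: an edge-thick ("bowl") 9-point map, thicknesses in angstroms.
nine_point = [105, 104, 103, 101, 100, 101, 103, 104, 105]
print(round(wiw_uniformity(nine_point), 3))  # ~0.049
```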
[0031] As noted above, performing batch processes, lowering the process temperature, and/or shortening pump/purge times can lead to an added CVD component associated with an ALD process, which can increase the depositionrate and/or throughput of the process. However, the increased throughput can lead to an increase in WIW uniformity variance associated with the material layer, e.g., 104-1, for the reasons stated above. For example, an added CVD component associated with the ALD process can result in an edge-thick or "bowl" shaped profile as illustrated in Figure IA. [0032] Figure 1 B illustrates the temperature of a purge gas introduced into a reaction chamber during a purge process according to an embodiment of the present disclosure. The illustration 103 of Figure IB shows a gas temperature profile from the wafer edges, e.g., EDGE, to a center 105 of the wafer. In various embodiments and as described further below, the temperature at which the purge gas is introduced into the chamber is different than the process temperature of the chamber. In one or more embodiments, and as shown in Figure 1 B, the temperature at which the purge gas is introduced into the chamber e.g., Tl as shown in Figure IB, is greater than the process temperature of the chamber. In one or more embodiments, the temperature Tl can be within a range of about 5°C-25°C greater than the process temperature, which can be within a range of about 500C-IOO0C, in some embodiments. As an example, in some embodiments, a layer of silicon oxide is deposited on a wafer at a process temperature of about 65°C-90°C. In such embodiments, the heated purge gas can be in the range of about 70°C-l 100C. However, embodiments of the present disclosure are not limited to a particular material layer, process temperature range, and/or to a particular purge gas temperature range. [0033] In various embodiments, the process temperature of the chamber can be maintained at a steady temperature during deposition of a material layer upon wafer. For example, one or more heating elements internal and/or external to a reaction chamber can be used to maintain a reaction chamber and/or batch of semiconductor wafers at a steady process temperature while the deposition surface of the wafers are exposed to sequentially introduced reactants. [0034] In embodiments in which the purge gas is introduced at a temperature greater than the process temperature, the temperature of the purge gas decreases as the purge gas progresses from an edge of the wafer toward the center 105 of the wafer. In the example shown in Figure IB, the purge gas temperature decreases from a first temperature Tl to a second temperature T2 as the purge gas progresses toward the center 105.[0035] In one or more embodiments, the purge gas establishes a temperature differential substantially between an edge of the wafer and a center, e.g., 105, of the wafer. That is, the purge gas having a temperature greater than the process temperature of the reaction chamber can create a temperature differential across the deposition surface of the wafer as the heated purge gas cools, e.g., from Tl to T2, as it moves over the deposition surface. [0036] In some embodiments of the present disclosure, the temperature differential across the wafer results in a non-uniform deposition rate across the wafer. 
For instance, the deposition rate near the edge of the wafer, which is hotter than the process temperature due to the purge gas, is slower than the deposition rate near the center, e.g., 105, of the wafer, which is cooler than the edge of the wafer due to the introduction of the heated purge gas. [0037] In prior ALD processes, creating a temperature differential across a wafer during processing is discouraged because the temperature differential leads to non-uniform deposition rates, e.g., the deposition rate of the material layer can be slower at portions of the deposition surface which are hotter than at portions of the deposition surface which are cooler. That is, in various prior ALD processes, a uniform temperature across the deposition surface is desirable in order to achieve a uniform deposition rate, e.g., uniform thickness, across the wafer. [0038] The illustration 101-2 of Figure 1C shows a thickness profile of a material layer 104-2 formed on a semiconductor wafer during an ALD process according to an embodiment of the present disclosure. The material layer 104-2 shown in Figure 1C has a more uniform thickness profile as compared to the edge-thick profile of material layer 104-1 shown in Figure 1A. That is, the thickness variance ΔC of material layer 104-2 is smaller than the thickness variance ΔA of material layer 104-1 shown in Figure 1A. [0039] As described above, the edge-thick profile of material layer 104-1 shown in Figure 1A can be indicative of the presence of a CVD component associated with the ALD process used to form layer 104-1. As described further herein, various processing embodiments of the present disclosure can be used to increase the throughput associated with an ALD process while maintaining a suitable WIW uniformity by using a purge gas heated to a temperature other than the process temperature to compensate for WIW uniformity variance due to the CVD component. In one or more embodiments, the purge gas is heated to a temperature greater than the process temperature in order to decrease the WIW uniformity variance, e.g., in order to decrease the difference ΔC to a desired level. Some embodiments can allow a suitable WIW uniformity to be achieved even when the CVD component associated with an ALD process is purposely increased. [0040] The material layer 104-2 can represent a material layer formed via an ALD process having the same CVD component presence as that associated with deposition of material layer 104-1. That is, the material layer 104-2 represents a material layer formed in accordance with a processing embodiment of the present disclosure which compensates for the CVD component, e.g., reduces the edge-thick profile and/or thickness variance ΔA associated with material layer 104-1 shown in Figure 1A. [0041] Figure 2 illustrates a diagram of a semiconductor processing system 200 according to an embodiment of the present disclosure. The system 200 includes a reaction chamber 202 (which is sometimes referred to as a deposition chamber) that includes a wafer carrier 209, or boat, which can be loaded into and removed from the chamber 202. The carrier 209 can hold a number of semiconductor wafers 207, e.g., a batch, upon which a material layer is to be formed. As noted above, the material layer can be an oxide layer, a composite oxide layer, a nitride layer, a complex nitride layer, a metal layer, or a silicide layer, among various other material layer types.
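The temperature relationships in paragraph [0032] can be expressed as a simple setpoint check. The Python sketch below is our illustration under the stated example ranges; the default 15 °C offset is an arbitrary midpoint, not a value from the disclosure.

```python
# Illustrative purge-gas setpoint helper, assuming the example ranges
# above: hot purge gas roughly 5-25 C above a 50-100 C process temperature.

def hot_purge_setpoint(process_temp_c: float, offset_c: float = 15.0) -> float:
    """Pick a hot-purge temperature a fixed offset above the process temperature."""
    if not 5.0 <= offset_c <= 25.0:
        raise ValueError("offset outside the 5-25 C example range")
    return process_temp_c + offset_c

# Example: a silicon oxide process run at 80 C would use ~95 C purge gas,
# inside the ~70-110 C window mentioned above for that process.
print(hot_purge_setpoint(80.0))  # -> 95.0
```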
[0042] The wafers 207 can be vertically stacked and spaced apart from each other in the carrier 209 and can be rotated about their centers 205 during processing. Although the system 200 illustrates a vertical reaction chamber 202 for processing a vertically stacked batch of wafers 207 rotated about their centers 205, embodiments are not limited to batch deposition processes, to vertical chambers, to rotating wafers, or to a particular orientation of the semiconductor wafers within the chamber. [0043] The reaction chamber 202 and/or the wafers 207 can be heated to a desired process temperature (Tp) via a number of heaters 206-1, 206-2, 206-3, and 206-4. Although the chamber 202 includes four heaters, embodiments can include more or fewer heaters. The system 200 includes a pump 240 which can be used to remove residual, e.g., excess, gas such as residual reactant gas, catalyst, purge gas, and/or by-products from the chamber 202 through evacuation port 242. The pump 240 is coupled to a flow controller 241 which can be used to control the exhaust rate through port 242. In the embodiment shown in Figure 2, the evacuation port 242 is located near a bottom portion of chamber 202. However, in some embodiments, the evacuation port 242 can be located at other locations of reaction chamber 202 and/or the reaction chamber 202 can include multiple evacuation ports. [0044] In various embodiments, and as shown in Figure 2, the reaction chamber 202 includes an injector assembly 229 through which materials can be introduced into the chamber 202. In the embodiment illustrated in Figure 2, the injector assembly 229 includes a number of injectors 230-1, 230-2, 230-3, and 235. In the embodiment shown in Figure 2, the assembly 229 includes four vertical injectors 230-1, 230-2, 230-3, and 235. However, embodiments of the present disclosure are not limited to a particular number of injectors or to vertical injectors. [0045] The vertical injectors 230-1, 230-2, and 230-3 are elongate multi-holed injectors, each having a number of apertures 232 along their respective lengths. The number, size, and/or orientation of the apertures 232 can depend on a number of factors, such as one or more process parameters associated with the deposition of a material layer on the batch of wafers 207, the type of material layer being deposited, etc. [0046] In various embodiments, a semiconductor processing system can include an injector for introducing purge gas into an upper portion of the reaction chamber and/or for delivering purge gas toward an upper surface of the chamber. In the embodiment illustrated in Figure 2, the injector assembly 229 includes at least one vertical injector 235 having an aperture 236 at its end, e.g., at its tip as shown. As described further below, the aperture 236 of injector 235 can be used to deliver gas toward an upper portion, e.g., an upper surface, of the chamber 202 during various deposition process stages. For instance, during a purge process, an amount of purge gas, heated to a temperature greater than the process temperature, can be delivered toward an upper surface of the chamber 202. The upper surface of the chamber 202 can include a higher concentration of excess reactant and/or catalyst than other portions, e.g., the side walls, of the chamber 202 due to factors such as the distance between the upper surface and the evacuation port 242 and the relatively large surface area of the upper surface, among other factors.
[0047] A higher concentration of residual gases toward the top of the chamber, e.g., an additional CVD component, can lead to increased WIW uniformity variance of wafers near the top of the batch. That is, an edge-thick profile can be more pronounced and/or the thickness variance, e.g., variance ΔA shown in Figure 1A, of the deposited material layer can be greater for wafers 207 near the top of the batch than for lower wafers 207. An example of WIW uniformity variance versus position within a carrier, e.g., carrier 209, for a batch of wafers is illustrated in the graph shown in Figure 3A. [0048] As described in connection with Figures 3A and 3B below, in one or more embodiments, introducing a heated purge gas into an upper portion of the reaction chamber 202 can decrease WIW uniformity variance among wafers in carrier 209. For example, a heated purge gas delivered toward the upper portion of the chamber 202 via injector 235 can be used to compensate for the increased WIW uniformity variance associated with the wafers near the top of the batch, which can increase process throughput. For instance, compensating for the increased WIW uniformity variance associated with wafers near the top of the batch can increase the likelihood that the entire batch of wafers has a suitable WIW uniformity. The heated purge gas delivered toward the top of the chamber 202 via injector 235 creates a temperature differential across the wafers 207 nearest the upper portion of the chamber as it progresses toward the center 205. The heated purge gas becomes less effective, e.g., has less of an effect on the deposition rate of the material layer, as it cools toward the process temperature, e.g., through heat dissipation as it moves from the injection aperture 236 downward in the chamber 202. That is, the temperature differential across the top wafer 207 is greater than the temperature differential across the surface of the next lower wafer, and so on, until the heated purge gas reaches the process temperature. As such, in one or more embodiments, heated purge gas delivered toward the upper portion of the chamber 202 via injector 235 creates a temperature differential across only the top few wafers 207 of the batch. As discussed above in connection with Figures 1A-1C, the temperature differential across the surface of a wafer can cause a decrease in deposition rate at the edges in order to compensate for an edge-thick profile due to a CVD component associated with the ALD process. [0049] In some embodiments, the injector 235 includes only an aperture, e.g., 236, at its end. Embodiments are not so limited. For instance, in some embodiments, the injector 235 can include multiple apertures located at or near its tip and/or along its length. In some embodiments, the injector 235 can have a curved shape. In such embodiments, the curved end of the injector 235 can be used to introduce heated purge gas into an upper portion of the chamber 202. [0050] Various system embodiments can include a number of gas sources, e.g., reactant gas sources, catalyst gas sources, purge gas sources, and carrier gas sources. In the embodiment illustrated in Figure 2, the system 200 includes two reactant sources 210-1 and 210-2, a catalyst source 212, and two purge sources 215-1 and 215-2 which can be delivered, via one or more conduits, e.g., gas lines, to the injector assembly 229 for introduction into the chamber 202.
[0051] In the system 200, a first reactant (REACTANT1), a second reactant (REACTANT2), and a catalyst are delivered to the assembly 229 from respective sources 210-1, 210-2, and 212 through respective gas lines 220-1, 220-2, and 220-3 and are introduced into the chamber 202 via respective injectors 230-1, 230-2, and 230-3. The gas flow from sources 210-1, 210-2, and 212 is controlled by respective flow controllers 213-1, 213-2, and 214. As discussed in connection with Figure 3 below, in embodiments in which the material layer to be deposited on the wafers is silicon oxide (SiO2), REACTANT1 can be hexachlorodisilane (Si2Cl6), REACTANT2 can be water (H2O), and pyridine (C5H5N) can be used as the reaction catalyst. In such embodiments, nitrogen gas (N2) can be used as a carrier gas source for delivering the reactants to the chamber 202. [0052] In the embodiment illustrated in Figure 2, the system 200 includes a first purge gas (PURGE1) and a second purge gas (PURGE2) that can be delivered to the assembly 229 from respective sources 215-1 and 215-2 through respective gas lines 216-1 and 216-2. The flow rate of purge gas through the gas lines 216-1 and 216-2 can be controlled via flow controllers 217-1 and 217-2, respectively. The gas lines 216-1 and 216-2 are coupled to respective heating elements 218-1 and 218-2 which can be used to heat purge gas delivered from respective sources 215-1 and 215-2 to the assembly 229. Example purge gases include, but are not limited to, nitrogen gas and/or argon gas. [0053] In the embodiment illustrated in Figure 2, only gas lines 216-1 and 216-2 are shown as being coupled to heating elements, e.g., 218-1 and 218-2, respectively. As one of ordinary skill in the art will appreciate, other gas lines, e.g., 219-1, 219-2, 219-3, 220-1, 220-2, 220-3, may also be coupled to heating elements. [0054] In some embodiments, the source 215-1 and/or 215-2 can be both a source of purge gas and a source of carrier gas. That is, carrier gas lines (not shown) from source 215-1 and/or source 215-2 can be coupled to source 210-1, 210-2, and/or 212. However, in some embodiments, the system 200 can include a separate carrier source, e.g., a source separate from sources 215-1 and 215-2, which can be used as the carrier source. [0055] In the embodiment illustrated in Figure 2, the heating element 218-1 is used to heat purge gas line 216-1 to a temperature (T) which is not greater than, e.g., is less than or equal to, a process temperature (Tp) associated with the particular deposition process. The heating element 218-2 is used to heat purge gas line 216-2 to a temperature which is greater than the process temperature (Tp). Purge gas having a temperature greater than the process temperature of the chamber, e.g., 202, can be referred to herein as "hot purge gas," while purge gas having a temperature at or below the process temperature of the chamber can be referred to herein as "cold purge gas." In some embodiments, the purge gas line 216-2 is heated to a temperature at least 5°C greater than Tp. In some embodiments, the purge gas line 216-2 is heated such that the hot purge gas PURGE2 is about 5°C-25°C greater than the process temperature within the chamber 202. Embodiments are not limited to the above examples. For instance, in some embodiments, the purge gas line 216-2 is heated such that the hot purge gas PURGE2 is more than 25°C greater than the process temperature within the chamber 202.
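The hot/cold line arrangement of paragraph [0055] can be modeled as a small data structure. The sketch below is illustrative only; the setpoints are invented examples, and only the line names mirror reference numerals 216-1 and 216-2.

```python
# Illustrative model of the dual purge-line arrangement described above:
# line 216-1 held at or below the process temperature Tp ("cold purge gas")
# and line 216-2 held 5-25 C above Tp ("hot purge gas"). Setpoints invented.

from dataclasses import dataclass

@dataclass
class PurgeLine:
    name: str
    setpoint_c: float

def classify(line: PurgeLine, process_temp_c: float) -> str:
    """Label a purge line 'hot' or 'cold' relative to the process temperature."""
    return "hot" if line.setpoint_c > process_temp_c else "cold"

process_temp = 80.0
for line in (PurgeLine("216-1", 75.0), PurgeLine("216-2", 95.0)):
    print(line.name, classify(line, process_temp))  # 216-1 cold, 216-2 hot
```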
[0056] The gas line 216-1 has a number of associated gas lines 222-1, 222-2, and 222-3 which are connected to, e.g., are in fluid communication with, respective gas lines 220-1, 220-2, and 220-3 for delivering cold purge gas PURGE1 from source 215-1 to injectors 230-1, 230-2, and 230-3, respectively. The flow of PURGE1 through gas lines 222-1, 222-2, and 222-3 can be controlled with respective flow controllers 221-1, 221-2, and 221-3. [0057] The gas line 216-2 has a number of associated gas lines 219-1, 219-2, and 219-3 which are connected to, e.g., are in fluid communication with, respective gas lines 220-1, 220-2, and 220-3 for delivering hot purge gas PURGE2 from source 215-2 to injectors 230-1, 230-2, and 230-3, respectively. The flow of PURGE2 through gas lines 219-1, 219-2, and 219-3 can be controlled with respective flow controllers 223-1, 223-2, and 223-3. [0058] The gas line 216-2 also has an associated gas line 225 which can be used to deliver PURGE2 from source 215-2 to the injector assembly 229 for introduction of the purge gas into chamber 202 via injector 235. As described further below, in some embodiments purge gas PURGE2, heated to a temperature greater than the process temperature (Tp), can be introduced into the chamber 202 through each of the injectors 230-1, 230-2, 230-3, and 235 during one or more purge processes associated with deposition of a material layer on the batch of wafers 207. [0059] In some embodiments, an amount of hot purge gas PURGE2 can be introduced into the chamber 202 during one or more reactant pulses. For instance, in such embodiments, an amount of PURGE2 can be flowed into the chamber 202 along with a pulse of REACTANT1 and/or along with a pulse of REACTANT2. [0060] In some embodiments, hot purge gas can be used to perform a chamber cleaning process, e.g., a bake out, in between deposition processes. As one of ordinary skill in the art will appreciate, a bake out process can be performed to remove unwanted reactant, catalyst, and/or by-products which may have formed a layer of film on the chamber side walls and upper surface during the deposition process. In such embodiments, hot purge gas can be flowed into the chamber via one or more of the injectors 230-1, 230-2, 230-3, and 235 while the boat 209 is being reloaded with a subsequent batch of wafers 207. In various embodiments, the hot purge gas used for the cleaning process can be hotter than the hot purge gas used for the purge processes. In some embodiments, the hot purge gas has a temperature of about 150°C-250°C, e.g., the purge gas line 216-2 is heated to a temperature of about 150°C-250°C via heating element 218-2.
In embodiments in which hot purge gas is used to perform the bake out, the bake out process can be performed with the chamber heaters held at or near the process temperature of the chamber, which can reduce the time associated with cooling the chamber heaters. [0062] In embodiments in which hot purge gas is used to perform the bake out, the processing system, e.g., system 200, may include a separate gas source and/or separate gas lines heated to the elevated bake out temperature to deliver the heated bake out gas to the chamber. For instance, the system 200 can include an additional purge gas line heated to a temperature greater than the process temperature, e.g., a gas line in addition to 216-1 and 216-2 shown in Figure 2. In such embodiments, the additional heated gas line can allow switching between the hot purge gas used to perform a bake out process and the hot purge gas, e.g., PURGE2, used to perform purge processes according to one or more embodiments of the present disclosure. [0063] As discussed above in connection with Figures 1 A-IC, in various embodiments of the present disclosure, at least one purge process performed after a reactant pulse includes creating a temperature differential across the deposition surface of a number of the wafers 207 by directing an amount of purge gas, e.g., an amount of hot purge gas PURGE2 across the deposition surface of the number of wafers. The amount of hot purge gas PURGE2 can be introduced into the chamber 202 through one or more injectors, e.g., 230-1, 230- 2, 230-3, and 235.[0064] In some embodiments, a first portion of the amount of hot purge gas PURGE2 introduced into the chamber 202 during a purge process is delivered through one or more vertical injectors, e.g., 230-1, 230-2, and/or 230- 3, configured to direct the first portion through a number of apertures toward a center 205 of the wafers 207. In some embodiments, a second portion of the amount of hot purge gas PURGE2 introduced into the chamber 202 during a purge process is delivered through a vertical injector, e.g., 235, configured to direct the second portion through an aperture at an end of the second vertical injector, e.g., aperture 236, toward an upper surface of the chamber 202. As described below in connection with Figures 3 A and 3B, the amount of hot purge gas PURGE2 can be directed toward the upper surface of the chamber 202 in order to decrease WIW uniformity variance associated with wafer position within a boat for a batch of wafers. [0065] In one or more embodiments in which an amount of hot purge gas, e.g., PURGE2, is introduced into the reaction chamber 202, the temperature of the hot purge gas decreases as the purge gas progresses from an edge of the wafer 207 toward the center 205 of the wafer 207. For instance, as shown in the example of Figure IB, the hot purge gas temperature decreases from a first temperature, e.g., Tl as shown in Figure IB, to a second temperature, e.g., T2 as shown in Figure IB, as the purge gas progresses toward the centers 205 of the wafers 207. The cooling of the hot purge gas as the gas moves across the deposition surface of the wafers 207 creates a temperature differential between the edge of the wafers 207 and the center 205. The temperature differential across the wafer 207 results in a non-uniform deposition rate, e.g., a deposition rate gradient, across the wafer 207. 
In various embodiments, the deposition rate associated with the wafers 207 is slower at the edges of the wafers, where the deposition surface is hottest due to the hot purge gas, and is faster near the center of the wafers, where the deposition surface is at a temperature between the process temperature and the temperature of the edges of the wafers 207. [0066] As described above in connection with Figures 1A-1C, establishing a temperature differential across the surface of the wafers 207 by performing a purge process with hot purge gas, e.g., purge gas heated to a temperature greater than the process temperature of the chamber 202 and/or wafers 207, can provide various benefits. For instance, as described above, the temperature differential created by the hot purge gas produces a deposition rate gradient, e.g., the deposition rate is slower at the edges of the wafers 207 than at the center 205 of the wafers 207. As such, the deposition rate gradient established by the hot purge gas can compensate for an edge-thick profile of a material layer, e.g., 104-1 shown in Figure 1A, associated with the presence of reactant concentration gradients during processing, e.g., the presence of a CVD component associated with the ALD processing method, to produce very low, e.g., near zero, WIW uniformity measurements. [0067] Compensating for WIW uniformity variance, e.g., WIW non-uniformity, associated with the presence of a CVD component in an ALD process can provide improved throughput as compared to prior ALD methods. For example, embodiments of the present disclosure can allow a particular ALD process, e.g., deposition of a particular material layer having a desired thickness and suitable WIW uniformity, to be performed in a shortened amount of time. The deposition time associated with a particular ALD process can be shortened by adjusting various processing parameters which lead to an increase in the presence of a CVD component associated with the ALD process, e.g., an increase in the amount of residual reactants between reactant pulses. Examples of processing parameter adjustments which can decrease the deposition time include reducing the amount of pumping and/or purging time between reactant pulses, reducing the number of pumping and/or purging cycles between reactant pulses, increasing the temperature of a reactant source, performing the process at a lower process temperature, and/or flowing an amount of reactant into the chamber during a purge process, among other processing parameter adjustments. [0068] As one of ordinary skill in the art will appreciate, and as described above, an added CVD component associated with an ALD process can increase the deposition rate, e.g., the throughput, of the ALD process but can cause an increased WIW uniformity measurement, e.g., a more pronounced "bowl" shape profile such as that shown in Figure 1A. Using embodiments of the present disclosure to compensate for an added CVD component associated with the ALD process can provide the benefits of the increased deposition rate associated with the CVD component while maintaining the WIW uniformity benefits associated with ALD processes. [0069] In various embodiments, the first purge gas PURGE1 and the second purge gas PURGE2 can be the same gas, e.g., nitrogen gas, argon gas, etc. That is, the same type of purge gas can be delivered from first purge gas source 215-1 and second purge gas source 215-2.
In such embodiments, providing a separate gas source and/or separate gas line for hot purge gas, e.g., PURGE2, and for cold purge gas, e.g., PURGE1, can provide several benefits. [0070] For example, as described further below in connection with Figures 4, 5, and 6, in various embodiments a purge gas source, e.g., 215-1 and 215-2, can be used as a carrier gas source. In such embodiments, it can be desirable to adjust the temperature of the gas line, e.g., 216-1 and 216-2, during deposition processing. For instance, in various embodiments of the present disclosure, one or more purge processes are performed with a hot purge gas, e.g., a purge gas heated to a temperature above Tp, and one or more reactant pulses are conducted with a cooler purge/carrier gas, e.g., a purge gas heated to a temperature at or below Tp. In such embodiments, it can be difficult to adjust the temperature of a purge gas line to different levels in the time between reactant pulses and purge pulses, which can be on the order of seconds. Therefore, providing one or more separate gas lines for hot purge gas, e.g., purge gas above Tp, and cold purge gas, e.g., purge gas at or below Tp, can allow the system 200 to rapidly switch between using the hot or cold purge gas without increasing processing time due to adjusting the gas line temperature. [0071] Figure 3A is a graph 301 illustrating an example of WIW uniformity variance versus position within a boat for a batch of wafers. As shown in Figure 3A, in various semiconductor processing systems, there is some WIW uniformity variance associated with wafer position within a boat, e.g., boat 209 shown in Figure 2. As an example, the thickness variance, e.g., thickness variance ΔA shown in Figure 1A, can be greater for wafers at or near the top of the boat than for wafers lower in the boat. For instance, in graph 301, curve 355 illustrates that the WIW uniformity variance associated with wafers in a boat increases as the wafer position increases. That is, wafers positioned further up in the boat, e.g., at or near the top of the boat, have a greater thickness variance, e.g., a more pronounced "bowl shape" thickness profile as shown in Figure 1A, than wafers positioned toward the bottom of the boat. [0072] The increased thickness variance, e.g., higher WIW uniformity variance, associated with wafers at the top of a boat can be caused by a higher concentration of excess reactant and/or catalyst on the upper surface of the reaction chamber than on lower portions of the chamber, e.g., chamber side-walls. The higher concentration of excess reactant and/or catalyst on the upper surface of the reaction chamber can be due to factors such as the distance between the upper surface and the evacuation port 242 and the relatively large surface area of the upper surface, among other factors. The higher concentration of residual gases toward the top of the chamber adds a CVD component to the system which can lead to increased WIW uniformity variance for wafers near the top of the batch as shown in graph 301. [0073] Figure 3B is a graph 303 illustrating an example of purge gas temperature versus height within a reaction chamber for purge gas introduced into the chamber in accordance with an embodiment of the present disclosure. In one or more embodiments, a heated purge gas can be introduced into an upper portion of the reaction chamber, e.g., chamber 202 shown in Figure 2, in order to decrease WIW uniformity variance among wafers in a boat, e.g., carrier 209 shown in Figure 2.
That is, in some embodiments, introducing the heated purge gas into the upper portion of the chamber can combat the WIW uniformity variance illustrated in graph 301 of Figure 3A. [0074] In the embodiment shown in graph 303, curve 308 illustrates the temperature of a purge gas introduced into an upper portion of a reaction chamber, e.g., chamber 202 shown in Figure 2. In this embodiment, the purge gas is introduced into an upper portion of the chamber at a temperature Tpurge which is greater than the process temperature Tp of the chamber. The heated purge gas cools from the temperature Tpurge toward the process temperature Tp as the gas moves from the upper portion (UPPER) to the lower portion (LOWER). [0075] As described in connection with Figure 2, the heated purge gas can be delivered toward the upper portion of the chamber via an injector, e.g., injector 235 shown in Figure 2. In such embodiments, the heated purge gas becomes less effective, e.g., has less of an effect on the deposition rate of the material layer, as it cools toward the process temperature Tp, e.g., through heat dissipation as it moves from the introduction point downward in the chamber. As such, in one or more embodiments, heated purge gas delivered toward the upper portion of the chamber creates a temperature differential across only the top few wafers of the batch, e.g., the wafers located nearest the top of the wafer boat, which can have a higher WIW uniformity variance as compared to wafers located further down in the boat as shown in Figure 3A. [0076] Figure 4 is an overhead view of a reaction chamber 402 according to an embodiment of the present disclosure. The reaction chamber 402 includes a number of injectors 430-1, 430-2, 430-3, and 435. The injectors can be vertical injectors such as vertical injectors 230-1, 230-2, 230-3, and 235 shown in Figure 2. The chamber 402 can include a carrier (not shown), e.g., a wafer boat, into which a number of semiconductor wafers 407 can be loaded to receive a material layer formed thereon. The chamber 402 includes an evacuation port 442 through which residual gas can be removed via a pump, e.g., pump 240 shown in the embodiment of Figure 2. [0077] As one example, the chamber 402 can be used to form a material layer, e.g., material layer 104-2 shown in Figure 1C, of silicon oxide (SiO2) on a batch of wafers 407 according to an ALD process embodiment of the present disclosure. In various embodiments, the ALD process can be a catalytic ALD process. In the example discussed in connection with Figure 4, the material layer of silicon oxide is formed using hexachlorodisilane (Si2Cl6), or HCD, as a first reactant gas and using water (H2O) as a second reactant gas. In this example, pyridine (C5H5N), or PYR, is used as the reaction catalyst and nitrogen gas (N2) is used both as a purge gas and as a carrier gas. [0078] In various embodiments, one or more heaters, e.g., 206-1 to 206-4 shown in Figure 2, can be used to heat the chamber 402 to a suitable process temperature (Tp). As an example, forming a material layer of silicon oxide via the catalytic process described in connection with Figure 4 can include heating the chamber 402 to a process temperature of about 75°C. The deposition surface of the batch of wafers 407 can then be sequentially exposed to a first reactant gas pulse, e.g., an HCD pulse, via injector 430-1 and a second reactant gas pulse, e.g., a water vapor pulse, via injector 430-2.
An amount of catalyst, e.g., pyridine in this example, is flowed into the chamber 402 via injector 430-3 in order to facilitate an increased growth rate, e.g., deposition rate, of the silicon oxide at the 75°C process temperature as compared to the silicon oxide growth rate via ALD processing at higher process temperatures. As one of ordinary skill in the art will appreciate, a carrier gas such as nitrogen gas can be used to deliver the first and/or second reactant gas to the chamber 402. [0079] In some embodiments, the HCD and H2O reactants and the pyridine catalyst can be introduced into the chamber 402 at a temperature at or below the process temperature. That is, one or more gas lines used to deliver the reactants and the catalyst can be heated such that the gases passing therethrough have a temperature at or below the process temperature, e.g., 75°C in this example, when introduced into the chamber 402 via the respective injectors 430-1, 430-2, and 430-3. In some embodiments, and as described further below, an amount of purge gas, e.g., nitrogen gas in this example, heated to a temperature greater than the process temperature, can be introduced into the chamber along with one or both of the reactant pulses. In such embodiments, the hot purge gas introduced along with the reactant pulse can be used to establish a temperature differential across the surfaces of the wafers. [0080] In various embodiments, a purge process is performed after each reactant pulse. As discussed above, the purge process includes performing one or more pumping and/or one or more purging steps in order to remove excess reactant and/or by-products from the reaction chamber between the sequentially introduced, e.g., separately introduced, reactant pulses. The purging steps involve introducing an amount of purge gas into the chamber and the pumping steps involve evacuating the excess reactant gases, purging gases, and by-product gases from the chamber. The reader will appreciate that an ALD process can be repeated until a desired material layer thickness is deposited on a wafer, e.g., until a desired thickness of silicon oxide is formed on the batch of wafers. [0081] As described above, at least one of the first and second purge processes includes creating a temperature differential across the deposition surface of a number of the wafers by directing an amount of purge gas across the deposition surface of the number of wafers. In one or more embodiments, the purge gas is heated to a temperature greater than the process temperature. For instance, in the example shown in Figure 4, nitrogen gas (N2), heated to a temperature above 75°C, can be introduced into the chamber through one or more of the injectors 430-1, 430-2, 430-3, and 435 during a purge process performed subsequent to an HCD pulse and/or subsequent to an H2O pulse. As the heated nitrogen purge gas progresses toward the center 405 of the rotating wafers 407, a temperature differential is created across the deposition surface of the wafers 407, e.g., the hot purge gas heats edges of the wafers 407 more than the center 405. In the embodiment illustrated in Figure 4, the injector 435 is a vertical injector having an aperture 436 for directing hot purge gas toward an upper surface of the chamber 402.
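The ordering of pulse, purge, and pump steps just described can be summarized with a short sequencer sketch. The sketch below is illustrative only: the Step structure, the function names, and the growth-per-cycle figure are assumptions; only the step ordering and the gas names of the Figure 4 example (HCD, H2O, pyridine, N2) come from the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    kind: str            # "pulse", "purge", or "pump"
    gas: Optional[str]   # gas/catalyst delivered, if any
    hot: bool = False    # True if the purge gas is heated above Tp

def build_cycle() -> list:
    """One ALD cycle per the Figure 4 example: a reactant pulse (with
    catalyst), then a purge process (purge + pump) after each pulse; at
    least one purge uses hot gas to create the temperature differential."""
    return [
        Step("pulse", "HCD + pyridine"),   # first reactant via 430-1/430-3
        Step("purge", "N2", hot=True),     # purge gas heated above 75 C
        Step("pump", None),                # evacuate excess gases
        Step("pulse", "H2O + pyridine"),   # second reactant via 430-2/430-3
        Step("purge", "N2", hot=True),
        Step("pump", None),
    ]

def run(target_thickness_nm: float, growth_per_cycle_nm: float = 0.1) -> int:
    """Repeat cycles until a desired thickness is reached; the growth per
    cycle is an assumed placeholder, not a value from the disclosure."""
    cycles, thickness = 0, 0.0
    while thickness < target_thickness_nm:
        for step in build_cycle():
            pass  # placeholder for valve/heater actuation
        thickness += growth_per_cycle_nm
        cycles += 1
    return cycles

print(run(10.0))  # e.g., 100 cycles at an assumed 0.1 nm per cycle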
[0082] The temperature differential established by the hot purge gas creates a deposition rate gradient which can compensate for an edge-thick profile of a material layer, e.g., 104-1 shown in Figure 1A, associated with the presence of reactant concentration gradients during processing, e.g., the presence of a CVD component associated with the ALD processing method. As such, one or more embodiments of the present disclosure can provide the benefits of the increased deposition rate associated with the CVD component while maintaining the WIW uniformity benefits, e.g., decreased WIW uniformity measurements, associated with ALD processes. [0083] Although in the example described in connection with Figure 4 the purge gas directed across the deposition surface of the wafers is heated to a temperature greater than the process temperature of the chamber, embodiments are not so limited. For example, in some ALD reaction systems, the deposition rate associated with the material layer can increase as the temperature increases. As such, in some embodiments, the purge gas introduced into the chamber and directed across the deposition surface of the wafer can have a temperature below the process temperature of the chamber. In such embodiments, the temperature of the purge gas will increase as the cool purge gas moves across the wafer from the edge to the center of the wafer. [0084] Figure 5 illustrates a portion of a semiconductor processing system 500 according to an embodiment of the present disclosure. The system 500 includes an injector assembly 529 for introducing materials into a reaction chamber, e.g., chamber 202 described in Figure 2. As described above in connection with Figures 2 and 4, the injector assembly 529 can include a number of injectors (not shown) coupled to gas lines 520-1, 520-2, 520-3, and 525. In the embodiment illustrated in Figure 5, the system 500 includes two reactant sources 510-1 and 510-2, a catalyst source 512, and a purge/carrier source 515, the contents of which are delivered, via the appropriate gas lines, to the injector assembly 529 for introduction into the chamber. [0085] In the embodiment illustrated in Figure 5, a first reactant (R1), a second reactant (R2), and a catalyst (C) are delivered to the assembly 529 from respective sources 510-1, 510-2, and 512 through respective gas lines 520-1, 520-2, and 520-3. The system 500 includes a purge/carrier gas (PURGE/CARRIER), e.g., nitrogen gas, that can be delivered to the assembly 529 from gas source 515 through gas line 516. As illustrated in Figure 5, the gas line 516 is coupled to gas lines 511-1, 511-2, and 511-3 which serve as carrier gas lines for respective sources 510-1, 510-2, and 512. [0086] The gas line 516 has a number of associated gas lines 519-1, 519-2, and 519-3 which are connected to, e.g., are in fluid communication with, respective gas lines 520-1, 520-2, and 520-3 for delivering purge gas PURGE/CARRIER from source 515 to one or more injectors, e.g., injectors 230-1 to 230-3 shown in Figure 2, of injector assembly 529. The gas line 516 also has an associated gas line 525 which can be used to deliver PURGE/CARRIER from source 515 to one or more injectors, e.g., injector 235 shown in Figure 2, of injector assembly 529. As described above, the gas line 525 can be coupled to a vertical injector having an aperture only at its end for delivering purge gas toward the upper portion of the chamber.
[0087] As illustrated in Figure 5, the gas line 516 includes a heating element 518 which is used to heat purge gas line 516 to various temperatures during processing. For instance, as described above, in various embodiments, the heating element 518 is used to heat gas line 516 to a temperature which is greater than the process temperature to deliver hot purge gas to the injector assembly 529 via gas lines 520-1, 520-2, 520-3, and/or 525 during a purge process. In various embodiments, the temperature of the gas line 516 is reduced, e.g., to a temperature at or below the process temperature, such that an amount of PURGE/CARRIER used as a carrier gas does not have a temperature greater than the process temperature of the reaction chamber. [0088] In the embodiment illustrated in Figure 5, only gas line 516 is shown as being coupled to a heating element, e.g., 518. Other gas lines, e.g., 511-1, 511-2, 511-3, 519-1, 519-2, 519-3, 520-1, 520-2, 520-3, and 525, may also be coupled to heating elements, which can be used to heat the gas lines to various temperatures above and/or below the process temperature associated with the particular ALD process. [0089] In some embodiments, the system 500 can include a separate source for carrier gas and purge gas. For instance, source 515 can be a source of purge gas and the system 500 can include a separate source of carrier gas. [0090] Figure 6 illustrates a portion of a semiconductor processing system 600 according to an embodiment of the present disclosure. The system 600 includes an injector assembly 629 for introducing materials into a reaction chamber, e.g., chamber 202 described in Figure 2. As described above in connection with Figures 2 and 4, the injector assembly 629 can include a number of injectors (not shown) coupled to, e.g., in fluid communication with, gas lines 620-1, 620-2, 620-3, and 625. In the embodiment illustrated in Figure 6, the system 600 includes two reactant sources 610-1 and 610-2, a catalyst source 612, and a purge/carrier source 615, the contents of which are delivered, via the appropriate gas lines, to the injector assembly 629 for introduction into the deposition chamber. [0091] In the embodiment illustrated in Figure 6, a first reactant (R1), a second reactant (R2), and a catalyst (C) are delivered to the assembly 629 from respective sources 610-1, 610-2, and 612 through respective gas lines 620-1, 620-2, and 620-3. The system 600 includes a purge/carrier gas (PURGE/CARRIER), e.g., nitrogen gas, that can be delivered to the assembly 629 from gas source 615 through gas lines 616-1 and/or 616-2. As illustrated in Figure 6, the gas line 616-1 is coupled to gas line 611 which is coupled to reactant sources 610-1 and 610-2, and to catalyst source 612. That is, in the embodiment of Figure 6, the PURGE/CARRIER gas delivered from source 615 is used as a purge gas and as a carrier gas. [0092] The gas lines 616-1 and 616-2 are coupled to respective heating elements 618-1 and 618-2 which can be used to heat the PURGE/CARRIER gas delivered from source 615 to the assembly 629. In the embodiment illustrated in Figure 6, the heating element 618-1 is used to heat gas line 616-1 to a temperature (T) which is not greater than, e.g., is less than or equal to, a process temperature (Tp) associated with the particular deposition process. The heating element 618-2 is used to heat gas line 616-2 to a temperature which is greater than the process temperature (Tp).
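Before continuing with the terminology for the two lines, the rationale for pre-heating both lines can be made concrete with a small routing sketch. This is not the disclosure's control logic: the setpoints and the selection function are illustrative assumptions; only the idea of holding one line at or below Tp and one above Tp, so that switching requires no gas-line temperature ramp, comes from the text.

T_PROCESS = 75.0  # Tp for the example process, in deg C

# Assumed standing setpoints for the two pre-heated lines of Figure 6.
LINE_SETPOINTS = {
    "616-1": T_PROCESS,         # carrier/cold purge line: at or below Tp
    "616-2": T_PROCESS + 15.0,  # hot purge line: above Tp (e.g., Tp + 5-25 C)
}

def select_line(phase: str) -> str:
    """Route gas from whichever line is already at the right temperature,
    so switching between phases takes no gas-line heat-up time."""
    # Reactant pulses use carrier gas at or below Tp; purge processes that
    # establish the temperature differential draw from the hotter line.
    return "616-2" if phase == "purge" else "616-1"

for phase in ("pulse", "purge", "pulse", "purge"):
    line = select_line(phase)
    print(f"{phase:5s} -> line {line} held at {LINE_SETPOINTS[line]:.0f} C")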
As used herein, the PURGE/CARRIER gas having a temperature greater than the process temperature of the chamber can be referred to as "hot purge gas," while the PURGE/CARRIER gas having a temperature at or below the process temperature of the chamber can be referred to as "cold purge gas." In various embodiments, the gas line 616-2 is heated to a temperature at least 5°C greater than Tp. In some embodiments, the purge gas line 616-2 is heated such that the hot purge gas PURGE/CARRIER is about 5°C-25°C greater than the process temperature within the chamber. Embodiments are not limited to the above examples. [0093] The gas line 616-1 has a number of associated gas lines 622-1, 622-2, and 622-3 which are connected to, e.g., are in fluid communication with, respective gas lines 620-1, 620-2, and 620-3 for delivering cold purge gas PURGE/CARRIER from source 615 to one or more injectors, e.g., injectors 230-1 to 230-3 shown in Figure 2, of injector assembly 629. The gas line 616-2 has a number of associated gas lines 619-1, 619-2, and 619-3 which are connected to, e.g., are in fluid communication with, respective gas lines 620-1, 620-2, and 620-3 for delivering hot purge gas PURGE/CARRIER from source 615 to injector assembly 629. The gas line 616-2 also has an associated gas line 625 which can be used to deliver hot purge gas PURGE/CARRIER from source 615 to one or more injectors, e.g., injector 235 shown in Figure 2, of injector assembly 629. [0094] As noted above, it can be desirable to adjust the temperature of a purge/carrier gas, e.g., PURGE/CARRIER, used during deposition processing. For instance, in various embodiments of the present disclosure, one or more purge processes are performed with a hot purge/carrier gas, e.g., PURGE/CARRIER heated to a temperature above Tp, and one or more reactant pulses are conducted with a cooler purge/carrier gas, e.g., PURGE/CARRIER gas heated to a temperature at or below Tp. In such embodiments, it can be difficult to adjust the temperature of a gas line, e.g., 616-1 and/or 616-2, to different levels in the time between reactant pulses and purge pulses, which can be on the order of seconds. Therefore, providing one or more separate gas lines, e.g., 616-2 for hot purge gas and 616-1 for cold purge gas, can allow the system 600 to rapidly switch between using the hot or cold purge gas without increasing processing time due to adjusting the gas line temperature. [0095] Figure 7 is a block diagram of a method for semiconductor processing according to an embodiment of the present disclosure. At block 710, the method includes forming a material layer on a semiconductor substrate by exposing a deposition surface of the substrate to at least a first and a second reactant sequentially introduced into a reaction chamber having an associated process temperature. [0096] In various embodiments, the method includes maintaining the chamber at a steady process temperature while exposing the deposition surface of the wafers to the sequentially introduced first and second reactants. In some embodiments the process is a catalytic ALD process used to form a material layer of silicon oxide on a batch of wafers. [0097] As shown at block 720, the method includes removing residual first reactant from the chamber after introduction of the first reactant. As shown at block 730, the method includes removing residual second reactant from the chamber after introduction of the second reactant.
[0098] As shown at block 740, the method includes establishing a temperature differential substantially between an edge of the substrate and a center of the substrate via a purge process. In one or more embodiments, establishing the temperature differential includes, during the purge process, introducing into the chamber an amount of purge gas having a temperature different than the process temperature. [0099] In various embodiments, the amount of purge gas has a temperature less than the process temperature, and establishing the temperature differential includes delivering the amount of purge gas across a deposition surface of the substrate. In various embodiments, the amount of purge gas has a temperature greater than the process temperature, and establishing the temperature differential includes delivering a first portion of the amount of purge gas across a deposition surface of the substrate. [0100] In embodiments in which an amount of purge gas hotter than the process temperature is delivered across the deposition surface of the substrate, a first portion of the amount of purge gas can be delivered from a gas source through a number of elongate injectors of an injector assembly such that a temperature of the deposition surface of the wafers decreases as the first portion of the amount of purge gas moves from the edge of the substrate toward the center. In embodiments in which an amount of purge gas hotter than the process temperature is delivered across the deposition surface of the substrate, the method can include delivering a second portion of the amount of purge gas, having a temperature greater than the process temperature, from an injector assembly toward an upper surface of the chamber. [0101] In some embodiments, the method includes heating the amount of purge gas to different temperatures for the first and second purge processes. In some embodiments, the method includes delivering an amount of purge gas heated to a temperature greater than the process temperature into the chamber during introduction of at least one of the first reactant and the second reactant into the chamber, e.g., during a reactant pulse. [0102] One or more of the method embodiments create a temperature differential across a deposition surface of the wafers by introducing a first portion of the amount of purge gas into the chamber through a first vertical injector configured to direct the first portion through a number of apertures along a length of the first vertical injector toward a center of the number of wafers. [0103] In some embodiments, at least one of the first and second purge processes includes introducing a second portion of an amount of hot purge gas into the chamber through a second vertical injector configured to direct the second portion through an aperture at an end of the second vertical injector toward an upper surface of the chamber. As described in connection with Figures 2, 3A, and 3B, directing the second portion toward the upper surface of the chamber can decrease a WIW uniformity variance associated with wafers positioned at different locations in a wafer carrier. For instance, introducing a hot purge gas into the upper portion of the chamber can produce a greater reduction in thickness variance for wafers toward the top of a wafer boat than for wafers located lower in the wafer boat. [0104] In some embodiments, the method includes using a particular gas source as a purge gas source and as a carrier gas source.
In such embodiments, the method can include providing one or more separate gas lines for delivering purge gas heated to a temperature greater than the process temperature to the chamber and for delivering purge gas heated to a temperature not greater than the process temperature to the reactant sources.

Conclusion

[0105] Embodiments of the present disclosure include semiconductor processing methods and systems. Various embodiments can improve the throughput of an atomic layer deposition (ALD) process by controlling and/or compensating for one or more chemical vapor deposition (CVD) components associated with the ALD process. [0106] One method includes forming a material layer on a semiconductor substrate by exposing a deposition surface of the substrate to at least a first and a second reactant sequentially introduced into a reaction chamber having an associated process temperature. The method includes removing residual first reactant from the chamber after introduction of the first reactant, removing residual second reactant from the chamber after introduction of the second reactant, and establishing a temperature differential substantially between an edge of the substrate and a center of the substrate via a purge process. [0107] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [0108] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
A system for regulating heating temperature of a material is provided. The material may be a photoresist, a top or bottom anti-reflective coating, a low K dielectric material, SOG or other spin-on material, for example. The system includes a plurality of lamps and optical fibers, each optical fiber directing radiation to, and heating, a respective portion of a bakeplate on which the material is to be placed. In one embodiment, the temperature at various locations on the material placed on the bakeplate is determined and the heating rates are controlled in response to those measurements. In another aspect of the invention, the temperature at various portions of the bakeplate is determined and controlled. In this latter aspect, uniform heating of the material is a consequence of uniform bakeplate temperature.
What is claimed is: 1. A photoresist heating system, comprising: a bakeplate; a plurality of lamps; a plurality of optical fibers configured to direct radiation to various portions of the bakeplate; a lamp driving system configured to drive said lamps; a measuring system configured to measure a parameter indicative of temperature at a plurality of locations on a photoresist coated on a wafer when such a coated wafer is placed on the bakeplate; and a processor operatively coupled to the measuring system and the lamp driving system, the processor receiving data from the measuring system and controlling, at least partially based on such data, the lamp driving system so as to regulate photoresist temperature. 2. The system of claim 1, wherein the measuring system includes an interferometer. 3. The system of claim 1, wherein the measuring system includes a spectrophotometer. 4. The system of claim 3, wherein the processor analyzes data relating to color of the photoresist. 5. The system of claim 3, wherein the processor analyzes data relating to absorptivity of the photoresist. 6. A system, comprising: a bakeplate; a plurality of lamps; a plurality of optical fibers configured to direct radiation to various portions of the bakeplate; a lamp driving system configured to drive the lamps; a measuring system configured to measure a parameter indicative of temperature at a plurality of locations on the bakeplate; and a processor operatively coupled to the measuring system and the lamp driving system, the processor being capable of receiving data from the measuring system and controlling, at least partially based on such data, the lamp driving system so as to regulate temperature at the plurality of locations. 7. The system of claim 6, wherein the measuring system is based on reflected radiation. 8. The system of claim 7, wherein the measuring system includes an interferometer. 9. The system of claim 7, wherein the measuring system includes a spectrophotometer. 10. The system of claim 9, wherein the processor is configured to analyze data relating to color of the bakeplate. 11. The system of claim 9, wherein the processor is configured to analyze data relating to absorptivity of the bakeplate. 12. The system of claim 11, wherein the bakeplate includes a substantially inert material that causes a color to vary with changes in temperature. 13. The system of claim 12, wherein the substantially inert material includes europium chelate. 14. A method for regulating coating temperature, comprising: placing a wafer coated with a coating on top of a bakeplate; heating a plurality of portions of the bakeplate with a plurality of heating lamps; directing radiation to the plurality of portions of the bakeplate via optical fibers; measuring via reflected radiation a parameter indicative of the coating temperature at a plurality of locations on the coating; and controlling heating of bakeplate portions, independently of heating of other bakeplate portions, to regulate coating temperature at each of the plurality of locations. 15. The method of claim 14, wherein the measuring includes using an interferometer to measure reflected radiation. 16. The method of claim 14, wherein the measuring includes using a spectrophotometer to measure reflected radiation. 17. The method of claim 14, the coating being a photoresist coating. 18. The method of claim 14, the coating being a top anti-reflective coating. 19. The method of claim 14, the coating being a bottom anti-reflective coating. 20.
The method of claim 14, the coating being a low K dielectric material. 21. The method of claim 14, the coating being spin-on glass. 22. The method of claim 14, the coating being spin-on material. 23. A method for regulating coating temperature, comprising: placing a coated wafer on top of a bakeplate; heating a plurality of portions of the bakeplate with a plurality of heating lamps; directing radiation to the plurality of portions of the bakeplate via optical fibers; measuring via reflected radiation a parameter indicative of bakeplate temperature at a plurality of locations on the bakeplate; and controlling heating of each of the bakeplate portions, independently of heating of other bakeplate portions, to regulate bakeplate temperature at each of the corresponding plurality of locations. 24. A system for regulating temperature of a photoresist coating a wafer supported by a bakeplate, comprising: means for monitoring temperature of portions of the photoresist corresponding to portions of the bakeplate; means for directing radiation to portions of the bakeplate via optical fibers; and means for selectively heating a plurality of the portions of the bakeplate with a plurality of lamps so as to regulate temperature of the photoresist. 25. A system for regulating temperature of a photoresist coating a wafer supported by a bakeplate, comprising: a temperature measuring system for measuring temperature of various portions of the photoresist; a temperature measuring system for measuring temperature of the bakeplate by directing radiation thereupon via optical fibers; a system for mapping the photoresist portions with portions of the bakeplate; and a system for selectively heating the bakeplate portions with a plurality of heating lamps so as to control temperature of corresponding photoresist portions.
TECHNICAL FIELD

The present invention generally relates to semiconductor processing, and in particular to a system for uniformly heating a photoresist.

BACKGROUND OF THE INVENTION

In the semiconductor industry, there is a continuing trend toward higher device densities. To achieve these high densities there have been, and continue to be, efforts toward scaling down device dimensions (e.g., at submicron levels) on semiconductor wafers. In order to accomplish such high device packing density, smaller and smaller feature sizes are required. These may include the width and spacing of interconnecting lines, the spacing and diameter of contact holes, and the surface geometry, such as corners and edges, of various features. The requirement of small features with close spacing between adjacent features requires high resolution photolithographic processes. In general, lithography refers to processes for pattern transfer between various media. It is a technique used for integrated circuit fabrication in which a silicon slice, the wafer, is coated uniformly with a radiation-sensitive film, the resist, and the film is exposed with a radiation source (such as optical light, x-rays, or an electron beam) that illuminates selected areas of the surface through an intervening master template, the mask, forming a particular pattern. The lithographic coating is generally a radiation-sensitive coating suitable for receiving a projected image of the subject pattern. Once the image is projected, it is indelibly formed in the coating. The projected image may be either a negative or a positive image of the subject pattern. Exposure of the coating through a photomask causes the image area to become either more or less soluble (depending on the coating) in a particular solvent developer. The more soluble areas are removed in the developing process to leave the pattern image in the coating as less soluble polymer. Proper preparation of the photoresist is critical to obtaining extremely fine patterns after exposure of the photoresist. In a typical process, a few droplets of photoresist are applied to a spinning wafer. The photoresist is then "softbaked" to remove solvent and anneal. The properties of the photoresist, and the quality of pattern transfer, are affected by the heating temperature and time. To achieve uniformity and quality of the photoresist layer, heating must be uniform and temperature must be accurately controlled. Both the overall temperature history and variations in the temperature history across the photoresist must be controlled. For example, baking time and temperature affect the photoresist layer thickness. While the layer thickness is typically in the range of 0.1 to 3.0 microns, variances in thickness should be kept to less than ±10-20 Å across the wafer. Small variations in the time/temperature history across the photoresist can substantially alter image sizes, resulting in lack of image line control. A uniform time/temperature history of the photoresist is especially important with chemically amplified photoresists because image size control may be drastically affected by only a few degrees difference in temperature. Often substantial line size deviations occur when the temperature is not maintained within a 0.5 degree tolerance across a silicon wafer. For example, when a photoresist is baked onto a substrate (e.g., wafer), temperature tolerances of ±0.2°C are required.
Efficient systems and methods for uniformly and rapidly heating layers of temperature-sensitive film formed on semiconductor substrates are therefore desired to increase fidelity in image transfer.

SUMMARY OF THE INVENTION

The present invention provides a system that can be used to control photoresist baking temperature so as to facilitate uniform heating of a photoresist formed on a semiconductor substrate (e.g., wafer). The system includes a bakeplate on which a coated wafer can be placed, a plurality of lamps, and a plurality of optical fibers configured to direct radiation to various portions of the bakeplate. At least one lamp driving device is used to drive the lamps and at least one measuring device is used to measure a parameter indicative of temperature. In one aspect of the invention, the temperature is measured at a plurality of locations on the bakeplate. In accordance with another aspect, the temperature is measured at a plurality of locations on a coated wafer, when such a wafer is placed on the bakeplate. A processor operatively coupled to the at least one measuring device and the at least one lamp driving device is capable of receiving data from the at least one measuring device and is configured to control, at least partially based on such data, the at least one lamp driving device so as to regulate temperature at the plurality of locations where temperature is measured. Temperature may be measured based on reflected radiation; the temperature measuring device may be a spectrophotometer or an interferometer. The spectrophotometer may measure either absorptivity or color. It is preferred to use a spectrophotometer measuring absorptivity. When bakeplate temperature is measured, the bakeplate may include europium chelate. In one aspect of the invention, the system is configured to monitor temperature of a coating on a wafer, when such a wafer is placed on a bakeplate, and to selectively drive a plurality of heaters so as to maintain the coating temperature at a desired level. Substantial uniformity in heating may thereby be achieved, increasing fidelity of image transfer. In another aspect, the system is configured to monitor and keep uniform the bakeplate temperature, which has the effect of maintaining a substantially uniform temperature of a coated wafer when placed on the bakeplate. Another aspect of the present invention is a method comprising the steps of placing a coated wafer on top of a bakeplate, heating a plurality of portions of the bakeplate, measuring a parameter indicative of the coating temperature at a plurality of locations on the coating, and independently controlling the heating of each of the bakeplate portions to regulate the coating temperature at each of the locations where temperature is measured. A further aspect of the present invention is a method comprising the steps of placing a coated wafer on top of a bakeplate, heating a plurality of portions of the bakeplate, measuring a parameter indicative of temperature at a corresponding plurality of locations on the bakeplate, and independently controlling heating of each of the bakeplate portions to regulate bakeplate temperature at each of the corresponding locations where temperature is measured. The following description and the annexed drawings set forth in detail the invention and certain illustrative aspects of the invention. The illustrative aspects are indicative of but a few of the various ways in which the principles of the invention may be employed.
Other objects, advantages and novel features of the invention will become apparent to one of ordinary skill in the art from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1a is a schematic block diagram of a photoresist heating system in accordance with the present invention; FIG. 1b is a schematic block diagram of another photoresist heating system in accordance with the present invention; FIG. 1c is a partial schematic block diagram of the system of FIG. 1a being employed in connection with determining photoresist temperature by measuring photoresist thickness in accordance with the present invention; FIG. 1d is a partial schematic block diagram of the system of FIG. 1a being employed in connection with determining photoresist temperature by measuring photoresist color in accordance with the present invention; FIG. 1e is a partial schematic block diagram of the system of FIG. 1b being employed in connection with determining photoresist temperature by measuring bakeplate color in accordance with the present invention; FIG. 1f is a partial schematic block diagram of the system of FIG. 1a being employed in connection with determining photoresist temperature by measuring photoresist absorptivity in accordance with the present invention; FIG. 1g is a partial schematic block diagram of the system of FIG. 1b being employed in connection with determining photoresist temperature by measuring bakeplate absorptivity in accordance with the present invention; FIG. 2 is a perspective illustration of a top side of a bakeplate, and a substrate having a photoresist formed thereon; FIG. 3 is a representative three-dimensional grid map of a photoresist illustrating temperature amplitudes taken at grid blocks of the grid map in accordance with the present invention; FIG. 4 is a temperature amplitude table correlating the temperature amplitudes of FIG. 3 with desired values for the temperature amplitudes in accordance with the present invention; FIG. 5 is a flow diagram illustrating one specific methodology for carrying out the present invention; and FIG. 6 is a flow diagram illustrating another specific methodology for carrying out the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. The present invention is a system and method that can be used to uniformly heat a photoresist using a plurality of heaters. It is to be appreciated that the present invention may be applied to pre-baking as well as post-exposure baking of the photoresist. Furthermore, although the present invention is primarily described within the context of heating photoresist, it is to be understood that the present invention may be applied to heating of top and bottom anti-reflective coatings, low K dielectric materials, spin-on-glass (SOG) and other spin-on materials. Referring initially to FIG. 1a, a system 20a for heating a photoresist 22 is shown. The photoresist 22 and substrate 26 are not part of the system. A plurality of proximity pins 32 prevent contact between the substrate 26 and a bakeplate 30 when the substrate 26 is placed on the bakeplate 30. The proximity pins 32 elevate the substrate 26 about 1 mm above the surface of the bakeplate 30. Preventing contact of the substrate 26 with the plate 30 mitigates contamination of the substrate 26 by particles from the bakeplate 30.
However, it is to be appreciated that the scope of the present invention is intended to cover systems where the substrate is in contact with the bakeplate. In fact, it is recognized that removing the pins 32 would improve heat transfer between the bakeplate 30 and the substrate 26, thereby facilitating temperature control across the photoresist. The system 20a uses a plurality of heat lamps 40 as heating devices. These lamps can be selectively controlled to facilitate uniform heating of the photoresist 22 when coated on the substrate 26 and placed on the bakeplate 30. Preferably, the bakeplate 30 has a high thermal conductivity to facilitate uniformity in temperature. A fan (not shown) may also be used to increase convection within the system and thereby promote uniformity in temperature. A plurality of optical fibers 44 are configured to project radiation onto respective portions of the bakeplate 30. A measuring device is configured to collect and process radiation reflected from the photoresist 22 and determine at least one parameter relating to the temperature of the photoresist 22. FIG. 1a illustrates a measuring system 50, which includes a light sensor 50c such as, for example, an interferometer and/or a spectrometer. It is to be appreciated that any stable measuring system may be employed to carry out the present invention and such systems are intended to fall within the scope of the hereto appended claims. Interferometers, spectrometers, and other measuring devices are well known in the art, and further discussion related thereto is omitted for the sake of brevity. A source 62 of monochromatic radiation, such as a laser, provides radiation to the surface of the bakeplate 30, which is reflected into the plurality of optical fibers 44, which guide the radiation to the measuring device 50. Preferably, the radiation source 62 is a frequency stabilized laser; however, it will be appreciated that any laser or other radiation source (e.g., laser diode or helium neon (HeNe) gas laser) suitable for carrying out the present invention may be employed. A processor 60 receives the measured data from the measuring system 50 and determines the temperature of respective portions of the photoresist 22. The processor 60 is operatively coupled to the measuring system 50 and is programmed to control and operate the various components within the heating system 20a in order to carry out the various functions described herein. The manner in which the processor 60 can be programmed to carry out the functions relating to the present invention will be readily apparent to those having ordinary skill in the art based on the description provided herein. A memory 70, which is operatively coupled to the processor 60, is also included in the system 20a and serves to store program code executed by the processor 60 for carrying out operating functions of the system 20a as described herein. The memory 70 also serves as a storage medium for temporarily storing information such as photoresist temperature, temperature tables, photoresist coordinate tables, interferometry information, spectrometry information and other data which may be employed in carrying out the present invention. Power supply 78 provides operating power to the system 20a. Any suitable power supply (e.g., battery, line power) may be employed to carry out the present invention. The processor 60 is also coupled to a lamp driving device 80 that drives the heat lamps 40. The lamp driving device may be, for example, a set of rheostats.
The lamp driving device 80 is controlled by the processor 60 so as to selectively vary the heat output of the respective heat lamps 40. The lamps are preferably configured such that each respective portion of the photoresist 22 will have a corresponding portion of the bakeplate 30 and a corresponding lamp 40 and optical fiber 44 associated therewith. The processor 60 is able to monitor the temperature of the various photoresist portions and selectively regulate the temperature of each portion by applying heat to various portions of the bakeplate 30 through the heat lamps 40. As a result, the system 20a provides for regulating the temperature of a photoresist 22 with substantial uniformity, which in turn improves fidelity of image transfer in a lithographic process employing such a photoresist 22. FIG. 1b illustrates a system 20b where the measuring system 50 and optical fibers 44 are configured to measure parameters indicative of temperature at a plurality of locations on the bakeplate 30. The processor 60 of the system 20b is configured to operate the lamp driving system 80 to control the temperature of the various portions of the bakeplate 30 where temperature is detected. Maintaining uniform bakeplate temperature is intended to maintain uniform temperature in a photoresist 22 when such a photoresist coated on a wafer 26 is placed on the bakeplate 30. An alternate aspect of system 20b omits the light source 62 and optical fibers 44 and employs a measuring system which includes thermocouples. FIG. 1c illustrates a system 20c that has an interferometer 50a configured to measure the thickness of a photoresist 22 at a particular position. The temperature of the photoresist 22 will have an impact on its thickness. The optical fiber 44 directs radiation 44a to the surface of the photoresist 22, and the phase and/or intensity of reflected radiation 44b from the surface of the photoresist will vary in accordance with the thickness of the photoresist 22. The measuring system 50 collects the reflected radiation 44b and processes the reflected radiation 44b in accordance with interferometry techniques to provide the processor 60 with data corresponding to the thickness of the photoresist 22. The processor 60 analyzes the data and determines the temperature of the photoresist 22. FIG. 1d illustrates a system 20d that is configured to measure fluorescence of a photoresist or similar material 22, when placed in the system. It is contemplated that a fluorescent material that is substantially inert and does not impede the performance of the photoresist 22 or other material to be heated is used. Europium chelate is an example of a suitable material for use with a photoresist. The fluorescent material causes the color of the photoresist 22 to vary in accordance with the temperature thereof. The optical fiber 44 directs the radiation 44a incident to the surface of the photoresist, and the color of the reflected radiation 44c will vary in accordance with the temperature of the photoresist 22. The measuring system 50 collects the reflected radiation 44c and processes the reflected radiation in accordance with spectrometry techniques to provide the processor 60 with data corresponding to the color of the photoresist 22. The processor 60 analyzes the data and determines the temperature of the photoresist 22. FIG. 1e illustrates a system 20e that measures fluorescence of the bakeplate 30. A fluorescent material is coated on the bakeplate 30 such that the color of the bakeplate 30 will vary in accordance with the temperature thereof.
The fluorescent material may be an inert material, such as europium chelate; however, the choice of fluorescent materials is much wider when the material is placed on the bakeplate rather than on the photoresist or similar material to be heated. The optical fiber 44 directs the radiation 44a incident to the surface of the bakeplate 30, and the color of the reflected radiation 44c will vary in accordance with the temperature of the bakeplate 30. The measuring system 50 collects the reflected radiation 44c and processes the reflected radiation 44c in accordance with spectrometry techniques to provide the processor 60 with data corresponding to the color of the bakeplate 30. The processor 60 analyzes the data and determines the temperature of the bakeplate 30. FIG. 1f illustrates a system 20f that measures absorptivity of the photoresist 22. The absorption of the incident radiation 44a by a photoresist 22 corresponds to the temperature of the photoresist 22. Accordingly, the intensity of reflected radiation 44d will be indicative of the absorptivity of the photoresist 22, which in turn is indicative of photoresist temperature. The measuring system 50 collects the reflected radiation 44d and processes the reflected radiation 44d in accordance with spectrometry techniques to provide the processor 60 with data corresponding to the absorptivity of the photoresist 22. The processor 60 analyzes the data and determines the temperature of the photoresist 22. FIG. 1g illustrates a system 20g that measures absorptivity of the bakeplate 30. The absorption of the incident radiation 44a by a bakeplate 30 corresponds to the temperature of the bakeplate 30. Accordingly, the intensity of reflected radiation 44d will be indicative of the absorptivity of the bakeplate 30, which in turn is indicative of bakeplate temperature. The measuring system 50 collects the reflected radiation 44d and processes the reflected radiation 44d in accordance with spectrometry techniques to provide the processor 60 with data corresponding to the absorptivity of the bakeplate 30. The processor 60 analyzes the data and determines the temperature of the bakeplate 30. It is to be appreciated that although FIGS. 1a-g are described herein with respect to heating a photoresist 22, these systems may be used to heat any other suitable material (e.g., top and bottom anti-reflective coatings, low K dielectric materials, spin-on-glass (SOG) and other spin-on materials). Turning now to FIGS. 2-4, the bakeplate 30 is shown in perspective supporting a substrate 26 having a photoresist 22 thereon. The photoresist heating system 20a provides for regulating the temperature of the photoresist 22 during the above described heating process in order to maintain uniform temperature. The photoresist 22 may be divided into a grid pattern as shown in FIG. 3, and as sketched below. Each grid block (XY) of the grid pattern corresponds to a particular portion of the photoresist 22, and each portion is individually monitored and controlled for temperature. Preferably, there is one heat source for each temperature measured and the temperatures of the various regions are controlled individually. However, it is to be understood that while it is preferred that the temperatures and lamps be controlled individually and that one optical fiber 44 and one lamp 40 correspond to each grid block XY, the numbers and positions of the optical fibers 44 and the lamps 40 need not correspond.
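One possible bookkeeping for the grid map is sketched here. The 12x12 grid and the out-of-range block X7Y6 mirror the FIG. 3/FIG. 4 example; the dictionary representation, the acceptable band, and the sample temperatures are illustrative assumptions rather than details of the disclosure.

ACCEPTABLE = (74.5, 75.5)  # assumed acceptable band TA around a setpoint

def out_of_range_blocks(temps: dict) -> list:
    """Return grid blocks whose measured temperature falls outside the
    acceptable band, i.e., blocks with an undesired value TU."""
    lo, hi = ACCEPTABLE
    return [(xy, t) for xy, t in temps.items() if not (lo <= t <= hi)]

# 12x12 grid of blocks X1Y1 .. X12Y12, all nominal except block X7Y6.
temps = {(x, y): 75.0 for x in range(1, 13) for y in range(1, 13)}
temps[(7, 6)] = 77.3  # assumed undesired amplitude at X7Y6

for (x, y), t in out_of_range_blocks(temps):
    print(f"grid block X{x}Y{y}: {t:.1f} C outside {ACCEPTABLE}")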
X12Y12) is being monitored for temperature using a respective optical fiber 44, the measuring system 50 and the processor 60. The temperature amplitudes of each photoresist portion are shown. As can be seen, the temperature of the photoresist at coordinate X7Y6 is substantially higher than the temperature of the other photoresist portions XY. It is to be appreciated that although FIG. 3 illustrates the photoresist 22 being mapped (partitioned) into 144 grid block portions, the photoresist 22 may be mapped with any suitable number of portions.
FIG. 4 is a representative table of temperature amplitudes (taken at the various grid blocks) which have been correlated with acceptable temperature amplitude values for the portions of the photoresist 22 mapped by the respective grid blocks. As can be seen, all of the grid blocks except grid block X7Y6 have temperature amplitudes corresponding to an acceptable temperature value (TA) (e.g., are within an expected range of temperature amplitudes), while grid block X7Y6 has an undesired temperature value (TU). Thus, the processor 60 has determined that an undesirable temperature condition exists at the portion of the photoresist 22 mapped by grid block X7Y6. Accordingly, the processor 60 can drive the lamp 40(7,6), which corresponds to the portion of the photoresist 22 mapped at grid block X7Y6, so as to bring the temperature of this portion of the photoresist 22 down to an acceptable level. It is to be appreciated that the lamps 40 may be driven so as to increase and/or decrease the temperature of the respective photoresist portions as desired.
FIG. 5 is a flow diagram illustrating one particular methodology for carrying out the present invention. In step 200a, the processor 60 performs general initializations to the photoresist heating system 20a. In step 210a, the processor 60 maps at least a portion of the photoresist 22 into a plurality of grid blocks "XY". During step 210a, a determination can be made as to which optical fibers 44 are detecting light reflected from a photoresist. Alternatively, the system 20a may be configured so that the fibers 44 always detect light reflected from a photoresist when a photoresist coated wafer of standard dimensions is placed in the system 20a. In step 220a, temperature determinations are made with respect to the various photoresist portions mapped by the respective grid blocks XY. In step 230a, the processor 60 determines if all grid block measurements have been taken. If not, the processor 60 returns to step 220a; otherwise, it proceeds to step 240a. In step 240a, the processor 60 adjusts the heating rate for each lamp in accordance with the most recently measured temperatures, any temperatures determined during preceding iterations, and target temperature levels for the current time, all in accordance with the control strategy. The present iteration is then ended and the process returns to step 220a to perform another iteration.
Each lamp 40 may be controlled based on the temperature measured from one optical fiber 44. The control strategy is preferably a standard PID (Proportional, Integral, Derivative) control strategy, which sets the heating rate for the lamp based on a combination of the current difference between the target (set-point) temperature and the measured temperature, the rate at which the temperature is changing, and the integral of the difference between the target temperature and the measured temperature over a preceding interval of time.
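The PID strategy just described can be illustrated with a short sketch. The following Python fragment is purely illustrative and is not the patented implementation; the gain values, the 0-to-1 lamp drive range, the 12x12 grid size, and all names are assumptions chosen for readability.

```python
# Minimal per-zone PID sketch for the lamp control strategy described
# above. Gains, limits, and the 12x12 grid are illustrative assumptions.

class ZonePID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        # Current difference between the target (set-point) temperature
        # and the measured temperature.
        error = setpoint - measured
        # Integral of the difference over the preceding interval of time.
        self.integral += error * dt
        # Rate at which the difference is changing.
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        drive = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to an assumed 0..1 lamp driver range.
        return max(0.0, min(1.0, drive))

# One controller per grid block XY, each driving its corresponding lamp 40.
controllers = {(x, y): ZonePID() for x in range(1, 13) for y in range(1, 13)}

def control_iteration(targets, measurements, dt=0.1):
    """One pass of steps 220a-240a: measure every block, adjust every lamp."""
    return {xy: pid.update(targets[xy], measurements[xy], dt)
            for xy, pid in controllers.items()}
```

In such a sketch, a block like X7Y6 with a temperature above its set-point produces a negative error and hence a reduced lamp drive, which is the corrective behavior described for grid block X7Y6 above.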
FIG. 6 is a flow diagram illustrating another particular methodology for carrying out the present invention. In step 200b, the processor 60 performs general initializations to the photoresist heating system 20b. In step 210b, the bakeplate is mapped into grid blocks XY. In step 220b, temperature determinations are made with respect to the various bakeplate portions mapped by the respective grid blocks XY. In step 230b, the processor 60 determines if all grid block measurements have been taken. If not, the processor 60 returns to step 220b; otherwise, it proceeds to step 240b. In step 240b, the processor 60 adjusts the heating rate for each lamp in accordance with the most recently measured temperatures, any temperatures determined during preceding iterations, and target temperature levels for the current time, all in accordance with the control strategy. The present iteration is then ended and the process returns to step 220b to perform another iteration.
The present invention provides for a system and method for heating a photoresist in a substantially uniform manner. As a result, the present invention facilitates improving photoresist integrity and reliability, which in turn affords increases in quality of image transfer in lithographic processes employing a photoresist heated in accordance with the present invention.
What has been described above is the present invention and several of its specific aspects. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. |
An output switch fabric is disclosed that comprises an interleaved plurality of multiplexers for switching channels between first and second busses. The busses run in tracks that form a grid pattern. The interleaving of the multiplexers is arranged according to the grid pattern for the busses. |
CLAIMS
We claim:
1. A circuit, comprising:
a switch fabric configured to have a footprint on a semiconductor surface, the footprint having a plurality of sides;
a plurality of input conductors configured to conduct a plurality of channels into the switch fabric with regard to each side of the footprint;
a plurality of output conductors configured to conduct the plurality of channels out of the switch fabric with regard to each side of the footprint, wherein the input and output conductors for a first opposing pair of the sides are arranged in first tracks corresponding to the channels, and wherein the switch fabric includes a plurality of channel switching circuits corresponding to the channels and arranged in the footprint such that each first track spans the corresponding channel switching circuit, and wherein the input and output conductors for a second opposing pair of the sides for the footprint are arranged in second tracks such that each second track spans across all the channel switching circuits; and
wherein each channel switching circuit includes a plurality of multiplexers corresponding to each side, and wherein each side's corresponding multiplexers in each channel switching circuit are configured to drive the output conductors for the channel switching circuit's corresponding channel with a selected channel from the input conductors for the remaining sides.
2. The circuit of claim 1, wherein each channel has a width equaling a plurality of bits, and wherein the input and output conductors for the second opposing pair of sides are arranged corresponding to the plurality of bits such that each second track accommodates the input and output conductors for all the channels for the corresponding bit, each second track configured to span across the second opposing pair of sides.
3. The circuit of claim 2, wherein the multiplexers in each channel switching circuit include first multiplexers configured to drive the corresponding channel into the output conductors for a first side in the first pair of sides, and wherein the multiplexers in each channel switching circuit further include second multiplexers configured to drive the corresponding channel into the output conductors for a remaining second side in the first pair of sides.
4. The circuit of claim 3, wherein each channel switching circuit's first multiplexers and second multiplexers are configured to select from a subset of the plurality of channels from the input conductors, the subset being defined with regard to a channel span from the channel switching circuit's corresponding channel.
5. The circuit of claim 4, wherein the channel span is one-half of a total number of channels in the plurality of channels.
6. The circuit of claim 3, wherein the multiplexers in each channel switching circuit further include third multiplexers configured to drive the corresponding channel into the output conductors for a third side in the second pair of sides, and wherein the multiplexers in each channel switching circuit further include fourth multiplexers configured to drive the corresponding channel into the output conductors for a remaining fourth side in the second pair of sides.
7. The circuit of claim 6, wherein in each channel switching circuit, the first, second, third, and fourth multiplexers are arranged in a plurality of tiles corresponding to the plurality of bits, each tile having one each of the first, second, third, and fourth multiplexers.
8. The circuit of claim 7, wherein each channel switching circuit's tile is configured to drive the corresponding bit for the corresponding channel onto the corresponding output conductor on each side of the footprint.
9. The circuit of claim 7, wherein the tiles are aligned with the second tracks such that each tile aligns with the second track for the corresponding bit.
10. The circuit of claim 6, wherein the third and fourth multiplexers are configured to select from a subset of the plurality of channels from the input conductors, the subset being defined with regard to a channel span of one from the channel switching circuit's corresponding channel.
11. A method, comprising:
in a switch fabric having a multi-sided footprint on a semiconductor substrate, the switch fabric organized into a plurality of channel switching circuits corresponding to a plurality of channels, routing the plurality of channels into the switch fabric with regard to each footprint side on corresponding input conductors, the switch fabric having output conductors for the plurality of channels on each footprint side, wherein the input and output conductors for a first opposing pair of the sides are arranged in first tracks corresponding to the channels such that each first track spans across the corresponding channel switching circuit and wherein the input and output conductors for a second opposing pair of sides for the footprint are arranged in second tracks that span across all the channel switching circuits; and
in each channel switching circuit for each footprint side, driving the output conductors for the corresponding channel by selecting for a channel conducted on the input conductors for the remaining sides of the footprint.
12. The method of claim 11, wherein driving the output conductors for each channel switching circuit's corresponding channel comprises using first and second multiplexers with regard to the first opposing pair of sides of the footprint.
13. The method of claim 12, wherein driving the output conductors for each channel switching circuit's corresponding channel comprises using third and fourth multiplexers with regard to the second opposing pair of sides of the footprint.
14. A switching fabric configured to switch a plurality of channels with regard to input and output conductors, each channel comprising a digital word, the switching fabric comprising:
a plurality of first multiplexers configured to select from the input conductors to drive the plurality of channels into output conductors in a first direction;
a plurality of second multiplexers configured to select from the input conductors to drive the plurality of channels into output conductors in a second direction;
a plurality of third multiplexers configured to select from the input conductors to drive the plurality of channels into output conductors in a third direction; and
a plurality of fourth multiplexers configured to select from the input conductors to drive the plurality of channels into output conductors in a fourth direction,
wherein the first, second, third, and fourth multiplexers are interleaved to form tiles arranged in a plurality of rows, each tile having one first multiplexer, one second multiplexer, one third multiplexer, and one fourth multiplexer, the rows being aligned with the first and second directions and corresponding to the channels such that the output conductors in the first and second directions for each channel are driven by the corresponding row's first and second multiplexers and such that the output conductors in the third and fourth directions for each channel are driven by the corresponding row's third and fourth multiplexers.
15. The switching fabric of claim 14, wherein each digital word has a width in bits and wherein the rows are aligned to form a plurality of columns of the tiles, the columns being parallel with the third and fourth directions, wherein the plurality of columns correspond to the plurality of bits such that, for each column, the column's third and fourth multiplexers are configured to drive the corresponding bit for all the channels in the third and fourth directions into corresponding ones of the output conductors.
16. A circuit, comprising:
a switch fabric configured to route a plurality of channels with regard to four sides of a footprint for the switch fabric on a semiconductor substrate, each channel having a same width in bits, wherein for each side of the footprint each bit is carried by a corresponding input conductor into the switch fabric and by a corresponding output conductor out of the switch fabric, and wherein the input and output conductors for a first opposing pair of the four sides are arranged by the channels and wherein the input and output conductors for a remaining opposing pair of the four sides are arranged by the bits; and
the switch fabric including, for each side of the footprint, a corresponding plurality of multiplexers configured to drive the side's output conductors with the bits for a selected channel from the input conductors for the remaining sides;
wherein the multiplexers for the first opposing pair of the four sides are configured such that a channel span is no greater than one for the channel selection from the input conductors for the remaining opposing pair of the four sides.
17. The circuit of claim 16, wherein the multiplexers for the second opposing pair of the four sides are configured such that a channel span is less than a total number of channels in the plurality of channels for the channel selection from the input conductors for the first opposing pair of the four sides.
18. The circuit of claim 17, further comprising:
a programmable instruction cell providing an output signal, wherein each multiplexer is further configurable to select the output signal for its corresponding output conductor.
19. The circuit of claim 18, wherein each multiplexer is a 4:1 multiplexer.
20. A reconfigurable instruction cell array (RICA), comprising:
a plurality of switch boxes arranged by rows and columns,
each switch box including an output switch fabric configured to route a plurality of channels with regard to four sides of the switch box, each channel having a same width in bits, wherein, for each side of the switch box, each bit for each channel is carried by a corresponding input conductor into the switch box and by a corresponding output conductor out of the switch box, and wherein the input and output conductors for a first opposing pair of the sides are arranged by the channels and wherein the input and output conductors for a remaining opposing pair of the sides are arranged by the bits,
each switch box including a programmable instruction cell operable to provide an output signal responsive to processing of the bits carried by selected ones of the input conductors for the row and column sides, and
wherein the output switch fabric includes a plurality of channel switching circuits corresponding to the plurality of channels, and wherein each channel switching circuit includes a plurality of multiplexers corresponding to each side, and wherein each side's corresponding multiplexers in each channel switching circuit are configured to drive the output conductors for the channel switching circuit's corresponding channel with a selected channel from the input conductors for the remaining sides.
21. The RICA of claim 20, wherein each of the multiplexers comprises a 4:1 multiplexer.
22. The RICA of claim 20, wherein a subset of the switch boxes comprise a registered domain and a remaining portion of the switch boxes comprise an unregistered domain.
23. The RICA of claim 22, wherein the output switch fabrics in the switch boxes in the registered domain are configured to route only within the registered domain.
24. A circuit, comprising:
a switch fabric configured to have a footprint on a semiconductor surface, the footprint having a plurality of sides;
a plurality of input conductors configured to conduct a plurality of channels into the switch fabric with regard to each side of the footprint;
a plurality of output conductors configured to conduct the plurality of channels out of the switch fabric with regard to each side of the footprint, wherein the input and output conductors for a first opposing pair of the sides are arranged in first tracks corresponding to the channels, and wherein the switch fabric includes a plurality of channel switching circuits corresponding to the channels and arranged in the footprint such that each first track spans the corresponding channel switching circuit, and wherein the input and output conductors for a second opposing pair of sides for the footprint are arranged in second tracks such that each second track spans across all the channel switching circuits; and
wherein each channel switching circuit includes a means for driving the output conductors for the corresponding channel by selecting from the plurality of channels conducted on the input conductors.
25. The circuit of claim 24, wherein the means comprises a plurality of interleaved multiplexers. |
SWITCHING FABRIC FOR EMBEDDED RECONFIGURABLE COMPUTING
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Nonprovisional Application No. 13/781,755, filed on March 1, 2013, which is herein incorporated by reference in its entirety.
TECHNICAL FIELD
[0002] This application relates to reconfigurable computing, and more particularly to a switching fabric for reconfigurable computing.
BACKGROUND
[0003] Although processor speeds have been progressively increased, the need for increased computing power remains unabated. For example, smart phones now burden their processors with a bewildering variety of tasks. But a single-core processor can only accommodate so many instructions at a given time. Thus, it is now common to provide multi-core or multi-threaded processors that can process sets of instructions in parallel. But such instruction-based architectures must always battle the limits imposed by die space, power consumption, and complexity with regard to increasing the instruction processing time.
[0004] As compared to the use of a programmable processing core, there are many algorithms that can be more efficiently processed in dedicated hardware. For example, image processing involves substantial parallelism and processing of pixels in groups through a pipeline of processing steps. If the algorithm is then mapped to hardware, the implementation takes advantage of this symmetry and parallelism. But designing dedicated hardware is expensive and also cumbersome in that if the algorithm is modified, the dedicated hardware must be redesigned.
[0005] To provide an efficient compromise between instruction-based architectures and dedicated hardware approaches, a reconfigurable instruction cell array (RICA) architecture has been developed. Figure 1A illustrates an example RICA system 50 having a reconfigurable core 1. In RICA 50, a plurality of instruction cells 2 such as adders (ADD), multipliers (MUL), registers (REG), logic operation shifters (SHIFT), dividers (DIV), data comparators (COMP), logic gates (LOGIC), and logic jump cells (JUMP) are interconnected through a programmable switching fabric 4. The configuration of instruction cells 2 with regard to their logical function or instruction they implement can be reprogrammed every clock cycle as necessary to implement a given algorithm or function. Switching fabric 4 would be reprogrammed accordingly as well. Instruction cells 2 include memory interface cells 12 that interface data for instruction cells 2 as retrieved or loaded into a data memory 8. The resulting processing by instruction cells 2 occurs according to configuration instructions 10 obtained from a configuration RAM 6. A decode module 11 decodes instructions 10 to not only get the configuration data for instruction cells 2 but also for switching fabric 4. RICA 50 interfaces with external systems through I/O ports 16 and specialized instruction cell registers 14. Additional features shown in Figure 1A are described in U.S. Patent Publication No. 2010/0122105, filed April 28, 2006, the contents of which are hereby incorporated by reference in their entirety.
[0006] Note the advantages of a RICA: an algorithm such as image processing that involves processing multiple pixels through a pipelined processing scheme can be mapped to instruction cells in a manner that emulates a dedicated hardware approach. But there is no need to design dedicated hardware; instead one can merely program the cells and switching fabric as necessary.
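As a rough software analogy of this programmability (purely illustrative; the operation set, configuration format, and names below are invented for the example and are not part of the RICA disclosure), per-cycle reconfiguration can be pictured as supplying a new configuration word that selects each cell's operation and the channels the fabric routes to it:

```python
# Toy model of per-cycle reconfiguration. The cell operations and the
# configuration encoding are assumptions made for this sketch only.
OPS = {
    "ADD": lambda a, b: a + b,
    "MUL": lambda a, b: a * b,
    "SHIFT": lambda a, b: a << b,
}

def run_cycle(config, channel_values):
    """Apply one configuration: each entry names a cell's operation and
    the two input channels the switching fabric routes to that cell."""
    return {cell: OPS[op](channel_values[src_a], channel_values[src_b])
            for cell, (op, src_a, src_b) in config.items()}

# "Reprogramming" is simply a different configuration on the next cycle.
cycle0 = run_cycle({"cell0": ("ADD", "ch0", "ch1")}, {"ch0": 3, "ch1": 4})
cycle1 = run_cycle({"cell0": ("MUL", "ch0", "ch1")}, {"ch0": 3, "ch1": 4})
```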
Thus, if an algorithm must be redesigned, there is no need for hardware redesign; instead a user may merely change the programming as necessary. This is quite advantageous over traditional instruction-based computing approaches.
[0007] Although a RICA thus offers robust advantages, challenges remain in its implementations. For example, it is conventional to arrange the instruction cells in a reconfigurable array by rows and columns. Each instruction cell, any associated register, and the associated input and output switching fabric for the instruction cell may be considered to reside within a switching box. Figure 1B shows an example array of switch boxes arranged in rows and columns. A datapath formed between selected switch boxes is carried on selected channels from a plurality of channels. The channels are also arranged in rows and columns matching the rows and columns for the switch boxes. Each channel has a certain width in bits. The row directions may be considered to run east and west whereas the column directions run north and south. A datapath beginning in an instruction cell in an initial switch box 100 routes from initial switch box 100 on a channel 101 in an east row direction. The routing for the datapath from subsequent switch boxes is in the appropriate east/west row direction or north/south column direction such that a final switch box 105 at some selected row and column position is reached. In this example data path, two instruction cells are configured as arithmetic logic units (ALUs) 110. The instruction cells for the remaining switch boxes are not shown for illustration clarity. Note that each switch box must then accommodate two switching matrices or fabrics: an input switching fabric to select for channel inputs to its instruction cell and also an output switching fabric to select for the channel outputs from the switch box. This disclosure focuses on the output switching fabric.
[0008] The number of channels for a RICA is arbitrary - e.g., suppose there are 20 channels, each 8 bits wide. The output switch fabric for any given direction for a switch box could then use 20 * 8 = 160 multiplexers to drive the 160 bits in the 20 channels. For example, initial switch box 100 would include 160 multiplexers to drive the 20 channels in the east row direction in such an embodiment. An example output switch fabric 150 is shown in Figure 1C. Switch fabric 150 is configured to switch the channels with regard to north, south, east, and west directions. With regard to each direction, switch fabric 150 receives the channels on input conductors. Similarly, switch fabric 150 drives the channels in each direction on corresponding output conductors. As known in the integrated circuit layout arts, the routing of the channels occurs in tracks in corresponding metal layers. For example, the south input conductors for the channels are arranged in a track 171 that becomes the track for the north output conductors for the channels. Similar tracks cross switch fabric 150 for the north-to-south, east-to-west, and west-to-east routing. The channels are driven out of each side of switch fabric 150 on the output conductors by corresponding multiplexers.
[0009] Although a "channel" is a signal that is distinct from the conductors on which it is carried, it is convenient to simply refer to a channel carried on corresponding input conductors as an "input channel." Similarly, a channel carried on corresponding output conductors is an "output channel."
For example, a south switching circuit 155 includes the multiplexers to drive the south output channels. Similarly, an east switching circuit 160 includes the multiplexers to drive the east output channels, a west switching circuit 165 includes the multiplexers to drive the west output channels, and a north switching circuit 170 includes the multiplexers to drive the north output channels.
[0010] Referring again to Figure 1B, the output channels for a given switch box's output switch fabric become the input channels for a neighboring switch box's output switch fabric. For example, channel 101 in Figure 1B is the east output channel for initial switch box 100 whereas channel 101 is the west input channel for neighboring switch box 115.
[0011] By grouping all the output multiplexers in corresponding switching circuits, output switching fabric 150 of Figure 1C suffers from a large degree of bus turning. In that regard, as known in the routing arts, the row and column routing is typically organized in corresponding tracks. With regard to a switching fabric, the track for input conductors in a given direction becomes the track for the output conductors in the opposing direction. Such tracking greatly simplifies the row and column routing. For example, a track 172 for the west input channels spans across the die space for north switching circuit 170 and east switching circuit 160. Track 172 does not run across the die space dedicated to south switching circuit 155. Because channel routing for the north and south directions cannot short to the channel routing for the east and west directions, the row and column routing occurs in dedicated metal layers. For example, a first metal layer (or layers) may be dedicated to the east/west row routing whereas a second metal layer (or layers) would carry the north/south column routing.
[0012] The west input channels must thus be "bus turned" in a different metal layer to be received at the multiplexers in south switching circuit 155. The west input channels could not route directly through the first metal layer to couple to south switching circuit 155 since they would then short to the east input channels in their track to south switching circuit 155. Analogous bus turning must occur for the other switching circuits. For example, the south input channels require bus turning to be received at east switching circuit 160. Such bus turning wastes die space, demands excessive power consumption, and leads to timing delays.
[0013] The channel switching for switch fabric 150 is conducted with regard to the north, south, west, and east sides of its footprint on its semiconductor substrate surface. With regard to any given footprint side, the corresponding switching circuit can select from the three remaining sides with regard to the input channel selection. For example, the multiplexers in south switching circuit 155 may select from the north input channels, the east input channels, and the west input channels. But south switching circuit 155 cannot select from the south input channels. Similarly, east switching circuit 160 may select from the input channels for the north, south, and west footprint sides. Such a restriction to the three remaining sides for the outputs from any given switch fabric footprint side is conventional in that it leads to considerable routing complexity reduction.
[0014] Much study has thus been expended for various switch fabric architectures that follow such a channel selection from the three remaining sides for any given switch fabric side.
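This conventional selection rule is simple enough to state in a few lines of code. The sketch below is an illustrative model only; the side names and data layout are assumptions, not the disclosure's implementation:

```python
# Illustrative model of the conventional rule: the output for a given
# fabric side selects only from the inputs of the three remaining sides.
SIDES = ("north", "south", "east", "west")

def candidate_sides(output_side):
    """Input sides that a 3:1 multiplexer on output_side may select from."""
    return tuple(side for side in SIDES if side != output_side)

# The south output, for example, selects among north, east, and west inputs.
assert candidate_sides("south") == ("north", "east", "west")
```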
Figure 2A shows one type of switch fabric architecture known as a disjoint matrix. In this example, there are five rows and five columns, each numbered from 0 to 4. Each one of the rows (or each one of the columns) may be thought of as representing a channel for a given data word. Thus, there are five data channels in this system. For illustration clarity, the input and output channels are not shown separately. Instead, a given channel such as west channel 4 represents both the west input channel 4 and the west output channel 4. In a disjoint matrix, a given channel is restricted to be routed into the same channel. For example, the data word for channel 0 carried on its west input can be switched to propagate in the north output for channel 0 but cannot be switched to propagate in the north output for the remaining channels 1 through 4. Each channel output for a switch fabric side facing a given cardinal direction (north, south, east, or west) can thus be selected by a 3:1 multiplexer (not illustrated) that selects from the remaining sides facing the remaining cardinal directions.
[0015] Note the advantage of the disjoint matrix: the 3:1 multiplexer can be located at the intersection of the row and column for a given channel. The inputs to the 3:1 multiplexer are right there at the intersection - there need be no bus turning or spanning across other channels to get the inputs. Such a disjoint switching fabric thus greatly simplifies the layout design. But this disjoint simplification comes at a considerable restriction in routing flexibility: a disjoint matrix provides no means for selecting from other channels with regard to any given channel output.
[0016] To provide a more flexible routing ability, a universal switch matrix and a Wilton switch matrix have been developed as shown in Figure 2B and Figure 2C, respectively. In these switch matrices or fabrics, the selection of the output signals for a channel in a given cardinal direction is not restricted to the same channel. For example, in the universal switch matrix, the output in channel 4 in the north direction can be selected from channel 0 west input, channel 4 south input, and channel 4 east input. Similarly, in the Wilton switch matrix, the output in channel 4 north can be selected from the inputs for channel 1 west, channel 0 east, and channel 4 south. But just like the disjoint matrix, each output in a given direction for a universal or Wilton switch matrix may be provided by a 3:1 multiplexer that selects from channel inputs from the remaining directions.
[0017] Regardless of the type of matrix, a given channel output in the column dimension is either headed in the north (N) direction or the south (S) direction. Similarly, a given channel output in the row dimension is either headed in the west (W) direction or the east (E) direction. The input and output channels follow the same track regardless of the type of switching matrix. For example, the track for input channel 4 becomes the track for the output channel 4 in all the directions. In that regard, it is always the case (regardless of whether the matrix is disjoint, universal, or Wilton) that for a given channel in a given output direction, the same channel can be routed as an input with regard to the opposing cardinal direction. This same-channel routing occurs for both the columns and the rows. Thus, a north input for a given channel can always be routed in that channel's south output.
Conversely, a south input for a given channel can always be routed into that channel's north output. The analogous routing is true for the east and west outputs with regard to the west and east inputs. The possibility of selecting for another channel thus only exists when switching from the row dimension to the column dimension or vice versa. One of the inputs to the 3:1 multiplexing is thus always determined by the channel number and the opposite cardinal direction to the output.
[0018] Although universal and Wilton switch matrices have routing flexibility as compared to a disjoint approach, that flexibility comes at the cost of routing complication. For example, the ability to select for channel 0 west input with regard to channel 4 north output in the universal switch matrix example discussed above means that the channel 0 west input to the switching means (such as a 3:1 multiplexer) must span at least the intervening row channels 1, 2, and 3. The wire or lead for such a span must be electrically isolated from the remaining row channel routing as discussed above with regard to bus turning. Thus, the spanning wire such as from channel 0 west input to the multiplexer for the channel 4 north output in the universal matrix must then be routed on a different metal layer from the normal row tracking and coupled to it by vias. This bus turning complicates the layout and design considerably.
[0019] Accordingly, there is a need in the art for a switching fabric architecture that can provide routing flexibility yet simplify the associated routing complexities.
SUMMARY
[0020] A switch fabric is provided that includes a plurality of channel switching circuits for routing into a corresponding plurality of channels. The switch fabric is formed from devices integrated into a semiconductor substrate. The switch fabric thus occupies a footprint on a surface of the semiconductor substrate. With regard to each side of the footprint, the switch fabric receives the plurality of channels on corresponding input conductors and outputs the plurality of channels on corresponding output conductors.
[0021] The input and output conductors are arranged in tracks in metal layers adjacent the semiconductor substrate such that the track for a given input conductor becomes the track for the corresponding output conductor. The input conductors and output conductors for a first opposing pair of sides for the footprint are arranged in a plurality of first tracks corresponding to the plurality of channels such that each first track carries the input and output conductors for the corresponding channel.
[0022] Each channel switching circuit is configured to route its corresponding channel into the output conductors for each side of the footprint. The channel switching circuits are arranged with respect to the footprint such that each first track spans across the corresponding channel switching circuit with regard to the first opposing pair of sides for the footprint.
But the input and output conductors for a second opposing pair of sides for the footprint are arranged in second tracks that span across all the channel switching circuits with regard to the second opposing pair of sides.
[0023] Note the advantages of such a switch fabric layout with regard to each channel switching circuit driving its corresponding channel into the corresponding output conductors for the first opposing pair of sides - since the input conductors in the second tracks carry all the channels across each channel switching circuit, a channel switching circuit may readily select from the channels through appropriate vias to the desired channel's input conductors in the second track. No bus turning is thus necessary for such a selection. Moreover, because the channel switching circuits are arranged across the footprint according to the first tracks, there is no wasted die space in the footprint. The resulting switch fabric is thus advantageously dense yet greatly reduces the channel routing complexity.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] Figure 1A is a block diagram for an example reconfigurable instruction cell array (RICA).
[0025] Figure 1B is a block diagram for an array of switch boxes in the RICA of Figure 1A.
[0026] Figure 1C is a block diagram for an example output switch fabric for a switch box of Figure 1B.
[0027] Figure 2A illustrates the row and column channel routing for a disjoint switch matrix.
[0028] Figure 2B illustrates the row and column channel routing for a universal switch matrix.
[0029] Figure 2C illustrates the row and column channel routing for a Wilton switch matrix.
[0030] Figure 3A is a block diagram for an output switch fabric including a plurality of channel switching circuits.
[0031] Figure 3B illustrates the channel track layout for an example output switch fabric.
[0032] Figure 3C illustrates the semiconductor substrate footprints occupied by channel switching circuits that are not optimized to achieve density and routing complexity reduction.
[0033] Figure 4A illustrates an example multiplexer interleaving for an output switch fabric.
[0034] Figure 4B illustrates the tiling of a channel switch circuit in an output switch fabric.
[0035] Figure 5 is a block diagram of an example switch box.
[0036] Figure 6 illustrates the channel routing for an example output switch fabric.
[0037] Figure 7 is a cross-sectional view of a channel switch circuit to show the channel track layout with regard to the semiconductor surface footprint for the channel switch circuit.
[0038] Figure 8 is a flowchart for an example method of channel routing.
[0039] Embodiments of the present invention and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures.
DETAILED DESCRIPTION
[0040] To meet this need in the art, an improved switch fabric architecture is disclosed. This architecture will be described with regard to a RICA but it will be appreciated that it is widely applicable to other systems and circuits that switch channels such as between rows and columns. To eliminate the need for bus turning, the disclosed output switch fabric "brings the multiplexers to the wires" as will be further described herein.
[0041] In contrast, consider again the grouped multiplexers in the switching circuits of Figure 1C.
Because the multiplexers are grouped according to the channel direction they drive, the "wires are brought to the multiplexers." In that regard, each channel will typically have some width in bits, although the switching fabric principles disclosed herein include an application to channels that are just one bit wide. Given the channel width, each channel may be routed to the disclosed switching fabric on an input bus comprising corresponding input conductors. The input bus would have the same number of input conductors (which may be denoted as wires) as the channel width in bits. For example, if the channels are eight bits wide, each input bus would include eight input conductors. An analogous output bus comprised of output conductors would be used to conduct each channel from the output switching fabric in the output directions.
[0042] The bus turning discussed above for output switching fabric 150 requires the corresponding input bus to span over the input busses for other channels to the multiplexers. For example, should south switching circuit 155 select from a west input channel, the input bus for the selected west input channel must span over the south input channels to the multiplexers in south switching circuit 155. In that sense, the wires for the west input channel are "brought to the multiplexers."
[0043] The improved output switch fabrics eliminate the need for such bus turning in that the multiplexers are located corresponding to the bus intersections. In that regard, an input channel bus travels in tracks for the row and column directions. The track for a given input bus becomes the track for the corresponding output bus. To obtain an advantageous density increase and routing complexity reduction, the multiplexers are interleaved rather than grouped. This interleaving will be described with regard to row and column routing for a plurality of channels. In that regard, what is a "row" versus what is a "column" is simply a matter of perspective. Thus, the terms row and column are used herein without loss of generality. The input and output buses for the row directions are arranged in row tracks whereas the input and output buses for the column direction are arranged in column tracks. Whereas the output switch fabric comprises devices integrated into a semiconductor substrate, the input and output busses travel in tracks in corresponding metal layers adjacent an active surface for the semiconductor substrate. The resulting arrangement of tracks with regard to the semiconductor substrate surface is further discussed below with regard to Figure 7.
[0044] As opposed to grouping all the multiplexers for any given cardinal direction in one location as discussed with regard to switch fabric 150, the advantageous output switch fabrics disclosed herein have interleaved multiplexers. In that regard, it is conceptually illustrative to suppose that one may break up the switching circuits of output switch fabric 150 and interleave the corresponding multiplexers to form an output switch fabric as shown in Figure 3A. For illustration clarity, the output switch fabric of Figure 3A is configured to switch with regard to just four channels: A, B, C, and D.
[0045] As known in the integrated circuit arts, a given system or component will occupy a certain amount of die space. This die space is the semiconductor surface area occupied by a given circuit and may also be denoted as a footprint.
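Before turning to the footprint details, the grouped-versus-interleaved distinction can be sketched as a toy placement model. Everything here (the names, the dictionary layout) is an assumption made for illustration; it models only the bookkeeping, not the physical layout rules of the disclosure:

```python
# Toy placement model contrasting grouped and interleaved multiplexers.
# Names and data layout are assumptions of this sketch.
CHANNELS = ("A", "B", "C", "D")
DIRECTIONS = ("N", "S", "E", "W")

# Grouped (as in switch fabric 150): all multiplexers for one output
# direction sit together, so other channels must be bus-turned to them.
grouped = {d: [f"mux_{d}_{c}" for c in CHANNELS] for d in DIRECTIONS}

# Interleaved (as in the disclosed fabrics): each channel switching
# circuit holds the N, S, E, and W multiplexers for its own channel, so
# the tracks that span that circuit carry every input it needs.
interleaved = {c: [f"mux_{d}_{c}" for d in DIRECTIONS] for c in CHANNELS}
```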
Since the multiplexer interleaving principles herein relate to the physical placement or layout of circuits and corresponding channel routing, the output switch fabric of Figure 3A is represented by its footprint 300. But since the multiplexer interleaving principles disclosed herein also relate to circuit function, footprint 300 will also be referred to as output switch fabric 300. In other words, as disclosed herein, circuits and their footprints will be referred to by the same element number. Each channel has a corresponding switching circuit that drives the output busses for that channel. Thus, a channel A switching circuit 305 is configured to drive the output busses for channel A, a channel B switching circuit 310 is configured to drive the output busses for channel B, and a channel C switching circuit 315 is configured to drive the output busses for channel C. Similarly, a channel D switching circuit 320 is configured to drive the output busses for channel D.
[0046] Each channel switching circuit occupies a certain amount of die space. The row and column busses for the channels are routed in tracks in metal layers adjacent to a surface of the semiconductor die. For example, the track for a west input bus 325 for channel A spans across a width of channel A switching circuit 305. The same track eventually becomes the track for an east output bus 330 for channel A. Similarly, an east input bus 335 for channel A spans across channel A switching circuit 305 such that the same track becomes the track for a west output bus 340 for channel A. The remaining row busses are spaced apart such that their tracks span their respective channel switching circuit. For example, the track for a west input bus 345 for channel D spans across channel D switching circuit 320 and becomes the track for an east output bus 350 for channel D. In that regard, each channel switching circuit has a certain height H in the column direction with regard to the die space or footprint it occupies. The tracks for the channels in the row directions are thus spaced apart by at least the channel switching circuit height H.
[0047] In contrast to the row channel spacing, the tracks for the column busses are arranged such that each column bus' track spans across all the channel switching circuits. Since each channel switching circuit is thus spanned by all the channels in the column direction, the channel's tracks are not shown separated in the north and south directions. For example, a track 355 symbolizes the tracks for the north input busses for all the channels. Track 355 spans across all the channel switching circuits and exits as track 355 for the south output busses for all the channels. The tracks for the south input and north output busses span footprint 300 analogously.
[0048] Each channel switching circuit includes the corresponding multiplexers to drive the output busses for the corresponding channel. These multiplexers may be denoted by the cardinal direction they drive. Thus, whereas the south multiplexers in south switching circuit 155 are grouped together, the corresponding south multiplexers driving the south output conductors for switch fabric 300 would be dispersed by channels and placed in the respective channel switching circuit accordingly. The corresponding multiplexers from east switching circuit 160, west switching circuit 165, and north switching circuit 170 would also be dispersed by channels and placed in the respective channel switching circuit accordingly.
In this fashion, each channel switching circuit includes an interleaved selection of north, south, east, and west multiplexers to drive the corresponding channel's output busses in the corresponding directions.
[0049] It will be appreciated that the designations of directions north, south, east, and west merely refer to the corresponding switch fabric footprint side and are not used to designate directions as defined by a compass. In that regard, footprint 300 may be more generally considered to have four sides. Each side receives input conductors for the channels. The output conductors from the four sides are driven by the corresponding channel switching circuit. For example, channel A switching circuit 305 drives the output conductors for all four sides with its corresponding channel as selected from the input conductors. With regard to a first opposing pair of sides (in this example, east and west), the input and output conductors are arranged by channels in first tracks formed in the appropriate metal layer (or layers). These first tracks span the width of the corresponding channel switching circuit. In contrast, the input and output conductors for a second opposing pair of sides (in this example, north and south) are arranged by bits in second tracks formed in their own appropriate metal layer (or layers). The second tracks span across all the channel switching circuits.
[0050] Note that each channel will typically be wider than just one bit. Each bit for a given channel is carried into the output switching fabric by a corresponding row or column input conductor. Similarly, each bit for a given channel is carried out of the output switching fabric by a corresponding row or column output conductor. For switch fabric 300, the input and output conductors in the column dimension (both north and south) are arranged by bits with regard to any given channel. Conversely, the input and output conductors in the row dimension (east and west) are arranged by channels. For example, suppose there are 3 channels numbered 0 to 2, each channel having a word width of two bits. If the rows are arranged to be consecutive in bits for any given channel and the word width is 2 bits, the resulting row and channel routing is as shown in Figure 3B. For illustration clarity, Figure 3B shows only the south input conductors and the west output conductors. The west output conductors form rows whereas the south input conductors form columns. In that regard, what is a "row" versus what is a "column" is simply a matter of perspective such that an assumption that the column and row conductors are arranged in this fashion is not limiting. The point is that one dimension is arranged by channels and the other is arranged by bits.
[0051] In a disjoint switching matrix, the columns and rows need not be arranged in this fashion. But suppose one needs more routing flexibility such as discussed earlier with regard to a universal or a Wilton switching matrix. If the columns are also arranged by channels just as with the rows, the resulting channel switching circuits must be spaced apart accordingly. For example, Figure 3C shows an example arrangement of channel switching circuits in which the columns and rows are both arranged by channels. For illustration clarity, only two channels are shown in the row and column dimensions: channels A and B. A channel A switching circuit 360 occupies a die space that is spanned by the tracks for channel A in both the column and row dimensions.
Similarly, a channel B switching circuit 370 occupies a die space that is spanned by the tracks for channel B in both the column and row dimensions.
[0052] But because both the rows and columns are arranged by channels, the channel B switching circuit footprint 370 occupies a die space that must be spaced apart in the row dimension with respect to the channel A switching circuit footprint 360 such that die spaces 375 and 380 are empty. Such spacing occurs because a channel switching circuit has no function with regard to driving other channels. Instead, a channel switching circuit can only drive its own channel. Thus, the multiplexers for driving a channel switching circuit's channel are located in the tracks for that channel. The resulting unoccupied die space is of course expensive and diminishes circuit density. In contrast, output switch fabric 300 has no such wasted die space and is thus advantageously dense. Since the channels are arranged by bits in the column direction, the input and output conductors for a given channel span across the width of each channel switching circuit. This results in an advantageous multiplexer tiling or interleaving as will be further discussed herein. In this interleaving, there need be no unused semiconductor area within the switch fabric footprint.
[0053] As discussed earlier, Figure 3B illustrates how the column busses are arranged by bits whereas the row busses are arranged by channels. Routing from the illustrated south input conductors into the west output conductors is thus greatly simplified. For example, a west multiplexer 385 configured to drive bit 0 for west output channel 0 may occupy a die space 385 that is spanned by the track for the south input bit 0 for all three channels. A selected channel (for example channel 2) may thus be readily coupled to west multiplexer 385 merely through a via extending from the input conductor for the south input bit 0 for channel 2 to die space 385. There is thus no complication of channel spanning or bus turning because the wires for the south input channels for bit 0 are all "brought" to west multiplexer 385 in the corresponding track. An analogous west multiplexer 390 for bit 1 occupies die space 390 that is spanned by a bit 1 track for the south input channels. Thus, arranging one of the dimensions by channels and the other by bits as discussed with regard to Figure 3B reduces routing complexity in non-disjoint routing schemes yet achieves the density discussed with regard to Figure 3A.
[0054] But note that this span reduction only occurs for the column inputs (or whatever dimension is arranged into bits instead of channels). For example, the north multiplexers that drive channel A north output for channel A switching circuit 305 of Figure 3A are spanned only by the track for the input and output conductors for channel A with regard to the row dimension. Should a datapath routing demand that one of the other channels such as the west input for channel B be routed into channel A north output, then a bus turning is required as represented by arrow 331 extending between the channel A and B row tracks in Figure 3A. But this bus turning only occurs for the routing of a channel in the row dimension into a different channel in the column dimension. In contrast, no such bus turning is required for the routing of any of the channels in the column dimension to any of the channels in the row dimension.
[0055] The switching fabric disclosed herein exploits the channel vs. bit routing architecture in one embodiment by limiting, with regard to any routing of east or west input channels into north or south output channels, that the input channel span for the row selection be limited to no more than one channel. In other words, for the routing into a north or south channel i, where i is some arbitrary channel number, the input channel in the row dimension can only be selected from input channel i-1, input channel i, and input channel i+1. The channel span for routing column input channels into row output channels need not be limited to just one channel since each channel switching circuit is spanned by the tracks for all the column input channels. For example, if the routing architecture of Figure 3B is expanded to accommodate 10 channels, spanning all 10 channels with regard to the column input channels into any row output channel would still be relatively simple with regard to routing and layout demands.
[0056] Thus, in one embodiment, the span for channel selection from the row dimension to the column dimension could be limited to one channel but no limits be imposed with regard to the channel span for channel selection from the column dimension to the row dimension. However, through testing and implementation, it has been shown that such unlimited channel span for the column inputs does not add in a significant fashion to performance. Thus, the span for channel selection from the column dimension to the row dimension for the improved output switch fabric disclosed herein is limited in some embodiments to a value that is less than the channel number. For example, in a 10 channel embodiment, the span limit for channel switching from the column to the row dimension would be five channels.
[0057] Output switch fabric 300 of Figure 3A is illustrated conceptually in the sense that the interleaved north, south, east, and west multiplexers are not shown within each channel switching circuit. Figure 4A illustrates an example multiplexer interleaving for an output switch fabric 400. For illustration purposes, output switch fabric 400 is configured to switch only two channels: C1 and C2. Each channel is three bits wide, ranging from a bit B0 to a bit B2. Since each channel is three bits wide, the corresponding channel switching circuits have three north (N) multiplexers, three south (S) multiplexers, three east (E) multiplexers, and three west (W) multiplexers. A C1 channel switching circuit 405 is configured to drive the C1 output busses in the four cardinal directions. Similarly, a C2 channel switching circuit 410 is configured to drive the C2 output busses in the four cardinal directions. The east multiplexers in each channel switching circuit are interleaved with the north multiplexers to form a first row. Similarly, the west multiplexers in each channel switching circuit are interleaved with the south multiplexers to form a second row below the first row. These rows correspond to tracks for the corresponding row busses. The die space for the first row formed by the north and east multiplexers in a given channel switching circuit is spanned by the track for the west input bus and the east output bus for the corresponding channel. For example, the die space occupied by the north and east multiplexers in C1 channel switching circuit 405 is spanned by the track in the corresponding metal layer (or layers) for the C1 west input bus and the C1 east output bus.
Since each bus is three bits wide, the track for these busses is such that it accommodates three separate conductors or wires. A similar track spans across the die space occupied by the second row of west and south multiplexers for C1 channel switching circuit 405 to accommodate the conductors for the east input bus and the west output bus for channel C1. The interleaving order for switch fabric 400 may be re-arranged in alternative embodiments.
[0058] As used herein, "track" refers to the space in any given metal layer dedicated to a certain set of conductors. For example, a track for the row conductors for a given channel spans the corresponding channel switching circuit. For channel C1 in the row direction, its track spans across channel switching circuit 405. But in reference to a particular row direction such as east input and west output, the overall track for the row conductors for channel C1 is organized into two smaller tracks: a track for the west input and east output conductors, and a track for the east input and west output conductors. But what is common to these tracks is that they define the space in a given metal layer (or metal layers) dedicated to a particular set of conductors.
[0059] Regardless of the particular interleaving order, each channel switching circuit may be considered to form a row of tiles such as shown in Figure 4B for a channel A switching circuit 415. In this embodiment, channel A is N bits wide, ranging from a bit B0 to a bit BN, where N is an arbitrary positive integer. There are thus N tiles corresponding to the N bits. Each tile would include the four multiplexers for the corresponding bit. For example, a first tile B0 would include a north multiplexer to drive a B0 north output conductor for channel A, a south multiplexer to drive a B0 south output conductor for channel A, a west multiplexer to drive a B0 west output conductor for channel A, and an east multiplexer to drive a B0 east output conductor for channel A.
[0060] Referring again to switch fabric 400, the resulting tiles are stacked into columns according to the bits. Because there are three bits, each channel switching circuit includes three tiles of multiplexers. Channel C1 switching circuit 405 includes a tile C1-B0, a tile C1-B1, and a tile C1-B2 corresponding to the 3 bits, respectively. The north, south, east, and west designations for the multiplexers are abbreviated as N, S, E, and W, respectively. For example, a C1-B0 tile for channel C1 switching circuit 405 includes N multiplexer C1-B0, E multiplexer C1-B0, W multiplexer C1-B0, and S multiplexer C1-B0. Similarly, a C2-B0 tile for channel C2 switching circuit 410 includes N multiplexer C2-B0, E multiplexer C2-B0, W multiplexer C2-B0, and S multiplexer C2-B0. The bit track that accommodates B0 for all the channels passes above the die space for tiles C1-B0 and C2-B0 in the column dimension. Since there are just two channels in this embodiment, there are just two conductors for each bit in the north and south directions. Just like a channel track in the row direction, a bit track in the column direction can actually comprise two individual tracks. For example, a track for bit 0 for the south input and north output for all the channels spans the W and N multiplexers in tiles C1-B0 and C2-B0 within the corresponding metal layer (or layers). This track is sufficiently wide to accommodate the corresponding pair of conductors.
A similar track passes over the E and S multiplexers in tiles C1-B0 and C2-B0 to accommodate the north input conductors for bits C1-B0 and C2-B0 and the south output conductors for these same bits. In general, an ith tile in each channel switching circuit is spanned by the column tracks for the ith bit, where i represents an arbitrary bit for the channels.
[0061] Since each channel switching circuit drives the output conductors for its corresponding channel, the output conductors are defined with regard to each channel switching circuit. For example, an output conductor for C1-B0 south originates in S multiplexer C1-B0 as indicated by dashed line 420.
[0062] Referring again to Figure 1B, each switch box may be considered to have two row sides (east and west) and two column sides (north and south). The resulting array of switch boxes is also arranged by corresponding rows and columns. Each switch box includes an output switch fabric to route the channels in the row and column directions as discussed herein. In that regard, a switch fabric is configured to route the channels with regard to the four sides of the switch box. Such a routing is equivalent to the routing with regard to the four sides of the output switch fabric's footprint. Since each switch box includes an instruction cell, the output switch fabric for each switch box has two options: it may route an input channel that is driven by some neighboring switch box's output switch fabric or it may route its own instruction cell output signal. Referring again to Figure 4A, output switch fabric 400 may be implemented within a corresponding switch box of Figure 1B. Each multiplexer may thus be a 4:1 multiplexer. Three of the inputs are the channel inputs discussed above. The fourth input to each 4:1 multiplexer is the instruction cell output from the corresponding switch box's instruction cell.
[0063] To further simplify and optimize the design of the switching fabric, the channels are segmented into a set of registered channels and a set of channels having no storage capability. In one embodiment, the output switching fabric enforces a register-domain separation in that registered channels can only route to other registered channels. Such a register-domain separation increases the routing ability within the respective domains.
[0064] Yet another optimization occurs by requiring the switching fabric to ensure channel reachability: in other words, that with a sufficient number of hops from channel to channel, all channels are reachable. Given this requirement of reachability, the switching fabric is optimized to minimize the number of channel hops necessary. In addition, there should be at least one cyclic path per output channel that routes back to the same channel after four hops. It will be appreciated that for a given channel number and word width, a variety of switching fabrics could be implemented to satisfy the one-channel span for switching from channels in the row dimension into channels in the column dimension, a channel span of less than the total number of channels for switching channels in the column dimension into channels in the row dimension, register-domain separation, minimized hops with reachability, and cyclic path requirements of these embodiments.
[0065] With regard to the channel input selections for any given channel output, a 3:1 multiplexer is sufficient as discussed above. But in a RICA embodiment such as discussed with regard to Figure 1B, there is also the need to select for the instruction cell output.
Thus, each output conductor in a RICA embodiment for each switch fabric footprint side may be driven by a 4:1 multiplexer that selects from the inputs at the three remaining footprint sides and an instruction cell output signal. The resulting channel switching for an example switch box (SBOX) 500 having 5 channels per side (the east, west, north, and south directions) is shown in Figure 5. In this embodiment, each channel is a byte wide (8 bits). Thus, there are 5 channels/side * 4 sides (corresponding to the east, west, north, and south directions) * 1 byte/channel = 20 bytes to select from coming into SBOX 500 as well as 20 bytes to select from with respect to propagation from SBOX 500. In this embodiment, an instruction cell 505 associated with switch box 500 processes 4 bytes simultaneously during each clock cycle (its operands thereby being four 8-bit words). Instruction cell 505 is thus shown receiving a 32-bit wide input to produce a 32-bit wide instruction cell output. The selection of this 32-bit wide input is made with regard to channel inputs on all sides of SBOX 500. For example, SBOX 500 may include thirty-two 16:1 multiplexers 510 for this selection. Just as discussed with regard to Figures 2A-2C, for a channel output in a given cardinal direction, there is a 3:1 selection with regard to the remaining cardinal directions. One of the three inputs to a given channel output is the same channel in the opposite cardinal direction. But the remaining two channel inputs selected from the orthogonal directions are selected so as to satisfy the goals and rules discussed above. In addition, there is the possibility of a fourth selection in that an instruction cell output from instruction cell 505 may drive a channel output. Thus, the selection for each channel bit output in SBOX 500 may be accomplished by a 4:1 multiplexer 515. Because each channel output word is a byte wide in this embodiment, each channel output word requires eight 4:1 multiplexers 515.
[0066] The number of 4:1 multiplexers 515 depends upon the number of channels, the channel width, and the number of words processed by instruction cell 505. In the example shown in Figure 5, there are five 8-bit output channels per each side of SBOX 500 that may select from four 8-bit words from instruction cell 505, such that there will be 8 multiplexers per byte * 4 bytes * 5 channels = 160 4:1 multiplexers 515 per cardinal direction (each side of SBOX 500) for such an embodiment. The output switching fabric that is the focus of this disclosure thus concerns these multiplexers and the span for the channel inputs. Each channel input to 4:1 multiplexer 515 is shown as a 32-bit input to correspond to the 4 bytes provided by instruction cell 505. As discussed previously, one channel input (e.g., a channel input 525) to each multiplexer 515 is determined by the channel output. For example, if multiplexers 515 are selecting for a north output for channel number 1, then channel input 525 would correspond to the south input for channel number 1. More generally, channel input 525 is the input in the same channel as the channel output but from the opposite side or cardinal direction. The remaining two channel inputs 530 to multiplexer 515 come from the orthogonal directions. For example, if multiplexer 515 is selecting for a channel output in the north direction, channel inputs 530 would correspond to east and west channel inputs.
[0067] The selection for channel inputs 530 may be implemented so as to satisfy the goals discussed earlier, as the sketch below illustrates.
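Those goals (the one-channel row-to-column span, the bounded column-to-row span, register-domain separation, and the four-hop cyclic path) lend themselves to a mechanical check before any layout work is done. The following Python sketch is offered only as an illustration; the encoding of the mapping, the function names, the example registered set, and the toy hop graph are assumptions rather than details taken from this disclosure:

# Hypothetical checker for the routing goals described above.
ROW_DIRS, COL_DIRS = {"east", "west"}, {"north", "south"}
REGISTERED = {1, 2, 4, 6, 9}              # example registered domain

def span_ok(out_dir, out_ch, in_dir, in_ch, col_to_row_span=5):
    # Span rules: one channel for row inputs into column outputs, and a
    # bounded span (five in the 10-channel example) the other way.
    if out_dir in COL_DIRS and in_dir in ROW_DIRS:
        return abs(in_ch - out_ch) <= 1
    if out_dir in ROW_DIRS and in_dir in COL_DIRS:
        return abs(in_ch - out_ch) <= col_to_row_span
    return in_ch == out_ch                # opposite side: same channel only

def domain_ok(out_ch, in_ch):
    # Register-domain separation: registered maps only to registered.
    return (out_ch in REGISTERED) == (in_ch in REGISTERED)

def has_four_hop_cycle(hops, ch):
    # True if some path of exactly four channel hops returns to ch.
    frontier = {ch}
    for _ in range(4):
        frontier = {nxt for cur in frontier for nxt in hops[cur]}
    return ch in frontier

# Checks mirroring the mapping rules: a north output on channel 7 may take
# the east input of channel 6, but not the east input of channel 1.
assert span_ok("north", 7, "east", 6)
assert not span_ok("north", 7, "east", 1)
assert domain_ok(1, 6) and not domain_ok(1, 3)
assert all(has_four_hop_cycle({0: {1}, 1: {2}, 2: {3}, 3: {0}}, c)
           for c in range(4))             # toy four-channel ring

A candidate mapping that passes predicates of this kind can then be evaluated for hop-count minimization, which is the remaining optimization goal.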
It will be appreciated that numerous channel mappings or selections satisfy these goals. An example channel mapping for a 10 channel embodiment is shown in Figure 6. The mappings for the north, east, south, and west channel outputs are shown separately in Figure 6. For example, the mapping with the heading of "north" lists in the first column the 10 channel outputs in the north direction. As discussed above, an east or west input channel for the north and south output channels can at most span one channel in some embodiments. For example, the north output for channel 7 can select from the east input for channel 6. But the north output for channel 7 cannot be selected from, e.g., the east input for channel 1. The channel number for the south input to any given north output is of course the same as the channel number for that north output.
[0068] As compared to the mapping from east and west channel inputs into north and south channel outputs, the channel switching from north and south channel inputs into east and west channel outputs has a greater channel span, such as five channels. For example, under the heading of "west" in Figure 6 are the west outputs for the 10 channels. The west output for channel 1 can be driven by the south input for channel 6, which demonstrates the five-channel span. However, the full span need not be used in every case; five channels is simply the maximum. For example, the west output for channel 8 can be driven by the south input for channel 9, which is just a one-channel span.
[0069] The columns with a heading of "c" in the Figure 6 mapping indicate the availability of routing an instruction cell output for a channel output. In general, most of the channel outputs can be selected from the instruction cell output (those channel outputs that can be selected from the instruction cell output are designated by an "x" in the corresponding row of the "c" column). But to provide greater routing flexibility, certain channel outputs do not have the ability to select for the instruction cell output but instead can select for an additional channel input. For example, the north output for channel 5 does not have the capability for selecting for the instruction cell output but instead can select for the west input for channel 5.
[0070] As discussed previously, the output switching fabric may also accommodate a register-domain separation such that the channels are divided into a registered domain and a non-registered domain. Each switch box may thus include a set of registers (not illustrated) for storage of corresponding registered channel outputs. For example, each switch box may include or be associated with registers for the south outputs in channels 1, 2, 4, 6, and 9. Conversely, each switch box would not have registers for registering the remaining south outputs in channels 0, 3, 5, 7, and 8 since these remaining south output channels are in the unregistered domain. To increase route-ability within the respective domains (registered vs. non-registered), an input from a registered channel can only be mapped to other registered channels.
[0071] As discussed previously, the output switch fabrics disclosed herein "bring the multiplexers to the wires" as opposed to the grouping of the multiplexers for a given switch fabric footprint side and the resulting use of bus turning. To better appreciate this concept, consider the cross-sectional view looking into the column dimension for a channel switching circuit 700 integrated into a semiconductor substrate 705 as shown in Figure 7.
In this embodiment, there are 3 channels, each 3 bits wide ranging from a bit B0 to a bit B2. Channel switching circuit 700 thus includes 3 tiles B0, B1, and B2 corresponding to the bits for its channel. As known in the semiconductor arts, metal layers are separated from an active surface 710 of substrate 705 by intervening insulating layers. The transistors (not illustrated) implementing the various multiplexers in tiles B0, B1, and B2 are integrated into active surface 710. The tracks for the column input and output conductors for each bit span in a metal layer Mi above the corresponding tiles. Since Figure 7 shows a cross-section along a width W of channel switching circuit 700, each column conductor is seen in cross-section only. In contrast, a row conductor (not illustrated) for the corresponding channel in a metal layer Mj would span the width W for channel switching circuit 700. Since each tile is thus directly traversed by the track for the corresponding bit for all the channels in the column dimension, a row multiplexer (east or west) may be coupled to a column conductor through just a via such as illustrated for tile B1. No bus turning is necessary to accomplish this coupling.
[0072] Figure 8 illustrates a flow chart for a routing method practiced by the improved output switch fabrics disclosed herein. In an initial step 800, a switch fabric having a four-sided footprint on a semiconductor substrate receives a plurality of channels into the switch fabric on corresponding input conductors at each footprint side. The switch fabric is organized into a plurality of channel switching circuits corresponding to the plurality of channels. In a step 805, the switch fabric routes the plurality of channels out of each switch fabric footprint side on corresponding output conductors. With regard to a first opposing pair of sides for the footprint, the input and output conductors are arranged in first tracks corresponding to the channels such that each first track accommodates the input and output conductors for the corresponding channel and spans across the corresponding channel switching circuit. With regard to a second opposing pair of sides for the footprint, the input and output conductors for all the channels are arranged in tracks that span across all the channel switching circuits. In a step 810, for each footprint side, each channel switching circuit drives the output conductors for the corresponding channel by selecting for a channel conducted on the input conductors for the remaining sides of the footprint.
[0073] As those of some skill in this art will by now appreciate, and depending on the particular application at hand, many modifications, substitutions and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely by way of some examples thereof, but rather should be fully commensurate with that of the claims appended hereafter and their functional equivalents.
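Before leaving this embodiment, the per-bit selection of step 810 can be made concrete with a toy model. Everything below (the encoding of sides and channels, the mapping of inputs, and the function name) is a hypothetical illustration, not a structure taken from this disclosure:

# Toy model of step 810: each output conductor selects among the inputs
# arriving on the remaining footprint sides of the switch fabric.

OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

def drive_output(inputs, out_side, out_ch, selection):
    # inputs maps (side, channel) -> bit value on that input conductor.
    # selection is either "through" (same channel, opposite side) or an
    # explicit (side, channel) pair from an orthogonal direction.
    if selection == "through":
        return inputs[(OPPOSITE[out_side], out_ch)]
    return inputs[selection]

inputs = {("south", 1): 1, ("east", 1): 0, ("west", 2): 1}
assert drive_output(inputs, "north", 1, "through") == 1    # pass-through
assert drive_output(inputs, "north", 1, ("east", 1)) == 0  # orthogonal pick

In hardware each such selection is simply one 3:1 or 4:1 multiplexer; the model only makes the select semantics explicit.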
The present invention is generally directed to various structures for analyzing electromigration, and methods of using same. In one illustrative embodiment, the method includes forming a grating structure above a semiconducting substrate, the grating structure being comprised of a plurality of conductive features, forcing an electrical current through at least one of the conductive features until a resistance of the conductive feature increases by a preselected amount, and performing at least one scatterometric measurement of the conductive feature to determine a critical dimension of the conductive feature. In another illustrative embodiment, the method includes forming a plurality of grating structures above a semiconducting substrate, each of the grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of the features of each of the plurality of grating structures being different, and forcing an electrical current through at least one of the conductive features in each of the plurality of grating structures until a resistance of the conductive feature increases by a preselected amount. |
1. A method of performing electromigration analysis, comprising: forming a grating structure above a semiconducting substrate, said grating structure being comprised of a plurality of conductive features; forcing an electrical current through at least one of said conductive features until a resistance of said at least one conductive feature increases by a preselected amount; and performing at least one scatterometric measurement of said at least one conductive feature to determine a critical dimension of said at least one conductive feature. 2. The method of claim 1, wherein said conductive features are conductive metal lines.3. The method of claim 1, wherein said conductive features are comprised of at least one of aluminum, copper, tungsten and titanium.4. The method of claim 1, wherein said grating structure occupies approximately 10,000 μm² of surface area.5. The method of claim 1, wherein said grating structure is comprised of approximately 100-700 conductive features.6. The method of claim 1, wherein performing at least one scatterometric measurement comprises illuminating said at least one conductive feature and measuring light reflected therefrom.7. The method of claim 1, wherein said at least one scatterometric measurement is performed while said electrical current is being forced through said at least one conductive feature.8. The method of claim 1, wherein said at least one scatterometric measurement is performed after stopping said electrical current.9. The method of claim 1, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.10. The method of claim 1, wherein said at least one scatterometric measurement is performed after said resistance of said conductive feature has increased above said preselected amount.11. The method of claim 1, wherein said preselected amount is 10-20%.12. The method of claim 1, further comprising determining a susceptibility of another conductive feature to electromigration based upon a comparison of a critical dimension of said another conductive feature and said determined critical dimension of said conductive feature through which said electrical current was passed.13. The method of claim 1, further comprising identifying a duration of time that said electrical current was forced through said at least one conductive feature.14. The method of claim 13, further comprising determining a susceptibility of another conductive feature having a critical dimension different than that of said determined critical dimension based upon a comparison of said different critical dimensions and said identified duration of time.15. A method of performing electromigration analysis, comprising: forming a grating structure above a semiconducting substrate, said grating structure being comprised of a plurality of conductive features, said conductive features being comprised of at least one of aluminum, copper, tungsten and titanium; forcing an electrical current through at least one of said conductive features until a resistance of said at least one conductive feature increases by a preselected amount; performing at least one scatterometric measurement of said at least one conductive feature to determine a critical dimension of said at least one conductive feature; and identifying a duration of time that said electrical current was forced through said at least one conductive feature. 16. The method of claim 15, wherein said conductive features are metal lines.17. 
The method of claim 15, wherein said grating structure occupies approximately 10,000 μm² of surface area.18. The method of claim 15, wherein said grating structure is comprised of approximately 100-700 conductive features.19. The method of claim 15, wherein performing at least one scatterometric measurement comprises illuminating said at least one conductive feature and measuring light reflected therefrom.20. The method of claim 15, wherein said at least one scatterometric measurement is performed while said electrical current is being forced through said at least one conductive feature.21. The method of claim 15, wherein said at least one scatterometric measurement is performed after stopping said electrical current.22. The method of claim 15, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.23. The method of claim 15, wherein said at least one scatterometric measurement is performed after said resistance of said conductive feature has increased above said preselected amount.24. The method of claim 15, wherein said preselected amount is 10-20%.25. The method of claim 15, further comprising determining a susceptibility of another conductive feature to electromigration based upon a comparison of a critical dimension of said another conductive feature and said determined critical dimension of said conductive feature through which said electrical current was passed.26. The method of claim 15, further comprising determining a susceptibility of another conductive feature having a critical dimension different than that of said determined critical dimension based upon a comparison of said different critical dimensions and said identified duration of time.27. A method of performing electromigration analysis, comprising: forming a grating structure above a semiconducting substrate, said grating structure being comprised of a plurality of conductive features comprised of aluminum; forcing an electrical current through at least one of said conductive features until a resistance of said at least one conductive feature increases by a preselected amount that ranges from 10-20%; performing at least one scatterometric measurement of said at least one conductive feature to determine a critical dimension of said at least one conductive feature; identifying a duration of time that said electrical current was forced through said at least one conductive feature; and determining a susceptibility of another conductive feature having a critical dimension different than that of said determined critical dimension based upon a comparison of said different critical dimensions and said identified duration of time. 28. The method of claim 27, wherein said grating structure occupies approximately 10,000 μm² of surface area.29. The method of claim 27, wherein said grating structure is comprised of approximately 100-700 conductive features.30. The method of claim 27, wherein performing at least one scatterometric measurement comprises illuminating said at least one conductive feature and measuring light reflected therefrom.31. The method of claim 27, wherein said at least one scatterometric measurement is performed while said electrical current is being forced through said at least one conductive feature.32. The method of claim 27, wherein said at least one scatterometric measurement is performed after stopping said electrical current.33. 
The method of claim 27, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.34. The method of claim 27, wherein said at least one scatterometric measurement is performed after said resistance of said conductive feature has increased above said preselected amount.35. A method, comprising: forming a plurality of grating structures above a semiconducting substrate, each of said grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of said features of each of said plurality of grating structures being different; forcing an electrical current through at least one of said conductive features in each of said plurality of grating structures until a resistance of said at least one conductive feature in each of said grating structures increases by a preselected amount; and for each of said plurality of grating structures, identifying a duration of time that said current is passed through said at least one conductive feature until said resistance of said at least one conductive feature is increased by said preselected amount. 36. The method of claim 35, further comprising creating a plot of duration until said resistance reaches said preselected amount versus a critical dimension of said conductive features in said plurality of grating structures.37. The method of claim 35, wherein forming a plurality of grating structures comprises forming at least three grating structures.38. The method of claim 35, wherein forming a plurality of grating structures comprises forming at least four grating structures.39. The method of claim 35, wherein said conductive features are conductive metal lines.40. The method of claim 35, wherein said conductive features are comprised of at least one of aluminum, copper, tungsten and titanium.41. The method of claim 35, further comprising performing at least one scatterometric measurement on each of said plurality of grating structures to determine a critical dimension of at least one of said conductive features within each of said plurality of grating structures.42. The method of claim 41, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.43. The method of claim 41, wherein said at least one scatterometric measurement is performed after said resistance of said conductive feature has increased above said preselected amount.44. The method of claim 35, wherein said electrical current is forced through said at least one conductive feature until said resistance of said at least one conductive feature has increased by 10-20%.45. 
A method, comprising: forming at least three grating structures above a semiconducting substrate, each of said grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of said features of each of said plurality of grating structures being different, said conductive features being comprised of at least one of aluminum, copper, tungsten and titanium; forcing an electrical current through at least one of said conductive features in each of said at least three grating structures until a resistance of said at least one conductive feature in each of said at least three grating structures increases by a preselected amount; and performing at least one scatterometric measurement on each of said plurality of grating structures to determine a critical dimension of at least one of said conductive features within each of said plurality of grating structures. 46. The method of claim 45, further comprising, for each of said plurality of grating structures, identifying a duration of time that said current is passed through said at least one conductive feature until said resistance of said at least one conductive feature is increased by said preselected amount.47. The method of claim 45, further comprising creating a plot of duration until said resistance reaches said preselected amount versus a critical dimension of said conductive features in said plurality of grating structures.48. The method of claim 45, wherein said conductive features are conductive metal lines.49. The method of claim 45, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.50. The method of claim 45, wherein said at least one scatterometric measurement is performed after said resistance of said conductive features has increased above said preselected amount.51. The method of claim 45, wherein said electrical current is forced through said at least one conductive feature until said resistance of said at least one conductive feature has increased by 10-20%.52. The method of claim 45, further comprising determining a susceptibility of another conductive feature to electromigration based upon a comparison of a critical dimension of said another conductive feature and said determined critical dimension of said conductive feature in said grating structure through which said electrical current was passed.53. The method of claim 45, further comprising identifying a duration of time that said electrical current was forced through said at least one conductive feature.54. The method of claim 53, further comprising determining a susceptibility of another conductive feature having a critical dimension different than that of said determined critical dimension based upon a comparison of said different critical dimensions and said identified duration of time.55. 
A method, comprising: forming at least three grating structures above a semiconducting substrate, each of said grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of said features of each of said plurality of grating structures being different, said conductive features being comprised of at least one of aluminum, copper, tungsten and titanium; forcing an electrical current through at least one of said conductive features in each of said at least three grating structures until a resistance of said at least one conductive feature in each of said at least three grating structures increases by a preselected amount; and for each of said plurality of grating structures, identifying a duration of time that said current is passed through said at least one conductive feature until said resistance of said at least one conductive feature is increased by said preselected amount. 56. The method of claim 55, further comprising creating a plot of duration until said resistance reaches said preselected amount versus a critical dimension of said conductive features in said plurality of grating structures.57. The method of claim 55, further comprising performing at least one scatterometric measurement on each of said plurality of grating structures to determine a critical dimension of at least one of said conductive features within each of said plurality of grating structures.58. The method of claim 55, wherein said electrical current is forced through said at least one conductive feature until said resistance of said at least one conductive feature has increased by 10-20%.59. A method, comprising: forming at least three grating structures above a semiconducting substrate, each of said grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of said features of each of said plurality of grating structures being different, said conductive features being comprised of at least one of aluminum, copper, tungsten and titanium; forcing an electrical current through at least one of said conductive features in each of said at least three grating structures until a resistance of said at least one conductive feature in each of said at least three grating structures increases by a preselected amount; and creating a plot of duration until said resistance reaches said preselected amount versus a critical dimension of said conductive features in said plurality of grating structures. 60. The method of claim 59, further comprising performing at least one scatterometric measurement on each of said plurality of grating structures to determine a critical dimension of at least one of said conductive features within each of said plurality of grating structures.61. The method of claim 59, wherein said electrical current is forced through said at least one conductive feature until said resistance of said at least one conductive feature has increased by 10-20%.62. 
A method, comprising: forming a plurality of grating structures above a semiconducting substrate, each of said grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of said features of each of said plurality of grating structures being different; forcing an electrical current through at least one of said conductive features in each of said plurality of grating structures until a resistance of said at least one conductive feature in each of said grating structures increases by a preselected amount; creating a plot of duration until said resistance reaches said preselected amount versus a critical dimension of said conductive features in said plurality of grating structures; and performing at least one scatterometric measurement on each of said plurality of grating structures to determine a critical dimension of at least one of said conductive features within each of said plurality of grating structures. 63. The method of claim 62, wherein forming a plurality of grating structures comprises forming at least three grating structures.64. The method of claim 62, wherein said at least one scatterometric measurement is performed before said current is forced through said conductive feature.65. The method of claim 62, wherein said at least one scatterometric measurement is performed after said resistance of said conductive feature has increased above said preselected amount. 
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to semiconductor fabrication technology, and, more particularly, to structures for analyzing electromigration, and methods of using same.

2. Description of the Related Art

By way of background, modern integrated circuit devices, e.g., microprocessors, ASICs, memory devices, etc., are comprised of millions of field effect transistors formed on a semiconducting substrate, such as silicon. The substrate may be doped with either N-type or P-type dopant materials. An illustrative field effect transistor 10, as shown in FIG. 1, may have a doped polycrystalline silicon (polysilicon) gate electrode 14 formed above a gate insulation layer 16. The gate electrode 14 and the gate insulation layer 16 may be separated from doped source/drain regions 22 of the transistor 10 by a dielectric sidewall spacer 20. The source/drain regions 22 for the transistor 10 may be formed by performing one or more ion implantation processes to introduce dopant atoms, e.g., arsenic or phosphorus for NMOS devices, boron for PMOS devices, into the substrate 11. Shallow trench isolation regions 18 may be provided to isolate the transistor 10 electrically from neighboring semiconductor devices, such as other transistors (not shown). Additionally, although not depicted in FIG. 1, a typical integrated circuit product is comprised of a plurality of conductive interconnections, such as conductive lines and conductive contacts or vias, positioned in multiple layers of insulating material formed above the substrate. These conductive interconnections allow electrical signals to propagate between the transistors formed above the substrate.

The gate electrode 14 has a critical dimension 12, i.e., the width of the gate electrode 14, that approximately corresponds to the channel length 13 of the device when the transistor 10 is operational. Of course, the critical dimension 12 of the gate electrode 14 is but one example of a feature that must be formed very accurately in modern semiconductor manufacturing operations. Other examples include, but are not limited to, conductive lines, openings in insulating layers to allow subsequent formation of a conductive interconnection, i.e., a conductive line or contact, therein, etc.

As device dimensions have continued to shrink, the packing density of the semiconductor devices, e.g., transistors, has increased. That is, ever-increasing numbers of transistors or memory cells are located on the same plot space of a semiconducting substrate. As a result of this increased device density, the conductive metal lines and contacts or vias that connect these various devices have also been reduced in physical size, and they are also packed more closely together. In general, the resistance of a metal line is inversely proportional to the cross-sectional area of the metal line. Thus, all other things being equal, it is important that the cross-sectional area of the metal line be maintained above certain minimum levels such that the resistance of the metal line does not exceed allowable limits. Unanticipated increases in the resistance of a metal line may adversely impact device performance, e.g., a reduction in operating frequency, increased heat build-up, increased power consumption, etc.

Unfortunately, a phenomenon known as electromigration can adversely impact conductive metal lines in an integrated circuit product.
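Because resistance scales inversely with cross-sectional area (R = ρL/A for a line of resistivity ρ, length L, and cross-sectional area A), even a modest loss of line width produces a measurable resistance rise. A quick worked example in Python; the dimensions and resistivity below are illustrative values, not figures from this disclosure:

# Illustrative only: resistance of a metal line, R = rho * L / (w * t).
RHO_AL = 2.8e-8                 # approximate aluminum resistivity, ohm*m

def line_resistance(length_m, width_m, thickness_m, rho=RHO_AL):
    return rho * length_m / (width_m * thickness_m)

# A 100 um aluminum line, 300 nm thick: narrowing from 180 nm to 171 nm
# raises resistance by about 5.3%.
r_nominal = line_resistance(100e-6, 180e-9, 300e-9)
r_narrow = line_resistance(100e-6, 171e-9, 300e-9)
print(f"{r_narrow / r_nominal - 1:.1%}")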
In general, electromigration is a process whereby a conductive structure, such as a metal line, contact or via, tends to degrade, thereby resulting in a change in the physical characteristics, e.g., shape, size, etc., of the conductive structure. Typically, electromigration occurs when a current is passed through relatively long conductive structures. The current sets up an electrical field in the conductive structure that decreases from the input side to the output side of the conductive structure. Additionally, heat generated by the flowing current sets up a thermal gradient along the conductive structure. As a result, the metal atoms in the conductive structure become mobile and diffuse within the conductive structure. This electromigration phenomenon results in physical changes to the size and/or shape of the conductive structure. For example, in some cases, the conductive structure may be thinned at one or more locations. In a worst case scenario, electromigration can cause complete separation of the conductive structure. This electromigration phenomenon can occur in metals such as aluminum, copper, tungsten, titanium, etc.

In designing integrated circuit products, efforts are taken to reduce, eliminate or account for electromigration of conductive structures in integrated circuit products. Such efforts may include selecting appropriate materials and making conductive structures sufficiently large such that the effects of electromigration do not adversely impact the performance of the integrated circuit product over its useful life.

Typically, one or more tests are performed on an integrated circuit product to determine its ability to withstand electromigration during the product lifetime. FIG. 2 is an illustrative test structure 30 that can be used for such purposes. The test structure 30 is comprised of a conductive metal line 32, a plurality of dummy metal lines 34, and contacts 36 coupled to each end of the conductive metal line 32. The lines 32, 34 have a layer of insulating material 38 positioned therebetween. A relatively high current, much higher than that anticipated in normal usage of the integrated circuit product, is passed through the conductive metal line 32 until such time as the resistance of the conductive metal line 32 increases by a preselected amount, e.g., 10% or 20%. The increase in resistance is due to material loss and/or change in shape of the conductive metal line 32 due to electromigration. The acceptability of the product as to its ability to withstand electromigration depends upon the time it takes for the conductive metal line to exhibit the established standard for increase in resistance. Such testing can be very time-consuming. For example, such an electromigration test may involve subjecting the conductive metal line 32 to the test current for 10-12 hours.

However, in forming the conductive metal line 32, the critical dimension 32A, i.e., width, of the conductive metal line 32 may vary from that anticipated by the design process. For example, the target critical dimension 32A of the conductive metal line 32 may be 180 nm. Due to variations and/or process bias in one or more of the process tools used in creating the metal line 32, e.g., a stepper exposure tool, an etch tool, etc., the actual critical dimension 32A may vary from that of the target value. For example, the manufactured conductive metal line 32 may have a critical dimension 32A that is actually 171 nm or 189 nm as compared to the target value of 180 nm.
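The test just described reduces to a monitoring loop: force a fixed current, poll the voltage, and record the elapsed time at which the computed resistance has risen by the preselected fraction; the result can then be rescaled toward the target critical dimension. The sketch below is a minimal illustration; the instrument callback, the polling interval, and the proportional scaling model are all assumptions rather than details from the text:

import time

def time_to_breakdown(read_voltage, forced_current_a, threshold=0.20,
                      poll_s=60.0):
    # Poll V under a forced current until R rises by threshold (e.g., 20%)
    # over its initial value; return the elapsed time in seconds.
    # read_voltage is a hypothetical instrument callback.
    r_initial = read_voltage() / forced_current_a
    start = time.monotonic()
    while read_voltage() / forced_current_a < r_initial * (1 + threshold):
        time.sleep(poll_s)
    return time.monotonic() - start

def extrapolate_ttb(measured_ttb_h, measured_cd_nm, target_cd_nm):
    # Assumed first-order model: time to breakdown scales with line width
    # at fixed thickness. A production model would be fit to measured data.
    return measured_ttb_h * (target_cd_nm / measured_cd_nm)

# A line that measured 171 nm instead of the 180 nm target and broke down
# after 11 hours suggests roughly 11.6 hours at the target width.
print(extrapolate_ttb(11.0, 171.0, 180.0))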
Thus, after the electromigration test is performed to breakdown, e.g., a 20% increase in resistance, the conductive metal line 32 is typically cross-sectioned, and the critical dimension 32A is measured using a scanning electron microscope. Based upon the measured critical dimension, the duration to breakdown for a conductive metal line 32 having the target critical dimension 32A, e.g., 180 nm, is determined by extrapolating the electromigration data for the tested conductive metal line 32 having the measured critical dimension 32A.

Such a process can be very time-consuming in that it requires cross-sectioning of one or more portions of the wafer. Moreover, the feedback from the electromigration testing may not be available as quickly as would otherwise be desired.

The present invention is directed to various structures and methods that may solve, or at least reduce, some or all of the aforementioned problems.

SUMMARY OF THE INVENTION

The present invention is generally directed to various structures for analyzing electromigration, and methods of using same. In one illustrative embodiment, the method comprises forming a grating structure above a semiconducting substrate, the grating structure being comprised of a plurality of conductive features, forcing an electrical current through at least one of the conductive features until a resistance of the conductive feature increases by a preselected amount, and performing at least one scatterometric measurement of the conductive feature to determine a critical dimension of the conductive feature. In further embodiments, the method comprises determining a susceptibility of another conductive feature to electromigration based upon a comparison of a critical dimension of another conductive feature and the determined critical dimension of the conductive feature through which the electrical current was passed.

In another illustrative embodiment, the method comprises forming a plurality of grating structures above a semiconducting substrate, each of the grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of the features of each of the plurality of grating structures being different, and forcing an electrical current through at least one of the conductive features in each of the plurality of grating structures until a resistance of the conductive feature increases by a preselected amount.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:
FIG. 1 is a cross-sectional view of an illustrative prior art transistor;
FIG. 2 is a plan view of an illustrative prior art structure employed in electromigration testing;
FIGS. 3A-3B depict one illustrative embodiment of a grating structure that may be employed with the present invention;
FIGS. 4A-4B depict an embodiment of the present invention wherein a plurality of grating structures may be employed;
FIG. 5 is an illustrative plot of the duration until electromigration breakdown versus the critical dimensions of measured conductive features; and
FIG. 6 is a schematic depiction of an illustrative system that may be employed with the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail.
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

The present invention will now be described with reference to the attached figures. Although the various regions and structures of a semiconductor device are depicted in the drawings as having very precise, sharp configurations and profiles, those skilled in the art recognize that, in reality, these regions and structures are not as precise as indicated in the drawings. Additionally, the relative sizes of the various features and doped regions depicted in the drawings may be exaggerated or reduced as compared to the size of those features or regions on fabricated devices. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention. The words and phrases used herein should be understood and interpreted to have a meaning consistent with the understanding of those words and phrases by those skilled in the relevant art. No special definition of a term or phrase, i.e., a definition that is different from the ordinary and customary meaning as understood by those skilled in the art, is intended to be implied by consistent usage of the term or phrase herein. To the extent that a term or phrase is intended to have a special meaning, i.e., a meaning other than that understood by skilled artisans, such a special definition will be expressly set forth in the specification in a definitional manner that directly and unequivocally provides the special definition for the term or phrase.

In general, the present invention is directed to various structures for analyzing electromigration, and methods of using same. As will be readily apparent to those skilled in the art upon a complete reading of the present application, the present method is applicable to a variety of technologies, e.g., NMOS, PMOS, CMOS, etc., and it is readily applicable to a variety of devices, including, but not limited to, logic devices, memory devices, etc. Moreover, the present methods may be employed to test the electromigration characteristics of a variety of conductive structures.

In general, the present invention involves performing electromigration tests on one or more grating structures 40 (see FIGS. 3A-3B) and, in some cases, subjecting the grating structures 40 to one or more scatterometric measurement processes.
Based upon the results of these measurements, a variety of information may be obtained as to the ability of the grating structures 40 to withstand electromigration.

Some aspects of the present invention will now be described with reference to FIGS. 3A-3B. According to one embodiment of the present invention, a grating structure 40 is comprised of a plurality of conductive line-type features 42, e.g., conductive metal lines, etc. The illustrative grating structure 40 depicted in FIGS. 3A-3B is comprised of a plurality of conductive features 42 that are intended to be representative in nature in that they may take any form or shape and they may be comprised of any type of material, e.g., aluminum, copper, titanium, tungsten, etc. The conductive features 42 may also have one or more cap layers, e.g., titanium, titanium nitride, positioned adjacent to the conductive feature 42, although such cap layers are not depicted in the attached drawings. Moreover, an insulating material, such as silicon dioxide, will generally be formed around the conductive features 42, although such an insulating layer is not depicted in the drawings.

The conductive features 42 that comprise the grating structure 40 may be formed at any level of an integrated circuit product. For ease of explanation, the features 42 of the grating structure 40 are depicted as simply being formed above the wafer 41. However, after a complete reading of the present application, those skilled in the art will recognize that the present invention has broad applicability. Thus, the present invention should not be considered as limited to any specific size or configuration of the conductive features 42, the materials comprising the features 42, or to any particular location of one or more of the grating structures 40 above the wafer 41 unless such limitations are expressly set forth in the appended claims.

The grating structure 40 may be of any size or configuration. Typically, the grating structure 40 will be relatively large, e.g., up to, for example, 100 μm * 100 μm (10,000 μm²). Of course, the grating structure 40 need not be square or even rectangular in configuration. The number of conductive features 42 that comprise the grating structure 40, as well as the cross-sectional configuration of the features 42 and the pitch 44 and/or spacing 45 therebetween, may also vary. For example, as shown in FIG. 3A, the features 42 have a generally rectangular cross-sectional configuration. The illustrative features 42 may be a plurality of conductive metal lines of an integrated circuit product. The conductive features 42 have a critical dimension 46. FIG. 3B is a plan view of the illustrative grating structure 40. Conductive contacts 47 may be provided at opposite ends of one or more of the conductive features 42. For example, as depicted in FIG. 3B, conductive contacts 47 are provided on each end of every other conductive feature 42. Of course, other contacting schemes may be employed. The non-contacted conductive features 42 may serve as dummy features during subsequent electromigration testing.

The grating structure 40 may be formed as a separate test structure, or in some embodiments, it may be comprised of features 42, e.g., lines, that are part of actual production devices. For example, the grating structure 40 may be essentially a test structure that is formed in an unused area or scribe line of a wafer.
In the case of actual production devices, the features 42 that comprise the grating structure 40 may be formed as part of the processes of forming conductive metal lines for an integrated circuit product.

The number of conductive features 42 that comprise the grating structure 40 may also vary. For example, the grating structure 40 may occupy approximately 100 μm * 100 μm (10,000 μm²) of surface area, and approximately 100-700 conductive features 42 may be part of the grating structure 40. For ease of explanation, only six representative conductive features 42 are depicted in FIG. 3A. Additional conductive features 42 are depicted in FIG. 3B. As will be recognized by those skilled in the art after a complete reading of the present application, the size, shape and number of conductive features 42 that make up the grating structure 40 should not be considered a limitation of the present invention unless such limitations are expressly set forth in the appended claims. Additionally, the conductive features 42 may be comprised of a variety of materials or combination of materials. For example, the conductive features 42 may be comprised of aluminum, copper, tungsten, titanium, etc. One or more capping layers, e.g., titanium, titanium nitride, may be positioned adjacent at least portions of the conductive features 42.

In one embodiment of the present invention, the critical dimension 46 of one or more of the conductive features 42 of the grating structure 40 may be measured using scatterometric techniques. An illustrative scatterometry tool 74 that may be used with the present invention is schematically depicted in FIG. 3A. The scatterometry tool 74 is generally comprised of a representative light source 73 and a detector 75, as depicted in FIG. 3A. The scatterometric measurements will be used for purposes described more fully below.

A variety of scatterometry tools 74 may be used with the present invention, e.g., so-called 2θ-type systems and lens-type scatterometry tools. The scatterometry tool 74 may use white light, or some other wavelength or combination of wavelengths, depending on the specific implementation. Typically, the scatterometry tool 74 will generate an incident beam that has a wide spectral composition and wherein the intensity of the light changes slowly in comparison to changes in wavelength. The angle of incidence of the light may also vary, depending on the specific implementation. The optical characteristic traces generated by the scatterometry tool 74 may be based upon a comparison of light intensity to wavelength (for white light, fixed angle type scatterometry tools) or a comparison of intensity to incident angle (for angle resolved systems that use a single light source). The optical characteristic traces may be based upon any aspect of a reflection profile (e.g., intensity vs. wavelength (tan(δ)), or phase vs. wavelength (sin(ψ)), where δ and ψ are common scatterometry outputs known to those of ordinary skill in the art).

In general, the scatterometry tool 74 includes optical hardware, such as an ellipsometer or reflectometer, and a data processing unit loaded with a scatterometry software application for processing data collected by the optical hardware. For example, the optical hardware may include a Model OP5230 or OP5240 with a spectroscopic ellipsometer offered by Thermawave, Inc. of Fremont, Calif.
The data processing unit may comprise a profile application server manufactured by Timbre Technologies, a fully owned subsidiary of Tokyo Electron America, Inc. of Austin, Tex., and distributed by Thermawave, Inc. Scatterometry libraries are commercially available from Timbre Technologies, Inc.

In one aspect of the present invention, electromigration analysis is performed by passing an electrical current through one or more of the conductive features 42 of the grating structure 40. This may be accomplished by coupling the appropriate voltage supply to one of the contacts 47 on the desired conductive feature 42. Scatterometric measurements of the conductive features 42 subjected to the electrical current may be made to determine the critical dimension 46 of the conductive feature 42. In one embodiment, such scatterometric measurements are made after breakdown has occurred, i.e., after the resistance of the conductive feature 42 has increased above a preselected amount, such as 20%. At that time, scatterometric measurements of the tested conductive feature 42 may be made to determine its actual, manufactured critical dimension 46. With this information in hand, electromigration effects on conductive features 42 having smaller or larger critical dimensions 46 may be extrapolated. Of course, if desired, the critical dimension 46 of the conductive feature 42 may be measured prior to forcing electrical current through the conductive feature 42.

Through use of scatterometry-based measurement techniques, the critical dimension 46 of the tested conductive feature 42, i.e., the one subjected to the current flow, may be readily determined without resort to destructive testing, i.e., cross-sectioning the grating structure 40 and measuring the critical dimension 46 using other measurement techniques, such as a scanning electron microscope. The methods involved in using scatterometry to measure a critical dimension of a feature of a grating structure are well known to those skilled in the art and, thus, will not be described in any further detail herein. The scatterometric measurement of the critical dimension 46 of the tested conductive feature 42 may be performed at any point during the manufacture of integrated circuit products on the wafer. That is, in employing the present invention, the measurement of the critical dimension 46 of the tested conductive feature 42 may be performed at any point after the conductive feature 42 is formed. Thus, it is not necessary to wait for all production steps to be completed to determine the critical dimension 46 of the conductive feature 42, as is the case with prior art techniques in which cross-sectioning of the conductive line 32 (see FIG. 2) was involved in determining the critical dimension 32A of the tested line 32. As a result, the present invention can be employed to provide more timely feedback as to the ability of the integrated circuit product to withstand electromigration. Such methodologies may result in better control of manufacturing processes and improved yields, thereby resulting in improved overall manufacturing efficiencies.

FIGS. 4A-4B depict another aspect of the present invention. As shown therein, a plurality of grating structures 40A-40D are formed above a semiconducting substrate 41. Although four illustrative grating structures 40A-40D are depicted in FIGS. 4A-4B, the present invention may be employed in other situations where more or fewer than four grating structures 40 are employed.
For example, the present invention may be employed in situations where only three grating structures 40 are formed above the substrate 41. The grating structures 40A-40D may be formed at any level of an integrated circuit product. Thus, as will be recognized by those skilled in the art after a complete reading of the present application, the present invention should not be considered as limited to the formation of any particular number of grating structures 40 above the substrate 41 unless such limitations are expressly set forth in the appended claims.

In general, each of the grating structures 40A-40D is comprised of a plurality of conductive features 42A-42D, respectively. The conductive features 42A-42D may be of any size, shape or configuration, and they may be comprised of a variety of materials, e.g., aluminum. According to the present invention, the conductive features 42A-42D have different critical dimensions. That is, for example, the conductive features 42A, 42B, 42C and 42D may have a critical dimension of, respectively, 120 nm, 140 nm, 160 nm and 180 nm, as indicated in FIG. 4B. Stated another way, the grating structures 40A-40D are comprised of different size conductive features 42.

As with the previously described grating structure 40 in FIGS. 3A-3B, conductive contacts 47 may be provided to one or more of the conductive features 42 in each of the grating structures 40A-40D. An electrical current may be passed through one or more of the conductive features 42 on each of the grating structures 40A-40D. This process may be continued until breakdown occurs, i.e., until the resistance of the tested conductive features 42 increases above a preselected amount, e.g., 10-20%. This process occurs for each of the grating structures 40A-40D. Scatterometric measurements of the conductive features 42 for each of the grating structures 40A-40D may be made to determine the actual manufactured critical dimension of each of the measured conductive features 42 on each of the grating structures 40A-40D. Given the different critical dimensions 46 of the features as one progresses from one grating structure to another, the current passed through the conductive features 42 may also vary such that the same current density (A/cm²) is applied to each conductive feature 42.

Based upon this information, a plot may be made of breakdown points for each of the grating structures 40A-40D, each of which has conductive features 42 with a different critical dimension. FIG. 5 is an illustrative plot that may be produced from this information. As shown therein, the vertical axis reflects the time taken to reach breakdown. The horizontal axis reflects the measured critical dimension of the tested conductive features 42. The data points on the plot of FIG. 5 represent the plotted time to breakdown for each of the various critical dimension sizes.

With such information, the ability of conductive features 42 of varying critical dimensions to withstand electromigration may be readily determined by interpolating the information from the plot shown in FIG. 5. For example, if a target critical dimension for conductive features that may be subjected to electromigration is selected as 150 nm, then the duration to breakdown may be readily determined by graphical analysis, as indicated by the dashed lines in FIG. 5.
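The graphical analysis of FIG. 5 corresponds to ordinary one-dimensional interpolation over the measured (critical dimension, time to breakdown) points, and the equal-current-density condition fixes the current forced through each grating. A brief sketch with invented numbers (the breakdown times, stress current density, and film thickness are made up for illustration):

import numpy as np

# Invented breakdown data for gratings 40A-40D: CD in nm, hours to breakdown.
cd_nm = np.array([120.0, 140.0, 160.0, 180.0])
ttb_h = np.array([6.0, 8.5, 11.0, 13.0])

# Interpolate the duration to breakdown at a 150 nm target, as in FIG. 5.
print(np.interp(150.0, cd_nm, ttb_h))       # 9.75 hours for this data

# Equal current density across the gratings: I = J * width * thickness.
j_a_per_m2 = 2e10                           # assumed stress current density
thickness_m = 300e-9                        # assumed film thickness
for width_m in cd_nm * 1e-9:
    i_ma = j_a_per_m2 * width_m * thickness_m * 1e3
    print(f"{width_m * 1e9:.0f} nm -> {i_ma:.2f} mA")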
By providing this plurality of grating structures 40A-40D with the different size conductive features, the effects of electromigration on production features of varying critical dimension sizes may be more easily and rapidly determined. As a result, manufacturing efficiencies may also increase. An illustrative system 60 that may be used in one embodiment of the present invention is shown in FIG. 6. The system 60 is comprised of a scatterometry tool 74 and a controller 78. As indicated in FIG. 6, the wafer 61 is representative of one or more wafers having one or more grating structures 40 formed thereabove. The scatterometric measurements described herein may be made solely within the scatterometry tool 74 or in combination with the processing resources provided by the controller 78. In the illustrated embodiments, the controller 78 is a computer programmed with software to implement the functions described herein. Moreover, the functions described for the controller 78 may be performed by one or more controllers spread throughout the system. For example, the controller 78 may be a fab level controller that is used to control processing operations throughout all or a portion of a semiconductor manufacturing facility. Alternatively, the controller 78 may be a lower level computer that controls only portions or cells of the manufacturing facility. Moreover, the controller 78 may be a stand-alone device, or it may reside on the scatterometry tool 74. However, as will be appreciated by those of ordinary skill in the art, a hardware controller (not shown) designed to implement the particular functions may also be used. Portions of the invention and corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.An exemplary software system capable of being adapted to perform the functions of the controller 78, as described, is the Catalyst system offered by KLA Tencor, Inc. The Catalyst system uses Semiconductor Equipment and Materials International (SEMI) Computer Integrated Manufacturing (CIM) Framework compliant system technologies, and is based on the Advanced Process Control (APC) Framework. CIM (SEMI E81-0699-Provisional Specification for CIM Framework Domain Architecture) and APC (SEMI E93-0999-Provisional Specification for CIM Framework Advanced Process Control Component) specifications are publicly available from SEMI.In one aspect, the present invention is generally directed to various structures for analyzing electromigration, and methods of using same. In one illustrative embodiment, the method comprises forming a grating structure above a semiconducting substrate, the grating structure being comprised of a plurality of conductive features, forcing an electrical current through at least one of the conductive features until a resistance of the conductive feature increases by a preselected amount, and performing at least one scatterometric measurement of the conductive feature to determine a critical dimension of the conductive feature. In further embodiments, the method comprises determining a susceptibility of another conductive feature to electromigration based upon a comparison of a critical dimension of another conductive feature and the determined critical dimension of the conductive feature through which the electrical current was passed.In another illustrative embodiment, the method comprises forming a plurality of grating structures above a semiconducting substrate, each of the grating structures being comprised of a plurality of conductive features having the same critical dimension, the critical dimension of the features of each of the plurality of grating structures being different, and forcing an electrical current through at least one of the conductive features in each of the plurality of grating structures until a resistance of the conductive feature increases by a preselected amount.The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
A security module [130] in a memory access path of a processor [102] of a processing system [100] protects secure information by verifying the contents of memory pages as they transition between one or more virtual machines (VMs) [150, 151] executing at the processor and a hypervisor [152] that provides an interface between the VMs and the processing system's hardware. The security module of the processor is employed to monitor memory pages as they transition between one or more VMs and a hypervisor so that memory pages that have been altered by a hypervisor or other VM cannot be returned to the VM from which they were transitioned. |
1. A method comprising: converting at least one memory page [240] from a first virtual machine [150] executed at a processor [102] to a first hypervisor [152] executed at the processor; receiving, at a security module [130] of the processor, a request from the first hypervisor to return the at least one memory page to the first virtual machine; selectively verifying, at the security module of the processor, the content of the at least one memory page in response to the request from the first hypervisor to return the at least one memory page to the first virtual machine; and in response to verifying the content of the at least one memory page, providing the at least one memory page to the first virtual machine. 2. The method of claim 1, wherein selectively verifying the content of the at least one memory page comprises: in response to the at least one memory page being converted from the first virtual machine to the first hypervisor, measuring at least one characteristic of the at least one memory page to generate at least one characteristic measurement value; storing the at least one characteristic measurement value at the security module; and in response to the first hypervisor returning the at least one memory page, comparing the at least one characteristic measurement value with the at least one memory page. 3. The method of claim 2, further comprising: in response to the at least one characteristic measurement value not matching the at least one memory page, preventing the first hypervisor from providing the at least one memory page to the first virtual machine. 4. The method of claim 1, further comprising: allocating a plurality of memory pages [440] to the first virtual machine; storing a first subset [442] of the plurality of memory pages in a balloon pool [424] associated with the first virtual machine, wherein the first subset includes memory pages unwritten by the first virtual machine; and in response to the first hypervisor requesting at least one memory page, providing at least one page of the first subset of the plurality of memory pages to the first hypervisor. 5. The method of claim 4, further comprising: responsive to the first hypervisor requesting to return the at least one page of the first subset of memory pages to the balloon pool associated with the first virtual machine, bypassing verification of the content of the at least one page of the first subset of memory pages returned from the first hypervisor to the first virtual machine. 6. The method of claim 1, wherein the method further comprises: allocating a plurality of memory pages [340] to the first virtual machine; designating a subset [240] of the plurality of memory pages as invalid in response to a request by the first hypervisor for at least one memory page; and providing the subset of the plurality of memory pages to the first hypervisor. 7. The method of claim 6, further comprising: measuring at least one characteristic of the subset of the plurality of memory pages to generate at least one characteristic measurement value; storing the at least one characteristic measurement value at the security module; comparing the at least one characteristic measurement value with the subset of the plurality of memory pages in response to the first hypervisor returning the subset of the plurality of memory pages; and in response to the at least one characteristic measurement value matching the subset of the plurality of memory pages, providing the subset of the plurality of memory pages to the first virtual machine. 8. The method of claim 6, further
comprising: encrypting the plurality of memory pages with a first key [316]; and encrypting the subset of the plurality of memory pages with a second key [319]. 9. The method of claim 8, wherein the second key is generated in response to the plurality of memory pages being allocated to the first virtual machine. 10. A method comprising: converting a first memory page [240] from a first virtual machine [150] executed at a processor [102] to a first hypervisor [152] executed at the processor; in response to the first hypervisor requesting to return the first memory page to the first virtual machine, selectively verifying, at a security module [130] of the processor, that the content of the first memory page that the first hypervisor is requesting to return matches the content of the first memory page converted from the first virtual machine to the first hypervisor; and in response to verifying that the content of the first memory page transitioned from the first virtual machine to the first hypervisor matches the content of the first memory page that the first hypervisor is requesting to return, providing the first memory page to the first virtual machine. 11. The method of claim 10, wherein selectively verifying the content of the first memory page comprises: in response to the first memory page being converted from the first virtual machine to the first hypervisor, measuring at least one characteristic of the first memory page to generate at least one characteristic measurement value; storing the at least one characteristic measurement value at the security module; and in response to the first hypervisor returning the first memory page, comparing the at least one characteristic measurement value with the first memory page. 12. The method of claim 11, further comprising: in response to the at least one characteristic measurement value not matching the first memory page, rejecting a request from the first hypervisor to return the first memory page to the first virtual machine. 13. The method of claim 10, wherein selectively verifying the content of the first memory page comprises: allocating a plurality of memory pages [440] to the first virtual machine; storing a first subset [442] of the plurality of memory pages in a balloon pool [424] associated with the first virtual machine, wherein the first subset includes memory pages unwritten by the first virtual machine; and in response to the first hypervisor requesting at least one memory page, providing at least one page of the first subset of the plurality of memory pages to the first hypervisor. 14. The method of claim 13, further comprising: responsive to the at least one page of the first subset of memory pages being transitioned from the first hypervisor to the first virtual machine, and further responsive to the first hypervisor providing the at least one page of the first subset of memory pages to the balloon pool associated with the first virtual machine, bypassing verification of the content of the at least one page of the first subset of memory pages. 15. The method of claim 10, further comprising: allocating a plurality of memory pages [340] to the first virtual machine; designating a subset [240] of the plurality of memory pages as invalid in response to a request by the first hypervisor for at least one memory page; and providing the subset of the plurality of memory pages to the first hypervisor. 16. The method of claim 15, further comprising: measuring at least one characteristic of the subset of the plurality of memory pages to generate at least one characteristic
measurement value; storing the at least one characteristic measurement value at the security module; comparing the at least one characteristic measurement value with the subset of the plurality of memory pages in response to the first hypervisor returning the subset of the plurality of memory pages; and in response to the at least one characteristic measurement value matching the subset of the plurality of memory pages, providing the subset of the plurality of memory pages to the first virtual machine. 17. The method of claim 15, further comprising: encrypting the plurality of memory pages with a first key [316]; and encrypting the subset of the plurality of memory pages with a second key [319]. 18. A processor comprising: a first virtual machine [150]; a first hypervisor [152]; and a security module [130] configured to: in response to receiving a request from the first hypervisor to return a first memory page to the first virtual machine, selectively verify that the content of the first memory page [240] converted from the first virtual machine to the first hypervisor matches the content of the first memory page requested by the first hypervisor to be returned to the first virtual machine; and in response to verifying that the content of the first memory page transitioned from the first virtual machine to the first hypervisor matches the content of the first memory page that the first hypervisor is requesting to return, provide the first memory page to the first virtual machine. 19. The processor of claim 18, wherein the security module selectively verifies the content of the first memory page by: in response to the first memory page being converted from the first virtual machine to the first hypervisor, measuring at least one characteristic of the first memory page to generate at least one characteristic measurement value; storing the at least one characteristic measurement value at the security module; and in response to the first hypervisor requesting return of the first memory page, comparing the at least one characteristic measurement value with the first memory page. 20. The processor of claim 19, wherein the security module is further configured to: in response to the at least one characteristic measurement value not matching the first memory page, reject the request from the first hypervisor to return the first memory page to the first virtual machine. 21. The processor of claim 18, wherein the security module is further configured to: allocate a plurality of memory pages [440] to the first virtual machine; store a first subset [442] of the plurality of memory pages at a first memory location [424] associated with the first virtual machine, wherein the first subset includes memory pages unwritten by the first virtual machine; and in response to the first hypervisor requesting at least one memory page, provide at least one page of the first subset of the plurality of memory pages to the first hypervisor. 22. The processor of claim 21, wherein the security module is further configured to: in response to the first hypervisor providing the at least one page of the first subset of memory pages to the first memory location associated with the first virtual machine, bypass verification of the content of the at least one page of the first subset of memory pages. 23. The processor of claim 18, wherein the security module is further configured to: allocate a plurality of memory pages [340] to the first virtual machine; designate a subset [240] of the plurality of memory pages as
invalid in response to a request by the first hypervisor for at least one memory page; and provide the subset of the plurality of memory pages to the first hypervisor. 24. The processor of claim 23, wherein the security module is further configured to: measure at least one characteristic of the first memory page to generate at least one characteristic measurement value; store the at least one characteristic measurement value at the security module; in response to the first hypervisor requesting return of the first memory page, compare the at least one characteristic measurement value with the first memory page; and in response to the at least one characteristic measurement value matching the first memory page, provide the first memory page to the first virtual machine. 25. The processor of claim 23, wherein the security module is further configured to: encrypt the plurality of memory pages with a first key [316]; and encrypt the subset of the plurality of memory pages with a second key [319]. |
Memory page transition monitoring between hypervisor and virtual machine. Cross-reference to related applications: This application is related to and claims priority from the following co-pending application, the entire contents of which are incorporated herein by reference: U.S. Provisional Patent Application Serial No. 62/478,148, filed March 29, 2017 and entitled "PSP/HV Flows with SNP" (Attorney Docket No. 1458-17TEMP01-PR). Background: Information security is an important feature in many processor applications. For example, a processor may be used in a server in an infrastructure as a service (IAAS) environment, where the processor executes one or more virtual machines (VMs) and executes a hypervisor to partition the server hardware among the VMs and isolate the VMs from each other. Different VMs may be executed on behalf of different customers, so it is desirable to protect the information (instructions and data) used by each VM from the other VMs and from the hypervisor. However, defects (e.g., bugs) in the hypervisor may make the hypervisor itself vulnerable, allowing the hypervisor or a VM to access the information of another VM. BRIEF DESCRIPTION OF THE DRAWINGS: The present disclosure may be better understood, and its many features and advantages made apparent to those skilled in the art, by reference to the drawings. The use of the same reference numbers in different figures indicates similar or identical items. FIG. 1 is a block diagram of a processing system that employs a security module in conjunction with a hypervisor to verify the contents of memory pages transitioned between a VM and the hypervisor, in accordance with some embodiments. FIG. 2 is a block diagram illustrating an example of a security module of the processing system of FIG. 1 that generates and stores a hash of a memory page transitioned between a VM and a hypervisor, according to some embodiments. FIG. 3 is a block diagram illustrating an example of a security module of the processing system of FIG. 1 that assigns a subset of the memory pages allocated to a VM to a hypervisor, according to some embodiments. FIG. 4 is a block diagram illustrating an example of a security module of the processing system of FIG. 1 that assigns a subset of the memory pages of a balloon pool allocated to a VM to a hypervisor, according to some embodiments. FIG. 5 is a flowchart illustrating a method of verifying, at a security module of the processing system of FIG. 1, the contents of a subset of the memory pages allocated to a VM as they transition between the VM and a hypervisor, according to some embodiments. FIG. 6 is a flowchart illustrating a method for allocating a subset of the memory pages of a balloon pool allocated to a VM to a hypervisor, according to some embodiments. Detailed Description: FIGS. 1 to 6 illustrate techniques for protecting secure information at a processor of a processing system by employing a security module in the processor's memory access path to verify the contents of memory pages as the pages transition between one or more virtual machines (VMs) executing at the processor and a hypervisor that provides an interface between the VMs and the hardware of the processing system. The hypervisor isolates the VMs by assigning each VM a dedicated portion of the memory and other resources of the processing system for its private use, the memory being divided into contiguous blocks called memory pages.
In some embodiments, if, for example, the hypervisor runs out of memory, the hypervisor can request that the VM transition a subset of its dedicated memory portion back to the hypervisor. The hypervisor may return the subset of memory to the VM later, for example, in response to the VM's request for the subset of memory. However, a bug in the hypervisor, or a hypervisor maliciously modified to act as an exploitation tool, may allow the hypervisor or another VM to inspect or even change the information in the memory subset. By using the techniques described herein, the processor's security module monitors memory pages as they transition between the one or more VMs and the hypervisor, so that memory pages that may have been changed by the hypervisor or another VM cannot be returned to the VM from which they were transitioned. In some embodiments, when the hypervisor allocates a portion of memory to a VM, a subset of the allocated memory portion is protected (e.g., encrypted), and the remaining portion of the allocated memory (e.g., memory that the VM is not expected to use for a certain period of time, referred to herein as "excess memory") exists in an unencrypted form, such as in a balloon pool. The VM has not yet written to the excess memory pages, and therefore there is no security risk that information stored by the VM on the excess memory pages may be compromised by the hypervisor. Therefore, if excess memory pages are available, the hypervisor can use and return excess memory pages without the security module monitoring them. By bypassing the monitoring of the security module, the hypervisor can utilize the excess memory pages more efficiently than it can the secure memory pages allocated to the VM. FIG. 1 illustrates a processing system 100 that supports monitoring memory pages transitioned between a VM and a hypervisor, according to some embodiments. The processing system 100 includes a processor 102 and a memory 120. The processing system 100 may be incorporated in any of a variety of electronic devices, such as a server, a personal computer, a tablet computer, a set-top box, a gaming system, and the like. The processor 102 is generally configured to execute sets of instructions (e.g., computer programs) that manipulate the circuitry of the processor 102 to perform defined tasks. The memory 120 facilitates the execution of these tasks by storing data used by the processor 102. The memory 120 may be a random access memory (RAM), a non-volatile memory such as a flash memory or a hard disk drive (HDD), or the like, or a combination thereof. During execution of the instruction sets, the processor 102 generates memory access requests, including write requests to store data at the memory 120 and read requests to retrieve data from the memory 120. Each memory access request includes a memory address (e.g., a system physical address) indicating the location at the memory 120 targeted by the memory access request. In response to a read request, the memory 120 retrieves the information (data or instructions) stored at the location corresponding to the memory address of the read request and provides the information to the processor 102. In response to a write request, the memory 120 stores the requested write information at the location corresponding to the memory address of the write request. The processor 102 includes an encryption module 115.
The encryption module 115 is a general-purpose processor, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an operating mode of a processor core, or another module designed and configured to perform secure operations for the processing system 100. The secure operations include registration of entities to be executed at the processor 102 (e.g., virtual machines, computer programs, etc.), generation and identification of security keys for the entities to be executed, authentication of the processing system 100 for secure operations, and the like. As further described herein, the encryption module 115 supports cryptographic isolation of information at the processing system 100 by generating security keys, identifying entities that are registered for execution at the processing system 100, and other operations that implement such cryptographic isolation. To facilitate execution of instructions, the processor 102 includes one or more processor cores 106, a cache 108, a north bridge 110, and a security module 130. Although only a single processor core 106 is depicted in FIG. 1, the processing system 100 may include multiple processor cores. The processor cores 106 are processing units that execute instructions independently and concurrently. In some implementations, each of the processor cores 106 includes an individual instruction pipeline that fetches instructions, decodes the fetched instructions into corresponding operations, and executes the operations, including memory access requests, using the resources of the processing system 100. Each of the processor cores 106 is configured to identify each memory access request as one of two types: a secure memory access request, which indicates that the information corresponding to the memory access request is designated for cryptographic protection; or a non-secure memory access request, which indicates that the information corresponding to the memory access request is not designated for cryptographic protection.
In some embodiments, the processing system 100 implements a security scheme in which information is assigned a security designation (indicating whether the information will be protected by encryption) based on control bits associated with the memory address where the information is stored in the memory 120 or with the type (e.g., instruction or data) of the information. This allows large collections of data to be easily classified as secure information, thereby providing effective information protection. For example, in some embodiments, the control bits are set by the processing system 100 to designate particular types of information, such as instruction information, or page table information that provides a mapping of virtual addresses to physical addresses of the memory 120, as secure information, thereby protecting that information in an encrypted manner, as described further below. The control bits for addresses assigned to data may be specified in a more fine-grained manner based on, for example, a designation requested by a program executing at the processor 102. This security scheme provides protection for critical data (for example, preventing unauthorized execution of virtual machines or their programs) while still providing flexibility for more general-purpose data. In some embodiments, since the type of security assigned to information is specified based on the information's corresponding memory address, the processing system 100 uses the page tables themselves to indicate the security type of each memory address. Therefore, the processor core 106 identifies the type of a memory access request in the process of identifying the memory address corresponding to the memory access request. In particular, if a memory address is indicated as storing secure information, the corresponding memory access is identified as a secure memory access. Similarly, if a memory address is indicated as storing non-secure information, the corresponding memory access is identified as a non-secure memory access. The cache 108 is a memory device that stores a subset of the information stored at the memory 120, thereby enabling the processor core 106 to quickly access the corresponding subset of information. Although only a single cache 108 is depicted in FIG. 1, it should be understood that the processing system 100 may include multiple caches, including different caches at different levels of the memory hierarchy of the processor 102. The cache 108 receives memory access requests and identifies whether its storage array (not shown in FIG. 1) stores the information targeted by a memory access request. If so, the cache 108 indicates a cache hit and the memory access request is satisfied at the storage array. If the cache 108 does not store the targeted information, it indicates a cache miss and provides the memory access request to the north bridge 110. In the example shown in FIG. 1, the memory access path of the processing system 100 causes the cache 108 to store information, including secure information, in an unencrypted form. Thus, in some embodiments, the cache 108 stores, for each storage location of a given size (e.g., a cache line), entity tag information identifying the particular program or other entity (e.g., a VM) that is authorized to access the information at that storage location. In response to a memory access to a location of the storage array, the cache 108 compares the identity of the entity generating the memory access request with the entity tag information and, in response to a mismatch, indicates a cache miss, thereby preventing unauthorized access to the information. The north bridge 110 is a memory controller that provides an interface for the processor 102 to communicate with the memory 120. In some embodiments, the north bridge 110 may perform other functions, such as connecting to an input/output controller (e.g., a south bridge, not shown) and providing an interface between different processor cores 106. In its capacity as a memory controller, the north bridge 110 receives memory access requests from the cache 108 and controls the provision of these requests to the memory 120. In addition, the north bridge 110 receives responses to the memory access requests from the memory 120 and controls the provision of the responses to the cache 108.
In some embodiments, the north bridge 110 may also receive memory access requests (e.g., direct memory access requests) from input/output devices (not shown) of the processing system 100 and control the provision of those requests to the memory 120. To provide cryptographic isolation of information, the north bridge 110 includes the encryption module 115, configured to encrypt and decrypt information according to a specified cryptographic standard and based on the keys 116, 118. In some implementations, the encryption module 115 is configured to employ Advanced Encryption Standard (AES) encryption and decryption, but in other embodiments the encryption module 115 may employ other encryption/decryption techniques. In response to receiving a write request, the north bridge 110 identifies whether the request is a secure memory access request or a non-secure memory access request. If the write request is a non-secure memory access request, the north bridge 110 bypasses the encryption module 115 and provides the write request to the memory 120 without encrypting the information to be written. If the write request is a secure memory access request, the north bridge 110 identifies the one of the keys 116, 118 assigned to the entity (e.g., program, VM, software service, etc.) that generated the memory access request. In some implementations, the security module 130 identifies the key to be selected based on which entity is currently executing at the processor 102. The encryption module 115 uses the selected key to encrypt the information to be written and provides the write request, with the encrypted information, to the memory 120 for storage. In some embodiments, the encryption module 115 uses both the selected key and the physical address of the memory access request to encrypt and decrypt the corresponding information, thereby preventing block move attacks. In some implementations, the encryption module 115 identifies whether to use the physical address for encryption and decryption based on the state of a control bit (not shown) at the processor 102. The control bit state can be set by the security module 130. In response to receiving a read request, the north bridge 110 provides the request to the memory 120 and subsequently receives the information responsive to the request. If the north bridge 110 identifies the read request as a non-secure memory access request, it bypasses the encryption module 115 and provides the read information to the cache 108 without encryption. If the north bridge 110 identifies the read request as a secure memory access request, it identifies the one of the keys 116, 118 assigned to the entity that generated the read access request, and the encryption module 115 decrypts the read information. The north bridge 110 provides the decrypted read information to the cache 108 for storage. In some cases, the north bridge 110 may bypass providing the information to the cache 108 and provide the decrypted read information directly to the processor core that generated the corresponding read access request.
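To make the write/read path just described concrete, here is a minimal Python sketch using the `cryptography` package. AES-CTR with a counter block derived from the physical address stands in for the unspecified address-dependent cipher construction; the per-entity key table, the function names, and the addresses are hypothetical, not details from the source.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Hypothetical per-entity key table, standing in for the keys 116 and 118
# selected according to which entity is currently executing.
entity_keys = {"VM-A": os.urandom(32), "VM-B": os.urandom(32)}

def _cipher(entity: str, phys_addr: int) -> Cipher:
    # Mixing the physical address into the cipher (here as the initial
    # AES-CTR counter block) makes identical plaintext encrypt differently
    # at different addresses, defeating block move attacks.
    counter = phys_addr.to_bytes(16, "big")
    return Cipher(algorithms.AES(entity_keys[entity]), modes.CTR(counter))

def secure_write(entity: str, phys_addr: int, plaintext: bytes) -> bytes:
    """Encrypt a secure write with the key assigned to the requesting entity."""
    enc = _cipher(entity, phys_addr).encryptor()
    return enc.update(plaintext) + enc.finalize()

def secure_read(entity: str, phys_addr: int, ciphertext: bytes) -> bytes:
    """Decrypt a secure read with the same key and address-derived counter."""
    dec = _cipher(entity, phys_addr).decryptor()
    return dec.update(ciphertext) + dec.finalize()

page = b"guest page contents"
assert secure_read("VM-A", 0x1000, secure_write("VM-A", 0x1000, page)) == page
# Data written by VM-A cannot be recovered with VM-B's key:
assert secure_read("VM-B", 0x1000, secure_write("VM-A", 0x1000, page)) != page
```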
The hypervisor 152 is configured to isolate the VMs (VM-A 150, VM-B 151) by assigning each VM a dedicated portion of the memory and other resources of the processing system for its private use. Each VM 150, 151 provides a secure and isolated hardware emulation environment for one or more virtual processors, whereby each virtual processor executes a corresponding guest operating system (OS) (not shown). Each guest OS/virtual processor and the hypervisor 152 has an associated address space. A specific identifier (referred to herein as a "WorldID") is typically used to identify each guest OS, and a specific identifier referred to herein as an "address space identifier" or "ASID" is used to identify the lower-level address spaces managed by the guest OS. The address space assigned to each VM may be designated to store secure information (e.g., secure address space VM-A 122, secure address space VM-B 126). In some embodiments, the address space assigned to each VM may also include excess memory that is not designated for secure information and is stored in a balloon pool (e.g., VM-A balloon pool 124, VM-B balloon pool 128). The balloon pools 124, 128 are physical or virtual memory address spaces that hold excess memory assigned to the VM, which the corresponding VM is not expected to write within a given period of time or which the corresponding VM considers less valuable. In some implementations, if, for example, the hypervisor 152 itself or another VM requires additional memory, the hypervisor 152 may request that a VM transition a subset of its dedicated memory portion back to the hypervisor 152. The hypervisor 152 may return the memory subset to the VM later, for example, in response to the VM's request for the memory subset. To facilitate secure transition of memory between the hypervisor 152 and the VMs 150, 151, the security module 130 is configured to selectively monitor memory pages as they transition between the hypervisor 152 and the VMs 150, 151. When a VM is started, the hypervisor 152 allocates memory to the VMs 150, 151 and specifies a physical address and an ASID for the memory allocated to each VM. In some embodiments, the hypervisor 152 maintains a log (not shown) indicating the physical address and ASID of the memory allocated to each VM. The hypervisor 152 additionally designates the allocated memory as immutable. In some implementations, the hypervisor 152 designates the allocated memory as immutable by setting immutable bits, associated with the allocated memory, that are stored in the log. After the hypervisor 152 allocates memory to the VMs 150, 151, the hypervisor 152 signals the encryption module 115 to encrypt the data and instructions stored by each VM 150, 151 at its corresponding allocated memory 122, 126, and notifies the security module 130 to take measurements. In some embodiments, the security module 130 generates and stores an offline encryption key when the VM is started, which is a random key unique to each VM. In some embodiments, after the data stored at each VM's allocated memory has been encrypted by the encryption module 115 and measured by the security module 130, the security module 130 indicates that the memory storing the encrypted and measured data has been verified (for example, by setting a verified bit associated with the memory in the log maintained by the hypervisor 152) and clears the immutable bit. This verification indication signals the VM that it can write to the memory storing the encrypted and measured data.
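The launch-time lifecycle of the immutable and verified bits can be summarized in a few lines of Python. This is a minimal sketch of the log entries described above; the class and field names are hypothetical, and the encryption and measurement steps are elided as comments.

```python
from dataclasses import dataclass

@dataclass
class PageLogEntry:
    """One entry of the allocation log described above (names hypothetical)."""
    phys_addr: int
    asid: int
    immutable: bool = True    # set by the hypervisor when the page is allocated
    validated: bool = False   # set once the page is encrypted and measured

def validate_at_launch(log: list) -> None:
    """After each allocated page has been encrypted by the encryption module
    and measured by the security module, mark it verified and clear the
    immutable bit, signalling the guest that the page may now be written."""
    for entry in log:
        # ... encryption and measurement of the page contents happen here ...
        entry.validated = True
        entry.immutable = False

log = [PageLogEntry(phys_addr=0x1000 * (i + 1), asid=1) for i in range(4)]
validate_at_launch(log)
assert all(e.validated and not e.immutable for e in log)
```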
If the hypervisor 152 requests that the VMs 150, 151 transition ("swap out," or move between DRAM and another storage medium such as a disk) memory pages from the secure page domain of a VM's address space 122, 126 to the hypervisor 152, the security module 130 measures characteristics of the one or more memory pages to be swapped out, including a hash over values such as the physical memory address range of the memory page, the plaintext of the one or more memory pages, a random number, and the metadata associated with the one or more memory pages. The security module 130 stores the measured values and provides the requested one or more memory pages to the hypervisor 152. In some embodiments, the hypervisor 152 also stores measured characteristics, such as the physical memory address range of the memory page, the ciphertext of the one or more memory pages, the random number, and the metadata associated with the one or more memory pages. When the hypervisor 152 subsequently signals that it is ready to return ("swap in," or bring back to DRAM) one or more memory pages to the VM 150, 151 from which the hypervisor 152 transitioned the memory pages, the hypervisor 152 provides the stored measured characteristics to the security module 130. The security module 130 retrieves the stored hash of the one or more memory pages and compares the measured characteristics of the swapped-out memory pages with the characteristics of the one or more memory pages that the hypervisor 152 is swapping in. If the measured characteristics of the swapped-out memory pages match the characteristics of the swapped-in memory pages, the security module 130 allows the hypervisor 152 to return the memory pages to the VMs 150, 151. If the measured characteristics of the swapped-out memory pages do not match the characteristics of the returned memory pages, the security module 130 prevents the hypervisor 152 from returning the memory pages to the VMs 150, 151. In this manner, the security module 130 prevents the hypervisor 152 from swapping modified memory pages into the VMs 150, 151. In some embodiments, if the hypervisor 152 requests that the VMs 150, 151 swap out one or more memory pages from their balloon pools 124, 128 to the hypervisor 152, the security module 130 bypasses measuring the characteristics of the memory pages and allows the hypervisor 152 to swap memory pages out of the balloon pools 124, 128 directly, without security monitoring. Since the memory pages stored at the balloon pools 124, 128 are excess, unused, or less valuable memory, such memory pages do not require the security module 130 to verify their content.
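Before turning to FIG. 2, a minimal sketch of this swap-out/swap-in check may help. SHA-256, the field widths, and the function names are assumptions made for illustration; the text above specifies only that a hash is taken over the address range, page data, random number, and metadata, and compared on swap-in.

```python
import hashlib
import os

# The security module's record of swapped-out pages: address -> stored hash.
swap_list = {}

def page_hash(addr: int, page_data: bytes, nonce: bytes, metadata: bytes) -> bytes:
    """Hash over the values named above: address range, page data, random
    number, and metadata (SHA-256 is an assumption; the text names no hash)."""
    h = hashlib.sha256()
    for part in (addr.to_bytes(8, "big"), page_data, nonce, metadata):
        h.update(part)
    return h.digest()

def swap_out(addr: int, page_data: bytes, metadata: bytes) -> bytes:
    """Measure and record a page before handing it to the hypervisor;
    the nonce is returned so the hypervisor can present it at swap-in."""
    nonce = os.urandom(16)
    swap_list[addr] = page_hash(addr, page_data, nonce, metadata)
    return nonce

def swap_in(addr: int, page_data: bytes, nonce: bytes, metadata: bytes) -> bool:
    """Recompute the hash for the returning page and compare it with the
    stored measurement; a mismatch means the page was altered while away."""
    return swap_list.get(addr) == page_hash(addr, page_data, nonce, metadata)

nonce = swap_out(0x2000, b"re-encrypted page", b"meta")
assert swap_in(0x2000, b"re-encrypted page", nonce, b"meta")       # unmodified: allowed
assert not swap_in(0x2000, b"tampered page!!!", nonce, b"meta")    # altered: rejected
```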
According to some embodiments, FIG. 2 illustrates an example of a security module 230 of the processing system 100 of FIG. 1, which generates and stores a hash 245 of a memory page 240 transitioned between the VM 250 and the hypervisor 252. In the depicted example, the hypervisor 252 requests a portion of the memory allocated to the VM-A 250, and the security module 230 verifies the contents of the memory page 240 as it transitions between the VM-A 250 and the hypervisor 252. The security module 230 includes a security module memory 235 configured to store the measured characteristics of memory pages swapped in and out by the hypervisor 252. In operation, the hypervisor 252 requests additional memory 240 from the VM-A 250. Upon receiving the request from the hypervisor 252, the security module 230 verifies that the one or more requested memory pages 240 are verified and allocated to the VM-A 250. In some embodiments, the security module 230 reads and decrypts the page data stored in the memory page 240, generates a random number, and re-encrypts the page data using an offline encryption key (not shown) generated when the VM started. The security module 230 measures a characteristic of the memory page 240, referred to as a memory page hash 245, and stores the memory page hash 245 in a page swap list (not shown) for the VM-A 250 in the security module memory 235. In some implementations, the memory page hash 245 is a hash of the random number, the re-encrypted data, and the page metadata. In some implementations, the security module 230 marks the swapped-out memory page 240 as "not valid" or "invalid" in the log (not shown), maintained by the hypervisor 252, that indicates the memory allocated to each VM. By marking the swapped-out memory page 240 as invalid, the security module 230 indicates to the VM-A 250 that the swapped-out memory page is not available for the VM-A 250 to write. This prevents the hypervisor 252 (or another VM to which the hypervisor 252 allocates the swapped-out memory page) and the VM-A 250 from writing to the same memory page at the same time. In some implementations, the security module 230 provides the re-encrypted data, random number, page metadata, and hash of the memory page 240, together with the memory page 240, to the hypervisor 252. When the hypervisor 252 subsequently requests to swap in the memory page 240, the hypervisor 252 provides the re-encrypted data, random number, page metadata, and hash of the memory page 240 to the security module 230. The security module 230 calculates a hash of the re-encrypted data, random number, and page metadata of the swapped memory page 240 provided by the hypervisor 252 and compares the calculated hash with the stored hash 245. If the calculated hash and the stored hash 245 do not match, the security module 230 rejects the request to swap in the memory page 240. If the calculated hash and the stored hash 245 match, the security module 230 allows the hypervisor 252 to swap in the memory page 240 and return the memory page 240 to the VM-A 250. FIG. 3 is a block diagram illustrating an example of a security module 330 of the processing system 100 of FIG. 1, which assigns a subset 342 of the memory pages 340 allocated to the VM-A 350 to a hypervisor 352, according to some embodiments. In the depicted example, the security module 330 includes an encryption module 315. In some embodiments, the encryption module 315 may be separate from the security module 330. When the VM-A 350 is started, the hypervisor 352 allocates the memory pages 340 to the VM-A 350. The memory pages 340 are stored in a designated secure address space VM-A 322 of the memory 320. The hypervisor 352 calls the security module 330 to encrypt, using encryption key-A 316, the data written by the VM-A 350 to the memory pages 340, and generates an offline encryption key-B 319 uniquely associated with the VM-A 350. When the hypervisor 352 requests a subset 342 of the memory pages 340, the security module 330 reads and decrypts the page data stored in the memory page subset 342, generates a random number, and re-encrypts the page data using the offline encryption key-B 319 generated when the VM started.
The security module 330 calculates a hash of the random number, the re-encrypted data, and the page metadata, and stores the hash in a swap list (not shown) of the pages swapped out from the VM-A 350. FIG. 4 is a block diagram illustrating an example of a security module 430 of the processing system 100 of FIG. 1, which assigns a memory page subset 442 from a balloon pool 424 of the VM-A 450 to a hypervisor 452, according to some embodiments. In the depicted example, the memory 420 includes a portion 422 designated to store secure information for the VM-A 450, and a balloon pool 424 holding excess memory pages allocated to the VM-A 450 that are not expected to be written during a given time period or that the VM-A 450 considers less valuable. When the hypervisor 452 requests the memory page subset 442 stored at the VM-A balloon pool 424, the security module 430 bypasses the encryption, and the calculation and storage of the hash, of the memory page subset 442. Since the memory page subset 442 is unused or considered of little value by the VM-A 450, the hypervisor 452 can swap memory pages into and out of the VM-A balloon pool 424 without invoking the security module 430's protection against modification of the memory page subset 442. FIG. 5 is a flowchart illustrating a method 500 for monitoring, at the security module 130 of the processing system of FIG. 1, a subset of the memory pages allocated to the VM-A 150 as the subset transitions between the VM-A 150 and the hypervisor 152, according to some embodiments. At block 502, the hypervisor 152 allocates a plurality of memory pages to the virtual machine 150. At block 504, the encryption module 115 utilizes the first key 116 to encrypt the plurality of memory pages written by the VM-A 150. At block 506, the security module 130 receives a request from the hypervisor 152 for a subset of the memory pages allocated to the VM-A 150. At block 508, the security module designates the requested subset of memory pages as invalid at the VM-A 150. At block 510, the security module encrypts the requested subset of memory pages using the second key. At block 512, the security module 130 measures and stores characteristics of the requested subset of memory pages. At block 514, the security module 130 provides the requested subset of memory pages to the hypervisor 152. At block 516, the security module 130 receives a signal from the hypervisor 152 requesting that the subset of memory pages be returned to the VM-A 150. At block 518, the security module 130 compares the stored measured characteristics of the subset of memory pages with the characteristics of the memory pages being returned by the hypervisor 152. If the stored measured characteristics of the subset of memory pages match the characteristics of the memory pages being returned by the hypervisor 152, then at block 520 the security module provides the subset of memory pages to the VM-A 150. If the stored measured characteristics of the subset of memory pages do not match the characteristics of the memory pages being returned by the hypervisor 152, then at block 522 the security module 130 prevents the hypervisor 152 from providing the subset of memory pages to the VM-A 150 by rejecting the request to swap in the subset of memory pages. In some embodiments, in response to the stored measured characteristics not matching the characteristics of the memory pages being returned by the hypervisor 152, the security module 130 rejects the request to decrypt and re-encrypt the subset of memory pages with the VM-A 150 key. In this manner, the security module 130 prevents the hypervisor 152 from swapping in data encrypted with the VM-A 150 key, by refusing to perform the swap using the VM-A 150's key. Therefore, if the stored measured characteristics do not match the characteristics of the memory pages that the hypervisor 152 is attempting to return, the security module 130 does not allow the subset of memory pages to return to the secure page domain of the VM-A 150.
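The two release paths of methods 500 and 600 reduce to a simple dispatch: balloon-pool pages bypass monitoring, secure pages do not. Here is a minimal Python sketch of that decision; the function name, the string results, and the set-based bookkeeping are hypothetical illustration only.

```python
def handle_page_request(addr: int, secure_pages: set, balloon_pool: set) -> str:
    """Dispatch implied by methods 500 and 600 (helper names hypothetical):
    balloon-pool pages are handed over directly, while secure pages are
    measured, re-keyed, and marked invalid before release."""
    if addr in balloon_pool:                 # method 600 path: bypass monitoring
        balloon_pool.discard(addr)
        return "released without measurement"
    if addr in secure_pages:                 # method 500 path: monitored release
        secure_pages.discard(addr)
        # ... re-encrypt with the offline key, record the hash, mark invalid ...
        return "measured, re-encrypted, released"
    return "not owned by this VM"

secure, balloon = {0x1000}, {0x9000}
print(handle_page_request(0x9000, secure, balloon))  # balloon-pool bypass
print(handle_page_request(0x1000, secure, balloon))  # security-module path
```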
FIG. 6 is a flowchart illustrating a method 600 for allocating a subset of the memory pages of the balloon pool 124 allocated to the VM-A 150 to the hypervisor 152 of the processing system 100 of FIG. 1, according to some embodiments. At block 602, the hypervisor 152 allocates memory pages to the VM-A 150. At block 604, the hypervisor designates a first subset of the allocated memory pages at the first memory of the secure address space 122 associated with the VM-A 150. At block 606, the hypervisor designates a second subset of the allocated memory pages at the balloon pool 124 associated with the VM-A 150. At block 608, the security module 130 receives a request from the hypervisor 152 for a memory page in the balloon pool 124. At block 610, the security module 130 provides at least one memory page from the balloon pool 124 to the hypervisor 152. At block 612, the security module 130 receives a signal from the hypervisor 152 returning the at least one memory page to the VM-A 150. At block 614, the security module 130 returns the at least one memory page to the VM balloon pool 124. In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. The software includes the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer-readable storage medium may include, for example, a magnetic or optical disk storage device, solid-state storage devices such as flash memory, a cache, random access memory (RAM), or one or more other non-volatile memory devices, and the like. The executable instructions stored on the non-transitory computer-readable storage medium may be in source code, assembly language code, object code, or another instruction format that is interpreted or otherwise executable by one or more processors. A computer-readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media include, but are not limited to: optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-ray disc), magnetic media (e.g., floppy disk, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or flash memory), or micro-electromechanical systems (MEMS)-based storage media.
The computer-readable storage medium may be embedded in a computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB) flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)). It should be noted that not all of the activities or elements described above in the general description are required, that a portion of a particular activity or device may not be required, and that one or more other activities may be performed, or other elements included, in addition to those described. Furthermore, the order in which activities are listed is not necessarily the order in which they are performed. Additionally, the concepts have been described with reference to specific embodiments. However, those of ordinary skill in the art will realize that various modifications and changes may be made without departing from the scope of the present disclosure as set forth in the appended claims. Accordingly, the description and drawings are to be regarded as illustrative rather than restrictive, and all such modifications are intended to be included within the scope of this disclosure. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature that may cause any benefit, advantage, or solution to occur or become more pronounced shall not be construed as critical, required, or essential features of any or all of the claims. Moreover, the particular embodiments disclosed above are merely illustrative, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below. |
Methods and apparatuses to control digital data transfer via a memory channel between a memory module and a processor are disclosed. At least one of the memory module or the processor coalesces a plurality of short data words into multicast coalesced block data comprising a single data block for transfer via the memory channel. Each of the plurality of short data words pertains to one of at least two partitioned memory submodules in the memory module. The multicast coalesced block data is communicated over the memory channel. |
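The coalescing and extraction that the abstract above describes reduce, in effect, to packing many narrow words into one channel-width block and unpacking them on the other side. Below is a minimal Python sketch; the 2-byte word size, the 32-word block, and the little-endian layout are assumptions for illustration, not parameters from the patent.

```python
import struct

WORD_BYTES = 2     # size of one short data word; an assumption for illustration
BLOCK_WORDS = 32   # short words per coalesced block; also an assumption

def coalesce(words: list) -> bytes:
    """Pack short data words, each destined for a different memory submodule,
    into one block-sized transfer instead of one narrow transfer per word."""
    assert len(words) == BLOCK_WORDS
    return struct.pack("<%dH" % BLOCK_WORDS, *words)

def extract(block: bytes) -> list:
    """Recover the individual short data words from a coalesced block."""
    return list(struct.unpack("<%dH" % BLOCK_WORDS, block))

words = list(range(BLOCK_WORDS))
block = coalesce(words)            # a single 64-byte block on the channel
assert extract(block) == words     # instead of 32 separate narrow transfers
```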
CLAIMSWhat is claimed is:1. A method for controlling digital data transfer via a memory channel between a memory module and a processor, the method comprising: coalescing, by at least one of the memory module or the processor, a plurality of short data words into multicast coalesced block data comprising a single data block for transfer via the memory channel, each of the plurality of short data words pertaining to one of at least two partitioned memory submodules in the memory module; and communicating the multicast coalesced block data over the memory channel.2. The method of claim 1, further comprising: detecting, by the processor, a condition indicative of short data word transfers; and responsive to the detected condition, switching, by the at least one of the memory module or the processor, between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules.3. The method of claim 1, further comprising: sending, by the processor, a coalesced load command to the memory module to cause the memory module to retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words into the multicast coalesced block data; and responsive to receiving the multicast coalesced block data, extracting, by the processor, each of the short data words from the multicast coalesced block data.4. The method of claim 3, further comprising: sending, by the processor, one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to retrieve the short data words from the identified locations within the memory submodules.5. The method of claim 1, further comprising: configuring, by the processor, at least one register associated with each of the memory submodules to store the short data words accessible by the processor to be coalesced into the multicast coalesced block data.6. The method of claim 1, further comprising: generating, by the processor, the multicast coalesced block data to be transferred to the memory module; and
sending, by the processor, a coalesced store command to the memory module to cause the memory module to perform a multicast memory extract operation to extract the short data words from the multicast coalesced block data, distribute the short data words to the memory submodules, and store the short data words within the memory submodules.
7. The method of claim 6, further comprising: sending, by the processor, one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to store the short data words at the identified locations within the memory submodules.
8. The method of claim 1, further comprising: generating, by the processor, the multicast coalesced block data to be transferred to the memory module; and configuring, by the processor, at least one register associated with the memory submodules to cause the memory module to extract the short data words from the multicast coalesced block data and store the short data words in the at least one register.
9. The method of claim 1, further comprising: generating, by the processor, the multicast coalesced block data to be transferred to the memory module; and configuring, by the processor, the memory module to cause the memory module to determine one or more location identifiers identifying a plurality of locations associated with the short data words within the memory submodules based on the multicast coalesced block data supplied by the processor or information stored in the memory module.
10. A processor comprising: a memory interface configured to communicate with a memory module via a memory channel; and multicast coalesce logic configured to: perform data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, wherein each of the plurality of short data words pertains to one of at least two partitioned memory submodules, and communicate the multicast coalesced block data over the memory channel.
11. The processor of claim 10, the multicast coalesce logic further configured to:
configure a memory controller associated with the memory submodules to cause the memory controller to detect a condition to switch between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules, the condition indicative of short data word transfers.
12. The processor of claim 10, the multicast coalesce logic further configured to: send a coalesced load command to the memory module to cause the memory module to retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words into the multicast coalesced block data; and responsive to receiving the multicast coalesced block data, extract each of the short data words from the multicast coalesced block data.
13. The processor of claim 12, the multicast coalesce logic further configured to: send one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to retrieve the short data words from the identified locations within the memory submodules.
14. The processor of claim 10, the multicast coalesce logic further configured to: configure at least one register associated with the memory submodules to cause the memory module to read the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words from the register into the multicast coalesced block data.
15. The processor of claim 10, the multicast coalesce logic further configured to: generate the multicast coalesced block data to be transferred to the memory module; and send a coalesced store command to the memory module to cause the memory module to perform a multicast memory extract operation to extract the short data words from the multicast coalesced block data, distribute the short data words to the memory submodules, and store the short data words within the memory submodules.
16. The processor of claim 15, the multicast coalesce logic further configured to: send one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to store the short data words at the identified locations within the memory submodules.
17. The processor of claim 10, the multicast coalesce logic further configured to: generate the multicast coalesced block data to be transferred to the memory module; and configure at least one register associated with the memory submodules to cause the memory module to extract the short data words from the multicast coalesced block data and store the short data words in the register.
18. A memory module comprising: a processor interface configured to communicate with a processor via a memory channel; a plurality of partitioned memory submodules; and multicast coalesce logic configured to: perform data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, each of the plurality of short data words pertaining to one of the memory submodules, and communicate the multicast coalesced block data over the memory channel.
19. The memory module of claim 18, further comprising: a mode selection component configured to switch between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules.
20. The memory module of claim 19, further comprising: a memory controller associated with the memory submodules, the memory controller configured to detect a condition indicative of short data word transfers to switch between the first mode and the second mode.
21. The memory module of claim 18, further comprising: at least one near-memory or in-memory processor configured to determine one or more location identifiers identifying a plurality of locations associated with the short data words within the memory submodules based on the multicast coalesced block data supplied by the processor or information stored in the memory module.
22. The memory module of claim 18, the multicast coalesce logic further comprising: a plurality of shifter logic components coupled with the memory submodules, each of the shifter logic components configured to shift a position of the short data word based on an address offset for the memory submodule, wherein the short data words from the shifter logic components are concatenated to form the multicast coalesced block data.
23. The memory module of claim 22, the multicast coalesce logic configured to: responsive to receiving from the processor a coalesced load command, retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation by shifting the positions of the short data words using the shifter logic components and coalescing the short data words into the multicast coalesced block data.
24. The memory module of claim 23, the multicast coalesce logic configured to: receive one or more location identifiers from the processor identifying a plurality of locations associated with the short data words within the memory submodules such that the short data words are retrieved from the identified locations within the memory submodules.
25. The memory module of claim 18, the multicast coalesce logic further comprising: a plurality of selector logic components, each of the selector logic components configured to select the short data word from a portion of data retrieved from one of the memory submodules based on the address offset for the memory submodule, wherein the short data words from the selector logic components are concatenated to form the multicast coalesced block data.
26. The memory module of claim 18, further comprising: at least one register associated with the memory submodules, the register configured to store the short data word associated with the corresponding memory submodule until a multicast memory coalesce operation is performed to coalesce the short data words from the register into the multicast coalesced block data.
27. The memory module of claim 18, the multicast coalesce logic further comprising: a plurality of subset distribute logic components coupled with the memory submodules, each of the subset distribute logic components configured to extract one of the short data words from the multicast coalesced block data and distribute the extracted short data word to one of the memory submodules.
28. The memory module of claim 27, the multicast coalesce logic configured to: responsive to receiving from the processor a coalesced store command, perform a multicast memory extract operation to store within the memory submodules the short data words distributed to the memory submodules by the subset distribute logic components.
29. The memory module of claim 28, the multicast coalesce logic further configured to:
receive one or more location identifiers from the processor identifying a plurality of locations associated with the short data words within the memory submodules such that the short data words are stored at the identified locations within the memory submodules.
30. The memory module of claim 18, further comprising: at least one register associated with the memory submodules, the register configured to store the short data words extracted from the multicast coalesced block data that is received from the processor, the multicast coalesce logic configured to perform a multicast memory extract operation to extract and distribute the short data words from the register to the memory submodules and store the short data words within the memory submodules.
31. A system for controlling digital data transfer, comprising: a processor; a memory module comprising a plurality of partitioned memory submodules; a memory channel configured between the processor and the memory module; and a multicast coalesce logic configured to: perform data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, wherein each of the plurality of short data words pertains to one of the memory submodules, and communicate the multicast coalesced block data over the memory channel.
32. The system of claim 31, further comprising: a mode selection component configured to switch between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules; and a memory controller associated with the memory submodules, the memory controller configured to detect a condition indicative of short data word transfers and control the mode selection component to switch between the first mode and the second mode based on the detected condition.
SYSTEM AND METHOD FOR COALESCED MULTICAST DATA TRANSFERS OVER MEMORY INTERFACES
BACKGROUND ART
[0001] Cache memories are typically organized into cache lines in which information read from main memory or information to be written to the main memory is stored. In utilizing the information in cache memory, whether it is to read (or load) data from the cache memory or to write (or store) data to the main memory, memory interfaces are designed to read or write entire cache line(s) of information at a time, even if only a portion of the entire cache line of information is needed or only a portion of the entire cache line of information needs to be updated in the main memory. As a result, there are often many more memory transactions than needed to read requested information into cache or to write information to main memory when performing narrow data word accesses. The excess memory transactions not only consume power by increasing the overhead, but also reduce performance and cause degradation of the memory.
[0002] Some conventional memory modules typically use a cache line size of 64 bytes such that each transfer of data to and/or from the memory happens in cache-line-sized bursts over a memory bus. A simplified example is shown in FIG. 1, which is a logical representation of a prior-art system implementing a wide access from a single memory bank. Commonly, a memory module 100 has a host interface 102 which couples with a memory interface 106 of a host processor 104, also referred to as a main processor, which orchestrates the computation performed in the load and store operations of the memory module 100, via a memory bus or memory channel 108 of a specific width, in this case 256 bits or 32 bytes. Therefore, in some examples, the memory module 100 can only support a single coarse granularity of access (that is, 32 bytes) for any load or store operation. The memory module 100 includes a plurality of memory banks or submodules 112, 114, 116, and 118 (16 submodules in this example), each operatively coupled with the host interface 102 via another memory channel 110 (which in this example has a channel width of 256 bits). Each memory submodule may include a memory address register (MAR) to store the address of the memory location that is being accessed by the load or store operation, as well as a memory data register (MDR) to store the data on which the operation is being performed. Such registers are not shown in the figures for simplicity.
[0003] The host processor 104 issues a load or store request when there is a need to load memory data from certain memory submodules or to store data into a memory submodule.
When there is a data transfer between the host processor 104 and the memory module 100, such as the host processor 104 requesting certain bits of data from a specified submodule (which in this example is the submodule 114) to be sent to the host processor 104, a wide access is performed and an entire cache line of contiguous data, which includes the requested bits of data, is transferred from the single specified submodule to the host interface 102. For workloads that access data sparsely (for example, at 16-bit granularity), this can lead to significant wasted bandwidth, because whenever data is transferred across the memory interface, only a small portion of it is accessed.
[0004] Some conventional computing systems use software to statically identify such short accesses with low cache reuse potential in an attempt to address the inefficiency of sub-cache-line memory accesses. Some of the approaches involve sub-ranking, which groups the multiple memory chips of a dual in-line memory module (DIMM), or RAM stick, into subsets to allow each subset to provide a data chunk smaller than the size of the original transfers. However, sub-ranking requires separate commands for each sub-rank, resulting in reduced performance due to higher demands on a shared command bus, or increased hardware cost due to the dedicated command path required for each sub-rank. Sub-ranking is also impractical when each data access is provided by a single memory module. Some approaches involve reducing the height and/or width of DRAM banks, but such approaches involve changing the core DRAM array design and incur high overheads.
[0005] Other approaches target workloads such as matrix-vector multiplication, sparse matrix algebra, graph analytics, and sparse machine learning, which are domains in which the software can often predict cache reuse, or the lack thereof. However, modern memory systems still lack the ability to leverage such information to optimize memory bandwidth utilization, and there is still room for improvement regarding memory efficiency and effective bandwidth for sparse workloads.
[0006] Therefore, it would be highly advantageous to have improved data transfer between a memory module(s) and a host processor so as to allow for transfers of smaller data widths from the memory submodules without performing excess memory transactions.
BRIEF DESCRIPTION OF DRAWINGS
[0007] The implementations will be more readily understood in view of the following description when accompanied by the below figures, wherein like reference numerals represent like elements, and wherein:
[0008] FIG. 1 is a prior art block diagram illustrating one example of a memory module and a host processor of a system in which an entire cache line of contiguous data is transferred therebetween upon the host processor issuing a load or store request;
[0009] FIG. 2 is a block diagram illustrating one example of a memory module and a host processor of a system configured to perform multicast memory coalesce operations in accordance with an embodiment set forth in the disclosure;
[0010] FIG. 3 is a block diagram illustrating one example of multicast coalesced block data that is formed as a result of the multicast memory coalesce operations carried out by the system shown in FIG. 2;
[0011] FIG. 4 is a diagram illustrating one example of the system according to FIG. 2 with a plurality of memory modules coupled with the host processor;
[0012] FIG. 5 is a flowchart illustrating one example of a method for performing a multicast memory coalesce operation in accordance with one example set forth in the disclosure;
[0013] FIG. 6 is a flowchart illustrating one example of a method for switching between facilitating a multicast memory coalesce operation and facilitating contiguous block data transfers in accordance with one example set forth in the disclosure;
[0014] FIG. 7 is a flowchart illustrating one example of a method for loading data from the memory submodules in accordance with one example set forth in the disclosure;
[0015] FIG. 8 is a flowchart illustrating one example of a method for storing data in the memory submodules in accordance with one example set forth in the disclosure;
[0016] FIG. 9 is a diagram illustrating one example of a system in accordance with one example set forth in the disclosure;
[0017] FIG. 10 is a diagram illustrating one example of a memory module in accordance with one example set forth in the disclosure;
[0018] FIG. 11 is a diagram illustrating one example of a system in accordance with one example set forth in the disclosure;
[0019] FIG. 12 is a diagram illustrating one example of a system in accordance with one example set forth in the disclosure;
[0020] FIG. 13 is a diagram illustrating one example of a system in accordance with one example set forth in the disclosure; and
[0021] FIG. 14 is a diagram illustrating one example of a system in accordance with one example set forth in the disclosure.
DESCRIPTION OF EMBODIMENTS
[0022] Briefly, systems and methods help reduce the data transfer overhead and facilitate fine-grained data transfers by coalescing or aggregating short data words from a plurality of disparate memory submodules and transferring or communicating the coalesced data, referred to herein as multicast coalesced block data, over the memory channel simultaneously in a single block data transfer. In some implementations, a short data word is returned or loaded from each of a collection of partitioned memory submodules to a host processor at a unique position within the single block data transfer. In some implementations, a short data word is written or stored into each of a collection of partitioned memory submodules from a host processor at a unique position within the single block data transfer. In some examples, the memory submodules have register(s) associated therewith, where the register(s) store the short data word from the corresponding submodule until a multicast memory coalesce operation is performed, or store the short data words that are extracted from the multicast coalesced block data that is received from the processor. In some examples, the register(s) may be implemented as part of near-memory or in-memory processing technologies.
[0023] According to certain implementations, a method for controlling digital data transfer via a memory channel between a memory module and a processor, carried out by at least one of the memory module or the processor, coalesces a plurality of short data words into multicast coalesced block data comprising a single data block for transfer via the memory channel. Each of the plurality of short data words pertains to one of at least two partitioned memory submodules in the memory module. The multicast coalesced block data is communicated over the memory channel.
[0024] In some examples, the method includes the processor detecting a condition indicative of potential for short data word coalescing and, responsive to the detected condition, switching between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules.
[0025] In some examples, the method includes the processor sending a coalesced load command to the memory module to cause the memory module to retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words into the multicast coalesced block data and, responsive to receiving the multicast coalesced block data, extracting each of the short data words from the multicast coalesced block data. In certain examples, the method includes the processor sending one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the
memory module to retrieve the short data words from the identified locations within the memory submodules. In some examples, the method includes the processor sending one location identifier such that the memory module retrieves the short data words from the same location (e.g., address offset or near-memory register ID) within multiple memory submodules and coalesces the retrieved short data words into a single data block as the multicast coalesced block data. In some examples, the plurality of locations are associated with or identified by a plurality of location identifiers.
[0026] In some embodiments, the method includes the processor configuring at least one register associated with each of the memory submodules to store the short data words accessible by the processor to be coalesced into the multicast coalesced block data. In some examples, the method includes the processor generating the multicast coalesced block data to be transferred to the memory module and sending a coalesced store command to the memory module. The coalesced store command causes the memory module to perform a multicast memory extract operation to extract the short data words from the multicast coalesced block data, distribute the short data words to the memory submodules, and store the short data words within the memory submodules. In certain examples, the method includes the processor sending one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to store the short data words at the identified locations within the memory submodules. In another example, the method includes the processor sending one location identifier such that the memory module stores the short data words in the coalesced data block to the same location (address offset or near-memory register ID) within multiple memory submodules.
[0027] In some examples, the method includes the processor generating the multicast coalesced block data to be transferred to the memory module and configuring at least one register associated with the memory submodules to cause the memory module to extract the short data words from the multicast coalesced block data and store the short data words in the at least one register. In some examples, the method includes the processor generating the multicast coalesced block data to be transferred to the memory module and configuring the memory module to cause the memory module to determine one or more location identifiers identifying a plurality of locations associated with the short data words within the memory submodules based on the multicast coalesced block data supplied by the processor or information stored in the memory module.
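For illustration only, the load and store flows summarized in paragraphs [0023] through [0027] can be pictured as a small host-side command structure. The following C++ sketch is a non-normative aid: the type and field names (CoalescedCommand, Op, location_ids) are assumptions of this sketch, and the disclosure does not mandate any particular command encoding.

#include <array>
#include <cstdint>
#include <optional>

constexpr int kNumSubmodules = 16;  // at least two partitioned memory submodules

enum class Op : uint8_t {
    kCoalescedLoad,   // memory module coalesces one short data word per submodule
    kCoalescedStore   // memory module extracts and distributes the short data words
};

struct CoalescedCommand {
    Op op;
    // Optional per-submodule location identifiers (e.g., address offsets or
    // near-memory register IDs); omitted when every submodule is accessed at
    // the same location, in which case a single broadcast identifier suffices.
    std::optional<std::array<uint32_t, kNumSubmodules>> location_ids;
    // For a coalesced store: the single data block, one short data word per
    // submodule in submodule order.
    std::array<uint16_t, kNumSubmodules> block_data{};
};

A coalesced load returns a block of the same shape, from which the processor extracts each short data word.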
[0028] According to certain implementations, a processor includes a memory interface which communicates with a memory module via a memory channel and multicast coalesce logic. The multicast coalesce logic performs data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, and communicates the multicast coalesced block data over the memory channel. Each of the plurality of short data words pertains to one of at least two partitioned memory submodules.
[0029] In some examples, the multicast coalesce logic configures a memory controller associated with the memory submodules to cause the memory controller to detect a condition to switch between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules, the condition indicative of potential for short data word coalescing.
[0030] In some examples, the multicast coalesce logic sends a coalesced load command to the memory module to cause the memory module to retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words into the multicast coalesced block data and, responsive to receiving the multicast coalesced block data, extracts each of the short data words from the multicast coalesced block data. In certain examples, the multicast coalesce logic also sends one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to retrieve the short data words from the identified locations within the memory submodules.
[0031] In some examples, the multicast coalesce logic configures at least one register associated with the memory submodules to cause the memory module to read the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words from the register(s) into the multicast coalesced block data. In some examples, the register(s) may be per-submodule near-memory register(s) or per-submodule in-memory register(s), and in performing the multicast memory coalesce operation, the short data words stored in the register(s) from a prior in-memory or near-memory operation are retrieved and coalesced into a single data block.
[0032] In some examples, the multicast coalesce logic generates the multicast coalesced block data to be transferred to the memory module and sends a coalesced store command to the memory module to cause the memory module to perform a multicast memory extract operation to extract the short data words from the multicast coalesced block data, distribute
the short data words to the memory submodules, and store the short data words within the memory submodules. In certain examples, the multicast coalesce logic also sends one or more location identifiers to the memory module identifying a plurality of locations associated with the short data words within the memory submodules to cause the memory module to store the short data words at the identified locations within the memory submodules.
[0033] In some examples, the multicast coalesce logic generates the multicast coalesced block data to be transferred to the memory module and configures at least one register associated with the memory submodules to cause the memory module to extract the short data words from the multicast coalesced block data and store the short data words in the register(s). The register(s) may be per-submodule near-memory register(s) or per-submodule in-memory register(s).
[0034] According to certain implementations, a memory module includes a processor interface which communicates with a processor via a memory channel, a plurality of partitioned memory submodules, and multicast coalesce logic. The multicast coalesce logic performs data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, and communicating the multicast coalesced block data over the memory channel. Each of the plurality of short data words pertains to one of the memory submodules.
[0035] In some examples, the memory module includes a mode selection component, including but not limited to tri-state gates or a multiplexer, that switches between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules. In certain examples, the memory module also includes a memory controller associated with the memory submodules, the memory controller detecting a condition indicative of potential for short data word coalescing in order to switch between the first mode and the second mode.
[0036] In some examples, the memory module includes at least one near-memory or in-memory processing logic which determines one or more location identifiers identifying a plurality of locations associated with the short data words within the memory submodules based on the multicast coalesced block data supplied by the processor or information stored in the memory module.
[0037] In some examples, the multicast coalesce logic of the memory module includes a plurality of shifter logic components coupled with the memory submodules. Each of the shifter logic components shifts a position of the short data word based on an address offset
for the memory submodule. The short data words from the shifter logic components are concatenated to form the multicast coalesced block data. In certain examples, the multicast coalesce logic, in response to receiving from the processor a coalesced load command, retrieves the short data words from the memory submodules and performs a multicast memory coalesce operation by shifting the positions of the short data words using the shifter logic components and coalescing the short data words into the multicast coalesced block data. In certain examples, the multicast coalesce logic receives one or more location identifiers from the processor identifying a plurality of locations associated with the short data words within the memory submodules such that the short data words are retrieved from the identified locations within the memory submodules.
[0038] In some examples, the multicast coalesce logic further includes a plurality of selector logic components. Each of the selector logic components selects the short data word from a portion of data retrieved from one of the memory submodules based on the address offset for the memory submodule. The short data words from the selector logic components are concatenated to form the multicast coalesced block data.
[0039] In some examples, the memory module includes at least one register associated with the memory submodules. The register stores a short data word associated with the corresponding memory submodule until a multicast memory coalesce operation is performed to coalesce the short data words from the register into the multicast coalesced block data. In some examples, each of the short data words may have been previously loaded from the corresponding memory submodule, or it may have been computed by in-memory or near-memory processing logic based on the data stored in the submodule.
[0040] In some examples, the multicast coalesce logic of the memory module includes a plurality of subset distribute logic components coupled with the memory submodules. Each of the subset distribute logic components extracts one of the short data words from the multicast coalesced block data and distributes the extracted short data word to one of the memory submodules. In certain examples, the multicast coalesce logic of the memory module, in response to receiving from the processor a coalesced store command, performs a multicast memory extract operation to store within the memory submodules the short data words distributed to the memory submodules by the subset distribute logic components. In certain examples, the multicast coalesce logic further receives one or more location identifiers from the processor identifying a plurality of locations associated with the short data words within the memory submodules such that the short data words are stored at the identified locations within the memory submodules.
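A minimal sketch of the memory-side logic of paragraphs [0037] through [0040] follows, modeling one submodule row as sixteen 16-bit words. The helper names (select_word, form_block, distribute_block) and the word-granular offsets are assumptions of this sketch, not elements of the disclosure.

#include <array>
#include <cstdint>

constexpr int kSubmodules = 16;
using Row   = std::array<uint16_t, 16>;           // data retrieved from one submodule
using Block = std::array<uint16_t, kSubmodules>;  // multicast coalesced block data

// Selector: choose the short data word at a word-granular address offset
// within the portion of data retrieved from one submodule (paragraph [0038]).
uint16_t select_word(const Row& row, unsigned word_offset) {
    return row[word_offset & 0xF];
}

// Shifter plus concatenation: submodule i's word is shifted into slot i and
// the contributions are concatenated into the coalesced block ([0037]).
Block form_block(const std::array<Row, kSubmodules>& rows,
                 const std::array<unsigned, kSubmodules>& offsets) {
    Block block{};
    for (int i = 0; i < kSubmodules; ++i) {
        block[i] = select_word(rows[i], offsets[i]);
    }
    return block;
}

// Subset distribute: extract slot i from the block and hand it to submodule i,
// which stores it at its identified location ([0040]).
void distribute_block(const Block& block,
                      const std::array<unsigned, kSubmodules>& offsets,
                      std::array<Row, kSubmodules>& rows) {
    for (int i = 0; i < kSubmodules; ++i) {
        rows[i][offsets[i] & 0xF] = block[i];
    }
}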
[0041] In some examples, the memory module includes at least one register associated with the memory submodules. The register stores the short data words extracted from the multicast coalesced block data that is received from the processor. In some examples, subsequent memory command(s) may perform a memory operation using the short data words in the registers and/or the short data words in the associated memory submodules. In some examples, the multicast coalesce logic performs a multicast memory extract operation to extract and distribute the short data words from the register to the memory submodules and store the short data words within the memory submodules.
[0042] According to some implementations, a system for controlling digital data transfer includes a processor, a memory module including a plurality of partitioned memory submodules, a memory channel between the processor and the memory module, and a multicast coalesce logic. The multicast coalesce logic performs data transfer between the processor and the memory module via the memory channel by causing a plurality of short data words to be coalesced into multicast coalesced block data comprising a single data block prior to the transfer, and communicating the multicast coalesced block data over the memory channel. Each of the plurality of short data words pertains to one of the memory submodules. In certain examples, the system further includes a mode selection component that switches between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules, and a memory controller associated with the memory submodules. The memory controller detects a condition indicative of potential for short data word coalescing and controls the mode selection component to switch between the first mode and the second mode based on the detected condition.
[0043] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.
[0044] FIG. 2 illustrates a logical representation of one example of a system, or more specifically a computing system such as a portion of a hardware server, smartphone,
wearable, printer, laptop, desktop, or any other suitable computing device that utilizes data transfers between a memory module 100 and a host processor 104. A single memory module and a single memory channel are shown for simplicity, but it is to be understood that the disclosure also applies to systems with multiple memory modules and channels. In this example, the memory module 100 may be the main memory of the host processor 104, which may be a central processing unit (CPU), graphics processing unit (GPU), accelerated processing unit (APU), application specific integrated circuit (ASIC), or any other suitable integrated circuit that performs computations and issues memory requests to the memory module 100. Cache storage may also exist on the host processor, the memory module, or both. It is to be understood that any number of memory modules may be coupled with the host processor 104, as shown in FIG. 4, and the system may also implement a plurality of host processors 104 operatively coupled together.
[0045] The memory submodules may also be referred to herein as memory banks. The memory submodules 112, 114, 116, and 118 are disparate and operatively coupled with the host interface 102 via a plurality of memory channels or data links 200, 202, 204, and 206 for short data word transfer. In the example where there are 16 submodules, there may be 16 short data word links where each short data word link facilitates short data word transfer of 16 bits, for example, and each memory link operates independently of the other links. The links may each occupy a subsection of a shared memory channel such as a shared memory data bus 207. The data links 200, 202, 204, and 206 have a smaller channel width than the memory channel 108 such that the total combined width of all the data links 200, 202, 204, and 206 equals the width of the memory channel 108.
[0046] In the example shown, the memory module 100 includes 16 short word links, each having a width of 16 bits, totaling 256 bits, which equals the width of the memory channel 108. It is to be understood that other links may exist to couple the memory submodules with the host interface 102. For example, each of the submodules may also be operatively coupled with the host interface 102 via the memory channel 110 as shown in FIG. 1, in which case the memory channel 110, which has a greater channel width than any of the data links 200, 202, 204, and 206, is used for contiguous block data transfer instead of a multicast coalesced block data transfer, the details of which are further disclosed herein.
[0047] The systems and methods as disclosed herein are described in the context that the memory submodules share access to a memory channel to receive a broadcast instruction stream from the host processor. However, it should be apparent to those skilled in the art that
the techniques as disclosed herein may also extend to other forms of memory submodule groupings receiving broadcast instructions.
[0048] FIG. 3 illustrates one example of multicast coalesced block data 300 that is to be transferred via the memory channel 108 between the memory module 100 and the host processor 104. The block data 300 includes data segments 302, 304, 306, and 308 such that each data segment pertains to or is associated with a separate memory submodule. That is, in the case of a load command, each data segment is a short data word retrieved from one of the memory submodules in one-to-one correlation, i.e., each memory submodule may contribute no more than one short data word to the block data 300. In the case of a store command, each data segment is a short data word that is to be stored in a location within one of the memory submodules in one-to-one correlation, i.e., each memory submodule may receive no more than one short data word from the block data 300 to be stored therein. Each short data word may have any suitable word size, for example 8 bits, 16 bits, 32 bits, 64 bits, etc., that is smaller than the channel width and the cache line width, depending on the number of memory submodules in the memory module.
[0049] In the exemplary system shown in FIG. 2, the block data 300 to be transferred includes 16 data segments, each segment having 16 bits, and each segment is assigned to no more than one of the memory submodules. For example, the data segment 302 may be assigned to the memory submodule 112, the data segment 304 to the memory submodule 114, the data segment 306 to the memory submodule 116, and the data segment 308 to the memory submodule 118, and so on. For illustrative purposes only, a coalesced load (CLD) command causes each memory submodule (which in some examples may include a PIM unit) to return 16 bits of data, calculated by dividing 256 bits (the data transfer width of the memory interface of a single channel) by 16 memory submodules. Therefore, data from the memory submodule 112 may occupy bits 0 to 15 of the block data 300, data from the memory submodule 114 may occupy bits 16 to 31 of the block data 300, data from the memory submodule 116 may occupy bits 32 to 47 of the block data 300, and so forth, until data from the memory submodule 118 may occupy bits 240 to 255 of the block data 300. The block data 300 is then transferred or communicated over the memory channel 108 in a single block data transfer between the memory module 100 and the host processor 104. The block data 300 is also referred to as multicast coalesced block data due to the nature of the block data including a plurality of separate short data words that are addressed to a plurality of separate and independently functioning memory submodules, and the separate short data words are transferred simultaneously in a single block data transfer when the multicast coalesced block data is sent via the memory channel.
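As a worked example of the slot arithmetic in paragraph [0049], the short C++ program below prints the bit range of the block data 300 occupied by each submodule's short data word; it is illustrative only.

#include <cstdio>

int main() {
    const int channel_bits = 256;                       // single-channel transfer width
    const int submodules   = 16;                        // memory submodules per module
    const int word_bits    = channel_bits / submodules; // 256 / 16 = 16 bits per word
    for (int i = 0; i < submodules; ++i) {
        std::printf("submodule %2d -> bits %3d..%3d\n",
                    i, word_bits * i, word_bits * i + word_bits - 1);
    }
    return 0;
}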
[0050] FIG. 4 illustrates one example of a system with a plurality of memory modules 100, 400, 402, and 404, each of which is operatively coupled with the host processor 104 via the memory channel 108. Each memory module may include one or more memory dies and one or more logic dies with built-in computation capabilities provided by processing-near-memory (PNM) technologies. For example, the computation capabilities of memory module 100 may be implemented on a separate logic die 406 which is 3D-stacked with the memory die(s), and the memory die(s) may be implemented with the memory submodules 112, 114, 116, and 118. The memory submodules 112, 114, 116, and 118, also referred to as memory banks, may be dynamic random access memory (DRAM) devices in some examples. Each of the other memory modules 400, 402, and 404 may be similarly constructed. The implementations described herein are also applicable to cases where the computation capabilities (that is, computing units) are incorporated at each memory bank or memory module, as in bank-level processing-in-memory (PIM) systems. For example, the computing units may be implemented directly on the memory dies instead of on the separate logic die. In order to overcome command bandwidth limitations between the host and PIM units, the stream of commands may be broadcast to multiple PIM units within a memory module. An example implementation of such an organization is for each PIM command to be broadcast to all the memory banks associated with a single memory channel. Still further, the implementations described herein are also applicable in other system configurations that may consist of multiple host processors and memory modules interconnected in various configurations.
[0051] In certain embodiments, a non-transitory storage medium, such as memory, includes executable instructions that when executed by one or more processors, such as the host processor 104 or the memory module 100 with data processing capabilities, cause the one or more processors to perform the methods for controlling digital data transfer via a memory channel as disclosed in FIGs. 5 through 8.
[0052] FIG. 5 illustrates a method 500 of performing the coalesced block data transfer, as performed by the host processor 104, or by the memory module 100 having either logic die(s) with built-in computation capabilities as provided by PNM or memory dies with built-in computation capabilities as provided by PIM where each memory die has its own independent computation capabilities. In step 502, the processor or the memory module coalesces a plurality of short data words into the multicast coalesced block data. The
multicast coalesced block data includes a single data block for transfer via the memory channel, where each of the short data words pertains to one of at least two partitioned memory submodules in the memory module. In some examples, the short data words are associated with a subset of the memory submodules such that the short data words are either loaded from specific locations within the memory submodules or stored in the specific locations within the memory submodules. In step 504, the single data block containing the plurality of short data words is transferred via the memory channel in a single data transfer. The transfer may be from the processor to the memory module or from the memory module to the processor.
[0053] FIG. 6 illustrates a method 600 of switching between different modes in the system, where one mode facilitates the formation and transfer of the multicast coalesced block data between the processor and at least two of the memory submodules as explained in method 500, and the other mode facilitates the contiguous block data transfer between the processor and a single memory submodule. In step 602, the processor detects a condition indicative of potential for short data word coalescing. In step 604, responsive to the detected condition, the memory module or the processor switches between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer between the processor and one of the memory submodules.
[0054] In some implementations, the processes of detecting the condition and issuing a command to switch between the first and second modes take place entirely on the host side at the memory interface of the host processor. Although the multicast memory coalesce operation to coalesce the short data words associated with the plurality of memory submodules may occur on the memory side at the host interface of the memory module (i.e., when a coalesced load command is issued by the processor), the host processor is responsible for detecting the condition and issuing multicast coalescing requests to the memory module, which may be issued as part of the coalesced load command.
[0055] In some implementations, the condition is explicitly triggered by an application running on the processor or instructions issued by the application (e.g., a special memory command to “start coalescing”, or explicit coalesced memory commands issued by the application which trigger coalescing). In some implementations, the condition includes an indication of a sparse memory access operation at the memory interface. The sparse memory access is defined as accessing a smaller number of bits of data (e.g., a short data word) sparsely at two or more of the memory submodules, as opposed to a contiguous memory access in which a single contiguous section of a larger number of bits of data (e.g., an entire
cache line or data block) is to be accessed at a single memory submodule. For example, the memory interface may include a memory controller which stores a sequence of memory commands from the processor into queues such that there is a separate queue for each memory submodule. The memory controller may detect a hint or indication that the commands at the front of the queues are memory commands for sparse bits or short data words, and in response to the detection, the sparse bits or short data words from the queues are concatenated or coalesced into the multicast coalesced block data.
[0056] In some implementations, the memory controller may store a sequence of memory commands into a single memory queue. In this case, the condition may be detected by periodically searching the queue for memory commands targeting short data words that map to different memory submodules. Additionally or alternatively, when a memory command targeting a short data word is inserted, the queue may be searched for other commands of the same or similar type that target short data words in different submodules with which the inserted command may be coalesced. A threshold on the number of coalesceable commands may be implemented to trigger the condition. If coalescing is limited to short data words that share some address offset bits (e.g., the short data words fall in the same DRAM column index), the address bits are also compared when searching for coalescing opportunities. In such implementations, the queue entries may also contain information regarding whether the associated memory command targets a short data word, information regarding whether multiple short data words have been coalesced into the queue entry, and offset information regarding the short data word(s) targeted by the queue entry.
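A hedged sketch of the single-queue detection heuristic of paragraph [0056] follows; the structure and field names (QueuedCmd, coalesce_opportunity) are assumptions of this sketch, and a hardware memory controller would realize the same check in logic rather than software.

#include <bitset>
#include <vector>

struct QueuedCmd {
    bool is_load;           // commands must be of the same or similar type
    bool is_short_word;     // whether the command targets a short data word
    unsigned submodule;     // which memory submodule (0..15) the word maps to
    unsigned column_index;  // shared address offset bits (e.g., DRAM column)
};

// Returns true if, counting the incoming command, at least `threshold`
// coalesceable commands of the same type and column index target distinct
// memory submodules.
bool coalesce_opportunity(const std::vector<QueuedCmd>& queue,
                          const QueuedCmd& incoming, unsigned threshold) {
    std::bitset<16> submodules;
    submodules.set(incoming.submodule & 0xF);
    for (const QueuedCmd& c : queue) {
        if (c.is_short_word && c.is_load == incoming.is_load &&
            c.column_index == incoming.column_index) {
            submodules.set(c.submodule & 0xF);
        }
    }
    return submodules.count() >= threshold;
}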
[0057] FIG. 7 illustrates a method 700 of performing a coalesced memory load operation by the processor. In step 702, the processor sends a coalesced load command to the memory module. The command causes the memory module to retrieve the short data words from the memory submodules and perform a multicast memory coalesce operation to coalesce the short data words into the coalesced block data. In step 704, the processor determines whether the coalesced command directly targets the memory submodules, or whether the command targets in-memory or near-memory registers. In some examples, the type of command that is being coalesced specifies whether the memory submodules or the in-memory or near-memory registers are targeted.
[0058] If the coalesced command targets the memory submodules, in step 706, the processor communicates one or more location identifiers to the memory module. The one or more location identifiers identify the plurality of locations associated with the short data words within the memory submodules. The processor may cause the memory module to retrieve the short data words from the identified locations within the memory submodules. The short data words are coalesced into the multicast coalesced block data prior to being transferred as a single data block via the memory channel. Subsequently, in step 708, the processor extracts (or de-coalesces) each of the short data words from the multicast coalesced block data, in response to receiving the multicast coalesced block data.
[0059] If the coalesced memory command targets the in-memory or near-memory registers, in step 710, the processor configures at least one register associated with each of the memory submodules to cause the memory module to store a short data word to be coalesced into the coalesced block data. In some embodiments, a register is implemented for every memory submodule and can be directly accessed. If the same register and same offset within the register is accessed for every memory submodule, then, according to some examples, it may not be necessary to supply location information for each of the short data words. Placing the short data words in the registers may be orchestrated in advance by the processor by performing memory-local load commands from the memory submodules in some examples. In some examples, the placement of the short data words in the registers may be orchestrated by processing each of the short data words (for example, performing calculations on the short data words) using the near-memory or in-memory processing capabilities of the memory module. In step 712, the multicast memory coalesce operation is performed to read the short data words from the register(s) and form the multicast coalesced block data, before proceeding to step 708.
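For illustration, the following C++ sketch renders the host-side steps of method 700; issue_cld is a placeholder standing in for the memory channel, and all names are assumptions of this sketch rather than part of the disclosure.

#include <array>
#include <cstdint>

constexpr int kSubmodules = 16;
using Block256 = std::array<uint16_t, kSubmodules>;

enum class Target { kSubmodules, kNearMemoryRegisters };

// Placeholder for the memory channel: a real system drives the CLD command
// (and, for direct submodule targets, the location identifiers) over the
// command/data buses and receives the coalesced block in a single transfer.
Block256 issue_cld(Target /*target*/,
                   const std::array<uint32_t, kSubmodules>* /*location_ids*/) {
    return Block256{};
}

// Steps 702 through 712: issue the CLD, supplying per-submodule location
// identifiers only when the command directly targets the submodules (step
// 706); then de-coalesce the returned block (step 708).
std::array<uint16_t, kSubmodules> coalesced_load(
        Target target, const std::array<uint32_t, kSubmodules>& locations) {
    const Block256 block = (target == Target::kSubmodules)
                               ? issue_cld(target, &locations)  // steps 702/706
                               : issue_cld(target, nullptr);    // steps 702/710-712
    std::array<uint16_t, kSubmodules> words{};
    for (int i = 0; i < kSubmodules; ++i) {
        words[i] = block[i];  // each short data word sits at its unique slot
    }
    return words;
}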
[0060] FIG. 8 illustrates a method 800 of performing a coalesced memory store operation by the processor. In step 802, the processor generates the multicast coalesced block data to be transferred to the memory module. In step 804, the processor determines whether the coalesced commands directly target the memory submodules, or whether the commands target the in-memory or near-memory registers.
[0061] If the coalesced memory command targets the memory submodules, in step 806, the processor sends one or more location identifiers to the memory module. The location identifiers identify the locations associated with the short data words within the memory submodules, which causes the memory module to store the short data words at the identified locations within the memory submodules. In step 808, the processor sends a coalesced store command to the memory module. The command causes the memory module to perform a multicast memory extract operation to extract the short data words from the multicast coalesced block data, distribute the short data words to the memory submodules, and store the short data words at the identified locations within the memory submodules. The steps 806 and 808 may be performed sequentially or simultaneously. In some examples, the location identifier(s) may be generated or computed by the in-memory or near-memory processing logic component based on data stored in the per-submodule register or the memory submodule.
[0062] If the coalesced memory command targets in-memory or near-memory registers, the processor proceeds to step 810 to configure at least one register associated with the memory submodules to cause the memory module to extract the short data words from the multicast coalesced block data and store the short data words in the at least one register. For example, after the short data words are stored to the register(s) local to the memory submodules, the memory submodule’s near-memory or in-memory computing component may access the register(s) for further computation and/or data movement.
[0063] In methods 700 and 800, the location identifier(s) may be communicated by using a number of bits in the coalesced load/store command and/or the data bus to transmit per-submodule location bits. Alternatively, the location identifier(s) may be obtained by computing per-submodule location information, for example by loading the location identifier(s) and/or computing the location identifier(s) near-memory and subsequently storing it in a near-memory register. In some examples, the location information that is static or common to all memory submodules may not need to be transmitted separately. For example, additional per-submodule location bits may be added to a common base address, and no additional location information may need to be provided if a coalesced command targets a short data word at the same offset in each memory submodule or near-memory register.
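The base-plus-offset addressing option in paragraph [0063] can be sketched as follows; the bit widths and names are illustrative assumptions. With identical per-submodule offsets, the computation degenerates to the broadcast-only case in which no extra location information needs to be transmitted.

#include <array>
#include <cstdint>

constexpr int kSubmodules = 16;

// Compose each submodule-local address from a shared (broadcast) base and a
// few per-submodule location bits.
std::array<uint32_t, kSubmodules> submodule_addresses(
        uint32_t common_base,
        const std::array<uint8_t, kSubmodules>& per_submodule_bits) {
    std::array<uint32_t, kSubmodules> addr{};
    for (int i = 0; i < kSubmodules; ++i) {
        addr[i] = common_base + per_submodule_bits[i];  // word-granular offset
    }
    return addr;
}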
[0064] Any of the logic components as disclosed herein, such as those shown in FIGs. 9 through 14, may be implemented as discrete logic, one or more state machines, a field programmable gate array (FPGA), or any suitable combination of processors/processing logic components executing instructions and other hardware. According to embodiments, a logic device or component includes devices or components that are able to perform a logical operation, e.g., an apparatus able to perform a Boolean logical operation. It will be recognized that functional blocks are shown and that the various operations may be combined or separated as desired. It will also be recognized that all the functional blocks are not needed in all implementations. The arrows shown in the figures define the directions of data transfer between the components during the specified loading or storing operation, which may be implemented using any suitable data path, such as via hardwiring or a data channel such as a data bus. Some implementations may also involve pipelining of transfers/operations in which the data is buffered in one or more registers. Furthermore, for simplicity, the embodiments described herein pertain to a memory module 100 that is high bandwidth memory (HBM) where an entire cache line of data access is provided from a single memory module.
[0065] It is to be understood that the disclosure is applicable to other types of memory where multiple memory modules contribute to a single data access. In such cases, the coalescing of multiple short data elements (that is, short data words) into a single data block may occur within each participating memory module. Furthermore, it is to be understood that although the figures are presented in the context of an example system with 16 memory banks or submodules and a memory channel interface with a data width of 256 bits for block data transfer via the single channel (that is, the size of data transferred with each load or store operation to the channel), other implementations of the system are possible with different parameters such as more or fewer memory submodules and/or a wider or narrower memory channel, among others.
[0066] FIG. 9 illustrates one example of a system, or more specifically a computing system that utilizes data transfers between a memory module 100 and a host processor 104. In this example, the memory module 100 includes a multicast coalesce logic 900 which includes a plurality of processing logic components 902, 904, 906, 908 with a plurality of registers 912, 914, 916, 918, where each processing logic component and register is associated with one of the memory submodules 112, 114, 116, 118. The processing logic 902, 904, 906, 908 are the near-memory or in-memory computing components configured to provide computing capabilities for the memory submodules. The multicast coalesce logic 900 is coupled with the data links 200, 202, 204, 206 such that the short data words are transferred via the data links 200, 202, 204, 206 through a shared data bus, or a data channel 934.
[0067] For illustrative purposes only, the short word data to be received from or sent to the processing logic 902 may occupy bits 0 to 15 of the data channel 934, the short word data to be received from or sent to the processing logic 904 may occupy bits 16 to 31 of the data channel 934, the short word data to be received from or sent to the processing logic 906 may occupy bits 32 to 47 of the data channel 934, and so forth, until the short word data to be received from or sent to the processing logic 908 may occupy bits 240 to 255 of the data channel 934. In some embodiments, the processing logic 902, 904, 906, 908 may include enough processing capabilities to support coalesced load (CLD) and coalesced store (CST) operations and may be dedicated to supporting only such operations.
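A non-normative model of the FIG. 9 read path of paragraphs [0066] and [0067]: each processing logic component drives its register's short data word onto a fixed 16-bit slot of the shared data channel 934. The type names are assumptions of this sketch.

#include <array>
#include <cstdint>

constexpr int kSubmodules = 16;
using Channel256 = std::array<uint16_t, kSubmodules>;  // models data channel 934

struct ProcessingLogic {
    uint16_t reg;  // register 912/914/916/918 local to one submodule
};

// One coalesced beat: processing logic i drives bits 16*i through 16*i+15.
Channel256 coalesced_beat(const std::array<ProcessingLogic, kSubmodules>& pl) {
    Channel256 channel{};
    for (int i = 0; i < kSubmodules; ++i) {
        channel[i] = pl[i].reg;
    }
    return channel;
}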
mode selector 922 operates as a switch between a first mode facilitating the transfer of the multicast coalesced block data and a second mode facilitating a contiguous block data transfer, as explained above. The mode selector 922 may be implemented as a programmable logic device such that a control bit is utilized to activate the switching. The mode selector 922 is configured to transfer data through the data channel 934 while in the first mode and transfer data to and from a memory submodule selector 924 while in the second mode. The memory submodule selector 924 is shared by all the memory submodules and is configured to select which submodule to access based on a provided memory submodule identifier (ID) 910 when the mode selector 922 is operating in the second mode. Each of the mode selector 922 and the submodule selector 924, according to some embodiments, may be implemented as a single logic component in a single central location (as shown in the figures). Alternatively, in some embodiments, one or both of the mode selector 922 and the submodule selector 924 may be implemented in a distributed manner such that the mode selector 922 and/or the submodule selector 924 includes a plurality of separately functioning logic components, with at least one logic component disposed near each of the memory submodules to control the access to a single shared data channel or data bus by the memory submodules. In some examples, the logic components may include, but are not limited to, multiplexers or tri-state gates.[0069] The host processor 104 also includes a multicast coalesce logic 926, which is separate from the multicast coalesce logic 900 of the memory module 100 and therefore has different functionality. Examples of performing a multicast coalesced block data transfer are described in detail below, in view of the components mentioned above.[0070] In some implementations of the CLD operation, for PIM-implemented systems, a PIM register identifier is specified for the PIM unit associated with each memory bank or submodule, utilizing PIM support. Each of the PIM units contributes the short data word (e.g., 16 bits) of the identified register to the coalesced output. In cases where the register is wider than the length of the short data word, some embodiments may return the lower 16 bits of the register or some other fixed offset of the register. In other embodiments, the CLD operation may have a parameter that allows the software on the host to specify which 16 bits of the register each PIM unit should return. In other embodiments, the register can be programmed at the PIM unit a priori before the CLD operation is issued.[0071] In some of the other implementations, the CLD operation may specify a memory address within each memory bank or submodule such that each memory submodule reads the short word data (e.g., 16 bits) stored at the specified memory location and returns it. In one
embodiment, this is achieved by each memory module receiving a broadcast intra-module memory address as part of the CLD operation and returning the data at that location in each memory submodule. In other embodiments, this utilizes support for communicating additional address information to each memory submodule. This may be achieved via bank-local address generation or by enhancing the command interface, e.g., by using the data bus to send command information, or alternatively by coalescing commands that share target address bits.[0072] In the CLD operation, in response to a command broadcast to a collection of memory submodules, the memory submodules each return to the requesting host a chunk or block of data at a fixed, unique position within the wide load data return of the memory interface. As each memory submodule returns data in a specific and unique location within the data returned to the host, all the participating memory submodules return their data in a single block data transfer over the memory data bus or channel.[0073] Specifically, during the CLD operation according to some examples, the host processor 104 specifies the submodule-specific registers 912, 914, 916, 918 associated with the memory submodules 112, 114, 116, 118 to access. Each of the in-memory or near-memory processing logic components 902, 904, 906, 908 contributes a short data word of a predetermined word length (16 bits in the illustrated example) as stored in the corresponding register to be output to the host interface 102 via data channels 934 and 936. [0074] In cases where the register 912, 914, 916, 918 is wider than the word length of the short data word, in some embodiments, the lower 16 bits (or the number of bits pertaining to the length of the short data word) are returned, or alternatively other offsets may be employed to retrieve the short data words from different locations in the registers. In some embodiments, upon initiating the CLD operation, the host processor 104 may issue a command specifying which bits of the register each of the processing logic 902, 904, 906, 908 should return. In some embodiments, the registers 912, 914, 916, 918 may be populated with the short data words by the corresponding processing logic 902, 904, 906, 908 by retrieving the short data words via memory channels 930 a priori before the CLD operation is issued by the host processor 104. As shown, the memory channels 930 are wider (for example, 256 bits) than the data links 200, 202, 204, 206 because the same channels may be used to transfer block data between the submodules 112, 114, 116, 118 and the submodule selector 924 as well.[0075] In some examples, after the registers 912, 914, 916, 918 are populated with the short data words from their corresponding memory submodules 112, 114, 116, 118, the processing
logic 902, 904, 906, 908 transfer the short data words via the data channel 934 as multicast coalesced block data. When the mode selector 922 switches to the first mode facilitating multicast coalesced block data transfer, the host processor 104 can retrieve the block data via the data channel 934. Otherwise, when the mode selector 922 switches to the second mode facilitating contiguous block data transfer, the host processor 104 can retrieve contiguous block data via another data channel 932 from one of the memory submodules 112, 114, 116, 118 as determined by the submodule ID 910 provided by the host processor 104.[0076] Responsive to receiving the multicast coalesced block data via the memory channel 108, the host processor 104 transfers the block data via an internal data channel 938 to the multicast coalesce logic 926 to extract the short data words from the block data to be processed accordingly. The multicast coalesce logic 926, therefore, is also capable of operating as an extractor or de-coalescing component, and the multicast coalesce logic 900 in the memory module 100 is also capable of performing such an operation, as further explained herein with regard to a coalesced store operation.[0077] In some implementations of the CST operation, for PIM-implemented systems, a PIM register identifier is specified for the PIM unit associated with each memory bank or submodule, utilizing PIM support. Each of the PIM units writes the short data word received from the coalesced input to the identified register. In cases where the register is wider than the length of the short data word (e.g., 16 bits), some embodiments may store or perform a PIM operation on the data in the lower 16 bits and zero out the remaining bits, for example via masking. Other embodiments may sign extend the 16-bit short data word and store or perform a PIM operation with the extended data targeting the specified PIM register. In yet other embodiments, the CST operation may have a parameter that allows the software on the host to specify to which 16 bits of the register each PIM unit is to write the corresponding data. In other embodiments, the register can be programmed at the PIM unit a priori before the CST operation is issued.[0078] In some of the other implementations, the CST operation may specify a memory address within each memory bank or submodule such that each memory submodule writes the short word data (e.g., 16 bits) to the specified memory location. In one embodiment, this is achieved by each memory module receiving a broadcast intra-module memory address as part of the CST operation and storing the data at that location in each memory submodule. In other embodiments, this utilizes support for communicating additional address information to each memory submodule. This may be achieved via bank-local address generation or by
enhancing the command interface, e.g., by using the data bus to send command information, or alternatively by coalescing commands that share target address bits.[0079] Specifically, in some examples, the host processor 104 transfers data in the opposite direction from the CLD operation such that the multicast coalesce logic 926 performs the coalescing of multiple short data words to be transferred to the memory module 100 as a single block of data (or a single write operation), after which the short data words are extracted and distributed across the memory submodules 112, 114, 116, 118. As an illustrative example, the data distribution may be performed as follows: data from bits 0 to 15 of the host-provided multicast coalesced block data is sent to the submodule 112, data from bits 16 to 31 of the host-provided multicast coalesced block data is sent to the submodule 114, data from bits 32 to 47 of the host-provided multicast coalesced block data is sent to the submodule 116, and so forth, until data from bits 240 to 255 of the host-provided multicast coalesced block data is sent to the submodule 118.[0080] In some embodiments, the CST operation specifies the registers 912, 914, 916, 918 associated with the submodules 112, 114, 116, 118, and the corresponding processing logic 902, 904, 906, 908 write the short data words (for example, 16 bits) extracted from the multicast coalesced block data to be stored into the registers 912, 914, 916, 918.[0081] In cases where the register 912, 914, 916, 918 is wider than the word length of the short data word, in some embodiments, the short data word may be stored in the lower 16 bits (or the number of bits pertaining to the length of the short data word) while leaving the remaining bits as 0. In some embodiments, a sign extension operation is performed on the short data words to extend the data length thereof before storing it or performing a processing operation by the processor, with the extended data targeting the specified register 912, 914, 916, 918. In other embodiments, the CST operation may have a parameter that allows the host to specify which of the bits stored in the registers 912, 914, 916, 918 should be written by the processing logic 902, 904, 906, 908 into the corresponding memory submodules 112, 114, 116, 118.[0082] In both of the CLD and CST operations, the host processor 104 may provide instructions regarding the operation of the mode selector 922 and the submodule selector 924. For example, the multicast coalesce logic 926 may initiate a command or instruction 940 to switch the mode selector 922 from the second mode facilitating contiguous block data transfer to the first mode facilitating multicast coalesced block data transfer when providing the CLD or CST operation via a command channel or command bus 928. The instruction to switch to the first mode may be as simple as activating a control bit in the mode selector 922,
such that the first mode is activated when the control bit is 1, and the second mode is activated when the control bit is 0, for example. Otherwise, if the host processor 104 intends to switch the mode selector 922 back to the second mode (for example, after the multicast coalesced block data transfer is complete), this can be achieved by toggling the control bit. [0083] In some embodiments, the near-memory or in-memory processing logic components 902, 904, 906, 908 are capable of determining one or more location identifiers (for example, address bits such as column indices of memory arrays) identifying the specific locations associated with the short data words within the memory submodules 112, 114, 116, 118 based on the multicast coalesced block data supplied by the host processor 104 or the information stored in the memory module 100, for example in the registers 912, 914, 916, 918. Furthermore, although FIG. 9 illustrates each memory submodule associated with its own near-memory or in-memory processor, in some embodiments, the processing capabilities of a single near-memory or in-memory processor may be shared among a plurality of memory submodules.[0084] In some embodiments, the instruction to switch between the two aforementioned modes may be provided by the near-memory or in-memory processor(s), as shown by the transfer of a coalescing configuration bit 944 from a near-memory storage, for example from any of the processing logic components 902, 904, 906, 908. The processor may update its near-memory storage value whenever it determines a condition to switch between coalesced mode and contiguous mode. For example, the processor(s) may be capable of detecting a condition indicative of potential for a multicast coalesced block data transfer, such as those previously explained herein. Furthermore, some implementations may also involve pipelining of transfers/operations in which the data is buffered in one or more intermediate buffer registers, for example the block data register 1102 shown in FIGs. 11 to 14, which may be disposed between the mode selector 922 and the multicast coalesce logic 900.[0085] FIG. 10 illustrates the different types of registers which may be utilized by the processing logic components 902, 904, 906, 908 as part of the multicast coalesce logic 900. In some embodiments, an offset register 1000, 1002, 1004, 1006 may be implemented to store the offset information for the short data word that is either loaded from or to be stored in the corresponding submodule. For example, in the CLD operation, the offset register 1000, 1002, 1004, 1006 may store a specific address offset for the short data word when it is being coalesced with the other short data words into the multicast coalesced block data (such as the short data word from the memory submodule 112 occupying bits 0 to 15, the short data word from the memory submodule 114 occupying bits 16 to 31, and so forth). Each address offset
is different and unique to the corresponding memory submodule in order to avoid a short data word entry overwriting another short data word entry due to an unintended overlap of the occupied bits.[0086] Furthermore, in the CST operation, the address offset may help the processing logic 902, 904, 906, 908 identify which bits of the multicast coalesced block data to retrieve the short data word from in order to store the retrieved short data word in the corresponding memory submodule 112, 114, 116, 118. In some embodiments, the offset register 1000, 1002, 1004, 1006 contains a plurality of offset information such that when the processor supplies a base address, the processing logic 902, 904, 906, 908 can calculate the unique location information associated with each of the plurality of memory submodules based on the stored offset information and the provided base address. The offset values stored in the register may be preprogrammed or stored a priori before the CLD or CST operation is issued by the host processor 104.[0087] In some embodiments, a coalesce configuration register 1008, 1010, 1012, 1014 may be implemented to store data regarding the mode to be selected by the mode selector 922, as determined by the processing logic components 902, 904, 906, 908 in response to detecting conditions indicative of a multicast coalesced block data transfer. The register may indicate a single bit, where the first mode is activated when the bit is 1, and the second mode is activated when the bit is 0. Short data registers 1016, 1018, 1020, 1022 are the registers which store the short data word to be coalesced into the multicast coalesced block data during the CLD operations or to be stored in the corresponding memory submodule 112, 114, 116, 118 during the CST operations. In some examples, the CLD operation reads from the short data registers 1016, 1018, 1020, 1022 (or one or more per-submodule near-memory registers) to obtain the short data words, in which case the short data words are already stored in the registers as a result of a prior near-memory processing operation. Therefore, the short data words may be stored a priori in these short data registers before the operation is issued.[0088] FIG. 11 illustrates one example of a computing system that utilizes data transfers between a memory module 100 and a host processor 104. In this example, the host processor 104 provides the instruction to operate the mode selector 922, the address offset for each of the short data words, and the location address for each of the memory submodules 112, 114, 116, 118. The address offset and the location address are sent from the host processor 104 as an instruction command 1100 to the multicast coalesce logic 900 (which is coupled with a block data register 1102 to store the multicast coalesced block data) and to the memory submodules, respectively. The address offset defines how much a position of the short data word is shifted during the
multicast coalescing operation, and the location address defines the specific location within a specified memory submodule (for example, a column index of a memory array within the memory submodule) that is to be accessed for the CLD or CST operations. In some examples, near-memory integer adders may be implemented to generate the location addresses for the memory submodules. FIGs. 12 and 14 illustrate the dataflow within the system shown in FIG. 11 during the CLD operations, and FIG. 13 illustrates the dataflow within the system during the CST operations.[0089] FIG. 12 illustrates one example of a system operating the CLD operation, where the multicast coalesce logic 900 includes a concatenate logic 1200 coupled with address shift components, which in this example are shifter/selector logic 1202, 1204, 1206, 1208 (which may be programmable) configured to receive the data to be loaded from the memory submodules 112, 114, 116, 118. The shifter/selector logic 1202, 1204, 1206, 1208 is configured to receive contiguous block data from the corresponding memory submodule 112, 114, 116, 118. The shifter/selector logic 1202, 1204, 1206, 1208 is then configured to (1) select the short data word from the contiguous block data to store in the block data register 1102, or (2) shift the short data word by an address offset before storing the short data word in the block data register 1102.[0090] If the selector logic (1) is implemented, the short data word is selected using the predetermined address offset without performing address shifting. If the shifter logic (2) is used, the short data words are separately shifted using the respective offsets into the first predetermined number of bits, and the predetermined number of bits, starting with the first bit, are selected to obtain the short data words. The stored short data words are coalesced using the concatenate logic 1200, which performs string concatenation to join the short data words end-to-end.[0091] Although not shown, the shifter/selector logic 1202, 1204, 1206, 1208 may include register(s) configured to store the location address (to determine which bits or location within the contiguous block data should be selected as the short data word) and/or the address offset value (to determine the amount of shifting to be performed on the short data word prior to coalescing).[0092] Short data word extraction logic 1210 may be implemented in the host processor 104 to extract the individual short data words from the multicast coalesced block data after receiving the same via the memory channel 108 in a single block data transfer. Each shifter/selector logic 1202, 1204, 1206, 1208 may receive the address offset information in the instruction command 1100 provided by the host processor 104, where the address offset
information defines where the short data word from each memory submodule is to be located in the multicast coalesced block data. Each memory submodule 112, 114, 116, 118 may receive the location address information in the instruction command 1100 to store in the memory address register the location from which the short data word is to be retrieved.[0093] FIG. 13 illustrates one example of a system operating the CST operation, where the host processor 104 includes concatenate logic 1300 to form the multicast coalesced block data including short data words that are to be stored in a plurality of the memory submodules 112, 114, 116, 118. The multicast coalesce logic 900, which in this case receives the multicast coalesced block data via a de-coalescing path, includes subset distribution logic 1302, 1304, 1306, 1308 to distribute the short data words to their respective memory submodules as intended by the host processor 104. In some examples, the subset distribution logic further implements additional address offset bits communicated from the multicast coalesce logic to the memory submodule to indicate which short data word(s) needs to be written, as well as to prevent writing other bits in the column index of the memory array. [0094] The block data register 1102 stores the multicast coalesced block data received from the host processor 104, and the stored data is transferred to each subset distribution logic 1302, 1304, 1306, 1308 via a data channel 1310, 1312, 1314, 1316. Each subset distribution logic may receive the address offset information in the instruction command 1100 provided by the host processor 104, where the address offset information defines which bits within the multicast coalesced block data are to be distributed to which memory submodule. In some examples, each subset distribution logic may be utilized to drive a subset of the data stored in the block data register 1102. Each memory submodule 112, 114, 116, 118 may receive the location address information in the instruction command 1100 to store in the memory address register the location in which the short data word is to be stored.[0095] FIG. 14 illustrates one example of a system operating the CLD operation, where the shifter/selector logic 1202, 1204, 1206, 1208 from FIG. 12 is excluded from the multicast coalesce logic 900. Instead, the address offsets of the short data words from the corresponding memory submodules 112, 114, 116, 118 within the block data register 1102 are configurable a priori or predetermined. That is, the offset of the short data word may be a static offset, in which case a data connection to the appropriate bits within the block data register 1102 may be hardwired (thus forming the address shift components) such that the address bits of the short data word from each memory submodule are automatically shifted to be stored in certain predetermined or preconfigured bits within the block data register 1102.
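To make the bit-level behavior of the coalesce and extract paths concrete, the following is a minimal software sketch, assuming the parameters used in the illustrated example (16 memory submodules, 16-bit short data words, and a 256-bit block). The function names and the use of a plain integer to model the block data register are illustrative assumptions and are not part of the disclosed hardware, which performs the equivalent packing with shifters, hardwired connections, or concatenation as described above.

    # Illustrative model of the coalesce (CLD) and extract (CST) paths.
    # Assumes 16 submodules, 16-bit short data words, and a 256-bit block.
    NUM_SUBMODULES = 16
    WORD_BITS = 16
    WORD_MASK = (1 << WORD_BITS) - 1

    def coalesce(short_words):
        # Submodule i occupies bits i*16 through i*16 + 15 of the block,
        # mirroring the fixed, unique per-submodule positions above.
        assert len(short_words) == NUM_SUBMODULES
        block = 0
        for i, word in enumerate(short_words):
            block |= (word & WORD_MASK) << (i * WORD_BITS)
        return block

    def extract(block):
        # Recover the per-submodule short data words from a coalesced
        # block, as the de-coalescing logic does on the CST path.
        return [(block >> (i * WORD_BITS)) & WORD_MASK
                for i in range(NUM_SUBMODULES)]

    # Round-trip check: submodule i contributes the value i.
    words = list(range(NUM_SUBMODULES))
    assert extract(coalesce(words)) == words

The hardwired variant of FIG. 14 corresponds to fixing the shift amounts i * WORD_BITS at design time rather than computing them from offset registers.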
[0096] Furthermore, the concatenate logic 1200 is also excluded from the multicast coalesce logic 900. Instead of using the concatenate logic 1200 to control the coalescing of the short data words into a single block of data to be stored in the block data register 1102 as in FIG. 12, the position-shifted short data words from the memory submodules 112, 114, 116, 118 are concatenated together by joining wires from these memory submodules directly into a data bus to transfer the bits of the short data words to the block data register 1102.[0097] In some embodiments, the CLD and CST operations may be performed by communicating partial command information via the data bus rather than the command bus. For example, in the CLD operation involving the memory module 100 with PNM or PIM capabilities as shown in FIG. 9, the short data words combined may occupy a portion of the multicast coalesced block data, such that the remaining portion of the block data is used to store other information such as the address bits for each of the short data words. Similarly, in the CST operation, the address bits for each of the short data words may be implemented in a portion of the multicast coalesced block data that is not occupied by the short data words that are to be stored in the memory submodules.[0098] In certain embodiments, a non-transitory storage medium, such as memory, includes executable instructions, commands, or address information that, when executed by one or more processors, such as the host processor 104 or the PNM/PIM device, cause the one or more processors to place the short data words such that sparse accesses can be orchestrated via the CLD and CST operations to the short data word associated with the corresponding locations within each memory submodule. This may be applicable in situations with deterministic sparse access patterns, such as accesses along the secondary axes of multidimensional matrices or tensors, or where large table entries are split across memory submodules for efficient use of PNM or PIM operations, including but not limited to applications involving machine-learning-based recommendation systems. In some embodiments, the coalescing may be performed explicitly via software implementation, the executable instructions for which are stored in the non-transitory storage medium. For example, the processor that is running the software may send a pre-coalesced request which bypasses the cache, and the request is handled at a memory controller without the need for any additional hardware for coalescing.[0099] In some embodiments, the executable instructions, when executed by the one or more processors, cause the one or more processors to send independent fine-grained sparse accesses, which are then coalesced via hardware such as discrete logic, state machines, FPGA, or any suitable combination of processors executing instructions, for example. The
host processor 104 may be capable of dynamically detecting coalescing opportunities based on monitoring the data channels and memory submodules targeted by independent and concurrent accesses. The host processor or the PNM/PIM devices may also be capable of merging or coalescing independent requests (that is, CLD or CST commands requesting access to the memory submodules) and splitting the responses during the CLD or CST operations.[00100] If sparse commands are capable of traversing a cache hierarchy, according to some examples, there may be a need to differentiate between sparse accesses and non-sparse data accesses such as contiguous block data access involving a single memory submodule, since these requests are to be handled differently by cache controller(s). In such situations, the differentiation may be facilitated by implementing an additional bit or opcode in the command, for example. In some embodiments, sparse accesses are handled differently from non-sparse data accesses because the sparse accesses do not access a full cache line of data. As such, sparse accesses may simply bypass the cache (e.g., using existing support for uncached address ranges) according to some examples. Alternatively, sparse accesses may traverse the caches but may be prevented from allocating or populating cache blocks on a miss, according to some examples. In some embodiments, caches are enhanced with a sector mask which tracks the state information at the granularity of the sparse accesses, allowing caches to store partially-valid cache blocks.[00101] For multicast memory coalesce operations utilizing register offset information (when PNM/PIM registers are implemented) or address offset information, the command from the host processor may require additional address information as compared to a non-sparse access. This may be implemented by splitting the request into more packets, using bit masks or a byte mask to indicate which byte(s) within a cache line are accessed by a sparse command, or by sending some or all of the request along a dedicated data path, such as the command bus 928.[00102] For systems in which the host processor is capable of dynamically detecting coalescing opportunities, the processor may be implemented with a means for merging and/or splitting the requests and responses associated with sparse memory access. In some embodiments, the coalescing operation occurs in the memory controller, where the requests are already sorted into different queues based on the target memory submodule. If a sparse memory access at the front of multiple memory submodule queues is detected, the memory controller merges the requests together and issues a single CLD or CST operation that includes all the requests associated with the memory submodules. In some
embodiments, when the sparse memory access reaches the front of a bank queue, elements in the other queues (or a subset of elements in the other queues) are searched for sparse memory accesses that can be coalesced into a single memory request with the original sparse memory access. In some embodiments, sparse memory access requests are placed in separate queues, from which the sparse memory access requests are interleaved with dense memory access requests (that is, requests to access a contiguous block of data from a single memory submodule) when a threshold of coalesce-able sparse memory accesses to different submodules is reached, or when a timing threshold is exceeded.[00103] When sending a response for requests involved in a CLD or CST operation, the memory controller may split the response and return the individual sparse memory access responses (potentially through the cache hierarchy) to the requesting processor(s). This may be accomplished by storing the response metadata (e.g., requestor ID, sparse address offset, or access granularity) for all pending sparse memory accesses in a small structure in the memory controller. When a sparse memory access response is returned from the memory module, the memory controller may split the data contents thereof into multiple response packets based on sparsity granularity, appending the stored metadata to these packets. If sparse memory access responses are returned to requestors through the cache hierarchy, caches may not allocate space for this data if the granularity of valid state tracking is greater than the sparsity granularity.[00104] Advantages of implementing systems with the capability to perform multicast memory coalesce/extract operations as disclosed herein include increased efficiency in reading/loading metadata (e.g., the number of data elements participating in an irregular computation at each PNM/PIM device) to the host processor from a collection of PNM/PIM devices associated with the memory channel in a single load operation. Also, metadata (e.g., the different loop iteration counts) may be efficiently written/stored to a collection of PNM/PIM devices associated with the memory channel in a single store operation. The condition code (e.g., which PNM/PIM devices have data that meet a dynamically calculated, data-dependent condition) may be efficiently read or loaded from each of a collection of PNM/PIM devices associated with the memory channel in a single load operation.[00105] Furthermore, short data words distributed across a collection of memory banks or submodules coupled with the memory channel may be efficiently loaded or stored without the need to wastefully transfer a full cache line of data from each of the memory submodules. This improves performance in many application domains such as scientific computing (e.g., high-performance computing, or HPC) and graph analytics, as well as in implementing
widely-used data structures such as hash tables and set membership data structures. The systems and methods disclosed herein also provide capabilities to improve the performance of near-memory or in-memory processing technologies which often transfer short data words to or from multiple near-memory or in-memory processing units associated with memory banks or submodules. Additionally, a fine-grain operand may be efficiently provided to a plurality of memory submodules concurrently for a load operation that necessitates loading a short data word from different memory submodules and combining them with the supplied operand (e.g., for high-throughput fine-grained atomic accesses). The multicast memory coalesce operations as disclosed herein are also effective at improving the efficiency of the memory module (for example, DRAM efficiency) for sparse memory access patterns, such as those that exist in graph analytics, sparse matrix algebra, sparse machine learning models, and so on.[00106] Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The apparatus described herein are, in some implementations, manufactured by using a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general-purpose computer or a processor. Examples of computer-readable storage media include a read-only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).[00107] In the preceding detailed description of the various embodiments, reference has been made to the accompanying drawings which form a part thereof, and in which is shown by way of illustration specific preferred embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized, and that logical, mechanical and electrical changes may be made without departing from the scope of the invention. To avoid detail not necessary to enable those skilled in the art to practice the invention, the description may omit certain information known to those skilled in the art. Furthermore, many other varied embodiments that incorporate the teachings of the disclosure may be easily constructed by those skilled in the art. Accordingly, the present invention is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be
reasonably included within the scope of the invention. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. The above detailed description of the embodiments and the examples described therein have been presented for the purposes of illustration and description only and not by way of limitation. For example, the operations described may be performed in any suitable order or manner. It is therefore contemplated that the present invention covers any and all modifications, variations or equivalents that fall within the scope of the basic underlying principles disclosed above and claimed herein.[00108] As previously mentioned, systems and methods as disclosed herein help reduce the data transfer overhead by coalescing or aggregating short data words from a plurality of disparate memory submodules and transferring or communicating the multicast coalesced block data over the memory channel simultaneously in a single block data transfer. A short data word is returned or loaded from each of a collection of partitioned memory submodules to a host processor at a unique position within the single block data transfer, or is written or stored into each of a collection of partitioned memory submodules from a host processor at a unique position within the single block data transfer. Further, in some PIM architectures where execution units are associated with memory banks, it may be necessary to read or write small amounts of data between the PIM units and the main (or host) processor, for example to report data-dependent conditions or status to the host processor or to write loop iteration counts that can vary among PIM units. The systems and methods as disclosed herein facilitate reducing the overheads of narrow data accesses in such examples.[00109] The above detailed description and the examples described therein have been presented for the purposes of illustration and description only and not for limitation. |
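As a companion to the memory-controller coalescing described in paragraphs [00102] and [00103] above, the following is a simplified sketch of merging sparse requests at the front of per-submodule queues into a single coalesced command. The queue layout, the threshold parameter, and the record types are assumptions made for illustration; they are not the disclosed controller design.

    # Simplified sketch of coalescing sparse accesses in a memory
    # controller with one request queue per memory submodule.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        submodule: int
        address: int
        sparse: bool      # True for a short-data-word (sparse) access
        is_store: bool    # True for a store (CST), False for a load (CLD)

    @dataclass
    class CoalescedCommand:
        is_store: bool
        requests: list = field(default_factory=list)

    def try_coalesce(queues, threshold=2):
        # Collect the sparse requests sitting at the front of each queue.
        fronts = [q[0] for q in queues if q and q[0].sparse]
        if not fronts:
            return None
        # Merge only requests of the same direction (all CLD or all CST).
        is_store = fronts[0].is_store
        merged = [r for r in fronts if r.is_store == is_store]
        if len(merged) < threshold:
            return None
        for r in merged:
            queues[r.submodule].popleft()
        return CoalescedCommand(is_store=is_store, requests=merged)

A response for the merged command would then be split back into per-request responses using stored metadata such as the requestor ID and the sparse address offset, as described in paragraph [00103].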
Technologies for enabling and metering the utilization of components on demand include a compute device. The compute device includes a network interface controller and circuitry configured to receive, through a network and with the network interface controller, a request to enable a component of a sled to assist in the execution of a workload. The circuitry is further configured to enable, in response to the request, the component to assist in the execution of the workload, and meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component. |
WHAT IS CLAIMED IS:
1. A compute device comprising:
a network interface controller; and
circuitry to:
receive, through a network and with the network interface controller, a request to enable a component of a sled to assist in the execution of a workload;
enable, in response to the request, the component to assist in the execution of the workload; and
meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.
2. The compute device of claim 1, wherein the circuitry is further to send, to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.
3. The compute device of claim 1, wherein to enable the component comprises to send a request to the sled on which the component is located to provide power to the component.
4. The compute device of claim 1, wherein to receive a request to enable a component comprises to receive a request that includes data indicative of a type of component to enable and the circuitry is further to identify, as a function of data included in the request, the component to enable.
5. The compute device of claim 1, wherein to receive a request to enable a component comprises to receive a request that includes utilization limit data indicative of a total amount of utilization requested, and wherein the circuitry is further to:
determine whether a present total cost of utilization of the component satisfies the utilization limit data; and
disable, in response to a determination that the present total cost of utilization satisfies the utilization limit data, the component.
6. The compute device of claim 1, wherein the circuitry is further to:
receive a request from the workload to discontinue utilization of the component; and
send, in response to the request to discontinue utilization, a request to the sled on which the component is located to no longer provide power to the component.
7. The compute device of claim 1, wherein the circuitry is further to:
determine whether to discontinue utilization, by the workload, of the component; and
send, in response to a determination to discontinue utilization, a request to the sled on which the component is located to disable the component.
8. The compute device of claim 7, wherein the circuitry is further to send, to the sled on which the component is located, a replacement license key for use in verifying subsequent requests by a workload to perform one or more operations with the component.
9. The compute device of claim 1, wherein to enable the component comprises to enable an I/O virtualization logic unit.
10. The compute device of claim 1, wherein to enable the component comprises to enable a core of the compute sled.
11. The compute device of claim 1, wherein to enable the component comprises to enable an accelerator device.
12. The compute device of claim 1, wherein to enable the component comprises to enable a memory device.
13. The compute device of claim 1, wherein to enable the component comprises to enable a data storage device.
14. The compute device of claim 1, wherein to enable the component comprises to send a request to a provisioner compute device to send a message to the sled to enable the component.
15. The compute device of claim 1, wherein to receive the request to enable a component comprises to receive a request to enable a specified feature of a set of features supported by the component; and wherein to enable the component comprises to enable the specified feature of the component.
16. The compute device of claim 1, wherein to meter the utilization of the component by the workload comprises to meter utilization of the component outside of a service-level agreement of the customer.
17. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute device to:
receive, through a network, a request to enable a component of a sled to assist in the execution of a workload;
enable, in response to the request, the component to assist in the execution of the workload; and
meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.
18. The one or more machine-readable storage media of claim 17, wherein the plurality of instructions further cause the compute device to send, to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.
19. The one or more machine-readable storage media of claim 17, wherein to enable the component comprises to send a request to the sled on which the component is located to provide power to the component.
20. The one or more machine-readable storage media of claim 17, wherein to receive a request to enable a component comprises to receive a request that includes data indicative of a type of component to enable and the plurality of instructions further cause the compute device to identify, as a function of data included in the request, the component to enable.
21. The one or more machine-readable storage media of claim 17, wherein to receive a request to enable a component comprises to receive a request that includes utilization limit data indicative of a total amount of utilization requested, and wherein the plurality of instructions further cause the compute device to:
determine whether a present total cost of utilization of the component satisfies the utilization limit data; and
disable, in response to a determination that the present total cost of utilization satisfies the utilization limit data, the component.
22. A method comprising:
receiving, by a compute device and through a network, a request to enable a component of a sled to assist in the execution of a workload;
enabling, by the compute device and in response to the request, the component to assist in the execution of the workload; and
metering, by the compute device, the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.
23. The method of claim 22, further comprising sending, by the compute device and to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.
24. The method of claim 22, wherein enabling the component comprises sending a request to the sled on which the component is located to provide power to the component.
25. The method of claim 22, wherein receiving a request to enable a component comprises receiving a request that includes data indicative of a type of component to enable, the method further comprising identifying, by the compute device and as a function of data included in the request, the component to enable. |
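As a rough illustration of the utilization-limit behavior recited in claims 5 and 21, the sketch below accumulates a monetary cost for an enabled component and disables the component once the present total cost satisfies the limit. The flat per-second cost model, the names, and the disable callback are hypothetical and are used only to make the claimed steps concrete.

    # Hypothetical sketch of metering utilization against a limit.
    import time

    class ComponentMeter:
        def __init__(self, cost_per_second, utilization_limit, disable_fn):
            self.cost_per_second = cost_per_second      # assumed flat rate
            self.utilization_limit = utilization_limit  # total monetary budget
            self.disable_fn = disable_fn  # e.g., ask the sled to power off
            self.total_cost = 0.0
            self.enabled = True
            self._last = time.monotonic()

        def tick(self):
            # Accumulate cost for the elapsed interval; disable the
            # component once the present total cost satisfies the limit.
            if not self.enabled:
                return self.total_cost
            now = time.monotonic()
            self.total_cost += (now - self._last) * self.cost_per_second
            self._last = now
            if self.total_cost >= self.utilization_limit:
                self.enabled = False
                self.disable_fn()
            return self.total_cost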
TECHNOLOGIES FOR ENABLING AND METERING THE UTILIZATION OF FEATURES ON DEMAND
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of Indian Provisional Patent Application No. 201741030632, filed August 30, 2017 and U.S. Provisional Patent Application No. 62/584,401, filed November 10, 2017.
BACKGROUND
[0002] In some data centers, such as cloud data centers, compute devices execute operations (e.g., workloads executed in virtualized environments, such as in virtual machines or in containers) on behalf of customers as a service. A data center may be provisioned with a variety of hardware components that, for the data center operator, may have different levels of monetary expenditures associated with them (e.g., initial purchase cost, electricity costs for powering the components even when they are not being used by a service, electricity costs for cooling the components, etc.) that are not directly accounted for and passed on to the customers who are using the services of the data center. Rather, the agreement between the data center operator and the customer, known as a service level agreement (SLA), typically specifies a set of quality of service (QoS) targets (e.g., a target latency, a target throughput, etc.) that the data center is expected to satisfy and a fee (e.g., a monthly fee) for satisfying the QoS targets, without regard to the specific hardware components utilized to satisfy the QoS targets.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.[0004] FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources;[0005] FIG. 2 is a simplified diagram of at least one embodiment of a pod that may be included in the data center of FIG. 1;[0006] FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2;[0007] FIG. 4 is a side elevation view of the rack of FIG. 3;[0008] FIG. 5 is a perspective view of the rack of FIG. 3 having a sled mounted therein; [0009] FIG. 6 is a simplified block diagram of at least one embodiment of a top side of the sled of FIG. 5;[0010] FIG. 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of FIG. 6;[0011] FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1;[0012] FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8;[0013] FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of FIG. 1;[0014] FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10;[0015] FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1;[0016] FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12;[0017] FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1; and[0018] FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG.
1 to execute workloads with managed nodes composed of disaggregated resources;[0019] FIG. 16 is a simplified block diagram of at least one embodiment of a system for enabling and metering the utilization of features of components on an as-requested basis in a disaggregated architecture; and[0020] FIGS. 17-19 are a simplified block diagram of at least one embodiment of a method for enabling and metering the utilization of features of components on an as-requested basis that may be performed by an orchestrator server in the system of FIG. 16.
DETAILED DESCRIPTION OF THE DRAWINGS
[0021] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. [0022] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).[0023] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).[0024] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.[0025] Referring now to FIG.
1, a data center 100 in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110, 120, 130, 140, each of which includes one or more rows of racks. Of course, although data center 100 is shown with multiple pods, in some embodiments, the data center 100 may be embodied as a single pod. As described in more detail herein, each rack houses multiple sleds, each of which may be primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, general purpose processors), i.e., resources that can be logically coupled to form a composed node, which can act as, for example, a server. In the illustrative embodiment, the sleds in each pod 110, 120, 130, 140 are connected to multiple pod switches (e.g., switches that route data communications to and from sleds within the pod). The pod switches, in turn, connect with spine switches 150 that switch communications among pods (e.g., the pods 110, 120, 130, 140) in the data center 100. In some embodiments, the sleds may be connected with a fabric using Intel Omni-Path technology. In other embodiments, the sleds may be connected with other fabrics, such as InfiniBand or Ethernet. As described in more detail herein, resources within sleds in the data center 100 may be allocated to a group (referred to herein as a "managed node") containing resources from one or more sleds to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may belong to sleds belonging to different racks, and even to different pods 110, 120, 130, 140. As such, some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node).[0026] A data center comprising disaggregated resources, such as data center 100, can be used in a wide variety of contexts, such as enterprise, government, cloud service provider, and communications service provider (e.g., Telco's), as well as in a wide variety of sizes, from cloud service provider mega-data centers that consume over 100,000 sq. ft. to single- or multi-rack installations for use in base stations.[0027] The disaggregation of resources to sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds containing primarily memory resources), and the selective allocation and deallocation of the disaggregated resources to form a managed node assigned to execute a workload improves the operation and resource usage of the data center 100 relative to typical data centers comprised of hyperconverged servers containing compute, memory, storage and perhaps additional resources in a single chassis. For example, because sleds predominantly contain resources of a particular type, resources of a given type can be upgraded independently of other resources. Additionally, because different resource types (processors, storage, accelerators, etc.) typically have different refresh rates, greater resource utilization and reduced total cost of ownership may be achieved. For example, a data center operator can upgrade the processors throughout their facility by only swapping out the compute sleds.
In such a case, accelerator and storage resources may not be contemporaneously upgraded and, rather, may be allowed to continue operating until those resources are scheduled for their own refresh. Resource utilization may also increase. For example, if managed nodes are composed based on requirements of the workloads that will be running on them, resources within a node are more likely to be fully utilized. Such utilization may allow for more managed nodes to run in a data center with a given set of resources, or for a data center expected to run a given set of workloads, to be built using fewer resources.[0028] Referring now to FIG. 2, the pod 110, in the illustrative embodiment, includes a set of rows 200, 210, 220, 230 of racks 240. Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in each row 200, 210, 220, 230 are connected to multiple pod switches 250, 260. The pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switches 150 to provide connectivity to other pods in the data center 100. Similarly, the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switches 150. As such, the use of the pair of switches 250, 260 provides an amount of redundancy to the pod 110. For example, if either of the switches 250, 260 fails, the sleds in the pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through the other switch 250, 260. Furthermore, in the illustrative embodiment, the switches 150, 250, 260 may be embodied as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand, PCI Express) via optical signaling media of an optical fabric.[0029] It should be appreciated that each of the other pods 120, 130, 140 (as well as any additional pods of the data center 100) may be similarly structured as, and have components similar to, the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250, 260 are shown, it should be understood that in other embodiments, each pod 110, 120, 130, 140 may be connected to a different number of pod switches, providing even more failover capacity. Of course, in other embodiments, pods may be arranged differently than the rows-of-racks configuration shown in FIGS. 1-2. For example, a pod may be embodied as multiple sets of racks in which each set of racks is arranged radially, i.e., the racks are equidistant from a center switch.[0030] Referring now to FIGS. 3-5, each illustrative rack 240 of the data center 100 includes two elongated support posts 302, 304, which are arranged vertically. For example, the elongated support posts 302, 304 may extend upwardly from a floor of the data center 100 when deployed. The rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 as discussed below.
One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304.

[0031] In the illustrative embodiments, each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below. As such, the rack 240 is configured to receive the chassis-less sleds. For example, each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240, which is configured to receive a corresponding chassis-less sled. To do so, each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled. Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312. For example, in the illustrative embodiment, each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302, 304. For clarity of the Figures, not every circuit board guide 330 may be referenced in each Figure.

[0032] Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240. To do so, as shown in FIG. 4, a user (or robot) aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320. The user, or robot, may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 as shown in FIG. 4. By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 240, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in the data center 100.

[0033] It should be appreciated that each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330. In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3.
The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320, each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1U." Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. For example, in some embodiments, the vertical distance between each pair 310 of elongated support arms 312 may be greater than a standard rack unit "1U". In such embodiments, the increased vertical distance between the sleds allows for larger heat sinks to be attached to the physical resources and for larger fans to be used (e.g., in the fan array 370 described below) for cooling each sled, which in turn can allow the physical resources to operate at increased power levels. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is open to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302, 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100.

[0034] In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302, 304. To facilitate such routing, each elongated support post 302, 304 includes an inner wall that defines an inner chamber in which interconnects may be located. The interconnects routed through the elongated support posts 302, 304 may be embodied as any type of interconnects including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320, power interconnects to provide power to each sled slot 320, and/or other types of interconnects.

[0035] The rack 240, in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted. Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to a blind mate optical connector mechanism, the door is pushed open when the end of the cable approaches or enters the connector mechanism.
Subsequently, the optical fiber inside the cable may enter a gel within the connector mechanism and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism.

[0036] The illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240. The fan array 370 includes one or more rows of cooling fans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240. As discussed above, each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240. Each rack 240, in the illustrative embodiment, also includes a power supply associated with each sled slot 320. Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320. For example, the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302. Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320. In the illustrative embodiment, the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted to the rack 240. Each power supply is configured to satisfy the power requirements for its associated sled, which can vary from sled to sled. Additionally, the power supplies provided in the rack 240 can operate independently of each other. That is, within a single rack, a first power supply providing power to a compute sled can provide power levels that are different than power levels supplied by a second power supply providing power to an accelerator sled. The power supplies may be controllable at the sled level or rack level, and may be controlled locally by components on the associated sled or remotely, such as by another sled or an orchestrator.

[0037] Referring now to FIG. 6, the sled 400, in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above. In some embodiments, each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9, an accelerator sled 1000 as discussed below in regard to FIGS. 10-11, a storage sled 1200 as discussed below in regard to FIGS. 12-13, or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400, discussed below in regard to FIG. 14.

[0038] As discussed above, the illustrative sled 400 includes a chassis-less circuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that the circuit board substrate 602 is "chassis-less" in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment.
The chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in an illustrative embodiment, the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments.

[0039] As discussed in more detail below, the chassis-less circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602. As discussed, the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow. For example, because the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no vertically-arranged backplane (e.g., a backplate of the chassis) attached to the chassis-less circuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602. For example, the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602. In one particular embodiment, for example, the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400. Furthermore, although not illustrated in FIG. 6, the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other as discussed in more detail below. That is, no two electrical components, which produce appreciable heat during operation (i.e., greater than a nominal heat sufficient to adversely impact the cooling of another electrical component), are mounted to the chassis-less circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassis-less circuit board substrate 602).

[0040] As discussed above, the illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassis-less circuit board substrate 602. Although two physical resources 620 are shown in FIG. 6, it should be appreciated that the sled 400 may include one, two, or more physical resources 620 in other embodiments. The physical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400.
For example, as discussed in more detail below, the physical resources 620 may be embodied as high-performance processors in embodiments in which the sled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is embodied as an accelerator sled, storage controllers in embodiments in which the sled 400 is embodied as a storage sled, or a set of memory devices in embodiments in which the sled 400 is embodied as a memory sled.

[0041] The sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC) as discussed in more detail below. Of course, depending on the type and functionality of the sled 400, the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments.

[0042] The physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622. The I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620, the physical resources 630, and/or other components of the sled 400. For example, the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, waveguides, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus.

[0043] In some embodiments, the sled 400 may also include a resource-to-resource interconnect 624. The resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications.

[0044] The sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240. The sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400. That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400. The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate 602, which may increase the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 as discussed above.
In some embodiments, voltage regulators are placed on a bottom side 750 (see FIG. 7) of the chassis-less circuit board substrate 602 directly opposite of the processors 820 (see FIG. 8), and power is routed from the voltage regulators to the processors 820 by vias extending through the circuit board substrate 602. Such a configuration provides an increased thermal budget, additional current and/or voltage, and better voltage control relative to typical printed circuit boards in which processor power is delivered from a voltage regulator, in part, by printed circuit traces.

[0045] In some embodiments, the sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot. The mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassis-less circuit board substrate 602 or the electrical components mounted thereto. For example, in some embodiments, the mounting features 642 may be embodied as non-conductive pads attached to the chassis-less circuit board substrate 602. In other embodiments, the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate 602. The particular number, shape, size, and/or make-up of the mounting features 642 may depend on the design of the robot configured to manage the sled 400.

[0046] Referring now to FIG. 7, in addition to the physical resources 630 mounted on the top side 650 of the chassis-less circuit board substrate 602, the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassis-less circuit board substrate 602. That is, the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board. The physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622. For example, the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate 602. Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720.

[0047] The memory devices 720 may be embodied as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4.
Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.

[0048] In one embodiment, the memory device 720 is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including metal oxide-based, oxygen vacancy-based, and conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.

[0049] Referring now to FIG. 8, in some embodiments, the sled 400 may be embodied as a compute sled 800. The compute sled 800 is optimized, or otherwise configured, to perform compute tasks. Of course, as discussed above, the compute sled 800 may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks. The compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400, which have been identified in FIG. 8 using the same reference numbers. The description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800.

[0050] In the illustrative compute sled 800, the physical resources 620 are embodied as processors 820. Although only two processors 820 are shown in FIG. 8, it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments. Illustratively, the processors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate 602 discussed above facilitate the higher power operation. For example, in the illustrative embodiment, the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W.

[0051] In some embodiments, the compute sled 800 may also include a processor-to-processor interconnect 842.
Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications. In the illustrative embodiment, the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.

[0052] The compute sled 800 also includes a communication circuit 830. The illustrative communication circuit 830 includes a network interface controller (NIC) 832, which may also be referred to as a host fabric interface (HFI). The NIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, or other devices that may be used by the compute sled 800 to connect with another compute device (e.g., with other sleds 400). In some embodiments, the NIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832. In such embodiments, the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820. Additionally or alternatively, in such embodiments, the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels.

[0053] The communication circuit 830 is communicatively coupled to an optical data connector 834. The optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240. Illustratively, the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836. The optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of the optical data connector 834 in the illustrative embodiment, the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments.

[0054] In some embodiments, the compute sled 800 may also include an expansion connector 840. In such embodiments, the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled 800. The additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800. The expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate 602 discussed above and may include various electrical components mounted thereto.
The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate. For example, the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources. As such, the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.

[0055] Referring now to FIG. 9, an illustrative embodiment of the compute sled 800 is shown. As shown, the processors 820, communication circuit 830, and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassis-less circuit board substrate 602. For example, the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassis-less circuit board substrate 602 via soldering or similar techniques.

[0056] As discussed above, the individual processors 820 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other. In the illustrative embodiment, the processors 820 and communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those physical resources are linearly in-line with others along the direction of the airflow path 608. It should be appreciated that, although the optical data connector 834 is in-line with the communication circuit 830, the optical data connector 834 produces no or nominal heat during operation.

[0057] The memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622. Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each processor 820 may be communicatively coupled to each memory device 720. In some embodiments, the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array.

[0058] Each of the processors 820 includes a heatsink 850 secured thereto.
Due to the mounting of the memory devices 720 to the bottom side 750 of the chassis-less circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240), the top side 650 of the chassis-less circuit board substrate 602 includes additional "free" area or space that facilitates the use of heatsinks 850 having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602, none of the processor heatsinks 850 include cooling fans attached thereto. That is, each of the heatsinks 850 is embodied as a fan-less heatsink. In some embodiments, the heatsinks 850 mounted atop the processors 820 may overlap with the heatsink attached to the communication circuit 830 in the direction of the airflow path 608 due to their increased size, as illustratively suggested by FIG. 9.

[0059] Referring now to FIG. 10, in some embodiments, the sled 400 may be embodied as an accelerator sled 1000. The accelerator sled 1000 is configured to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computationally-intensive tasks. In some embodiments, for example, a compute sled 800 may offload tasks to the accelerator sled 1000 during operation. The accelerator sled 1000 includes various components similar to components of the sled 400 and/or compute sled 800, which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000.

[0060] In the illustrative accelerator sled 1000, the physical resources 620 are embodied as accelerator circuits 1020. Although only two accelerator circuits 1020 are shown in FIG. 10, it should be appreciated that the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments. For example, as shown in FIG. 11, the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments. The accelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, the accelerator circuits 1020 may be embodied as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), neuromorphic processor units, quantum computers, machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.

[0061] In some embodiments, the accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to accelerator-to-accelerator communications.
In some embodiments, the accelerator circuits 1020 may be daisy-chained, with a primary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the primary accelerator circuit 1020.

[0062] Referring now to FIG. 11, an illustrative embodiment of the accelerator sled 1000 is shown. As discussed above, the accelerator circuits 1020, communication circuit 830, and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, the individual accelerator circuits 1020 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other as discussed above. The memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the accelerator circuits 1020 located on the top side 650 via the I/O subsystem 622 (e.g., through vias). Further, each of the accelerator circuits 1020 may include a heatsink 1070 that is larger than a traditional heatsink used in a server. As discussed above with reference to the heatsinks 850, the heatsinks 1070 may be larger than traditional heatsinks because of the "free" area provided by the memory devices 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650.

[0063] Referring now to FIG. 12, in some embodiments, the sled 400 may be embodied as a storage sled 1200. The storage sled 1200 is configured to store data in a data storage 1250 local to the storage sled 1200. For example, during operation, a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200. The storage sled 1200 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200.

[0064] In the illustrative storage sled 1200, the physical resources 620 are embodied as storage controllers 1220. Although only two storage controllers 1220 are shown in FIG. 12, it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments. The storage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into the data storage 1250 based on requests received via the communication circuit 830. In the illustrative embodiment, the storage controllers 1220 are embodied as relatively low-power processors or controllers. For example, in some embodiments, the storage controllers 1220 may be configured to operate at a power rating of about 75 watts.

[0065] In some embodiments, the storage sled 1200 may also include a controller-to-controller interconnect 1242.
Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications.

[0066] Referring now to FIG. 13, an illustrative embodiment of the storage sled 1200 is shown. In the illustrative embodiment, the data storage 1250 is embodied as, or otherwise includes, a storage cage 1252 configured to house one or more solid state drives (SSDs) 1254. To do so, the storage cage 1252 includes a number of mounting slots 1256, each of which is configured to receive a corresponding solid state drive 1254. Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256. The storage cage 1252 is secured to the chassis-less circuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassis-less circuit board substrate 602. As such, solid state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240. For example, a solid state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240.

[0067] The storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254. Of course, the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments. Additionally, in the illustrative embodiment, the solid state drives are mounted vertically in the storage cage 1252, but may be mounted in the storage cage 1252 in a different orientation in other embodiments. Each solid state drive 1254 may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid state drives 1254 may include the volatile and non-volatile memory devices discussed above.

[0068] As shown in FIG. 13, the storage controllers 1220, the communication circuit 830, and the optical data connector 834 are illustratively mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassis-less circuit board substrate 602 including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques.

[0069] As discussed above, the individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other.
For example, the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path 608.

[0070] The memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622. Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Each of the storage controllers 1220 includes a heatsink 1270 secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 of the storage sled 1200, none of the heatsinks 1270 include cooling fans attached thereto. That is, each of the heatsinks 1270 is embodied as a fan-less heatsink.

[0071] Referring now to FIG. 14, in some embodiments, the sled 400 may be embodied as a memory sled 1400. The memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800, accelerator sleds 1000, etc.) with access to a pool of memory (e.g., in two or more sets 1430, 1432 of memory devices 720) local to the memory sled 1400. For example, during operation, a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430, 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430, 1432. The memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400.

[0072] In the illustrative memory sled 1400, the physical resources 620 are embodied as memory controllers 1420. Although only two memory controllers 1420 are shown in FIG. 14, it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments. The memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430, 1432 based on requests received via the communication circuit 830. In the illustrative embodiment, each memory controller 1420 is connected to a corresponding memory set 1430, 1432 to write to and read from memory devices 720 within the corresponding memory set 1430, 1432 and to enforce any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write).

[0073] In some embodiments, the memory sled 1400 may also include a controller-to-controller interconnect 1442.
Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications. As such, in some embodiments, a memory controller 1420 may access, through the controller-to-controller interconnect 1442, memory that is within the memory set 1432 associated with another memory controller 1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as "chiplets", on a memory sled (e.g., the memory sled 1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)). The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers 1420 may implement a memory interleave (e.g., one memory address is mapped to the memory set 1430, the next memory address is mapped to the memory set 1432, and the third address is mapped to the memory set 1430, etc.); a simplified sketch of such an interleave is shown below. The interleaving may be managed within the memory controllers 1420, or from CPU sockets (e.g., of the compute sled 800) across network links to the memory sets 1430, 1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.

[0074] Further, in some embodiments, the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240) through a waveguide, using the waveguide connector 1480. In the illustrative embodiment, the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes. Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets 1430, 1432) to another sled (e.g., a sled 400 in the same rack 240 or an adjacent rack 240 as the memory sled 1400) without adding to the load on the optical data connector 834.
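The interleave just described lends itself to a short illustration. The following Python fragment is a minimal sketch, not an implementation taken from this disclosure: it assumes exactly two memory sets, a 64-byte interleave granule, and helper names (`MemorySet`, `route`) invented purely for this example.

```python
# Minimal sketch of the interleave described above: successive granules of
# the logical address space alternate between memory set 1430 and memory
# set 1432. The names and the 64-byte granule are illustrative assumptions.

GRANULE = 64  # bytes per interleave granule (assumed cache-line size)

class MemorySet:
    """Trivially simplified stand-in for a memory set (e.g., 1430 or 1432)."""
    def __init__(self, name: str, size: int):
        self.name = name
        self.cells = bytearray(size)

def route(logical_addr: int, sets: list) -> tuple:
    """Map a logical address to (memory set, physical address within that set)."""
    granule_index = logical_addr // GRANULE
    target = sets[granule_index % len(sets)]      # alternate sets per granule
    local_granule = granule_index // len(sets)    # granule index inside the set
    physical = local_granule * GRANULE + (logical_addr % GRANULE)
    return target, physical

sets = [MemorySet("1430", 1 << 20), MemorySet("1432", 1 << 20)]
for addr in (0, 64, 128):   # first address -> 1430, next -> 1432, third -> 1430
    target, phys = route(addr, sets)
    print(f"logical {addr:#06x} -> set {target.name}, physical {phys:#06x}")
```

Striping successive granules across the sets in this manner is what allows adjacent accesses to be serviced by different memory controllers in parallel, which is the latency improvement attributed to interleaving above.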
[0075] Referring now to FIG. 15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center 100. In the illustrative embodiment, the system 1510 includes an orchestrator server 1520, which may be embodied as a managed node comprising a compute device (e.g., a processor 820 on a compute sled 800) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800), memory sleds 1540 (e.g., each similar to the memory sled 1400), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000), and storage sleds 1560 (e.g., each similar to the storage sled 1200). One or more of the sleds 1530, 1540, 1550, 1560 may be grouped into a managed node 1570, such as by the orchestrator server 1520, to collectively perform a workload (e.g., an application 1532 executed in a virtual machine or in a container). The managed node 1570 may be embodied as an assembly of physical resources 620, such as processors 820, memory resources 720, accelerator circuits 1020, or data storage 1250, from the same or different sleds 400. Further, the managed node may be established, defined, or "spun up" by the orchestrator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. In the illustrative embodiment, the orchestrator server 1520 may selectively allocate and/or deallocate physical resources 620 from the sleds 400 and/or add or remove one or more sleds 400 from the managed node 1570 as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532). In doing so, the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied. The orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not presently satisfied, the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532) while the workload is executing. Similarly, the orchestrator server 1520 may determine to dynamically deallocate physical resources from a managed node if the orchestrator server 1520 determines that deallocating the physical resource would result in QoS targets still being met; a sketch of this decision loop follows.
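The allocate/deallocate behavior described in paragraph [0075] amounts to a control loop over telemetry. The following is a hedged sketch of that loop; the telemetry fields, QoS target names, and the headroom test are assumptions introduced for illustration and are not defined by this disclosure.

```python
# Hedged sketch of the orchestrator's QoS control loop for a managed node:
# compare telemetry against service-level targets, then grow or shrink the
# node's resources. All names and fields are illustrative assumptions.

class ManagedNode:
    def __init__(self, resources):
        self.resources = list(resources)   # e.g., processors, accelerators

def meets_targets(telemetry: dict, qos: dict) -> bool:
    return (telemetry["throughput"] >= qos["min_throughput"]
            and telemetry["latency_ms"] <= qos["max_latency_ms"])

def reconcile(node: ManagedNode, telemetry: dict, qos: dict, free_pool: list):
    """One allocate/deallocate decision pass for a single managed node."""
    if not meets_targets(telemetry, qos):
        # QoS targets missed: dynamically allocate an available resource
        # while the workload keeps executing.
        if free_pool:
            node.resources.append(free_pool.pop())
    elif len(node.resources) > 1 and telemetry["throughput"] >= 2 * qos["min_throughput"]:
        # Comfortable headroom (a crude stand-in for "targets would still be
        # met afterwards"): release a resource for another managed node.
        free_pool.append(node.resources.pop())

node = ManagedNode(["processor-820a"])
pool = ["accelerator-1020a", "processor-820b"]
reconcile(node, {"throughput": 50, "latency_ms": 30},
          {"min_throughput": 100, "max_latency_ms": 20}, pool)
print(node.resources)   # a resource was allocated because both targets were missed
```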
[0076] Additionally, in some embodiments, the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application 1532) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning). In some embodiments, the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100. For example, the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA). As such, the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located).

[0077] In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100. Additionally or alternatively, in some embodiments, the orchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within the data center 100 and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes. The orchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center 100.

[0078] To reduce the computational load on the orchestrator server 1520 and the data transfer load on the network, in some embodiments, the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520, which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes; a sketch of this sled-local check follows.
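Because the self-test of paragraph [0078] is a simple threshold evaluation performed on the sled, it can be sketched compactly. The condition names and fields below are illustrative assumptions; only the shape of the computation (local evaluation, simplified yes/no report) comes from the text.

```python
# Sketch of the sled-local self-test: the orchestrator ships threshold
# conditions to each sled, the sled evaluates its own telemetry, and only a
# simplified yes/no result travels back over the network. The condition and
# telemetry field names are assumptions made for illustration.

def evaluate_self_test(telemetry: dict, conditions: dict) -> bool:
    """Runs on the sled; returns the simplified result reported upstream."""
    return (telemetry["available_capacity"] >= conditions["min_capacity"]
            and telemetry["temperature_c"] <= conditions["max_temperature_c"])

conditions = {"min_capacity": 0.2, "max_temperature_c": 85}    # sent by orchestrator
telemetry = {"available_capacity": 0.35, "temperature_c": 71}  # measured on the sled
print("yes" if evaluate_self_test(telemetry, conditions) else "no")  # -> "yes"
```

Returning a single boolean instead of raw telemetry is the point of the design: the orchestrator server 1520 avoids evaluating every sled's data itself, and the network carries one result per sled instead of a telemetry stream.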
[0079] Referring now to FIG. 16, a system 1610 for enabling and metering (e.g., tracking) the utilization of features of components on an as-requested (e.g., on demand) basis includes an orchestrator server 1620, similar to the orchestrator server 1520, in communication with multiple sleds 1616, including a compute sled 1622, similar to the compute sled 800, that executes workloads 1630, 1632 (e.g., sets of operations, such as software applications) similar to the application 1532, in corresponding virtual machines 1640, 1642 or containers, on behalf of a client device 1614. The illustrative sleds 1616 also include an accelerator sled 1624 similar to the accelerator sled 1000, a memory sled 1626 similar to the memory sled 1400, and a data storage sled 1628, similar to the data storage sled 1200. The orchestrator server 1620, in the illustrative embodiment, also includes a utilization manager logic unit 1697, which may be embodied as any device or circuitry (e.g., a processor, a controller, reconfigurable circuitry, an FPGA, an ASIC, etc.) or software configured to selectively enable or disable components, referred to herein as meterable components, of the sleds 1616 on an as-requested basis (e.g., in response to a request from a workload 1630, 1632) and to meter (e.g., track) the utilization of those meterable components to determine corresponding costs to be passed on to the customer associated with the requesting workload 1630, 1632. As described above, for typical data centers, a service level agreement (SLA) may specify a set of quality of service (QoS) targets and a fee for satisfying the QoS targets, without regard to the specific hardware components utilized to satisfy the QoS targets. Accordingly, in the illustrative embodiment, the orchestrator server 1620 may meter utilization outside of the SLA of the customer. Furthermore, in the illustrative embodiment, the orchestrator server 1620 includes a key manager logic unit 1698, which may be embodied as any device or circuitry (e.g., a processor, a controller, reconfigurable circuitry, an FPGA, an ASIC, etc.) or software configured to provide keys (e.g., license keys) to the sleds (e.g., to a workload that is requesting use of a meterable component) and to the meterable component that is to be utilized by the workload, to ensure that the meterable component only performs operations on behalf of workloads that have been authorized (e.g., by the orchestrator server 1620) to have operations performed by the corresponding meterable component; a sketch of one possible key check follows. In some embodiments, the system 1610 may also include a provisioner compute device 1618, which may be embodied as any computer (e.g., a compute device having an architecture similar to the orchestrator server 1620) associated with a manufacturer of the meterable components of the sleds 1616 that communicates with the orchestrator server 1620 (e.g., receives requests from the orchestrator server 1620 to enable or disable certain meterable components or features thereof) and communicates with the corresponding sleds 1616 (e.g., with controllers of those sleds 1616) to selectively enable meterable components and features thereof in response to the requests from the orchestrator server 1620, as described in more detail herein.
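Paragraph [0079] does not specify how the license keys are checked, so the following sketch picks one plausible mechanism, an HMAC over the request using a key shared between the workload and the meterable component's controller, purely for illustration. The function names and the choice of HMAC-SHA256 are assumptions, not details from this disclosure.

```python
# Hedged sketch of key-based authorization: the key manager gives the same
# license key to the workload and to the meterable component's controller;
# the controller then performs operations only for requests that carry a
# valid authentication tag. HMAC-SHA256 is one plausible instantiation.

import hmac, hashlib, os

license_key = os.urandom(32)   # issued by the key manager logic unit 1698

def sign_request(key: bytes, request: bytes) -> bytes:
    """Workload side: attach proof of authorization to a request."""
    return hmac.new(key, request, hashlib.sha256).digest()

def controller_accepts(key: bytes, request: bytes, tag: bytes) -> bool:
    """Controller side: perform the operation only for authorized workloads."""
    expected = hmac.new(key, request, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

request = b"enable-feature:crypto-instruction-set"
tag = sign_request(license_key, request)
print(controller_accepts(license_key, request, tag))     # True: permitted
print(controller_accepts(license_key, b"tampered", tag)) # False: rejected
```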
The meterable components 1650, in the illustrative embodiment, include a set of cores 1660, 1662, each of which may be embodied as a device or circuitry capable of reading and executing instructions to perform operations. In some embodiments, the I/O virtualization logic unit 1664 may be included in the set of meterable components 1650. Further, each meterable component 1650 (e.g., each core 1660, 1662) includes a corresponding controller 1664, 1666 each of which may be embodied as any device or circuitry (e.g., an integrated circuit, a processor, etc.) configured to selectively enable (e.g., provide power to) or disable (e.g., discontinue power to) the corresponding meterable component 1650, selectively enable, disable, or adjust certain features of the corresponding meterable component (e.g., an amount of memory capacity supported by the corresponding core, a memory bandwidth supported by the core, a cryptographic processing instruction set of the core, etc.), track the utilization of the meterable component (e.g., with a monotonic timer), store a key usable for encryption and to verify permissions on requests (e.g., from workloads), report features of the meterable component including the enabled features and the disabled features, report usage of each enabled meterable component (e.g., how long a particular workload has utilized the feature(s), how many operations have been performed using the feature for the workload, etc.) and report a unique identifier of the corresponding meterable component 1650 to another compute device (e.g., to the orchestrator server or to an intermediary compute device that may aggregate data reported by multiple controllers of meterable components and send the aggregated data to the orchestrator server 1620).[0081] In the accelerator sled 1624, the meterable components 1652 include a set of accelerator devices 1670, 1672, similar to the accelerator circuits 1020 of the accelerator sled 1000, and may include FPGAs, ASICs, neural network processor units (NPUs), graphics processing units (GPUs), quantum computers, neuromorphic processor units, or other devices capable of performing operations faster than a general-purpose processor. The meterable components 1652 each include controllers 1674, 1676 which are similar to the controllers 1664, 1666 described above with reference to the meterable components 1650 of the compute sled 1622. The features of the meterable components 1652 may include a number of slots (e.g., sets of gates of a field programmable gate array (FPGA) available to be configured with a bit stream (e.g., a set of data defining a configuration of the gates to implement a particular function)) available, bit streams available for use, a memory capacity and/or memory bandwidth supported by each meterable component 1652, or other features. The meterable components 1654 of the memory sled 1626 include multiple memory devices 1680, 1682 similar to the memory devices 720. The meterable components 1654 each include controllers 1684, 1686 similar to the controllers 1664, 1666 described above. The features of the meterable components 1654 may include a memory bandwidth supported, memory capacity supported, a type of memory architecture utilized (e.g., dynamic random access memory, 3DXP, etc.), and/or other features. The meterable components 1656 of the data storage sled 1628 include multiple data storage devices 1690, 1692, similar to the data storage 1250 (e.g., solid state drives 1254, hard disk drives, etc.).
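The controller attached to each meterable component gates power, toggles features, tracks usage with a monotonic timer, and holds a key for verifying requests. A hypothetical model in Python (class and method names are illustrative; the patent does not specify an implementation):

    # Hypothetical model of a meterable-component controller (e.g., controller 1664).
    import time

    class ComponentController:
        def __init__(self, uid: str, features: set, license_key: bytes):
            self.uid = uid
            self.features = {f: False for f in features}  # disabled by default
            self.license_key = license_key                # key to verify requests
            self.powered = False
            self._start = None
            self.seconds_used = 0.0
            self.ops_performed = 0

        def enable(self, requested_features: set) -> None:
            self.powered = True
            for f in requested_features & self.features.keys():
                self.features[f] = True
            self._start = time.monotonic()          # monotonic utilization timer

        def disable(self) -> None:
            if self._start is not None:
                self.seconds_used += time.monotonic() - self._start
                self._start = None
            self.powered = False
            self.features = {f: False for f in self.features}

        def report(self) -> dict:
            """Usage report sent to the orchestrator or an aggregating device."""
            return {"uid": self.uid,
                    "seconds": self.seconds_used,
                    "ops": self.ops_performed,
                    "enabled_features": [f for f, on in self.features.items() if on]}

A monotonic clock is the right choice here because wall-clock adjustments must not be able to shrink or inflate the metered time.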
The meterable components 1656 each include controllers 1694, 1696 similar to the controllers 1664, 1666 described above. The features of the meterable components 1656 may include the storage capacity, read/write throughput (e.g., bytes per second, etc.), and/or other features. By selectively enabling the hardware components (e.g., the meterable components 1650, 1652, 1654, 1656) and specific features of those components for workloads on an as-requested basis, the system 1610 provides more fine-grained control and tracking of costs incurred during the execution of services (e.g., workloads) and enables more cost efficient operation (e.g., in terms of power usage and monetary cost) of the data center, as compared to typical data centers. Furthermore, by providing meterable components with sets of available features that may be selectively enabled and disabled, on an as-requested basis, the administrator of the system 1610 may need to track only one SKU (stock keeping unit) for each type of meterable component (core, accelerator device, data storage device, memory device, etc.), rather than multiple SKUs for different variations on each type of meterable component (e.g., one version of a data storage device has features A, B, and C, while another version of the data storage device only has features A and B).[0082] The orchestrator server 1620, the sleds 1616, the provisioner compute device 1618, and the client device 1614 are illustratively in communication via a network 1612, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.[0083] Referring now to FIG. 17, in operation, the orchestrator server 1620 may execute a method 1700 for enabling and metering the utilization of components (e.g., the meterable components 1650, 1652, 1654, 1656) on demand. The method 1700 begins with block 1702 in which the orchestrator server 1620 determines whether to enable component and feature utilization metering. In doing so, the orchestrator server 1620 may determine to enable metering in response to detecting a set of available components and the features of those components (e.g., the features of the meterable components 1650, 1652, 1654, 1656) as reported by the corresponding controllers 1664, 1666, 1674, 1676, 1684, 1686, 1694, 1696 (e.g., in response to a query from the orchestrator server 1620), detecting that a setting in a configuration file accessible to the orchestrator server (e.g., configured by an administrator of the system 1610) indicates to enable component and feature utilization metering, in response to a determination that the orchestrator server 1620 is equipped with the utilization manager logic unit 1697, and/or based on other factors. Regardless, in response to a determination to enable component utilization metering, the method 1700 advances to block 1704 in which the orchestrator server 1620 may disable all of the meterable components 1650, 1652, 1654, 1656 and their features (e.g., as a default state, such that they may be selectively enabled upon request, as described in more detail herein).
In doing so, the orchestrator server 1620 may send a request to the sleds 1616 in the data center to disable each meterable component 1650, 1652, 1654, 1656, as indicated in block 1706. Further, the orchestrator server 1620 may request the sleds 1616 to not provide power to the meterable components 1650, 1652, 1654, 1656 (e.g., to conserve electricity), as indicated in block 1708. Accordingly, each corresponding controller 1664, 1666, 1674, 1676, 1684, 1686, 1694, 1696, in the illustrative embodiment, may discontinue any provisioning of power to the corresponding meterable components.[0084] Subsequently, in block 1710, the orchestrator server 1620, in the illustrative embodiment, receives a request to enable one or more components, and optionally, certain features of those components, in the data center (e.g., in the system 1610) to assist in the execution of a workload. In doing so, in the illustrative embodiment, the orchestrator server 1620 receives the request from a compute sled (e.g., the compute sled 1622) assigned to execute the workload (e.g., the workload 1630), as indicated in block 1712. As indicated in block 1714, in the illustrative embodiment, the orchestrator server 1620 receives a request that identifies the type(s) of component(s) to enable. For example, and as indicated in block 1716, the orchestrator server 1620 may receive a request that identifies a requested feature set (e.g., a set of operations that the component(s) should be able to perform). The request may indicate the feature set using any data (e.g., numbers or codes that are associated with each type of service, a textual description of the features, which may be matched, by the orchestrator server 1620, against a database of features of each meterable component 1650, 1652, 1654, 1656 of each sled 1616, etc.) indicative of one or more features. In doing so, the orchestrator server 1620 may receive a request that identifies I/O virtualization functions, similar to those provided by the I/O virtualization logic unit 1664 described with reference to FIG. 16, as indicated in block 1718. Additionally or alternatively, the orchestrator server 1620 may receive a request that identifies acceleration operations, corresponding to features of one or more of the accelerator devices 1670, 1672, as indicated in block 1720. For example, the orchestrator server 1620 may receive a request that identifies machine learning operations, as indicated in block 1722. Additionally or alternatively, the request may identify cryptographic operations (e.g., encryption and/or decryption of data) to be accelerated, as indicated in block 1724 or may identify data compression and/or decompression operations, as indicated in block 1726. The orchestrator server 1620 may additionally or alternatively receive a request that identifies memory operations (e.g., a request to have read and write access to a specified amount of volatile memory), as indicated in block 1728. Similarly, the orchestrator server 1620 may receive a request that identifies data storage operations (e.g., a request to have access to block storage or object storage), as indicated in block 1730. The requested features may also include a supported memory bandwidth of the enabled component, an instruction set to be supported by a component (e.g., by a core 1660), a number of FPGA slots to enable, or other features. Further, the request may include the quantity of each type of component to enable (e.g., a number of data storage devices, a number of cores, etc.).
As indicated in block 1732, the orchestrator server 1620 may receive a request that additionally indicates a performance target to be satisfied by the component(s) (e.g., a target number of I/O operations per second, a target number of neural network convolution operations per second, a target number of floating point operations per second, a target latency, etc.). The orchestrator server 1620 may also receive a request that includes data indicative of the total amount of utilization being requested, as indicated in block 1734. For example, and as indicated in block 1736, the orchestrator server 1620 may receive a request that identifies a total cost (e.g., a monetary cost) that is not to be exceeded and/or a total time period in which the components are to be utilized by the workload 1630. The orchestrator server 1620 may also receive, in the request, data indicative of a cost per unit of time (e.g., per second) or per operation (e.g., per write or read of data, per encryption or decryption of a data set, etc.), as indicated in block 1738. Subsequently, the method 1700 advances to block 1740 of FIG. 18, in which the orchestrator server 1620 identifies components to enable.[0085] Referring now to FIG. 18, in identifying the components to enable, the orchestrator server 1620, in the illustrative embodiment, identifies one or more components (e.g., of the meterable components 1650, 1652, 1654, 1656) that match the requested type(s) of components (e.g., by comparing the requested features to a database of features and performance associated with each meterable component 1650, 1652, 1654, 1656), as indicated in block 1742. As such, the orchestrator server 1620 may identify component(s) configured to perform operations identified in the request, as indicated in block 1744, and may identify, of those components, the components that are configured to satisfy the requested performance targets, as indicated in block 1746. Further, the orchestrator server 1620 may narrow the selection by identifying component(s) (e.g., one or more of the meterable components 1650, 1652, 1654, 1656) that also satisfy the requested cost per unit of time or cost per operation (e.g., by comparing the costs specified in block 1738 of FIG. 17 to a database of costs associated with the features of the meterable components 1650, 1652, 1654, 1656), as indicated in block 1748.[0086] Subsequently, and as indicated in block 1750, the orchestrator server 1620 enables the identified component(s). In doing so, and as indicated in block 1752, the orchestrator server 1620 may enable one or more components of a compute sled (e.g., one or more of the meterable components 1650 of the compute sled 1622). In doing so, in the illustrative embodiment, the orchestrator server 1620 enables the I/O virtualization logic unit 1664, as indicated in block 1754. The orchestrator server 1620 may also enable one or more of the cores 1660, 1662, as indicated in block 1756. As indicated in block 1758, the orchestrator server 1620 may enable one or more accelerator devices (e.g., one or more of the accelerator devices 1670, 1672 of the accelerator sled 1624). In doing so, the orchestrator server 1620 may enable an FPGA, as indicated in block 1760, a neural network processing unit (NNPU) as indicated in block 1762, a graphics processing unit (GPU) as indicated in block 1764, an ASIC as indicated in block 1766, and/or other meterable components 1652 of the accelerator sled 1624.
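The identification in blocks 1742-1748 is a successive filtering of a component database by feature set, performance target, and cost constraint. A minimal sketch in Python (the database schema and field names are assumptions for illustration only):

    # Hypothetical sketch of blocks 1742-1748: filter a database of meterable
    # components down to those satisfying the request.
    def identify_components(request: dict, database: list) -> list:
        matches = []
        for comp in database:
            if not request["features"] <= comp["features"]:
                continue                                  # feature match (block 1744)
            if comp["perf"] < request.get("perf_target", 0):
                continue                                  # performance target (block 1746)
            if comp["cost_per_sec"] > request.get("max_cost_per_sec", float("inf")):
                continue                                  # cost constraint (block 1748)
            matches.append(comp)
        return matches

    database = [
        {"uid": "fpga-0", "features": {"crypto", "compress"}, "perf": 9e9,  "cost_per_sec": 0.002},
        {"uid": "gpu-0",  "features": {"ml"},                 "perf": 5e12, "cost_per_sec": 0.010},
    ]
    request = {"features": {"crypto"}, "perf_target": 1e9, "max_cost_per_sec": 0.005}
    print([c["uid"] for c in identify_components(request, database)])  # ['fpga-0']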
Additionally or alternatively, the orchestrator server 1620 may enable one or more memory devices (e.g., one or more of the memory devices 1680, 1682 of the memory sled 1626), as indicated in block 1768 and/or one or more data storage devices (e.g., one or more of the data storage devices 1690, 1692 of the data storage sled 1628), as indicated in block 1770. In some embodiments, the orchestrator server 1620 may send a request to the provisioner compute device 1618, identifying the meterable component and feature(s) thereof to be enabled (e.g., by a specified unique identifier such as a universally unique identifier (UUID)), and the provisioner compute device 1618 sends a corresponding encrypted message to the corresponding sled 1616 to enable the component and feature(s) thereof. In some embodiments, as indicated in block 1772, to enable the component(s) and feature(s) of the component, the orchestrator server 1620 may send a request to the sled 1616 on which each identified component (e.g., each meterable component that is to be enabled) is located to enable (e.g., provide power to) the component and specified features of that component. The request may include the unique identifier of the meterable component (usable by the receiving sled to identify and enable the corresponding component and features thereof, such as by providing a corresponding request to the corresponding controller 1664, 1666, 1674, 1676, 1684, 1686, 1694, 1696). The orchestrator server 1620 may also send the unique identifier of each component (e.g., a universally unique identifier (UUID), an address, such as a media access control (MAC) address or internet protocol (IP) address of the component, etc.) to the workload (e.g., the workload 1630) to enable the workload to send requests to the component(s), as indicated in block 1774. Further, in the illustrative embodiment, the orchestrator server 1620 sends a license key (e.g., a cryptographic key) associated with each enabled component to the workload to enable the workload 1630 to use the component and the selected features thereof (e.g., by including the license key in any requests to the corresponding component to perform one or more operations). The orchestrator server 1620 may also send those license keys to the corresponding components for use in verifying (e.g., comparing the license key in a request from the workload to the license key received from the orchestrator server 1620) requests from the workload. In other embodiments, the components may already have the license keys (e.g., stored in firmware of each component) and/or the license key may be provided by the provisioner compute device 1618. As indicated in block 1778, the orchestrator server 1620 may also send, to each enabled component, an identifier of the workload that has been given authorization to utilize those components and features thereof (e.g., for use in verifying requests from the workload). Subsequently, the method 1700 advances to block 1780 of FIG. 19 in which the orchestrator server 1620 meters the utilization of the enabled component(s) and features thereof.[0087] Referring now to FIG. 19, in metering the utilization of the enabled component(s) and features thereof, the orchestrator server 1620 may monitor the amount of time that each component and features thereof have been utilized (e.g., monitor the amount of time since the components were enabled for the workload, in block 1750 of FIG. 18), as indicated in block 1782.
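The key check described above amounts to the component comparing the license key presented by a workload against the key it received from the orchestrator (or holds in firmware). A minimal sketch in Python (the request shape is an assumption; a constant-time compare is used as a reasonable precaution, not a requirement stated by the patent):

    # Hypothetical sketch of license-key verification on a meterable component.
    import hmac

    def verify_request(stored_key: bytes, request: dict) -> bool:
        presented = request.get("license_key", b"")
        # constant-time compare avoids leaking key bytes via timing
        return hmac.compare_digest(stored_key, presented)

    stored = b"key-provisioned-by-orchestrator"
    ok = verify_request(stored, {"license_key": b"key-provisioned-by-orchestrator",
                                 "op": "encrypt"})
    print(ok)  # True -> the component performs the operation for the workload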
Information indicative of the amount of time that each component and features thereof have been utilized may be reported by each corresponding controller 1664, 1666, 1674, 1676, 1684, 1686, 1694, 1696 to the orchestrator server 1620 or to the provisioner compute device 1618, which may then report the information to the orchestrator server 1620. In some embodiments, the information may be aggregated by one or more compute devices (e.g., a pod controller, a rack controller, etc.) before being sent to the orchestrator server 1620 and/or to the provisioner compute device 1618. As indicated in block 1784, the orchestrator server 1620 may monitor the number of operations that each component has performed for the workload (e.g., by receiving telemetry data from the corresponding sleds indicative of the operations performed for the workload). Further, and as indicated in block 1786, the orchestrator server 1620 may determine a total cost for the utilization of the enabled component(s) (e.g., by multiplying the amount of time from block 1782 by a cost per unit of time or by multiplying the number of operations by a cost per operation). In some embodiments, the orchestrator server 1620 may determine the total cost as a function of the amount of capacity (e.g., bandwidth) reserved for a workload over a particular time period, regardless of whether the workload actually utilized that entire reserved capacity over the time period. In some embodiments, the orchestrator server 1620 may multiply the determined total cost by a factor (e.g., 1.10) to increase the total cost if the requested components and features were utilized during a period of high demand (e.g., a time period in which the orchestrator server 1620 received at least a predefined number of requests for the component/feature). In other embodiments, the determination of the total cost is performed by the provisioner compute device 1618 and reported to the orchestrator server 1620.[0088] Subsequently, in block 1788, the orchestrator server 1620 may determine whether to discontinue the utilization of the component(s). In doing so, and as indicated in block 1790, the orchestrator server 1620 may determine whether the total cost for utilization of the component(s) (e.g., from block 1786) satisfies (e.g., is equal to) a threshold cost, such as the cost specified in block 1736. If so, the orchestrator server 1620, in the illustrative embodiment, determines to discontinue utilization of the component(s). Additionally or alternatively, the orchestrator server 1620 may receive a request from the workload 1630 to discontinue utilization of the component(s), as indicated in block 1792. In block 1794, the orchestrator server 1620 determines the subsequent course of action as a function of the determination made in block 1788 (e.g., whether to discontinue utilization of the component(s)). If not, the method 1700 loops back to block 1710 of FIG. 17, in which the orchestrator server 1620 may receive and respond to another request from a workload (e.g., the workload 1630 or the workload 1632) to enable one or more components. Otherwise, the method 1700 advances to block 1796 in which the orchestrator server 1620 disables the component(s).[0089] In disabling the component(s), the orchestrator server 1620 may send a request to each sled 1616 on which each enabled component is located to disable the corresponding component, as indicated in block 1798.
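The cost computation of blocks 1782-1786 combines time-based and operation-based metering, optionally scaled by a high-demand surcharge (the 1.10 factor mentioned above). A minimal sketch in Python (function and parameter names are illustrative):

    # Hypothetical sketch of the total-cost determination (blocks 1782-1786).
    def total_cost(seconds_used: float = 0.0, cost_per_second: float = 0.0,
                   ops_performed: int = 0, cost_per_op: float = 0.0,
                   high_demand: bool = False, surcharge: float = 1.10) -> float:
        cost = seconds_used * cost_per_second + ops_performed * cost_per_op
        if high_demand:
            cost *= surcharge  # period with >= a predefined number of requests
        return cost

    # 2 hours of an accelerator at $0.004/s plus 1M metered operations at $1e-6 each:
    print(total_cost(seconds_used=7200, cost_per_second=0.004,
                     ops_performed=1_000_000, cost_per_op=1e-6,
                     high_demand=True))   # (7200*0.004 + 1.0) * 1.10 = 32.78

The reserved-capacity variant mentioned above would substitute reserved bandwidth multiplied by the reservation period for the measured usage terms.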
In doing so, the orchestrator server 1620 may send a request to each corresponding sled 1616 to stop providing power to the corresponding component, as indicated in block 1800. Further, the orchestrator server 1620 may send a notification to the workload (e.g., the workload 1630) that the one or more components have been disabled, as indicated in block 1802. As indicated in block 1804, the orchestrator server 1620 may also send a replacement license key to each component being disabled, for use in verifying request(s) from a workload in the future (e.g., expiring the license key from block 1776). The orchestrator server 1620, in the illustrative embodiment, also discontinues metering of the disabled components, as indicated in block 1806 and, as indicated in block 1808, may bill (e.g., deduct money from an account, send a request for payment, etc.) the customer associated with the workload (e.g., the workload 1630) for utilization of the component(s). In some embodiments, the request to enable a component and potentially specific features of the component may include the amount of time that the component and feature(s) are to be enabled, and the components/features are deactivated upon expiration of that time period (e.g., by the corresponding controller 1664, 1666, 1674, 1676, 1684, 1686, 1694, 1696) without any affirmative action on the part of the orchestrator server 1620. Further, in some embodiments, one or more of the operations of blocks 1780 through 1808 may be performed by the provisioner compute device 1618, rather than the orchestrator server 1620. Additionally, in some embodiments, the provisioner compute device 1618 may provide an invoice to the operator of the system 1610 (e.g., to the orchestrator server 1620) for the utilization of the components, and the orchestrator server 1620 may subsequently pass on all or a portion of the cost to the customer(s) associated with the workloads that utilized the components. Subsequently, the method 1700 loops back to block 1710 of FIG.
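The timed-enablement variant described above, in which a controller deactivates a component when the requested period expires with no further orchestrator action, can be sketched as follows (a minimal illustration; the lease abstraction is an assumption, not the patent's design):

    # Hypothetical sketch of timed enablement enforced by the local controller.
    import time

    class TimedEnablement:
        def __init__(self, duration_s: float):
            self.expires_at = time.monotonic() + duration_s

        def active(self) -> bool:
            return time.monotonic() < self.expires_at

    lease = TimedEnablement(duration_s=3600.0)   # enabled for one hour
    if not lease.active():
        pass  # controller would discontinue power / disable features here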
17, in which the orchestrator server 1620 may receive another request to enable one or more components.

EXAMPLES

[0090] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0091] Example 1 includes a compute device comprising a network interface controller; and circuitry to receive, through a network and with the network interface controller, a request to enable a component of a sled to assist in the execution of a workload; enable, in response to the request, the component to assist in the execution of the workload; and meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.[0092] Example 2 includes the subject matter of Example 1, and wherein the circuitry is further to send, to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.[0093] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to enable the component comprises to send a request to the sled on which the component is located to provide power to the component.[0094] Example 4 includes the subject matter of any of Examples 1-3, and wherein to receive a request to enable a component comprises to receive a request that includes data indicative of a type of component to enable and the circuitry is further to identify, as a function of data included in the request, the component to enable.[0095] Example 5 includes the subject matter of any of Examples 1-4, and wherein to receive a request to enable a component comprises to receive a request that includes utilization limit data indicative of a total amount of utilization requested, and wherein the circuitry is further to determine whether a present total cost of utilization of the component satisfies the utilization limit data; and disable, in response to a determination that the present total cost of utilization satisfies the utilization limit data, the component.[0096] Example 6 includes the subject matter of any of Examples 1-5, and wherein the circuitry is further to receive a request from the workload to discontinue utilization of the component; and send, in response to the request to discontinue utilization, a request to the sled on which the component is located to no longer provide power to the component.[0097] Example 7 includes the subject matter of any of Examples 1-6, and wherein the circuitry is further to determine whether to discontinue utilization, by the workload, of the component; and send, in response to a determination to discontinue utilization, a request to the sled on which the component is located to disable the component.[0098] Example 8 includes the subject matter of any of Examples 1-7, and wherein the circuitry is further to send, to the sled on which the component is located, a replacement license key for use in verifying subsequent requests by a workload to perform one or more operations with the component.[0099] Example 9 includes the subject matter of any of Examples 1-8, and wherein to enable the component comprises to enable an I/O virtualization logic unit.[00100] Example 10 includes the subject matter of any of Examples 1-9, and wherein to enable the component comprises to enable a core of the compute sled.[00101] Example 11 includes the subject matter of any of Examples 1-10,
and wherein to enable the component comprises to enable an accelerator device.[00102] Example 12 includes the subject matter of any of Examples 1-11, and wherein to enable the component comprises to enable a memory device.[00103] Example 13 includes the subject matter of any of Examples 1-12, and wherein to enable the component comprises to enable a data storage device.[00104] Example 14 includes the subject matter of any of Examples 1-13, and wherein to enable the component comprises to send a request to a provisioner compute device to send a message to the sled to enable the component.[00105] Example 15 includes the subject matter of any of Examples 1-14, and wherein to receive the request to enable a component comprises to receive a request to enable a specified feature of a set of features supported by the component; and wherein to enable the component comprises to enable the specified feature of the component.[00106] Example 16 includes the subject matter of any of Examples 1-15, and wherein to meter the utilization of the component by the workload comprises to meter utilization of the component outside of a service-level agreement of the customer.[00107] Example 17 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute device to receive, through a network, a request to enable a component of a sled to assist in the execution of a workload; enable, in response to the request, the component to assist in the execution of the workload; and meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.[00108] Example 18 includes the subject matter of Example 17, and wherein the plurality of instructions further cause the compute device to send, to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.[00109] Example 19 includes the subject matter of any of Examples 17 and 18, and wherein to enable the component comprises to send a request to the sled on which the component is located to provide power to the component.[00110] Example 20 includes the subject matter of any of Examples 17-19, and wherein to receive a request to enable a component comprises to receive a request that includes data indicative of a type of component to enable and the plurality of instructions further cause the compute device to identify, as a function of data included in the request, the component to enable.[00111] Example 21 includes the subject matter of any of Examples 17-20, and wherein to receive a request to enable a component comprises to receive a request that includes utilization limit data indicative of a total amount of utilization requested, and wherein the plurality of instructions further cause the compute device to determine whether a present total cost of utilization of the component satisfies the utilization limit data; and disable, in response to a determination that the present total cost of utilization satisfies the utilization limit data, the component.[00112] Example 22 includes a method comprising receiving, by a compute device and through a network, a request to enable a component of a sled to assist in the execution of a workload; enabling, by the compute device and in response to the request, the component to assist in the execution of the workload; and metering, by the compute device, 
the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.[00113] Example 23 includes the subject matter of Example 22, and further including sending, by the compute device and to a compute sled assigned to execute the workload, a license key to include in one or more requests to the component to execute one or more operations.[00114] Example 24 includes the subject matter of any of Examples 22 and 23, and wherein enabling the component comprises sending a request to the sled on which the component is located to provide power to the component.[00115] Example 25 includes the subject matter of any of Examples 22-24, and wherein receiving a request to enable a component comprises receiving a request that includes data indicative of a type of component to enable, the method further comprising identifying, by the compute device and as a function of data included in the request, the component to enable. |
One embodiment provides for a machine-learning hardware accelerator comprising a compute unit having an adder and a multiplier that are shared between an integer datapath and a floating-point datapath, with the upper bits of input operands to the multiplier gated during floating-point operation. |
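A toy model of the sharing scheme described in the abstract, not the patented circuit: when a wide integer multiplier is reused for floating-point significand multiplication, only the low mantissa bits are needed, so the upper operand bits can be gated (held at zero) to avoid switching activity. The widths below are assumptions for illustration:

    # Illustrative model of one multiplier shared between integer and FP datapaths.
    MUL_WIDTH = 32          # shared integer multiplier width (assumed)
    MANTISSA_BITS = 24      # FP32 significand incl. the implicit leading 1

    def gate_operand(operand: int, fp_mode: bool) -> int:
        if fp_mode:
            # upper (MUL_WIDTH - MANTISSA_BITS) bits gated off in FP mode
            return operand & ((1 << MANTISSA_BITS) - 1)
        return operand & ((1 << MUL_WIDTH) - 1)

    def shared_multiply(a: int, b: int, fp_mode: bool) -> int:
        return gate_operand(a, fp_mode) * gate_operand(b, fp_mode)

    # Integer mode uses the full width; FP mode only the 24-bit significands.
    print(hex(shared_multiply(0xFFFF_FFFF, 3, fp_mode=False)))
    print(hex(shared_multiply(0xFFFF_FFFF, 3, fp_mode=True)))   # upper bits ignored

In hardware the motivation is power: gated operand bits do not toggle, so the unused upper portion of the shared array draws no dynamic power during floating-point operation.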
1. A graphics processing unit (GPU), comprising:
a plurality of memory controllers;
a cache memory coupled with the plurality of memory controllers; and
a graphics multiprocessor coupled with the cache memory and the plurality of memory controllers, the graphics multiprocessor having a single instruction multithreading (SIMT) architecture, wherein the graphics multiprocessor includes:
a register file; and
a circuit coupled with the register file, the circuit including a first core to perform mixed-precision matrix operations and a second core to perform a plurality of compute operations in response to a single instruction, wherein the plurality of compute operations includes a first operation to perform a fused multiply-add and a second operation to apply a rectified linear unit (ReLU) function to a result of the first operation.

2. The GPU of claim 1, wherein the first operation and the second operation are single instruction multiple data (SIMD) operations.

3. The GPU of claim 1, wherein the plurality of compute operations are performed on inputs in a 16-bit floating-point format having a 1-bit sign and an 8-bit exponent.

4. The GPU of claim 3, wherein the second core includes a dynamic precision processing resource configurable to automatically convert inputs in a 32-bit floating-point format into the 16-bit floating-point format in conjunction with execution of the single instruction.

5. The GPU of claim 4, wherein the dynamic precision processing resource includes a configuration circuit to dynamically configure the precision of functional circuits within the dynamic precision processing resource.

6. The GPU of claim 5, wherein the configuration circuit is to configure a first stage of the dynamic precision processing resource to operate at a first precision and a second stage of the dynamic precision processing resource to operate at a second precision.

7. The GPU of claim 1, wherein the rectified linear unit function is an activation function associated with a first neuron of a neural network.

8. A system comprising:
a memory device; and
a graphics processing unit (GPU) including a plurality of memory controllers coupled with the memory device, a cache memory coupled with the plurality of memory controllers, and a graphics multiprocessor coupled with the cache memory and the plurality of memory controllers, the graphics multiprocessor having a single instruction multithreading (SIMT) architecture, wherein the graphics multiprocessor includes:
a register file; and
a circuit coupled with the register file, the circuit including a first core to perform mixed-precision matrix operations and a second core to perform a plurality of compute operations in response to a single instruction, wherein the plurality of compute operations includes a first operation to perform a fused multiply-add and a second operation to apply a rectified linear unit function to a result of the first operation.

9. The system of claim 8, wherein the first operation and the second operation are single instruction multiple data (SIMD) operations.

10. The system of claim 8, wherein the plurality of compute operations are performed on inputs in a 16-bit floating-point format having a 1-bit sign and an 8-bit exponent.

11. The system of claim 10, wherein the second core includes a dynamic precision processing resource configurable to automatically convert inputs in a 32-bit floating-point format into the 16-bit floating-point format in conjunction with execution of the single instruction.

12. The system of claim 11, wherein the dynamic precision processing resource includes a configuration circuit to dynamically configure the precision of functional circuits within the dynamic precision processing resource.

13. The system of claim 12, wherein the configuration circuit is to configure a first stage of the dynamic precision processing resource to operate at a first precision and a second stage of the dynamic precision processing resource to operate at a second precision.

14. The system of claim 8, wherein the rectified linear unit function is an activation function associated with a first neuron of a neural network.

15. A method comprising:
fetching a single instruction from a cache memory of a graphics processing unit (GPU);
loading operands associated with the single instruction into a register file of the GPU;
decoding the single instruction into a decoded instruction;
executing the decoded instruction via a circuit coupled with the register file, the circuit including a first core to perform mixed-precision matrix operations and a second core to perform a plurality of compute operations for the decoded instruction; and
performing the plurality of compute operations via the second core, wherein the plurality of compute operations includes a first operation to perform a fused multiply-add and a second operation to apply a rectified linear unit function to a result of the first operation.

16. The method of claim 15, wherein the first operation and the second operation are single instruction multiple data (SIMD) operations.

17. The method of claim 15, further comprising performing the plurality of compute operations on inputs in a 16-bit floating-point format having a 1-bit sign and an 8-bit exponent.

18. The method of claim 17, further comprising automatically converting inputs in a 32-bit floating-point format into the 16-bit floating-point format in conjunction with execution of the decoded instruction.

19. The method of claim 18, further comprising dynamically configuring the precision of functional circuits within the second core in conjunction with execution of the fused multiply-add.

20. The method of claim 19, wherein dynamically configuring the precision of the functional circuits comprises configuring a first stage of the functional circuits to operate at a first precision and a second stage of the functional circuits to operate at a second precision.

21. A machine-readable medium storing code which, when executed, causes a machine to perform the method of any one of claims 15-20.

22. An apparatus comprising means for performing the method of any one of claims 15-20. |
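The claims describe a single instruction that performs a fused multiply-add and then applies a ReLU, on 16-bit floats with a 1-bit sign and an 8-bit exponent (the bfloat16 layout), with automatic FP32-to-16-bit input conversion. A minimal emulation sketch in Python/NumPy (truncating conversion is assumed; the claims do not specify rounding behavior, and the hardware datapath is of course not NumPy):

    # Hypothetical emulation of the claimed fused multiply-add + ReLU instruction.
    import numpy as np

    def f32_to_bf16(x: np.float32) -> np.float32:
        """Truncate an FP32 value to bfloat16 precision (keep the sign, the 8-bit
        exponent, and the top 7 mantissa bits), returned in an FP32 container."""
        bits = np.float32(x).view(np.uint32)
        return np.uint32(bits & 0xFFFF0000).view(np.float32)

    def fma_relu(a, b, c):
        """Single-instruction semantics: d = max(0, a*b + c) on bf16 inputs."""
        a, b, c = (f32_to_bf16(np.float32(v)) for v in (a, b, c))
        d = np.float32(a) * np.float32(b) + np.float32(c)   # fused multiply-add
        return np.maximum(np.float32(0.0), d)               # ReLU activation

    print(fma_relu(1.5, -2.0, 1.0))   # a*b + c = -2.0 -> ReLU -> 0.0
    print(fma_relu(1.5,  2.0, 1.0))   # 4.0

Fusing the activation into the multiply-add matters for neural-network inner loops, where every multiply-accumulate of a neuron's weighted sum would otherwise be followed by a separate activation instruction.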
Instructions and logic used to execute floating-point and integer operations for machine learning

This application is a divisional of the patent application of the same title, filed on April 27, 2018 under application number 201810394160.7.

Cross Reference

This application claims the benefit of U.S. Provisional Application No. 62/491,699, filed on April 28, 2017, which is hereby incorporated by reference.

Technical Field

The embodiments generally relate to data processing, and more specifically to data processing via a general-purpose graphics processing unit.

Background

Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data (such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc.). Traditionally, graphics processors have used fixed-function computing units to process graphics data; more recently, however, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.

To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible across the different parts of the graphics pipeline. Parallel graphics processors with single instruction multithreading (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In the SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for the SIMT architecture can be found in Shane Cook, CUDA Programming, Chapter 3, pages 37-51 (2013) and/or Nicholas Wilt, CUDA Handbook, A Comprehensive Guide to GPU Programming, Sections 2.6.2 to 3.1.2 (June 2013).

Description of the Drawings

So that the features of the present invention can be understood in detail, a more detailed description may be had by reference to embodiments, some of which are illustrated in the accompanying drawings. It should be noted, however, that the drawings show only typical embodiments and are therefore not to be considered limiting of the scope of all embodiments.

Figure 1 is a block diagram showing a computer system configured to implement one or more aspects of the embodiments described herein.
Figures 2A-2D show parallel processor components according to embodiments.
Figures 3A-3B are block diagrams of a graphics multiprocessor according to an embodiment.
Figures 4A-4F show exemplary architectures in which multiple GPUs are communicatively coupled to multiple multi-core processors.
Figure 5 shows a graphics processing pipeline according to an embodiment.
Figure 6 shows a machine learning software stack according to an embodiment.
Figure 7 shows a highly parallel general-purpose graphics processing unit according to an embodiment.
Figure 8 shows a multi-GPU computing system according to an embodiment.
Figures 9A-9B show the layers of an exemplary deep neural network.
Figure 10 shows an exemplary recurrent neural network.
Figure 11 shows the training and deployment of a deep neural network.
Figure 12 is a block diagram showing distributed learning.
Figure 13 shows an exemplary inference system on chip (SOC) suitable for performing inference using a trained model.
Figure 14 is a block diagram of a multi-processor unit according to an embodiment.
Figures 15A-15B show the design of a logic unit that performs integer and floating-point fused multiply-add operations according to an embodiment.
Figure 16 shows a fused multiply-add logic unit with merged floating-point and integer datapaths according to an embodiment.
Figures 17A-17B show a logic unit including a combined computation circuit to perform floating-point and integer fused multiply-accumulate operations according to an embodiment.
Figures 18A-18B show a data processing system and associated compute and logic units that perform accelerated training and inference operations for machine learning.
Figure 19 shows details of the activation instruction module according to an embodiment.
Figure 20 shows a stochastic quantization unit according to an embodiment.
Figure 21 shows an FPU encoding and configuration module according to one embodiment.
Figure 22 shows logic for processing instructions using a dynamically configurable compute unit according to an embodiment.
Figures 23A-23B are flowcharts showing logic to perform sparse compute operations within the GPGPU provided by the embodiments described herein.
Figure 24 is a block diagram of a processing system according to an embodiment.
Figure 25 is a block diagram of a processor according to an embodiment.
Figure 26 is a block diagram of a graphics processor according to an embodiment.
Figure 27 is a block diagram of a graphics processing engine of a graphics processor according to some embodiments.
Figure 28 is a block diagram of a graphics processor provided by an additional embodiment.
Figure 29 shows thread execution logic including an array of processing elements employed in some embodiments.
Figure 30 is a block diagram illustrating a graphics processor instruction format according to some embodiments.
Figure 31 is a block diagram of a graphics processor according to another embodiment.
Figures 32A-32B illustrate graphics processor command formats and command sequences according to some embodiments.
Figure 33 illustrates an exemplary graphics software architecture for a data processing system according to some embodiments.
Figure 34 is a block diagram showing an IP core development system according to an embodiment.
Figure 35 is a block diagram illustrating an exemplary system-on-chip integrated circuit according to an embodiment.
Figure 36 is a block diagram showing an additional graphics processor according to an embodiment.
Figure 37 is a block diagram showing an additional exemplary graphics processor of a system-on-chip integrated circuit according to an embodiment.

Detailed Description

In some embodiments, a graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate graphics operations, machine learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores through a bus or another interconnect (for example, a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores through an internal processor bus/interconnect (i.e., inside the package or chip). Regardless of the way the GPU is connected, the processor cores can allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor.
The GPU then uses dedicated circuitry/logic to efficiently process these commands/instructions.

In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to those skilled in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the details of the present embodiments.

System Overview

Figure 1 is a block diagram showing a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processors 102 and a system memory 104 that communicate via an interconnection path, which may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component, or may be integrated in the one or more processors 102. The memory hub 105 is coupled with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input devices 108. In addition, the I/O hub 107 can enable a display controller, which may be included in the one or more processors 102, to provide output to one or more display devices 110A. In one embodiment, the one or more display devices 110A coupled with the I/O hub 107 may include local, internal, or embedded display devices.

In one embodiment, the processing subsystem 101 includes one or more parallel processors 112 that are coupled to the memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards-based communication link technologies or protocols (such as, but not limited to, PCI Express), or may be a vendor-specific communication interface or communication fabric. In one embodiment, the one or more parallel processors 112 form a computationally intensive parallel or vector processing system, which includes a large number of processing cores and/or processing clusters, such as many integrated core (MIC) processors. In one embodiment, the one or more parallel processors 112 form a graphics processing subsystem that can output pixels to one of the one or more display devices 110A coupled via the I/O hub 107. The one or more parallel processors 112 may also include a display controller and a display interface (not shown) to enable direct connection to one or more display devices 110B.

In the I/O subsystem 111, a system storage unit 114 may be connected to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or a wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in devices 120. The network adapter 118 may be an Ethernet adapter or another wired network adapter.
The wireless network adapter 119 may include one or more of: Wi-Fi, Bluetooth, near field communication (NFC), or other network devices that include one or more radios.

The computing system 100 may include other components that are not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to the I/O hub 107. The communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocol, such as a PCI (Peripheral Component Interconnect)-based protocol (for example, PCI-Express), or any other bus or point-to-point communication interface(s) and/or protocol(s), such as the NV-Link high-speed interconnect or interconnect protocols known in the art.

In one embodiment, the one or more parallel processors 112 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). In another embodiment, the one or more parallel processors 112 incorporate circuitry optimized for general-purpose processing while preserving the underlying computing architecture described in more detail herein. In yet another embodiment, the components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processors 112, the memory hub 105, the processor(s) 102, and the I/O hub 107 may be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 may be integrated into a single package to form a system-in-package (SIP) configuration. In one embodiment, at least a portion of the components of the computing system 100 may be integrated into a multi-chip module (MCM), which may be interconnected with other multi-chip modules to form a modular computing system.

It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For example, in some embodiments, the system memory 104 is connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with the system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processors 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and the memory hub 105 may be integrated into a single chip. Some embodiments may include two or more sets of processor(s) 102 attached via multiple sockets, which may be coupled with two or more instances of the parallel processor(s) 112.

Some of the specific components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. In addition, some architectures may use different terminology for components similar to those shown in FIG. 1. For example, in some architectures the memory hub 105 may be referred to as a north bridge, and the I/O hub 107 may be referred to as a south bridge.

FIG. 2A shows a parallel processor 200 according to an embodiment.
The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). According to an embodiment, the illustrated parallel processor 200 is a variant of the one or more parallel processors 112 shown in FIG. 1.

In one embodiment, the parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. In one embodiment, the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as the memory hub 105. The connection between the memory hub 105 and the I/O unit 204 forms a communication link 113. Within the parallel processing unit 202, the I/O unit 204 is connected to a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.

When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations for executing those commands to the front end 208. In one embodiment, the front end 208 is coupled with a scheduler 210 that is configured to distribute commands or other work items to the processing cluster array 212. In one embodiment, the scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. In one embodiment, the scheduler 210 is implemented via firmware logic executing on a microcontroller. The microcontroller-implemented scheduler 210 can be configured to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling context switching and rapid preemption of threads executing on the processing array 212. In one embodiment, the host software can verify workloads for scheduling on the processing array 212 via one of multiple graphics processing doorbells. The workloads can then be automatically distributed across the processing array 212 by the scheduler 210 logic within the scheduler microcontroller.

The processing cluster array 212 may include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. The scheduler 210 may use various scheduling and/or work distribution algorithms to distribute work to the clusters 214A-214N of the processing cluster array 212, which may vary according to the workload generated by each type of program or computation. Scheduling may be handled dynamically by the scheduler 210, or may be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. In one embodiment, different clusters 214A-214N of the processing cluster array 212 may be allocated for processing different types of programs or for performing different types of computations.

The processing cluster array 212 may be configured to perform various types of parallel processing operations.
In one embodiment, the processing cluster array 212 is configured to perform general-purpose parallel computing operations. For example, the processing cluster array 212 may include logic for performing processing tasks including filtering video and/or audio data, performing modeling operations including physics operations, and performing data transformations.

In one embodiment, the processing cluster array 212 is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 may include additional logic to support the execution of such graphics processing operations, including but not limited to texture sampling logic for performing texture operations, as well as tessellation logic and other vertex processing logic. In addition, the processing cluster array 212 may be configured to execute shader programs related to graphics processing, such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 may transfer data from system memory via the I/O unit 204 for processing. During processing, the transferred data can be stored in on-chip memory (e.g., the parallel processor memory 222) and then written back to system memory.

In one embodiment, when the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 may be configured to divide the processing workload into approximately equal-sized tasks to better enable distribution of the graphics processing operations to the multiple clusters 214A-214N of the processing cluster array 212. In some embodiments, portions of the processing cluster array 212 may be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen-space operations to produce a rendered image. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transferred between the clusters 214A-214N for further processing.

During operation, the processing cluster array 212 may receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from the front end 208. For graphics processing operations, processing tasks may include the data to be processed, such as surface (patch) data, primitive data, vertex data, and/or pixel data, as well as the state parameters and commands that define how the data is to be processed (for example, which program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 may be configured to ensure that the processing cluster array 212 is configured in a valid state before the workload specified by incoming command buffers (e.g., batch buffers, push buffers, etc.) is initiated.

Each of the one or more instances of the parallel processing unit 202 may be coupled with the parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 and the I/O unit 204.
The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 may include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N), each of which may be coupled to a portion (e.g., a memory unit) of the parallel processor memory 222. In one implementation, the number of partition units 220A-220N is configured to equal the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding second memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not equal the number of memory devices.

In various embodiments, the memory units 224A-224N may include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Those skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps, may be stored across the memory units 224A-224N, allowing the partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222; a sketch of such interleaving follows below. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

In one embodiment, any one of the clusters 214A-214N of the processing cluster array 212 can process data that will be written to any of the memory units 224A-224N within the parallel processor memory 222. The memory crossbar 216 may be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In one embodiment, the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, thereby enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. In one embodiment, the memory crossbar 216 may use virtual channels to separate the traffic streams between the clusters 214A-214N and the partition units 220A-220N.
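The render-target interleaving across memory units noted above is commonly achieved by address interleaving. The following sketch assumes a power-of-two number of partition units and a fixed interleave granularity; both parameters are hypothetical and are not taken from the figures.

```cuda
#include <cstdint>

// Hypothetical interleaving parameters.
constexpr uint64_t kInterleaveBytes = 256;  // granularity of interleaving
constexpr uint64_t kNumPartitions   = 8;    // number of partition units (power of two)

struct PartitionedAddress {
    uint64_t partition;  // which partition unit (and thus memory unit) to use
    uint64_t offset;     // byte offset within that memory unit
};

// Map a linear render-target address onto (partition, offset) so that
// consecutive 256-byte blocks land on consecutive partition units,
// letting the partition units write portions of a render target in parallel.
PartitionedAddress map_address(uint64_t linear) {
    uint64_t block = linear / kInterleaveBytes;
    PartitionedAddress out;
    out.partition = block % kNumPartitions;
    out.offset    = (block / kNumPartitions) * kInterleaveBytes
                  + (linear % kInterleaveBytes);
    return out;
}
```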
Although a single instance of the parallel processing unit 202 is shown within the parallel processor 200, any number of instances of the parallel processing unit 202 may be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. Even if the different instances of the parallel processing unit 202 have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences, the different instances can be configured to interoperate. For example, and in one embodiment, some instances of the parallel processing unit 202 may include higher-precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.

FIG. 2B is a block diagram of a partition unit 220, according to an embodiment. In one embodiment, the partition unit 220 is an instance of one of the partition units 220A-220N of FIG. 2A. As shown, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache configured to perform load and store operations received from the memory crossbar 216 and the ROP 226. The L2 cache 221 outputs read misses and urgent write-back requests to the frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment, the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222).

In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z-test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments, the ROP 226 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic may be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the ROP 226 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis.

In some embodiments, the ROP 226 is included within each processing cluster (e.g., clusters 214A-214N of FIG. 2A) instead of within the partition unit 220. In such embodiments, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device (such as one of the one or more display devices 110 of FIG. 1), routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.
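The per-tile delta color compression mentioned above can be sketched as follows. This is a simplified, hypothetical encoder: it stores the first value of a tile as a base and the remaining values as differences from that base, which compresses well when color varies slowly within a tile; actual ROP compression formats are implementation specific.

```cuda
#include <cstdint>
#include <cstddef>

constexpr size_t kTilePixels = 64;  // hypothetical 8x8 tile

// Encode one channel of a tile as a base value plus per-pixel deltas.
void delta_encode(const uint8_t* tile, uint8_t* base, int16_t* deltas) {
    *base = tile[0];
    for (size_t i = 0; i < kTilePixels; ++i)
        deltas[i] = static_cast<int16_t>(tile[i]) - static_cast<int16_t>(*base);
}

// Lossless reconstruction of the original tile from base + deltas.
void delta_decode(uint8_t base, const int16_t* deltas, uint8_t* tile) {
    for (size_t i = 0; i < kTilePixels; ++i)
        tile[i] = static_cast<uint8_t>(base + deltas[i]);
}
```

Because the deltas are stored at full width here, the round trip is exactly lossless; a real encoder would additionally choose a narrower delta width per tile based on the statistical characteristics of the data, which is what makes the scheme compress.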
FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit, according to an embodiment. In one embodiment, the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 may be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single instruction multiple data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single instruction multiple thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Those skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.

Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an illustrative example of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within the processing cluster 214. The graphics multiprocessor 234 can process data, and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for the processed data to be distributed via the data crossbar 240.

Each graphics multiprocessor 234 within the processing cluster 214 may include an identical set of function execution logic (e.g., arithmetic logic units, load-store units, etc.). The function execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The function execution logic supports a variety of operations, including integer and floating-point arithmetic, comparison operations, Boolean operations, bit shifting, and computation of various algebraic functions. In one embodiment, the same functional-unit hardware can be leveraged to perform different operations, and any combination of functional units may be present.

The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within the graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during the cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. In one embodiment, multiple thread groups can be executed concurrently on the graphics multiprocessor 234.
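The divergent execution that SIMT permits can be made concrete with a small CUDA kernel. One thread is launched per data element, so a thread group maps threads onto processing engines as described above; threads whose data meet a condition take one path while the others take another, and the hardware serializes the divergent paths and re-converges afterwards. The kernel below is purely illustrative.

```cuda
__global__ void simt_divergence(const float* in, float* out, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    // Divergent branch: threads in the same warp may take different paths.
    // SIMT hardware executes each taken path serially, masking inactive
    // threads, then re-converges; the program remains correct regardless.
    if (in[tid] > 0.0f)
        out[tid] = in[tid] * 2.0f;   // path A
    else
        out[tid] = -in[tid];         // path B
}
```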
In one embodiment, the graphics multiprocessor 234 includes an internal cache memory to perform load and store operations. In one embodiment, the graphics multiprocessor 234 can forgo an internal cache and instead use cache memory within the processing cluster 214 (e.g., L1 cache 308). Each graphics multiprocessor 234 also has access to the L2 caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. Embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234 can share common instructions and data, which may be stored in the L1 cache 308.

Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally to a cache line index. The MMU 245 may include address translation lookaside buffers (TLBs) or caches, which may reside within the graphics multiprocessor 234 or the L1 cache or the processing cluster 214. The physical address is processed to distribute surface data access locality, allowing efficient request interleaving among the partition units. The cache line index may be used to determine whether a request for a cache line is a hit or a miss.

In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, such as determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or, in some embodiments, from the L1 cache within the graphics multiprocessor 234, and is fetched from an L2 cache, local parallel processor memory, or system memory as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing, or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from the graphics multiprocessor 234 and direct data to ROP units, which may be located with the partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translation.
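The PTE-based address translation performed by the MMU 245, with a TLB in front of the page table, can be modeled schematically as below. This is a sketch only: the real page-table format, TLB organization, and tile/cache-line-index handling are not specified in the text, and the structures here are hypothetical.

```cuda
#include <cstdint>

constexpr uint64_t kPageBits   = 12;                // hypothetical 4 KiB pages
constexpr uint64_t kPageMask   = (1ull << kPageBits) - 1;
constexpr int      kTlbEntries = 16;

struct TlbEntry { uint64_t vpn; uint64_t pfn; bool valid; };

struct Mmu {
    TlbEntry tlb[kTlbEntries] = {};
    const uint64_t* page_table;  // page_table[vpn] = pfn (single-level model)

    uint64_t translate(uint64_t vaddr) {
        uint64_t vpn = vaddr >> kPageBits;
        TlbEntry& e = tlb[vpn % kTlbEntries];       // direct-mapped TLB
        if (!(e.valid && e.vpn == vpn)) {           // TLB miss: consult PTEs
            e = {vpn, page_table[vpn], true};
        }
        return (e.pfn << kPageBits) | (vaddr & kPageMask);
    }
};
```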
It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. In one embodiment, each processing cluster 214 can be configured to operate independently of the other processing clusters 214 using separate and distinct processing units, L1 caches, etc.

FIG. 2D shows a graphics multiprocessor 234, according to one embodiment. In such an embodiment, the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general-purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and the load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268.

In one embodiment, the instruction cache 252 receives a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within the GPGPU cores 262. An instruction can access any of the local, shared, or global address spaces by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into distinct memory addresses that can be accessed by the load/store units 266.

The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., the GPGPU cores 262 and load/store units 266) of the graphics multiprocessor 234. In one embodiment, the register file 258 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. In one embodiment, the register file 258 is divided between the different warps being executed by the graphics multiprocessor 234.

The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. The GPGPU cores 262 can be similar in architecture or can differ in architecture, according to embodiments. For example, and in one embodiment, a first portion of the GPGPU cores 262 includes a single-precision FPU and an integer ALU, while a second portion of the GPGPU cores includes a double-precision FPU. In one embodiment, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable-precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed-function or special-function units to perform specific functions such as copy rectangle or pixel blending operations. In one embodiment, one or more of the GPGPU cores can also include fixed or special function logic.
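The division of the register file 258 among warps described above is, in effect, a static allocation problem, and the arithmetic can be made explicit. The sizes below are assumed for illustration and are not taken from the document.

```cuda
#include <cstdint>

// Hypothetical sizes for illustration only.
constexpr uint32_t kRegisterFileWords = 65536;  // total 32-bit registers
constexpr uint32_t kThreadsPerWarp    = 32;

// Registers available per thread if `resident_warps` warps share the file
// equally; a compiler or scheduler would use this bound when deciding how
// many warps can be resident at once.
constexpr uint32_t regs_per_thread(uint32_t resident_warps) {
    return kRegisterFileWords / (resident_warps * kThreadsPerWarp);
}

static_assert(regs_per_thread(16) == 128, "16 warps -> 128 regs/thread");
static_assert(regs_per_thread(64) == 32,  "64 warps -> 32 regs/thread");
```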
In one embodiment, the GPGPU cores 262 include SIMD logic capable of performing a single instruction on multiple sets of data. In one embodiment, the GPGPU cores 262 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example, and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.

The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. In one embodiment, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, so data transfer between the GPGPU cores 262 and the register file 258 has very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program-managed cache. Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.

FIGS. 3A-3B illustrate additional graphics multiprocessors, according to embodiments. The illustrated graphics multiprocessors 325, 350 are variants of the graphics multiprocessor 234 of FIG. 2C. The illustrated graphics multiprocessors 325, 350 can be configured as streaming multiprocessors (SMs) capable of simultaneous execution of a large number of execution threads.

FIG. 3A shows a graphics multiprocessor 325 according to an additional embodiment. The graphics multiprocessor 325 includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction units 332A-332B, register files 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU cores 336A-336B, GPGPU cores 337A-337B, GPGPU cores 338A-338B) and multiple sets of load/store units 340A-340B. In one embodiment, the execution resource units have a common instruction cache 330, a texture and/or data cache 342, and a shared memory 346.
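Returning to the logical versus physical SIMD widths noted above, executing a logically wider instruction on a narrower physical datapath amounts to time multiplexing: a logical SIMD32 operation is processed in four passes over a SIMD8 unit. A schematic host-side model of the idea, using assumed widths:

```cuda
#include <cstdint>

constexpr int kPhysicalLanes = 8;   // physical SIMD8 datapath

// Execute a logically kLogicalWidth-wide add by iterating the physical
// 8-lane unit; e.g., a logical SIMD32 op takes 32/8 = 4 passes.
template <int kLogicalWidth>
void logical_simd_add(const float* a, const float* b, float* out) {
    static_assert(kLogicalWidth % kPhysicalLanes == 0, "width mismatch");
    for (int pass = 0; pass < kLogicalWidth / kPhysicalLanes; ++pass)
        for (int lane = 0; lane < kPhysicalLanes; ++lane) {  // one SIMD8 issue
            int i = pass * kPhysicalLanes + lane;
            out[i] = a[i] + b[i];
        }
}
```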
Various components can communicate via an interconnect fabric 327. In one embodiment, the interconnect fabric 327 includes one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. In one embodiment, the interconnect fabric 327 is a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327. For example, the GPGPU cores 336A-336B, 337A-337B, and 338A-338B can each communicate with the shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.

FIG. 3B shows a graphics multiprocessor 350 according to an additional embodiment. The graphics processor includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load/store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354 and shared memory 362. In one embodiment, the execution resources 356A-356D can share the instruction cache 354 and shared memory 362, as well as multiple instances of texture and/or data caches 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.

Those skilled in the art will understand that the architectures described in FIGS. 1, 2A-2D, and 3A-3B are descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs), including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG. 2A, as well as one or more graphics processors or special-purpose processing units, without departing from the scope of the embodiments described herein.

In some embodiments, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

Techniques for GPU-to-host-processor interconnection

FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413 are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440-443 (e.g., buses, point-to-point interconnects, etc.). In one embodiment, the high-speed links 440-443 support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s, or higher, depending on the implementation.
Various interconnect protocols may be used, including but not limited to PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles of the invention are not limited to any particular communication protocol or throughput.

In addition, in one embodiment, two or more of the GPUs 410-413 are interconnected over high-speed links 444-445, which may be implemented using the same or different protocols/links than those used for the high-speed links 440-443. Similarly, two or more of the multi-core processors 405-406 may be connected over a high-speed link 433, which may be a symmetric multi-processor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s, or higher. Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles of the invention are not limited to any particular type of interconnect technology.

In one embodiment, each multi-core processor 405-406 is communicatively coupled to a processor memory 401-402 via memory interconnects 430-431, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450-453, respectively. The memory interconnects 430-431 and 450-453 may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random access memory (DRAM) (including stacked DRAM), graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or high bandwidth memory (HBM), and/or may be non-volatile memories such as 3D XPoint or Nano-RAM. In one embodiment, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).

As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to particular memories 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories. For example, the processor memories 401-402 may each comprise 64 GB of the system memory address space, and the GPU memories 420-423 may each comprise 32 GB of the system memory address space (resulting in a total of 256 GB of addressable memory in this example).
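The example address-space partitioning just described (two 64 GB processor memories followed by four 32 GB GPU memories, 256 GB in total) can be expressed as a simple range lookup. The layout order and the routing structure below are illustrative assumptions.

```cuda
#include <cstdint>

constexpr uint64_t GiB = 1ull << 30;

struct Range { uint64_t base, size; const char* backing; };

// Example layout following the text: processor memories first, then GPU memories.
constexpr Range kLayout[] = {
    {0 * GiB,   64 * GiB, "processor memory 401"},
    {64 * GiB,  64 * GiB, "processor memory 402"},
    {128 * GiB, 32 * GiB, "GPU memory 420"},
    {160 * GiB, 32 * GiB, "GPU memory 421"},
    {192 * GiB, 32 * GiB, "GPU memory 422"},
    {224 * GiB, 32 * GiB, "GPU memory 423"},
};

// Return which physical memory backs a given virtual address, or nullptr
// if the address falls outside the 256 GiB example space.
const char* backing_memory(uint64_t vaddr) {
    for (const Range& r : kLayout)
        if (vaddr >= r.base && vaddr < r.base + r.size) return r.backing;
    return nullptr;
}
```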
FIG. 4B illustrates additional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446 in accordance with one embodiment. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card that is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.

The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.), which are not illustrated to avoid obscuring the underlying principles of the invention. The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 426 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches is shared by two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include the processor memories 401-402.

Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherency bus 464. For example, each cache may have cache coherency logic/circuitry associated with it to communicate over the coherency bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherency bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those skilled in the art and will not be described in detail here to avoid obscuring the underlying principles of the invention.

In one embodiment, a proxy circuit 425 communicatively couples the graphics acceleration module 446 to the coherency bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over the high-speed link 440 (e.g., a PCIe bus, NVLink, etc.), and an interface 437 connects the graphics acceleration module 446 to the high-speed link 440.

In one implementation, an accelerator integrated circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of the multiple graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and bit-block transfer (blit) engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N, or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.

In one embodiment, the accelerator integrated circuit 436 includes a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. In one embodiment, the data stored in the cache 438 and the graphics memories 433-434, M is kept coherent with the core caches 462A-462D, 456 and system memory 441.
As mentioned, this may be done via the proxy circuit 425, which takes part in the cache coherency mechanism on behalf of the cache 438 and the graphics memories 433-434, M (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on the processor caches 462A-462D, 456 and receiving updates from the cache 438).

A set of registers 445 stores context data for threads executed by the graphics processing engines 431-432, N, and a context management circuit 448 manages the thread contexts. For example, the context management circuit 448 may perform save and restore operations to save and restore the contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is restored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store the current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. In one embodiment, an interrupt management circuit 447 receives and processes interrupts received from system devices.

In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 441 by the MMU 439. One embodiment of the accelerator integrated circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.

Thus, the accelerator integrated circuit 436 acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In addition, the accelerator integrated circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.

Because the hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One function of the accelerator integrated circuit 436, in one embodiment, is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.
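The save/restore behavior of the context management circuit 448 described above can be modeled in a few lines. The sketch assumes a hypothetical fixed-size register context and save areas identified by context pointers, as in the text; the actual register layout is implementation specific.

```cuda
#include <cstdint>
#include <cstring>

constexpr int kContextRegs = 64;  // hypothetical per-context register count

struct EngineRegs { uint64_t r[kContextRegs]; };

// Save the current register state to the save area identified by the
// outgoing context's pointer, then load the incoming context's registers.
void context_switch(EngineRegs* live,
                    EngineRegs* save_area_out,          // area for outgoing context
                    const EngineRegs* save_area_in) {   // area of incoming context
    std::memcpy(save_area_out, live, sizeof(EngineRegs));  // save
    std::memcpy(live, save_area_in, sizeof(EngineRegs));   // restore
}
```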
As mentioned, in the illustrated embodiment, one or more graphics memories 433-434, M are coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAM (including stacked DRAM), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-RAM.

In one embodiment, to reduce data traffic over the high-speed link 440, biasing techniques are used to ensure that the data stored in the graphics memories 433-434, M is data that will be used most frequently by the graphics processing engines 431-432, N and preferably not used (at least not frequently used) by the cores 460A-460D. Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not by the graphics processing engines 431-432, N) within the cores' caches 462A-462D, 456 and system memory 441.

FIG. 4C illustrates another embodiment in which the accelerator integrated circuit 436 is integrated within the processor 407. In this embodiment, the graphics processing engines 431-432, N communicate directly over the high-speed link 440 with the accelerator integrated circuit 436 via the interface 437 and the interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integrated circuit 436 may perform the same operations as those described with respect to FIG. 4B, but potentially at a higher throughput given its close proximity to the coherency bus 464 and the caches 462A-462D, 426.

One embodiment supports different programming models, including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The shared programming models may include programming models controlled by the accelerator integrated circuit 436 and programming models controlled by the graphics acceleration module 446.

In one embodiment of the dedicated-process model, the graphics processing engines 431-432, N are dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.

In the dedicated-process programming models, the graphics processing engines 431-432, N may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.

For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. In one embodiment, the process elements are stored in system memory 441 and are addressable using the effective-address-to-real-address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, invoking system software to add a process element to the process element linked list). The lower 16 bits of the process handle may be the offset of the process element within the process element linked list.
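The role of the process handle can be shown directly: per the text, its lower 16 bits give the offset of the process element within the process element linked list. The remaining bits are left opaque here, since their meaning is implementation specific.

```cuda
#include <cstdint>

// Extract the process-element offset from a process handle; the lower
// 16 bits of the handle are the offset of the process element within
// the process element linked list.
constexpr uint32_t process_element_offset(uint64_t process_handle) {
    return static_cast<uint32_t>(process_handle & 0xFFFFu);
}

static_assert(process_element_offset(0xABCD0042ull) == 0x0042,
              "only the low 16 bits form the offset");
```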
FIG. 4D illustrates an exemplary accelerator integrated slice 490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integrated circuit 436. An application effective address space 482 within system memory 441 stores process elements 483. In one embodiment, the process elements 483 are stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.

The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. Embodiments of the invention include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.

In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integrated circuit 436 for the owning partition, and the operating system initializes the accelerator integrated circuit 436 for the owning process at the time the graphics acceleration module 446 is assigned.

In operation, a WD fetch unit 491 in the accelerator integrated slice 490 fetches the next WD 484, which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447, and/or context management circuit 448 as illustrated. For example, one embodiment of the MMU 439 includes segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.
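The WD fetch performed by the WD fetch unit 491 is essentially a dequeue from a job request queue set up by the application, since the WD may be either a self-contained job or a pointer into such a queue. A hypothetical sketch:

```cuda
#include <cstdint>

struct WorkDescriptor { uint64_t command; uint64_t args_ptr; };

// Hypothetical ring of job requests established by the application.
struct JobQueue {
    WorkDescriptor* ring;
    uint32_t head, tail, capacity;
};

// Fetch the next WD from the queue; returns false when no work is pending.
bool fetch_next_wd(JobQueue& q, WorkDescriptor& out) {
    if (q.head == q.tail) return false;   // queue empty
    out = q.ring[q.head % q.capacity];
    ++q.head;
    return true;
}
```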
In one embodiment, the same set of registers 445 is duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or the operating system. Each of these duplicated registers may be included in the accelerator integrated slice 490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.

Table 1 - Registers initialized by the hypervisor
1. Slice control register
2. Real address (RA) scheduled processes area pointer
3. Authority mask override register
4. Interrupt vector table entry offset
5. Interrupt vector table entry limit
6. Status register
7. Logical partition ID
8. Real address (RA) hypervisor accelerator utilization record pointer
9. Storage description register

Exemplary registers that may be initialized by the operating system are shown in Table 2.

Table 2 - Registers initialized by the operating system
1. Process and thread identification
2. Effective address (EA) context save/restore pointer
3. Virtual address (VA) accelerator utilization record pointer
4. Virtual address (VA) storage segment table pointer
5. Authority mask
6. Work descriptor

In one embodiment, each WD 484 is specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information required by a graphics processing engine 431-432, N to do its work, or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.

FIG. 4E illustrates additional details for one embodiment of a shared model. This embodiment includes a hypervisor real address space 498 in which a process element list 499 is stored. The hypervisor real address space 498 is accessible via a hypervisor 496, which virtualizes the graphics acceleration module engines for the operating system 495.

The shared programming models allow all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models in which the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced sharing and graphics-directed sharing.

In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For the graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) an application's job request must be autonomous (that is, state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism; 2) the graphics acceleration module 446 guarantees that an application's job request completes in a specified amount of time, including any translation faults, or provides the ability to preempt the processing of the job; and 3) the graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.

In one embodiment, for the shared model, the application 480 is required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value.
The WD is formatted specifically for the graphics acceleration module 446 and can take the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure that describes the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integrated circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. In one embodiment, the CSRP is one of the registers 445 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.

Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.

Table 3 - OS-to-hypervisor call parameters
1. A work descriptor (WD)
2. An authority mask register (AMR) value (potentially masked)
3. An effective address (EA) context save/restore area pointer (CSRP)
4. A process ID (PID) and optional thread ID (TID)
5. A virtual address (VA) accelerator utilization record pointer (AURP)
6. The virtual address of the storage segment table pointer (SSTP)
7. A logical interrupt service number (LISN)

Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type. The process element may include the information shown in Table 4.

Table 4 - Process element information
1. A work descriptor (WD)
2. An authority mask register (AMR) value (potentially masked)
3. An effective address (EA) context save/restore area pointer (CSRP)
4. A process ID (PID) and optional thread ID (TID)
5. A virtual address (VA) accelerator utilization record pointer (AURP)
6. The virtual address of the storage segment table pointer (SSTP)
7. A logical interrupt service number (LISN)
8. An interrupt vector table, derived from the hypervisor call parameters
9. A status register (SR) value
10. A logical partition ID (LPID)
11. A real address (RA) hypervisor accelerator utilization record pointer
12. The storage descriptor register (SDR)

In one embodiment, the hypervisor initializes a plurality of accelerator integrated slice 490 registers 445.
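The flow from system call to hypervisor-installed process element can be sketched as below, with the structure fields abridged from Table 4. The masking steps (applying UAMOR and AMOR to the AMR) are shown schematically as bitwise operations; this is an illustrative assumption rather than a definition of the architectural semantics.

```cuda
#include <cstdint>

struct ProcessElement {            // fields abridged from Table 4
    uint64_t wd;                   // work descriptor
    uint64_t amr;                  // authority mask register value
    uint64_t csrp;                 // context save/restore area pointer
    uint32_t pid, tid;
    ProcessElement* next;          // linked-list member
};

// OS side: apply the current UAMOR to the application-supplied AMR before
// the hypervisor call (schematic; exact semantics are implementation specific).
uint64_t os_mask_amr(uint64_t app_amr, uint64_t uamor) {
    return app_amr & uamor;
}

// Hypervisor side: optionally apply AMOR, then link the element into the
// process element list for the corresponding acceleration module type.
void hv_install(ProcessElement*& list_head, ProcessElement* pe, uint64_t amor) {
    pe->amr &= amor;
    pe->next = list_head;
    list_head = pe;
}
```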
As illustrated in FIG. 4F, one embodiment of the invention employs a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402, and vice versa, thereby simplifying programmability. In one embodiment, a first portion of the virtual/effective address space is allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

In one embodiment, bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E ensures cache coherence between the caches of the host processors (e.g., 405) and of the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. Although multiple instances of the bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integrated circuit 436.

One embodiment allows GPU-attached memories 420-423 to be mapped as part of system memory and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability of the GPU-attached memories 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results without the overhead of traditional I/O direct memory access (DMA) data copies. Such traditional copies involve driver calls, interrupts, and memory-mapped I/O (MMIO) accesses, all of which are inefficient relative to simple memory accesses. At the same time, the ability to access the GPU-attached memories 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by the GPUs 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.

In one implementation, the selection between GPU bias and host processor bias is driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPUs 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.
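A bias table at page granularity with one bit per page, as described above, is naturally stored as a packed bit array. A minimal sketch of the one-bit-per-page variant (the page size and the stolen-memory placement are abstracted away):

```cuda
#include <cstdint>

constexpr uint64_t kPageBits = 12;  // hypothetical 4 KiB pages

enum class Bias : int { Host = 0, Gpu = 1 };

// One bit per GPU-attached memory page, packed 64 pages per word.
struct BiasTable {
    uint64_t* bits;

    Bias lookup(uint64_t gpu_phys_addr) const {
        uint64_t page = gpu_phys_addr >> kPageBits;
        return static_cast<Bias>((bits[page / 64] >> (page % 64)) & 1ull);
    }
    void set(uint64_t gpu_phys_addr, Bias b) {
        uint64_t page = gpu_phys_addr >> kPageBits;
        uint64_t mask = 1ull << (page % 64);
        if (b == Bias::Gpu) bits[page / 64] |= mask;
        else                bits[page / 64] &= ~mask;
    }
};
```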
In one implementation, the bias table entry associated with each access to a GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from a GPU 410-413 that find their page in GPU bias are forwarded directly to the corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). In one embodiment, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to host processor bias if it is not currently using the page.

The bias state of a page can be changed by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

One mechanism for changing the bias state employs an API call (e.g., OpenCL), which in turn calls the GPU's device driver, which in turn sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, to perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.

In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410, which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the processor 405 and the GPU 410, it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405, and vice versa.
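The request-routing rules above reduce to a small decision function. The following sketch combines the bias-table lookup from the previous sketch with the routing described in the text; the enumeration of destinations is illustrative.

```cuda
enum class Requester { Gpu, HostProcessor };
enum class Route { LocalGpuMemory, ForwardToHost, NormalHostRead, ForwardToGpu };

// Route a memory request according to the page's bias state (see the
// BiasTable sketch above) and the identity of the requester.
Route route_request(Requester who, bool page_is_gpu_biased) {
    if (who == Requester::Gpu)
        return page_is_gpu_biased ? Route::LocalGpuMemory   // fast local path
                                  : Route::ForwardToHost;   // via high-speed link
    // Host processor request:
    return page_is_gpu_biased ? Route::ForwardToGpu         // may trigger a bias change
                              : Route::NormalHostRead;      // completes as a normal read
}
```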
Graphics processing pipeline

FIG. 5 illustrates a graphics processing pipeline 500, according to an embodiment. In one embodiment, a graphics processor can implement the illustrated graphics processing pipeline 500. The graphics processor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2A, which, in one embodiment, is a variant of the parallel processor(s) 112 of FIG. 1. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., the parallel processing unit 202 of FIG. 2A) as described herein. For example, a shader unit (e.g., the graphics multiprocessor 234 of FIG. 2C) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of a data assembler 502, primitive assemblers 506, 514, 518, a tessellation unit 510, a rasterizer 522, and a raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., the processing cluster 214 of FIG. 2C) and a corresponding partition unit (e.g., the partition units 220A-220N of FIG. 2A). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. In one embodiment, one or more portions of the graphics processing pipeline 500 can be performed by parallel processing logic within a general-purpose processor (e.g., CPU). In one embodiment, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., the parallel processor memory 222 of FIG. 2A) via a memory interface 528, which may be an instance of the memory interface 218 of FIG. 2A.

In one embodiment, the data assembler 502 is a processing unit that collects vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local, or system memory for use in processing the vertex data and can be programmed to transform the vertex data from an object-based coordinate representation to a world-space coordinate space or a normalized device coordinate space.

A first instance of the primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by the tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).

The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation of the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for the edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. The tessellation unit 510 is configured to receive the tessellation factors for the edges of a patch and to tessellate the patch into multiple geometric primitives, such as line, triangle, or quadrilateral primitives, which are transmitted to the tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.

A second instance of the primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reads stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform the graphics primitives received from the primitive assembler 514 as specified by the geometry shader programs. In one embodiment, the geometry processing unit 516 is programmed to subdivide the graphics primitives into one or more new graphics primitives and to calculate parameters used to rasterize the new graphics primitives. In some embodiments, the geometry processing unit 516 can also add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to the primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520.
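The object-space to normalized-device-space transformation performed by a vertex shader on the vertex processing unit 504 amounts to a matrix multiply followed by a perspective divide. A minimal CUDA sketch of this per-vertex work, using a conventional 4x4 matrix and float4 layout (assumptions, not taken from the figure):

```cuda
struct float4x4 { float m[4][4]; };

__device__ float4 mul(const float4x4& M, float4 v) {
    float4 r;
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

// One thread per vertex: transform to clip space, then divide by w to
// obtain normalized device coordinates.
__global__ void vertex_transform(const float4* in, float4* out,
                                 float4x4 mvp, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 clip = mul(mvp, in[i]);
    out[i] = make_float4(clip.x / clip.w, clip.y / clip.w,
                         clip.z / clip.w, clip.w);
}
```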
The primitive assembler 518 receives parameters and vertices from the geometry processing unit 516, and constructs the graphical primitives to be processed by the viewport zoom, cull, and clip unit 520. The geometry processing unit 516 reads data stored in the parallel processor memory or the system memory for use in processing geometric data. The viewport zooming, picking, and editing unit 520 performs editing, picking, and viewport zooming, and outputs the processed graphic primitives to the rasterizer 522.The rasterizer 522 can perform depth picking and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and outputs those fragments and associated overlay data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit configured to execute a fragment shader program or a pixel shader program. The fragment/pixel processing unit 524 transforms the fragments or pixels received from the rasterizer 522 as specified by the fragment or pixel shader program. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including, but not limited to, texture mapping, shading, blending, texture correction, and perspective correction to generate colored fragments or pixels that are output to the raster operation unit 526. The segment/pixel processing unit 524 can read the data stored in the parallel processor memory or the system memory for use in processing the segment data. The fragment or pixel shader program can be configured to perform coloring with samples, pixels, tiles, or other granularities according to the sampling rate configured for the processing unit.The raster operation unit 526 is a processing unit that performs raster operations including but not limited to stencil printing, z-check, blending, etc., and outputs pixel data as processed graphics data to be stored in a graphics memory (for example, as shown in FIG. 2 Parallel processor memory 222, and/or system memory 104 in FIG. 1) to be displayed on one or more display devices 110 or used by one or more processors 102 or (multiple) parallel processors One of 112 is further processed. In some embodiments, the raster operation unit 526 is configured to compress the z or color data written to the memory, and decompress the z or color data read from the memory.Machine learning overviewA machine learning algorithm is an algorithm that can learn based on a set of data. Embodiments of machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine which of several categories a given input belongs to; regression algorithms can output values for a given input; and pattern recognition algorithms can be used to generate translated text or perform text-to-speech And/or voice recognition.An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; one simple type of neural network is a feedforward network. The feedforward network can be implemented as an acyclic graph, where nodes are arranged in layers. Generally, a feedforward network topology includes an input layer and an output layer, which are separated by at least one hidden layer. The hidden layer transforms the input received by the input layer into a useful representation for generating output in the output layer. 
Network nodes are fully connected to nodes in adjacent layers via edges, but there are no edges between nodes in each layer. The data received at the nodes of the input layer of the feedforward network is propagated (ie, "feedforward") to the nodes of the output layer via an activation function, which calculates each of the nodes in the network based on coefficients ("weights") The state of the nodes of successive layers, and the coefficients are respectively associated with each of the edges connecting the layers. Depending on the specific model represented by the executed algorithm, the output from the neural network algorithm can take various forms.Before a machine learning algorithm can be used to model a particular problem, the training data set is used to train the algorithm. Training a neural network involves selecting the network topology, using a set of training data that represents the problem modeled by the network, and adjusting the weights until the network model performs with the smallest error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output generated by the network in response to an input representing an example in the training data set is compared with the "correct" labeled output of that example, and the calculated representation output is compared with the labeled output. The difference between the output of the error signal, and when the error signal is propagated backwards through the layers of the network, the weights associated with the connection are adjusted to minimize the error. The network is considered "trained" when the error of each output generated from the instance of the training data set is minimized.The accuracy of a machine learning algorithm can be significantly affected by the quality of the data set used to train the algorithm. The training process can be computationally intensive and can require a lot of time on conventional general-purpose processors. Therefore, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, because the calculations performed when adjusting the coefficients in the neural network naturally contribute to parallel implementation. Specifically, many machine learning algorithms and software applications have been adapted to use parallel processing hardware within general graphics processing devices.FIG. 6 is a generalized diagram of the machine learning software stack 600. The machine learning application 602 may be configured to use a training data set to train a neural network or configured to use a trained deep neural network to implement machine intelligence. The machine learning application 602 may include specialized software and/or neural network training and inference functions that can be used to train a neural network before deployment. The machine learning application 602 can implement any type of machine intelligence, including but not limited to image recognition, mapping and positioning, autonomous navigation, speech synthesis, medical imaging, or language translation.The hardware acceleration for the machine learning application 602 can be enabled via the machine learning block 604. The machine learning block 604 can provide a library of machine learning primitives. Machine learning primitives are the basic operations that machine learning algorithms usually perform. 
Without the machine learning block 604, the developer of the machine learning algorithm will be required to create and optimize the main calculation logic associated with the machine learning algorithm, and then re-optimize the calculation logic when a new parallel processor is developed. Instead, the machine learning application can be configured to use the primitives provided by the machine learning block 604 to perform the necessary calculations. Exemplary primitives include tensor convolution, activation function, and pooling, which are computational operations performed when training a convolutional neural network (CNN). The machine learning block 604 can also provide primitives to implement basic linear algebra subroutines performed by many machine learning algorithms, such as matrix and vector operations.The machine learning block 604 can process the input data received from the machine learning application 602 and generate appropriate input to the calculation block 606. The computing block 606 can abstract the basic instructions provided to the GPGPU driver 608, so that the machine learning block 604 can utilize hardware acceleration via the GPGPU hardware 610 without requiring the machine learning block 604 to be very familiar with the architecture of the GPGPU hardware 610. In addition, the computing block 606 can enable hardware acceleration for the machine learning block 604 across multiple types and generations of GPGPU hardware 610.GPGPU machine learning accelerationFIG. 7 illustrates a highly parallel general graphics processing unit 700 according to an embodiment. In one embodiment, the general purpose processing unit (GPGPU) 700 may be configured to be particularly efficient in processing the type of computational workload associated with training deep neural networks. In addition, the GPGPU 700 can be directly linked to other instances of the GPGPU to create a multi-GPU cluster to improve the training speed of particularly deep neural networks.The GPGPU 700 includes a host interface 702 for enabling connection with a host processor. In one embodiment, the host interface 702 is a PCI Express interface. However, the host interface can also be a supplier-specific communication interface or communication structure. The GPGPU 700 receives commands from the host processor and uses the global scheduler 704 to distribute the execution threads associated with those commands to a set of computing clusters 706A-706H. The computing clusters 706A-706H share the cache memory 708. The cache memory 708 may act as a high-level cache in the cache memory within the computing clusters 706A-706H.The GPGPU 700 includes memories 714A-714B that are coupled to the computing cluster 706A-H via a set of memory controllers 712A-712B. In various embodiments, the memories 714A-714B may include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM) (including graphics dual Data rate (GDDR) memory) or 3D stacked memory (including but not limited to high bandwidth memory (HBM)).In one embodiment, each computing cluster 706A-706H includes a set of graphics multiprocessors, such as the graphics multiprocessor 400 of FIG. 4A. The graphics multiprocessor of the computing cluster includes multiple types of integer and floating-point logic units, which can perform calculation operations at a range of precisions (including precisions suitable for machine learning calculations). 
For example and in one embodiment, at least a subset of the floating-point units in each of the computing clusters 706A-706H may be configured to perform 16-bit or 32-bit floating-point operations, while a different subset of the floating-point units may Is configured to perform 64-bit floating point operations.Multiple instances of GPGPU 700 may be configured to operate as a computing cluster. The communication mechanism used by the computing cluster for synchronization and data exchange varies across embodiments. In one embodiment, multiple instances of GPGPU 700 communicate through host interface 702. In one embodiment, GPGPU 700 includes an I/O hub 709 that couples GPGPU 700 with GPU link 710, which enables direct connection to other instances of GPGPU. In one embodiment, GPU link 710 is coupled to a dedicated GPU-to-GPU bridge, which enables communication and synchronization between multiple instances of GPGPU 700. In one embodiment, the GPU link 710 is coupled with a high-speed interconnect to transmit data to other GPGPU or parallel processors and receive data. In one embodiment, multiple instances of the GPGPU 700 are located in separate data processing systems and communicate via a network device, which is accessible via the host interface 702. In one embodiment, in addition to or as an alternative to host interface 702, GPU link 710 may be configured to enable a connection to a host processor.Although the illustrated configuration of GPGPU 700 may be configured to train a neural network, one embodiment provides an alternative configuration of GPGPU 700, which may be configured for deployment in a high-performance or low-power inference platform. In the inference configuration, GPGPU 700 includes fewer computing clusters 706A-706H relative to the training configuration. In addition, the memory technology associated with the memories 714A-714B may differ between the inferred configuration and the training configuration. In one embodiment, the inferred configuration of GPGPU 700 may support inferred specific instructions. For example, the inference configuration may provide support for one or more 8-bit integer dot product instructions, which are typically used during inference operations for deployed neural networks.FIG. 8 illustrates a multi-GPU computing system 800 according to an embodiment. The multi-GPU computing system 800 may include a processor 802 that is coupled to a plurality of GPGPUs 806A-806D via a host interface switch 804. In one embodiment, the host interface switch 804 is a PCI express switch device that couples the processor 802 to a PCI express bus, and the processor 802 can communicate with the group of GPGPUs 806A-806D through the PCI express bus. Each of the plurality of GPGPUs 806A-806D may be an example of the GPGPU 700 of FIG. 7. GPGPU 806A-806D can be interconnected via a set of high-speed point-to-point GPU-to-GPU links 816. The high-speed GPU-to-GPU link may be connected to each of the GPGPU 806A-806D via a dedicated GPU link (such as GPU link 710 in FIG. 7). The P2P GPU link 816 enables direct communication between each of the GPGPUs 806A-806D without requiring communication through the host interface bus to which the processor 802 is connected. In the case where the GPU-to-GPU service involves a P2P GPU link, the host interface bus can still be used for system memory access or communication with other instances of the multi-GPU computing system 800, for example, via one or more network devices. 
Although in the illustrated embodiment the GPGPU 806A-806D are connected to the processor 802 via the host interface switch 804, in one embodiment the processor 802 includes direct support for the P2PGPU link 816 and can be directly connected to the GPGPU 806A- 806D.Machine learning neural network implementationThe computing architecture provided by the embodiments described herein can be configured to perform a type of parallel processing that is particularly suitable for training and deploying neural networks for machine learning. The neural network can be summarized as a network with functions of graph relations. As is well known in the art, there are many types of neural network implementations used in machine learning. An exemplary type of neural network is the feedforward network as previously described.The second exemplary type of neural network is Convolutional Neural Network (CNN). CNN is a specialized feedforward neural network for processing data (such as image data) with a known grid-like topology. Therefore, CNNs are commonly used for computational vision and image recognition applications, but they can also be used for other types of pattern recognition, such as speech and language processing. The nodes in the CNN input layer are organized into a set of "filters" (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The calculations for CNN include applying convolution mathematical operations to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function, which is a modified version of one of the two original functions. In convolutional network terminology, the first function of the convolution can be called the input, and the second function can be called the convolution kernel. The output can be referred to as a feature map. For example, the input to the convolutional layer may be a multi-dimensional data array, which defines various color components of the input image. The convolution kernel may be a multi-dimensional parameter array, where the parameters are adapted through a training process for the neural network.A recurrent neural network (RNN) is a type of feedforward neural network that includes feedback connections between layers. RNN enables modeling of sequence data by sharing parameter data across different parts of the neural network. The architecture of RNN includes loops. The loop represents the influence of the current value of the variable on its own value at a future time, because at least a part of the output data from the RNN is used as feedback for processing subsequent inputs in the sequence. Due to the variable nature of language data that can be included, this feature makes RNNs particularly useful for language processing.The figures described below present exemplary feedforward, CNN, and RNN networks, and describe the general process for separately training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and non-limiting with respect to any specific embodiments described herein, and that the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.The exemplary neural network described above can be used to perform deep learning. Deep learning is machine learning that uses deep neural networks. 
In contrast to a shallow neural network that includes only a single hidden layer, the deep neural network used in deep learning is an artificial neural network composed of multiple hidden layers. Neural networks that are trained more deeply are generally more computationally intensive. However, the additional hidden layer of the network enables multi-step pattern recognition, which results in reduced output errors relative to shallow machine learning techniques.The deep neural network used in deep learning usually includes a front-end network to perform feature recognition coupled to a back-end network representing a mathematical model that can perform operations based on the feature representation provided to the model (e.g., object classification, voice Recognition, etc.). Deep learning enables machine learning to be performed without requiring manual feature engineering to be performed on the model. In contrast, deep neural networks can learn features based on the statistical structure or correlation within the input data. The learned features can be provided to a mathematical model, which can map the detected features to an output. The mathematical models used by the network are generally dedicated to specific tasks to be performed, and different models will be used to perform different tasks.Once the neural network is structured, the learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights in the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. The input vector is presented to the network for processing. Use the loss function to compare the output of the network with the expected output, and calculate the error value for each neuron in the output layer. Then, the error value is propagated backwards until each neuron has an associated error value that roughly represents its contribution to the original output. The network can then use algorithms such as stochastic gradient descent algorithms to learn from those errors to update the weights of the neural network.Figures 9A-9B illustrate exemplary convolutional neural networks. Figure 9A illustrates various layers within a CNN. As shown in Figure 9A, an exemplary CNN for modeling image processing can receive input 902 that describes the red, green, and blue (RGB) components of the input image. The input 902 may be processed by multiple convolutional layers (e.g., convolutional layer 904, convolutional layer 906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 908. The neurons in the fully connected layer have full connections to all activation functions in the previous layer, as previously described for the feedforward network. The output from the fully connected layer 908 can be used to generate output from the network. Matrix multiplication can be used instead of convolution to calculate the activations in the fully connected layer 908. Not all CNN implementations use the fully connected layer 908. For example, in some implementations, the convolutional layer 906 may generate the output of the CNN.The convolutional layers are sparsely connected, which is different from the traditional neural network configuration found in the fully connected layer 908. The traditional neural network layer is fully connected so that each output unit interacts with each input unit. 
However, the convolutional layers are sparsely connected because the output of the convolution of the domain (rather than the corresponding state value of each node in the domain) is input to the nodes of the subsequent layer, as illustrated. The kernel associated with the convolution layer performs a convolution operation, and the output of the convolution operation is sent to the next layer. The dimensionality reduction performed within the convolutional layer is one aspect that enables the CNN to scale to handle large images.Figure 9B illustrates an exemplary calculation stage within the convolutional layer of the CNN. The input 912 to the convolutional layer of the CNN can be processed in three stages of the convolutional layer 914. These three stages may include a convolution stage 916, a detector stage 918, and a pooling stage 920. The convolutional layer 914 may then output the data to successive convolutional layers. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate classification values for input to CNN.In the convolution stage 916, several convolutions are performed in parallel to generate a set of linear activations. The convolution stage 916 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformation includes rotation, translation, scaling, and a combination of these transformations. The convolution stage calculates the output of a function (for example, a neuron) connected to a specific area in the input, which can be determined as a local area associated with the neuron. The neuron calculates the dot product between the weight of the neuron and the area in the local input to which the neuron is connected. The output from the convolution stage 916 defines a set of linear activations processed by successive stages of the convolution layer 914.Linear activation can be processed by the detector stage 918. In the detector stage 918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the non-linear nature of the overall network without affecting the receptive field of the convolutional layer. Several types of non-linear activation functions can be used. One specific type is a modified linear unit (ReLU), which uses an activation function defined as f(x)=max(0,x) so that the activation is thresholded at zero.The pooling stage 920 uses a pooling function that replaces the output of the convolutional layer 906 with summary statistics of nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, so that a small translation of the input does not change the pooling output. The invariance of local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 920, including maximum pooling, average pooling, and l2-norm pooling. In addition, some CNN implementations do not include a pooling stage. In contrast, such implementations replace and additional convolution stages have an increased stride relative to the previous convolution stages.The output from the convolutional layer 914 can then be processed by the next layer 922. 
The next layer 922 may be an additional convolutional layer or one of the fully connected layers 908. For example, the first convolutional layer 904 of FIG. 9A may be output to the second convolutional layer 906, and the second convolutional layer may be output to the first layer in the fully connected layer 908.Figure 10 illustrates an exemplary recurrent neural network 1000. In a recurrent neural network (RNN), the previous state of the network affects the output of the current state of the network. A variety of functions can be used to build RNNs in a variety of ways. The use of RNN generally revolves around the use of mathematical models to predict the future based on previous input sequences. For example, RNN can be used to perform statistical language modeling to predict upcoming words given the previous word sequence. The illustrated RNN 1000 can be described as having an input layer 1002 for receiving input vectors, a hidden layer 1004 for implementing a recursive function, a feedback mechanism 1005 for enabling the'memory' of the previous state, and a feedback mechanism for outputting the result. Output layer 1006. RNN 1000 operates based on time steps. The feedback mechanism 1005 influences the state of the RNN at a given time step based on the previous time step. For a given time step, the state of the hidden layer 1004 is defined by the previous state and the input at the current time step. The initial input (x1) at the first time step can be processed by the hidden layer 1004. The second input (x2) can be processed by the hidden layer 1004 using the state information determined during the processing of the initial input (x1). The given state can be calculated as st=f(Uxt+Wst-1), where U and W are parameter matrices. The function f is generally nonlinear, such as a hyperbolic tangent function (Tanh) or a modification of the modified function f(x)=max(0,x). However, the specific mathematical function used in the hidden layer 1004 may vary according to the specific implementation details of the RNN 1000.In addition to the basic CNN and RNN networks described, changes to those networks can also be enabled. An example RNN variant is a long short-term memory (LSTM) RNN. LSTM RNN can learn the long-term dependencies that may be necessary for processing long language sequences. A variant of CNN is a convolutional deep belief network, which has a structure similar to CNN and is trained in a manner similar to a deep belief network. The Deep Belief Network (DBN) is a generative neural network composed of multiple layers of random (random) variables. Greedy unsupervised learning can be used to train the DBN layer by layer. The learned weights of the DBN can then be used to provide a pre-trained neural network by determining a set of optimal initial weights for the neural network.Figure 11 illustrates the training and deployment of a deep neural network. Once a given network has been structured for the task, the training data set 1102 is used to train the neural network. Various training blocks 1104 have been developed to enable hardware acceleration of the training process. For example, the machine learning block 604 of FIG. 6 may be configured as a training block 604. 
The training block 604 can be linked to the untrained neural network 1106 and enables the parallel processing resources described herein to be used to train the untrained neural network to generate the trained neural network 1108.To start the training process, the initial weights can be selected randomly or by pre-training using a deep belief network. Then perform the training cycle in a supervised or unsupervised manner.Supervised learning is a learning method in which, for example, when the training data set 1102 includes this input paired with the desired output of the input, or when the training data set includes an input with a known output and the output of the neural network is manually graded In the case of, the training is performed as a mediation operation. The network processes the input and compares the resulting output with a set of expected or desired outputs. Then the error is propagated back through the system. The training block 1104 can be adjusted to adjust the weight of the untrained neural network 1106. The training block 1104 can provide tools to monitor how well the untrained neural network 1106 converges towards a model suitable for generating correct answers based on known input data. When adjusting the weights of the network to improve the output generated by the neural network, the training process occurs repeatedly. The training process can continue until the neural network reaches the statistically desired accuracy associated with the trained neural network 1108. The trained neural network 1108 can then be deployed to implement any number of machine learning operations.Unsupervised learning is a learning method in which the network tries to use unlabeled data to train itself. Therefore, for unsupervised learning, the training data set 1102 will include input data without any associated output data. The untrained neural network 1106 can learn groupings within unlabeled inputs and can determine how individual inputs relate to the overall data set. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1107 that can perform operations useful in reducing data dimensionality. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in the input data set that deviate from the normal data pattern.Variations in supervised and unsupervised training can also be used. Semi-supervised learning is a technique in which the training data set 1102 includes a mixture of labeled data and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is used continuously to further train the model. Incremental learning enables the trained neural network 1108 to adapt to the new data 1112 without forgetting to instill the knowledge in the network during the initial training.Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single computing node. A distributed network of computing nodes can be used to speed up the training process instead of using a single computing node.Fig. 12 is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of neural networks. 
The distributed computing nodes may each include one or more host processors and one or more general processing nodes, such as a highly parallel general graphics processing unit 700 as shown in FIG. 7. As illustrated, distributed learning may perform model parallelism 1202, data parallelism 1204, or a combination of model and data parallelism 1204.In model parallelism 1202, different computing nodes in a distributed system can perform training calculations for different parts of a single network. For example, each layer of the neural network can be trained by different processing nodes of the distributed system. The benefits of model parallelism include the ability to scale to extremely large models. Splitting the calculations associated with different layers of the neural network enables the training of very large neural networks, where the weights of all layers will not be assembled into the memory of a single computing node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.In data parallelism 1204, different nodes of the distributed network have complete instances of the model, and each node receives a different part of the data. Then combine the results from different nodes. Although different methods for data parallelism are possible, data parallel training methods require techniques that combine results and synchronize model parameters between each node. Exemplary methods for combining data include parameter averaging and update-based data parallelism. Parameters are averaged on a subset of the training data to train each node, and the global parameters (eg, weight, bias) are set to the average of the parameters from each node. The parameter averaging uses a central parameter server that maintains parameter data. Update-based data parallelism is similar to parameter averaging, except that updates to the model are transmitted instead of parameters from nodes to the parameter server. In addition, update-based data parallelism can be performed in a decentralized manner, where updates are compressed and transferred between nodes.For example, the combined model and data parallelism 1206 can be implemented in a distributed system where each computing node includes multiple GPUs. Each node can have a complete instance of the model, where a separate GPU within each node is used to train different parts of the model.Distributed training has increased overhead compared to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques for reducing the overhead of distributed training, including techniques for enabling high-bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.Demonstration of machine learning applicationsMachine learning can be applied to solve a variety of technical problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. The applications of computer vision range from reproducing human visual capabilities (such as recognizing faces) to creating new categories of visual capabilities. For example, computer vision applications can be configured to recognize sound waves from vibrations induced in objects visible in the video. 
Parallel processor-accelerated machine learning enables the use of significantly larger training data sets than previously feasible training data sets to train computer vision applications, and enables the use of low-power parallel processors to deploy inference systems.Machine learning accelerated by parallel processors has autonomous driving applications, including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train a driving model based on a data set that defines an appropriate response to a specific training input. The parallel processors described herein can enable rapid training of increasingly complex neural networks for autonomous driving solutions, and enable low-power inference processors to be deployed in mobile platforms suitable for integration into autonomous vehicles.Deep neural networks accelerated by parallel processors have enabled machine learning methods for automatic speech recognition (ASR). ASR includes the creation of functions to calculate the most probable language sequence given the input sound sequence. Accelerated machine learning using deep neural networks has made it possible to replace the Hidden Markov Model (HMM) and Gaussian Mixture Model (GMM) previously used for ASR.Machine learning accelerated by parallel processors can also be used to accelerate natural language processing. Automatic learning programs can use statistical inference algorithms to produce models that are robust to erroneous or unfamiliar inputs. Exemplary natural language processor applications include automatic machine translation between human languages.The parallel processing platform used for machine learning can be divided into a training platform and a deployment platform. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single-node training and multi-node multi-GPU training. Exemplary parallel processors suitable for training include the highly parallel general graphics processing unit 700 of FIG. 7 and the multi-GPU computing system 800 of FIG. 8. In contrast, deployed machine learning platforms generally include low-power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.Figure 13 illustrates an exemplary inference system on chip (SOC) 1300 suitable for performing inference using a trained model. The SOC 1300 can integrate processing components, which include a media processor 1302, a vision processor 1304, a GPGPU 1306, and a multi-core processor 1308. The SOC 1300 may additionally include on-chip memory 1305, which may enable a shared on-chip data pool that can be accessed by each of the processing components. The processing components can be optimized for low-power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of SOC 1300 can be used as part of the main control system for autonomous vehicles. Where the SOC1300 is configured for use in autonomous vehicles, the SOC is designed and configured to comply with the relevant functional safety standards of the deployment jurisdiction.During operation, the media processor 1302 and the vision processor 1304 may work in unison to accelerate computer vision operations. The media processor 1302 may enable low-latency decoding of multiple high-resolution (for example, 4K, 8K) video streams. 
The decoded video stream can be written to the buffer in the on-chip memory 1305. The visual processor 1304 may then parse the decoded video in preparation for processing the frames of the decoded video using the trained image recognition model and perform preliminary processing operations on the frames of the decoded video. For example, the visual processor 1304 can accelerate the convolution operation for CNN used to perform image recognition on high-resolution video data, while the back-end model calculation is performed by the GPGPU 1306.The multi-core processor 1308 may include control logic to assist in the sequencing and synchronization of shared memory operations and data transfers performed by the media processor 1302 and the vision processor 1304. The multi-core processor 1308 can also act as an application processor to execute software applications that can use the inferred computing power of the GPGPU 1306. For example, at least part of the navigation and driving logic may be implemented in software executed on the multi-core processor 1308. Such software can directly issue computational workloads to the GPGPU 1306, or can issue computational workloads to the multi-core processor 1308, which can offload at least part of those operations to the GPGPU 1306.The GPGPU 1306 may include a computing cluster, such as a low-power configuration of the computing clusters 706A-706H within the highly parallel general graphics processing unit 700. The computing cluster within the GPGPU 1306 can support instructions that are specifically optimized to perform inference calculations on the trained neural network. For example, GPGPU 1306 may support instructions for performing low-precision calculations, such as 8-bit and 4-bit integer vector operations.Dedicated hardware for efficient machine learning operationsThe embodiments described herein provide high-level machine learning calculation primitives that can be used to abstract many low-level calculation details for performing machine learning calculations. The high-level primitives described in this article enable software logic to request high-level machine learning operations while abstracting the low-level implementation details of those operations. For example and in one embodiment, software logic may use a given set of filters to request a convolution operation of the image. A single high-level instruction can be executed that has operands to define the address of the buffer storing the filter and/or kernel data and the input and output buffer addresses. The GPGPU can then divide the high-level convolution instructions into multiple sub-operations performed by the underlying computing unit of the GPGPU. In one embodiment, direct hardware support is provided for one or more subroutines of the Basic Linear Algorithm Subroutine (BLAS), although the embodiment may provide hardware support for other subroutine libraries. The compiler logic and the associated runtime library can compile and utilize the source code of the supported advanced calculation subroutines, and output the compiled source code (which is called into the machine learning macro instruction unit).Instructions and logic used to perform computational operations for machine learningHardware accelerators for computer vision and machine learning can improve the energy efficiency of applications such as object, face, and voice recognition by orders of magnitude. 
These accelerators use an array of interconnected processing elements (PE), where multiply-add circuits are used to map the dominant performance, area, and energy of the key algorithms used for CNN computing operations. For example, some machine learning hardware accelerators use narrow bit width (16b) fixed point multiply-add data path building blocks to meet the strict memory, area, and power budgets of SoCs in low-power or embedded spaces. It is possible to achieve better result quality for certain data sets and algorithms with the higher dynamic range provided by floating point numbers/calculations, while still maintaining the same memory footprint (16b operands). Previous hardware solutions used to accommodate both types of numerical calculations used separate fixed-point and floating-point data paths or PEs, causing high area costs to achieve this flexibility. In contrast, the embodiments described herein provide an integer/floating point fusion multiply-add and multiply-accumulate data path that utilizes the existing signed integer multiply-add circuit to implement the merged floating-point mantissa multiply-add operation. In one embodiment, by adding only the circuitry required for alignment/normalization shift and exponent units, floating-point support is enabled in the combined floating-point/integer unit without increasing the input/output data width and data memory footprint space. A single control signal is used to switch between floating-point and integer calculation modes on a per-cycle basis.The combined integer/floating point unit provided by the embodiment is supplemented with multiple types of machine learning acceleration units that can be integrated into the GPGPU. The embodiments described herein provide logic to enable additional instructions that combine fusion-multiply-add operations with neural network activation functions, such as modified linear unit functions (RELU), sigmoid functions, or hard sigmoid functions.One embodiment enables the extension of 16-bit floating point encoding to support alternative encodings from the standard IEEE 754 half-precision floating point format. The IEEE half-precision floating-point format specifies a 1-bit sign, 5-bit exponent, and 10-bit fractional part. The embodiments described herein may selectively support alternative encoding of FP16 data based on the mode of the data to be encoded. In one embodiment, the supported alternative format specifies a 1-bit symbol with an 8-bit exponent and a 7-bit fractional component. One embodiment allows encoding with a 1-bit sign, a 3-bit exponent, and a 12-bit fractional component. In such embodiments, different instruction sets support different floating-point encodings, allowing developers to select encodings based on instructions specified in the program code. In one embodiment, when rounding or down-sampling floating-point data, different floating-point encodings may be used, for example, from an accumulated 32-bit floating-point value to a 16-bit value.The merged floating-point unit described herein can selectively perform 16-bit integer or floating-point operations on a per-cycle basis. One embodiment enables dynamic reconfiguration of the floating point unit described herein to enable multi-format support. For example, using a multi-channel configuration, a 16-bit integer or floating-point unit can be configured to perform two-channel 32-bit operations or four-channel 64-bit operations. 
Such logic enables floating-point logic optimized for low-precision inference operations to be clustered for higher-precision training operations.One embodiment provides random rounding units and statistical accumulators for low-precision networks. Random rounding enables the increased accuracy of classic quantization and rounding of low-precision deep neural networks. The rounding unit can work in different modes. The first mode is a random mode that uses a random number generator to control the rounding unit. The second mode uses the probability distribution of the output on the subsequent input and utilizes a near data statistical estimator unit coupled to the GPGPU memory.The techniques described herein can be implemented in a general-purpose computing system with machine learning optimization provided via a machine learning accelerator unit. The multiprocessor provided by the embodiments described herein is shown in FIG. 14.FIG. 14 is a block diagram of a multi-processor unit 1400 according to an embodiment. The multi-processor unit 1400 may be a modification of the graphics multi-processor 234 of FIG. 2D. The multi-processor unit 1400 includes an acquisition and decoding unit 1402, a branch unit 1404, a register file 1406, a thread manager 1406, a single instruction multi-thread unit (SIMT unit 1410), and a voltage and frequency manager 1420. The fetching and decoding unit 1402 may fetch instructions for execution by the multi-processor unit 1400. The branch unit 1404 may calculate the instruction pointer adjustment based on the executed jump instruction. The register file 1406 can store general and architectural registers used by the SIMT unit 1410. The thread manager 1406 can distribute and redistribute threads among the computing units of the SIMT unit 1410. In one embodiment, the SIMT unit 1410 is configured to execute a single instruction as multiple threads, where each thread of the instruction is executed by a separate computing unit. In one embodiment, the calculation units 1411 to 1418 each include an integer ALU (for example, ALU 1411A-1418A) and a floating point unit (for example, FPU 1411B-1418B). The voltage and frequency of each calculation unit 1411-1418 in the SIMT unit 1410 can be dynamically managed by the voltage and frequency manager 1420. When the components of the calculation unit are enabled and disabled, the voltage and frequency manager 1420 can increase or decrease The voltage and clock frequency supplied to various computing units.In some previously enabled configurations, each computing unit can execute a single thread of integer instructions or floating point instructions. If any of the ALU 1411A-1418A is assigned tasks to execute threads of integer instructions, the corresponding FPU1411B-FPU1418B is not available for threads executing floating-point instructions, and can be power gated during the operation of the corresponding ALU1411A-ALU 1418A control. For example, when ALU 1411A can execute threads of integer instructions and FPU 1413B executes threads of floating-point instructions, FPU 1411B is power-gated while ALU 1411A is active. The embodiments described herein overcome such limitations by, for example, enabling ALU 1411A to execute threads of instructions, while FPU 1411B executes threads of different instructions. 
In addition, one embodiment provides support for mixed precision or mixed data type operands, so that a single computing unit can perform operations on instructions with floating-point and integer operands and/or operands with different precisions at the same time.The embodiments described herein enable increased operating throughput for a cluster of computing units by making all logic units within each computing unit available to perform calculations. In such an embodiment, the logic unit within the calculation unit that is designed to selectively perform calculations with multiple precisions or multiple data types can be configured for each precision or data type supported by the calculation unit. Perform multiple simultaneous operations. For a given calculation unit 1411-1418, ALU1411A-1418A can perform integer operations, while FPU 1411B-1418B performs floating-point operations. These operations can be performed for a single instruction or for multiple instructions. In one embodiment, a new class of mixed-precision instructions is enabled, in which one or more operands have one data type or precision, and one or more different operands have different data types or precisions. For example, instructions can accept two or more multi-element operands including floating-point and integer data types, and a single instruction is executed on a per-data type or per-precision basis.Reconfigurable 16-bit floating point/integer fusion multiply-add unitThe logic unit design provided by the embodiments described herein has single-cycle and multi-cycle latency, and is compatible with multiply-add (e.g., 3-operand input with no dependencies across cycles) and multiply-accumulate (e.g., cross-cycle Data-dependent 2-operand input) single-cycle throughput of the two. On the contrary, the logic unit design known in the art implements fusion multiply-add without considering multi-cycle latency and single-cycle throughput multiply-accumulate operations, which may be the execution of key machine learning operations (such as dot product operations) Limit factor.One embodiment described herein provides a merged integer/floating point fusion multiply-add data path, which utilizes existing signed integer multiply-add circuits to also implement floating-point mantissa multiply-add operations. In the case of adding only the circuits required for the alignment/normalization shift and exponent unit, floating point support is enabled. The input/output data width and data memory footprint remain the same, where only a single control signal is required to switch between the two calculation modes on a per cycle basis.One embodiment provides a combined 16-bit integer/floating point fusion multiply-add design, which improves on the conventional single-cycle design with separate integer/floating point data paths. The design described in this article implements a combined int16/float16 data path multiply-add circuit, which reduces the total area by as much as 29%. One embodiment provides an improved floating-point data path with alignment only for the addend, along with a combined negation and rounding incrementer that contributes 11% to the total area reduction. One embodiment provides a multiply-accumulate variant with two inputs and dual-cycle latency, single-cycle throughput. 
One embodiment provides an alternative circuit that significantly increases the accumulation accuracy by doubling the width of the accumulator in the increased area at a cost of only 11%.Figures 15A-15B show the design of a logic unit for performing integer and floating point fusion multiply-add operations according to an embodiment. Figure 15A shows a conventional design of a logic unit 1500 that enables fusion multiplication-add operations while maintaining complete intermediate product accuracy and range. In the IEEE half-precision floating point (float16) or signed 16b integer (int16) mode, a fusion multiply-add operation (o=a*b+c) is performed on three 16-bit input operands 1501. The input is provided to a 16-bit floating point data path 1510 or a 16-bit integer data path 1520, where the output port (o 1530) selects the appropriate result (f16 1518 or i16o 1528) based on the operating mode 1532. The int 16 result (i16o 1528) is selected and rounded to the nearest high half of the 32b signed integer result (isum 1525) generated by the signed 16bx16b multiplier 1521 and 32b adder 1522. Float16 data path 1510 right shift (1511) unsigned 11bx11b multiplier 1617 product of the smaller mantissa and right shift the addend before processing the product via the 22-bit mantissa adder 1513 for use in the alignment shifter 1512A Alignment. The 22-bit leading zero presensor (LZA 1519) predicts the position of the most significant bit position of the floating-point addition result performed by the 22-bit mantissa adder 1513 based on the input to the adder. Before providing the intermediate result to the rounding logic 1516, a left shift is performed by the normalized shifter 1515 (1514).FIG. 15B is a block diagram of a multiply-add logic unit 1540 according to an embodiment. The logic unit 1540 of FIG. 15B maintains a separate 16-bit floating point/integer circuit while improving the floating point data path of the logic unit 1500. In one embodiment, the design of the logic unit 1540 removes the alignment shifter 1512B from the critical path by performing alignment only on the addend (in parallel with the multiplication operation (1541)). The wider 33-bits and only require 11-bit incrementers for the upper ones. In addition, for subtraction operations, the output of the adder can be inverted to produce an unsigned mantissa. In one embodiment, by combining the increment operation and the final rounding incrementer (1542), the incrementer is removed from the critical path of the data path of the logic unit 1540. In contrast, the logic unit 1500 of FIG. 15A requires the incrementer to complete any required twos complement inversion operations after the adder. The reduction of the critical path using the 16-bit floating point data path of the logic unit 1540 results in smaller gates and allows the 11% area reduction associated with the logic unit 1500 while maintaining the same single cycle latency.Figure 16 shows a merged multiply-add logic unit 1600 with merged floating point and integer data paths according to an embodiment. The 16-bit x 16-bit signed multiplier 1602A and 32-bit adder 1604 of the integer data path are reused for floating-point mantissa operations, where the upper operand bits are gated to produce an 11-bit mantissa (1602B) result. When the floating point mode is enabled, the input switches 1601A-1601C are used to redirect the upper 6 bits of the input operands (a, b, c) to the exponent unit 1608. 
The sign and exponent values from the input are packed and provided to the exponent unit 1608 via a fixed 3-bit sign operand bus 1609A and a 15-bit exponent bus 1609B. For 16-bit floating point operations, the shared 32-bit adder uses a 1-bit incrementer 1605 to create the high-order 1606 of the 33-bit sum(s). The bypass circuits (1610A, 1610B) in the exponent unit 1608 and in the alignment shifter 1612 and normalization shifter 1613 ensure fixed alignment/normalization, which has minimal switching in those units used for integer mode Active, and the zero high mantissa bit ensures that there is no switching activity in the unused part of the multiplier in floating point mode. The rounding logic 1616 and the incrementer of the floating point data path are reused in integer mode to calculate the lower 10 bits of the integer result i16o by rounding. The upper 6 bits of i16o are calculated by mapping this operation to the existing exponential incrementer 1611, which also performs any rounding overflow operations from the mantissa data path in floating point mode. When processing is complete, a 16-bit floating point or integer value can be provided via output 1630.FIG. 17A shows a logic unit 1700 including a merge calculation circuit to perform floating-point and integer fusion-multiply-accumulate operations according to an embodiment. The logic unit 1700 includes an exponent unit 1708 and a mantissa unit 1709, two 16-bit input ports 1701 and a 16-bit output port 1730. The input port 1701 includes a switch for switching the sign bit and exponent bit of input data to the exponent unit 1708. The exponent unit 1708 and the mantissa unit 1709 are used when performing integer operations. In one embodiment, the logic unit supports 8.8 input and 16.0 output formats for 16-bit fixed point mode. The logic unit 1700 supports dual-cycle latency and single-cycle throughput requirements. Some of the circuits shown are shared between operating modes, including signed multipliers 1702A-1702B and 32-bit adders 1704 for both integer and floating point modes. During the accumulation in the second cycle, the 16-bit accumulator input 1703A is asserted, where the value of the accumulator is provided to the 32-bit adder 1704. The upper 10 bits of the accumulator input 1703A (for example, c[15:6]) are dedicated to 16-bit integer operations. For the two calculation modes, multiplication is performed in the first cycle, and addition/rounding is performed in the second cycle.The logic unit 1700 of FIG. 17A uses three key techniques to enable efficient merge design. First, the direct pipelineization of the single-cycle merge design of Figure 16 for the accumulation operation will reduce the throughput by half in the first cycle through addition alignment, or calculate the sum by right shifting in the critical path of the second cycle. 33b aligns to increase the cycle time. In contrast, the design of the logic unit 1700 utilizes the timing/area non-criticality of the exponent unit 1708 to pre-calculate the larger (or smaller) mantissa and right shift amount of the alignment shifter 1713. 
In one embodiment, the logic unit 1700 performs the two-cycle operation while maintaining single-cycle throughput by feeding the output back to the second cycle as the addend input, selecting only the smaller mantissa for the 22-bit alignment, and pre-computing the smaller mantissa and right-shift amount in the first cycle using the multiplier output and the accumulator exponent calculated in the previous stage.

Second, the round-to-nearest operation in 16-bit integer mode exploits the 8.8 fixed-point format and eliminates the need to map integer rounding onto the floating-point rounding incrementer. Before the adder, multiplexer logic 1705 inserts a 1 at bit position 15 instead of a 0 to achieve the same rounding operation.

Third, flip-flops are reused for mutually exclusive signals between the two modes, such as the exponent calculation (e.g., Eun 1707, right shift 1710) and the upper 10 bits of the product (1711). Timing path reduction in the second cycle is also achieved by combining the negation and rounding incrementers and by using far/near-path-based optimization to shorten the critical paths through the alignment shifter 1713 and the normalization shifter 1714.

As shown in FIG. 17B, merely doubling the accumulator width to 32 bits significantly increases the accuracy of the two-cycle multiply-accumulate design. The accumulator can accumulate a 16-bit integer result in a 16.16 fixed-point format, and can accumulate 16-bit floating-point results into an intermediate result with a 5-bit exponent and a 22-bit mantissa (the implicit leading 1 is not stored). In various embodiments, the 22-bit mantissa of the intermediate result can be rounded, truncated, or quantized to an IEEE standard mantissa. The design of the logic unit 1740 largely limits the cost of the doubled accumulator to the output flip-flops and the final incrementer in the mantissa data path, because the remaining data path after the multiplier already accommodates the additional width of the product. In one embodiment, the higher accuracy enables rounding to be reduced to simple truncation when generating the 16-bit output 1750 from the 32-bit accumulator. The post-normalization exponent incrementer is removed from the exponent unit 1708 in the logic unit 1740. Instead, when the output of the adder is to be negated, the negation incrementer 1742 performs the final increment in the mantissa to compute the two's complement. During accumulation in the second cycle, the 32-bit accumulator input 1703B is asserted, and the value of the accumulator is provided to the 32-bit adder 1704. The upper 10 bits of the accumulator input 1703B (e.g., c[31:22]) are dedicated to 16-bit integer operations. Compared with the design of the logic unit 1700 of FIG. 17A, the combined total area of this design shows only an 11% area increase while doubling the accumulator precision.

Although the above description is given for 16-bit operands, these techniques can readily be extended to larger data widths to achieve similar goals. In addition, although an IEEE half-precision output is described, the designs described herein can also be adapted to support non-standard floating-point formats. Furthermore, different non-standard floating-point formats can be used for intermediate values, as described below.

The above-described embodiments provide various implementations of a reconfigurable 16-bit floating-point/integer fused multiply-add unit, which offers multiple advantages over existing designs.
The proposed design does not affect the memory footprint of floating-point or integer storage, and increases only the area of the multiplier without changing the rest of the floating-point data path. By contrast, logic designs known in the art widen the entire floating-point significand/mantissa to the same width as the integer, with separate additional storage for the sign and exponent dedicated only to floating-point numbers, increasing the floating-point storage footprint and the size of the register file. Existing designs also widen the entire mantissa data path, which can cause a significant area increase. Both single-cycle (e.g., the logic unit 1600 of FIG. 16) and multi-cycle (e.g., the logic unit 1700 of FIG. 17A and the logic unit 1740 of FIG. 17B) designs are provided, where the multi-cycle designs generate an output every cycle after the initial latency. The logic unit 1740 of Figure 17B provides a merged floating-point/integer multiply-accumulate design with a local accumulator twice the width of the input operands. This enables much higher accumulation accuracy for operations like dot products without affecting the memory storage footprint of the input operands, while impacting only a small part of the design (for a total area impact of only 11%). In addition, each logic unit maps a portion of the integer operation onto the existing exponent data path to maximize circuit reuse when reconfigured for integer mode. Furthermore, for floating-point operations involving subtraction, the logic unit 1540 of FIG. 15B and the logic unit 1700 of FIG. 17A merge the two's complement increment into the rounding increment for reduced delay and area.

Machine learning data processing system and acceleration logic

The multiprocessor unit 1400 of FIG. 14 and one or more of the floating-point/integer logic units of FIGS. 15A-17B can be used as the building blocks of a machine learning data processing system, which includes hardware, software, and firmware optimized to perform the types of compute operations commonly performed when training or inferencing with deep neural networks. Figures 18A-18B show a data processing system and associated compute and logic units used to perform accelerated training and inference operations for machine learning, for example via the use of deep neural networks. Figure 18A shows an exemplary machine learning data processing system provided by the embodiments described herein. Figure 18B shows the components of a machine learning accelerator according to one embodiment.

The data processing system 1800 of FIG. 18A is a heterogeneous processing system having a processor 1802, a unified memory 1810, and a GPGPU 1820 that includes machine learning acceleration logic. The processor 1802 and the GPGPU 1820 may be any of the processors and GPGPU/parallel processors described herein. The processor 1802 may execute instructions for a compiler 1815 stored in system memory 1812. The compiler 1815 executes on the processor 1802 to compile source code 1814A into compiled code 1814B. The compiled code 1814B may include code that can be executed by the processor 1802 and/or code that can be executed by the GPGPU 1820.
During compilation, the compiler 1815 may perform operations to insert metadata, including hints about the level of data parallelism present in the compiled code 1814B and/or hints about the data locality associated with threads to be dispatched based on the compiled code 1814B. The compiler 1815 may include the information required to perform such operations, or the operations may be performed with the help of a runtime library 1816. The runtime library 1816 may also assist the compiler 1815 in compiling the source code 1814A, and may include instructions that are linked with the compiled code 1814B at runtime to facilitate execution of the compiled instructions on the GPGPU 1820.

The unified memory 1810 represents a unified address space that can be accessed by the processor 1802 and the GPGPU 1820. The unified memory includes the system memory 1812 as well as GPGPU memory 1818. The GPGPU memory 1818 includes the GPGPU local memory 1834A-1834B within the GPGPU 1820 and may also include some or all of the system memory 1812. For example, the compiled code 1814B stored in the system memory 1812 may also be mapped into the GPGPU memory 1818 for access by the GPGPU 1820.

The GPGPU 1820 includes multiple compute blocks 1824A-1824N, which may be instances of the processing clusters 214A-214N of FIG. 2A and may include one or more instances of the graphics multiprocessor 234 described herein. In various embodiments, the compute blocks 1824A-1824N include compute units having one or more of the logic units of FIGS. 15B-17B. The GPGPU 1820 also includes a power and performance module 1826, a cache memory 1827, and a set of registers 1825 that can serve as shared resources for the compute blocks 1824A-1824N. In one embodiment, the registers 1825 include directly and indirectly accessible registers, where the indirectly accessible registers may be optimized for matrix computation operations. The power and performance module 1826 may be configured to adjust power delivery and clock frequencies for the compute blocks 1824A-1824N so as to power-gate idle components within the compute blocks 1824A-1824N under heavy workloads. The GPGPU 1820 includes GPGPU local memory 1828, which consists of physical memory modules that share a graphics card or multi-chip module with the GPGPU 1820.

In one embodiment, the GPGPU 1820 includes hardware logic comprising a fetch and decode unit 1821, a scheduler controller 1822, and a machine learning accelerator 1823. The fetch and decode unit 1821 includes logic to fetch and decode instructions, including machine learning-specific instructions, that can define complex, customizable behavior. Such instructions may cause the compute logic to schedule, via the scheduler controller 1822, a set of operations for execution on one or more of the compute blocks 1824A-1824N. In one embodiment, the scheduler controller 1822 is an ASIC configurable to perform advanced scheduling operations. In one embodiment, the scheduler controller 1822 is a microcontroller, or a low-energy-per-instruction processing core, capable of executing instructions loaded from a firmware module.

In one embodiment, some functions to be performed by the compute blocks 1824A-1824N may be directly scheduled to, or offloaded onto, the machine learning accelerator 1823.
The machine learning accelerator 1823 includes processing element logic configured to efficiently perform matrix and other computation operations commonly performed in machine learning.

In some embodiments, the GPGPU 1820 additionally includes a statistics unit 1829 that can be configured as a near-data compute unit. For example, the statistics unit 1829 may be integrated into, or distributed across, one or more memory controllers of the GPGPU local memory 1828. In one embodiment, the statistics unit 1829, when enabled by the machine learning accelerator 1823, can be used to determine the probability distribution of weight or activation map data written to or read from the GPGPU local memory 1828 while performing machine learning operations. The statistics unit 1829 determines, based on address and data patterns during memory accesses, whether the data accessed in the GPGPU local memory 1828 falls within one or more statistical distributions (e.g., Gaussian, uniform, Poisson, etc.). In one embodiment, statistical information (e.g., mean, median, mode, standard deviation, etc.) can be collected over a sampling period for at least a subset of the memory accesses. The statistics unit 1829 may be configured such that collecting statistical information does not significantly increase the latency of memory accesses performed through the memory controller hosting the statistics unit 1829. The statistical information may be provided periodically to the machine learning accelerator 1823, or the machine learning accelerator 1823 may request the data from the statistics unit. In one embodiment, the statistics unit 1829 may check the data associated with memory accesses against a set of known likely distributions. A vector including a set of probabilities associated with each of the known likely distributions may be provided to the machine learning accelerator 1823 periodically or on request. In various embodiments, the machine learning accelerator 1823 may use the probability and/or statistical information provided by the statistics unit 1829 for a variety of operations. In one embodiment, as further described in FIGS. 18B and 20, the machine learning accelerator 1823 may use the data provided by the statistics unit 1829 to perform stochastic rounding during quantization for low-precision neural networks.

The machine learning accelerator 1823 of FIG. 18A is shown in further detail in FIG. 18B. In one embodiment, the machine learning accelerator 1823 includes an activation instruction module 1832, an FPU encoding and configuration module 1834, a stochastic quantization unit 1838, and a cache memory 1836 shared among the various modules within the machine learning accelerator 1823.

The activation instruction module 1832 includes logic to sequence the execution of a combined fused multiply-add and activation in response to a single instruction. In response to the decoding of an FMAC or FMADD-plus-activation-function instruction on the GPGPU 1820, the scheduler controller 1822 may schedule operations via the machine learning accelerator 1823. Via the activation instruction module 1832, the machine learning accelerator 1823 can perform a set of fused multiply-add or fused multiply-accumulate operations on two or three input operands per thread or vector element and, for each thread or element, provide the output to hardware logic configured to execute one of multiple selectable activation functions.
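This combined instruction flow can be illustrated with a short Python model. The sketch below is not the accelerator's implementation: the RELU form follows equation (1) below, while the sigmoid and hard-sigmoid variants are common definitions assumed here for illustration only.

```python
import numpy as np

ACTIVATIONS = {
    # RELU per equation (1); the sigmoid and hard-sigmoid definitions
    # are common variants assumed for this sketch.
    "relu": lambda x: np.maximum(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "hard_sigmoid": lambda x: np.clip(0.2 * x + 0.5, 0.0, 1.0),
}

def fmadd_activate(a, b, c, act="relu"):
    """Model of the combined instruction: a fused multiply-add produces
    an intermediate activation map, then the selected activation
    function is applied before the result is written back."""
    intermediate = np.asarray(a) * np.asarray(b) + np.asarray(c)
    return ACTIVATIONS[act](intermediate)
```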
Different activation functions can be associated with different instructions, or a single instruction can include a field enabling selection of the activation function. In one embodiment, the activation instruction module can perform a vector or winding operation to generate an intermediate FMADD or FMAC result and store the intermediate result in the cache memory 1836. The activation instruction module 1832 can then apply the activation function to the intermediate data. Exemplary supported activation functions include the rectified linear unit (RELU) function of equation (1), the sigmoid function of equation (2), and the hard sigmoid function of equation (3).

f(x) = max(0, x)  (1)

The FPU encoding and configuration module 1834 includes logic to define parameters for the dynamic configuration of the floating-point units within the compute blocks 1824A-1824N of the GPGPU 1820. In one embodiment, certain dynamic aspects of the merged integer/floating-point units of FIGS. 16 and 17A-17B can be configured via the FPU encoding and configuration module 1834. For example, the compute blocks 1824A-1824N can be over-provisioned to contain more compute units than can all be active at once within the power budget of the GPGPU 1820. The FPU encoding and configuration module 1834 can, however, configure the dynamic floating-point units to gate certain logic blocks so as to operate at reduced precision and reduced power consumption. The reduced precision and power requirements of each unit allow a larger number of units to be online, enabling a larger number of threads to perform lower-precision operations. For example, and in one embodiment, a logic unit that can be configured to perform 16-bit integer operations can instead be configured to perform 8-bit integer operations, reducing power requirements. In one embodiment, dual 8-bit integer operations can be performed, increasing throughput without significantly increasing power consumption. In one embodiment, multiple half-precision logic units can work in parallel to perform single-precision or double-precision floating-point operations. In one embodiment, higher-precision operations can be performed via multiple passes through a logic unit.

In one embodiment, the FPU encoding and configuration module 1834 may also configure the floating-point encoding methods supported by the floating-point units. In addition to the IEEE 754 floating-point standards for half-precision, single-precision, and double-precision encoding of floating-point values, a large number of alternative encoding formats can be supported based on the dynamic range of the data currently being processed. For example, based on the dynamic range and/or distribution of a given data set, data can be quantized more accurately from higher to lower precision by using more or fewer bits for the exponent or mantissa. In one embodiment, a supported alternative format specifies a 1-bit sign with an 8-bit exponent and a 7-bit fractional component. One embodiment allows encoding with a 1-bit sign, a 3-bit exponent, and a 12-bit fractional component. In such embodiments, different instruction sets support different floating-point encodings, allowing a developer to select an encoding based on the instructions specified in program code. In one embodiment, different floating-point encodings can also be used when rounding or down-sampling floating-point data, for example from an accumulated 32-bit floating-point value to a 16-bit value.
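For illustration, the following Python sketch packs and unpacks values in such non-standard formats. The 1/8/7 encoder simply truncates a float32 bit pattern (round-to-nearest, subnormals, and special values are omitted as simplifying assumptions), and the decoder handles normal numbers only.

```python
import struct

def encode_1_8_7(x: float) -> int:
    """Pack a value into the 1-bit sign / 8-bit exponent / 7-bit
    fraction format mentioned above by truncating the upper half of
    the float32 bit pattern (truncation, not round-to-nearest, is a
    simplification here)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def decode(h: int, e_bits: int, f_bits: int) -> float:
    """Decode an IEEE-like (sign, exponent, fraction) pattern with a
    parameterized split; normal numbers only, bias = 2**(e_bits-1)-1."""
    sign = -1.0 if (h >> (e_bits + f_bits)) & 1 else 1.0
    exp = (h >> f_bits) & ((1 << e_bits) - 1)
    frac = h & ((1 << f_bits) - 1)
    bias = (1 << (e_bits - 1)) - 1
    return sign * (1.0 + frac / (1 << f_bits)) * 2.0 ** (exp - bias)

# The same decoder reads the 1/3/12 split also described above:
# decode(h, e_bits=3, f_bits=12)
```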
In one embodiment, the statistics unit 1829 can be used to determine which 16-bit encoding is best suited to a given block of data.

In one embodiment, the machine learning accelerator 1823 additionally includes a stochastic quantization unit 1838 to enable stochastic quantization for machine learning operations. The stochastic quantization unit 1838 can be used to enable stochastic rounding during quantization operations. One embodiment uses a random number generator to enable stochastic rounding, where the fractional value determines the rounding probability. One embodiment utilizes the statistics unit 1829 to determine the probability distribution associated with the set of output data from a given layer of a neural network. For each layer, the probability density of the data values can be determined, characterized by statistical properties including the mean, standard deviation, and variance of the data determined for each layer of the neural network. Using such data, stochastic rounding can be performed in a manner that does not change the probability distribution of the data within each layer of the neural network.

FIG. 19 shows details of the activation instruction module 1832 according to an embodiment. The activation instruction module 1832 includes logic to sequence the execution of a combined fused multiply-add and activation in response to a single instruction. In response to the decoding of an FMAC/FMADD-plus-activation instruction by the fetch and decode unit 1821 of FIG. 18A, instruction execution may be dispatched to the activation instruction module 1832 via the machine learning accelerator 1823. Upon receiving the instruction, the machine learning accelerator 1823 can use the fused multiply-add/fused multiply-accumulate thread scheduler unit 1902 to schedule the set of fused multiply-add or fused multiply-accumulate operations onto the units within the compute blocks 1824A-1824N. In one embodiment, the intermediate data output from the compute blocks 1824A-1824N may be stored in the cache memory 1836 within the machine learning accelerator 1823. In one embodiment, chunks of intermediate data may be processed in a streaming manner within the activation instruction module 1832. In one embodiment, the intermediate data may represent an activation map to which the non-linearity of an activation function is to be applied. The selected activation function can be applied by the activation function logic 1904A-1904N. The activation function may be selected based on the specific instruction processed by the activation instruction module 1832 or based on parameters supplied with the instruction. The specific instruction can be formatted according to any of the instruction formats described herein.

Floating-point operations include rounding at various points. Rounding is used in floating-point computation because floating-point numbers have a limited number of digits and cannot represent all real numbers exactly. Therefore, when a number is tasked with representing a value that requires more digits than the selected floating-point format allows, the leftover digits are omitted and the number is rounded to the nearest value representable in that format. Exactly which numbers can be represented depends on the floating-point format selected.

Various methods of rounding during floating-point computation can be implemented.
The embodiments described herein include hardware logic to perform stochastic rounding for machine learning operations. In contrast to other rounding methods (rounding to the nearest number, or strictly rounding up or down), the stochastic method rounds numbers randomly. The embodiments described herein enable stochastic rounding for the quantization of deep neural network data values. A rounding unit is provided that enables hardware stochastic rounding using one of multiple rounding modes. One embodiment uses a random number generator to enable stochastic rounding: the fractional value can be used to determine the rounding probability, and a random number can be compared with the rounding probability to determine which of the nearest representable values to round to during quantization. Alternatively, one embodiment utilizes statistical accumulator/estimator logic to determine the probability distribution associated with the set of output data from a given layer of a neural network. For each layer, the probability density of the distribution of data values can be determined, characterized by the mean, standard deviation, and variance of the data determined for each layer of the neural network. Using such data, stochastic rounding can be performed in a way that does not change the probability distribution of each layer of the neural network.

Fig. 20 shows a stochastic quantization unit 1838 according to an embodiment. In one embodiment, the stochastic quantization unit 1838 quantizes raw output data generated in one layer of a neural network into the format used by the next layer of the network. For example, the computation operations used to generate the output data can be processed at higher precision, and the results quantized to lower precision before being provided as input to the next layer. In one embodiment, the output 2002B from a given layer N is processed at, for example, 32 bits and quantized by the quantization unit 2004 into a 16-bit data type. The quantization operation can employ stochastic rounding, implemented via the stochastic rounding unit 2009. The quantized and rounded values can then be provided to the next layer (layer N+1) 2010 of the neural network.

In various embodiments, the stochastic quantization unit 1838 may perform stochastic rounding via the use of a random number generator 2006. In floating-point arithmetic, rounding aims to turn a given value x into a value z having a specified number of significant digits, where z is a multiple of a number m that depends on the magnitude of x. The number m is a power of the base (usually 2 or 10) of the floating-point representation. The number z is a representable value close to the value x. Whether the value x is rounded up or down to reach the value z is based on a random value selected by the random number generator 2006: the generated random value is compared with the fractional part of the value being rounded, and that fractional part serves as the probability of rounding up or down to the nearest representable value. The gap between representable values during quantization depends on the encoding format of the floating-point representation at that magnitude. As an example, if the quantization is to round to an integer value and the fractional value is 0.3, the probability of rounding up may equal 30% and the probability of rounding down may equal 70%.
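This rounding rule can be expressed directly in a few lines of Python. The sketch below is illustrative rather than the hardware unit: it rounds up with probability equal to the fractional part, so the expected value of the rounded result equals the input.

```python
import math
import random

def stochastic_round(x: float, rng=random.random) -> int:
    """Round x to one of the two nearest integers, choosing upward
    with probability equal to the fractional part -- e.g. 2.3 rounds
    up 30% of the time and down 70% of the time, so the rounding is
    unbiased in expectation."""
    lo = math.floor(x)
    frac = x - lo  # rounding probability comes from the fraction
    return lo + (1 if rng() < frac else 0)

# Averaging many stochastic roundings of 2.3 approaches 2.3, unlike
# deterministic round-to-nearest, which always returns 2.
```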
In such a scenario (where the random number generator 2006 is a properly verified true random number generator), the stochastic rounding unit 2009 will round up or down in proportion to the fractional value.

Alternatively, the stochastic rounding unit 2009 may utilize a statistical accumulator/estimator 2008. In one embodiment, the statistical accumulator/estimator 2008 is the near-data statistics unit 1829 shown in FIG. 18A. The statistical accumulator/estimator 2008 can analyze the output of the previous layers 2002A-2002B to determine the distribution associated with the neural network data. The stochastic rounding unit 2009 may then round the data during quantization so that the quantized data has a distribution similar to the pre-quantization data.

Figure 21 shows the FPU encoding and configuration module 1834 according to one embodiment. In one embodiment, the FPU encoding and configuration module 1834 includes an FPU configuration module 2102 and an FPU encoding module 2104. The FPU configuration module 2102 can configure a 16-bit integer logic unit to perform 8-bit integer operations, including dual 8-bit integer operations. In one embodiment, multiple half-precision logic units can work in parallel to perform single-precision or double-precision floating-point operations. The FPU encoding module 2104 can be used to configure the specific floating-point encoding format used within the compute blocks 1824A-1824N during data computation. In one embodiment, the FPU encoding module 2104 may configure one or more of the compute blocks 1824A-1824N in response to an instruction specifying that input or output data is stored in a non-standard floating-point format. The compute block executing the instruction can then be configured to interpret the data in the non-standard format before the instruction's operation is executed. In one embodiment, the FPU encoding module 2104 configures one or more of the compute blocks to use the floating-point encoding format that can store the data to be processed most efficiently. Such a determination can be made based in part on the probability and statistical information provided by the statistics unit 1829, which can function as a near-data compute unit located in the memory controller 2106 of the GPGPU local memory 1828.

Figure 22 shows logic 2200 for processing an instruction using a dynamically configurable compute unit according to an embodiment. The logic 2200 may be hardware or firmware logic within a GPGPU and/or GPGPU multiprocessor as described herein (such as the multiprocessor unit 1400 of FIG. 14 or the GPGPU 1820 of FIG. 18A). As shown at block 2202, the logic 2200 fetches and decodes a single instruction to perform combined multiply and add operations on a set of operands. As shown at block 2204, the logic 2200 may then issue the single instruction for execution by a dynamically configurable compute unit. As shown at block 2206, the logic 2200 may then configure one or more logic units of the compute unit to perform operations at the precision and data type of the operands. As shown at block 2208, the logic 2200 may then execute the single instruction in the compute unit to generate an output based on the multiply and add operations.

In one embodiment, the combined multiply and add operations performed at block 2202 may be fused floating-point operations that include a single rounding.
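The effect of a single rounding can be demonstrated numerically. In the hedged Python sketch below, float64 stands in for the full-precision intermediate of a fused float32 operation; the operand values are arbitrary and chosen only to expose the difference.

```python
import numpy as np

def fma_f32(a, b, c):
    """Fused: high-precision (here, float64) product and sum, then
    ONE rounding to float32."""
    return np.float32(np.float64(a) * np.float64(b) + np.float64(c))

def unfused_f32(a, b, c):
    """Unfused: the product is rounded to float32 before the add, so
    the result is rounded twice."""
    return np.float32(np.float32(a) * np.float32(b)) + np.float32(c)

a = b = np.float32(1.0000001)       # 1 + 2**-23 in float32
c = np.float32(-1.0000002)          # -(1 + 2**-22) in float32
# The fused form keeps the tiny residual (~1.42e-14); the unfused
# form loses it to the intermediate rounding and prints 0.0.
print(fma_f32(a, b, c), unfused_f32(a, b, c))
```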
For example, the multiply and add operations may be fused multiply-add or fused multiply-accumulate operations. The combined multiply and add operations can also be integer operations, in which case the integer operations can include a rounding operation between the multiply and the add. The rounding can be performed by a multiplexer in the logic unit that inserts a one instead of a zero at the most significant fractional bit position of the integer data type. The multiplexer can be positioned after the multiplier and before the adder in the logic unit.

In one embodiment, the dynamically configurable logic unit of block 2204 is a merged floating-point and integer logic unit that can be configured to perform integer or floating-point operations. For example, the dynamically configurable logic unit may be one of the logic unit 1600 of FIG. 16, the logic unit 1700 of FIG. 17A, or the logic unit 1740 of FIG. 17B. The compute unit may include multiple distinct instances of such logic units. In one embodiment, the logic unit is configurable on a per-cycle basis. In one embodiment, the logic unit is a first logic unit configured to perform a single-cycle fused multiply-add operation using a multiplier and an adder shared between the floating-point data path and the integer data path. In one embodiment, the logic unit is a second logic unit configured to perform a two-cycle fused multiply-accumulate operation with single-cycle throughput. In one embodiment, the logic unit is a third logic unit configured to perform a two-cycle fused multiply-accumulate operation, where the third logic unit includes an accumulator twice the bit width of the input and output operands. In one embodiment, the die area of the third logic unit is at most eleven percent larger than the die area of the second logic unit.

The dynamically configurable logic units described herein can be configured to perform integer or floating-point operations. In one embodiment, one or more of the logic units may be configured to perform operations at multiple different precisions. In one embodiment, a logic unit may perform operations at multiple different precisions via multi-cycle operation. In one embodiment, different floating-point encodings can be selected, including the IEEE 754 half-precision, single-precision, and double-precision floating-point formats. Non-standard floating-point formats can also be used, in which different bit allocations are used for the exponent and mantissa of a floating-point value.

In one embodiment, the output based on the multiply and add operations can then be further processed by an activation function. For example, in response to a single instruction, an FMADD or FMAC operation may be scheduled by the FMADD/FMAC thread scheduler unit, as shown in FIG. 19. The output of such operations may be activation map data that can be provided to activation function logic (for example, the activation function logic 1904 of FIG. 19) to generate neuron activation data.

Figure 23A shows logic 2300 for executing a machine learning instruction according to an embodiment. The logic 2300 may be hardware or firmware logic within a GPGPU and/or GPGPU multiprocessor as described herein (such as the multiprocessor unit 1400 of FIG. 14 or the GPGPU 1820 of FIG. 18A). As shown at block 2302, the logic 2300 fetches and decodes a single instruction to perform a set of machine learning operations via a machine learning accelerator unit.
The machine learning accelerator unit includes the elements of the machine learning accelerator 1823 described herein, including the activation instruction module 1832, the FPU encoding and configuration module 1834, and the stochastic quantization unit 1838 of FIG. 18B. As shown at block 2304, the logic 2300 may then issue the single instruction for execution by a set of dynamically configurable compute units. As shown at block 2306, the logic may then configure the set of compute units to perform the set of machine learning operations at a higher precision than the input and output of the operations. In one embodiment, this configuration is performed by the FPU configuration module described herein. The FPU configuration module may, for example, configure the compute units to perform a convolution operation on 16-bit floating-point matrix data using 32-bit intermediate data. As shown at block 2308, the logic 2300 may then quantize the higher-precision intermediate values to a lower precision before output, via the stochastic rounding logic within the machine learning accelerator. For example, stochastic rounding can be used to quantize 32-bit intermediate data to 16 bits for output.

FIG. 23B shows logic 2310 for configuring floating-point operations based on the distribution of neural network data according to an embodiment. In one embodiment, the logic 2310 includes the hardware and firmware logic and logic units described herein, including the stochastic quantization unit 1838 of FIGS. 18B and 20 and the FPU encoding and configuration module 1834 of FIG. 18B. The statistical accumulator/estimator 2008 of FIG. 20 is, in one embodiment, included in the statistics unit 1829 of FIG. 18A. The statistics unit 1829 may be a near-data compute unit included in a memory controller of the GPGPU, as shown in FIG. 21.

As shown at block 2312, using the statistics unit, the logic 2310 can determine a set of statistical metrics for neural network data stored in memory. The logic 2310 can then determine, via the statistical metrics, the distribution of the neural network data in memory, as shown at block 2314. In one embodiment, the logic 2310 may configure the floating-point encoding for the compute units used to perform a set of machine learning operations, as shown at block 2316. The logic 2310 may then configure the stochastic rounding logic within the machine learning accelerator to round based on the distribution, as shown at block 2318. The stochastic rounding logic may be configured to round based on the distribution so that the probability distribution of the quantized neural network data matches the pre-quantization data more closely than may be possible with stochastic rounding techniques based on a random number generator alone.

Additional exemplary graphics processing systems

Details of the above-described embodiments may be incorporated in the graphics processing systems and devices described below. The graphics processing systems and devices of FIGS. 24 to 37 show alternative systems and graphics processing hardware that can implement any and all of the techniques described above.

Additional exemplary graphics processing system overview

FIG. 24 is a block diagram of a processing system 2400 according to an embodiment. In various embodiments, the system 2400 includes one or more processors 2402 and one or more graphics processors 2408, and may be a single-processor desktop system, a multi-processor workstation system, or a server system having a large number of processors 2402 or processor cores 2407.
In one embodiment, the system 2400 is a processing platform incorporated in a system-on-chip (SoC) integrated circuit for use in a mobile, handheld, or embedded device.

Embodiments of the system 2400 may include, or be incorporated in, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, the system 2400 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. The data processing system 2400 may also include, be coupled with, or be integrated in a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, the data processing system 2400 is a television or set-top box device having one or more processors 2402 and a graphical interface generated by one or more graphics processors 2408.

In some embodiments, the one or more processors 2402 each include one or more processor cores 2407 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 2407 is configured to process a specific instruction set 2409. In some embodiments, the instruction set 2409 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 2407 may each process a different instruction set 2409, which may include instructions to facilitate emulation of other instruction sets. The processor core 2407 may also include other processing devices, such as a digital signal processor (DSP).

In some embodiments, the processor 2402 includes cache memory 2404. Depending on the architecture, the processor 2402 may have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 2402. In some embodiments, the processor 2402 also uses an external cache (for example, a level 3 (L3) cache or last level cache (LLC)) (not shown), which may be shared among the processor cores 2407 using known cache coherency techniques. A register file 2406 is additionally included in the processor 2402 and may include different types of registers for storing different types of data (for example, integer registers, floating-point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 2402.

In some embodiments, the processor 2402 is coupled to a processor bus 2410 to transmit communication signals, such as address, data, or control signals, between the processor 2402 and other components in the system 2400. In one embodiment, the system 2400 uses an exemplary "hub" system architecture, including a memory controller hub 2416 and an input/output (I/O) controller hub 2430. The memory controller hub 2416 facilitates communication between memory devices and other components of the system 2400, while the I/O controller hub (ICH) 2430 provides connections to I/O devices via a local I/O bus.
In one embodiment, the logic of the memory controller hub 2416 is integrated within the processor.

The memory device 2420 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, the memory device 2420 can operate as system memory for the system 2400, storing data 2422 and instructions 2421 for use when the one or more processors 2402 execute an application or process. The memory controller hub 2416 is also coupled with an optional external graphics processor 2412, which can communicate with the one or more graphics processors 2408 in the processor 2402 to perform graphics and media operations.

In some embodiments, the ICH 2430 enables peripheral devices to connect to the memory device 2420 and the processor 2402 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 2446, a firmware interface 2428, a wireless transceiver 2426 (e.g., Wi-Fi, Bluetooth), a data storage device 2424 (e.g., hard drive, flash memory, etc.), and a legacy I/O controller 2440 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 2442 connect input devices, such as a keyboard and mouse 2444 combination. A network controller 2434 may also be coupled with the ICH 2430. In some embodiments, a high-performance network controller (not shown) is coupled to the processor bus 2410. It will be appreciated that the system 2400 shown is exemplary and not limiting, as other types of differently configured data processing systems may also be used. For example, the I/O controller hub 2430 may be integrated in the one or more processors 2402, or the memory controller hub 2416 and the I/O controller hub 2430 may be integrated into a discrete external graphics processor, such as the external graphics processor 2412.

FIG. 25 is a block diagram of an embodiment of a processor 2500 having one or more processor cores 2502A-2502N, an integrated memory controller 2514, and an integrated graphics processor 2508. Those elements of FIG. 25 having the same reference numbers (or names) as elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The processor 2500 may include additional cores up to and including the additional core 2502N represented by the dashed-line boxes. Each of the processor cores 2502A-2502N includes one or more internal cache units 2504A-2504N. In some embodiments, each processor core also has access to one or more shared cache units 2506.

The internal cache units 2504A-2504N and the shared cache units 2506 represent the cache memory hierarchy within the processor 2500. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a level 2 (L2), level 3 (L3), level 4 (L4), or other level of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 2506 and 2504A-2504N.

In some embodiments, the processor 2500 may further include a system agent core 2510 and a set of one or more bus controller units 2516.
The one or more bus controller units 2516 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). The system agent core 2510 provides management functions for the various processor components. In some embodiments, the system agent core 2510 includes one or more integrated memory controllers 2514 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 2502A-2502N include support for simultaneous multithreading. In such embodiments, the system agent core 2510 includes components for coordinating and operating the processor cores 2502A-2502N during multithreaded processing. The system agent core 2510 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of the processor cores 2502A-2502N and the graphics processor 2508.

In some embodiments, the processor 2500 additionally includes a graphics processor 2508 to perform graphics processing operations. In some embodiments, the graphics processor 2508 is coupled with the set of shared cache units 2506 and with the system agent core 2510, which includes the one or more integrated memory controllers 2514. In some embodiments, a display controller 2511 is coupled with the graphics processor 2508 to drive graphics processor output to one or more coupled displays. In some embodiments, the display controller 2511 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 2508 or the system agent core 2510.

In some embodiments, a ring-based interconnect unit 2512 is used to couple the internal components of the processor 2500. However, alternative interconnect units may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, the graphics processor 2508 is coupled to the ring interconnect 2512 via an I/O link 2513.

The exemplary I/O link 2513 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 2518, such as an eDRAM module. In some embodiments, each of the processor cores 2502A-2502N and the graphics processor 2508 uses the embedded memory module 2518 as a shared last level cache.

In some embodiments, the processor cores 2502A-2502N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 2502A-2502N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 2502A-2502N execute a first instruction set and at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 2502A-2502N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption are coupled with one or more power cores having lower power consumption. In addition, the processor 2500 may be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 26 is a block diagram of a graphics processor 2600. The graphics processor 2600 may be a discrete graphics processing unit or a graphics processor integrated with a plurality of processing cores.
In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into processor memory. In some embodiments, the graphics processor 2600 includes a memory interface 2614 to access memory. The memory interface 2614 may be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, the graphics processor 2600 also includes a display controller 2602 to drive display output data to a display device 2620. The display controller 2602 includes hardware for one or more overlay planes of the display and the composition of multiple layers of video or user interface elements. In some embodiments, the graphics processor 2600 includes a video codec engine 2606 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including but not limited to Moving Picture Experts Group (MPEG) formats (such as MPEG-2), Advanced Video Coding (AVC) formats (such as H.264/MPEG-4 AVC), the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats (such as JPEG and Motion JPEG (MJPEG) formats).

In some embodiments, the graphics processor 2600 includes a block image transfer (BLIT) engine 2604 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 2610. In some embodiments, the GPE 2610 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, the GPE 2610 includes a 3D pipeline 2612 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act on 3D primitive shapes (e.g., rectangles, triangles, etc.). The 3D pipeline 2612 includes programmable and fixed-function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/Media subsystem 2615. While the 3D pipeline 2612 can be used to perform media operations, an embodiment of the GPE 2610 also includes a media pipeline 2616 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, the media pipeline 2616 includes fixed-function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of, or on behalf of, the video codec engine 2606. In some embodiments, the media pipeline 2616 additionally includes a thread-spawning unit to spawn threads for execution on the 3D/Media subsystem 2615. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/Media subsystem 2615.

In some embodiments, the 3D/Media subsystem 2615 includes logic for executing threads spawned by the 3D pipeline 2612 and the media pipeline 2616.
In one embodiment, the pipelines send thread execution requests to the 3D/Media subsystem 2615, which includes thread dispatch logic for arbitrating the various requests and dispatching them to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, the 3D/Media subsystem 2615 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

Additional exemplary graphics processing engine

Figure 27 is a block diagram of a graphics processing engine 2710 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 2710 is a version of the GPE 2610 shown in FIG. 26. Elements of FIG. 27 having the same reference numbers (or names) as elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 2612 and the media pipeline 2616 of FIG. 26 are illustrated. The media pipeline 2616 is optional in some embodiments of the GPE 2710 and may not be explicitly included within the GPE 2710. For example, and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 2710.

In some embodiments, the GPE 2710 couples with or includes a command streamer 2703, which provides a command stream to the 3D pipeline 2612 and/or the media pipeline 2616. In some embodiments, the command streamer 2703 is coupled with memory, which may be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, the command streamer 2703 receives commands from the memory and sends the commands to the 3D pipeline 2612 and/or the media pipeline 2616. The commands are directives fetched from a ring buffer that stores commands for the 3D pipeline 2612 and the media pipeline 2616. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 2612 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for the 3D pipeline 2612 and/or image data and memory objects for the media pipeline 2616. The 3D pipeline 2612 and the media pipeline 2616 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 2714.

In various embodiments, the 3D pipeline 2612 can execute one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 2714. The graphics core array 2714 provides a unified block of execution resources. Multi-purpose execution logic (e.g., execution units) within the graphics core array 2714 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments, the graphics core array 2714 also includes execution logic to perform media functions, such as video and/or image processing.
In one embodiment, in addition to graphics processing operations, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations. The general-purpose logic can perform processing operations in parallel or in conjunction with the general-purpose logic within the processor core(s) 2407 of FIG. 24 or the processor cores 2502A-2502N of FIG. 25.

Output data generated by threads executing on the graphics core array 2714 can be written to memory in a unified return buffer (URB) 2718. The URB 2718 can store data for multiple threads. In some embodiments, the URB 2718 may be used to send data between different threads executing on the graphics core array 2714. In some embodiments, the URB 2718 may additionally be used for synchronization between threads on the graphics core array and fixed-function logic within the shared function logic 2720.

In some embodiments, the graphics core array 2714 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of the GPE 2710. In one embodiment, the execution resources are dynamically scalable, so that execution resources may be enabled or disabled as needed.

The graphics core array 2714 couples with shared function logic 2720 that includes multiple resources shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 2720 are hardware logic units that provide specialized supplemental functionality to the graphics core array 2714. In various embodiments, the shared function logic 2720 includes, but is not limited to, sampler 2721, math 2722, and inter-thread communication (ITC) 2723 logic. Additionally, some embodiments implement one or more caches 2725 within the shared function logic 2720. A shared function is implemented where the demand for a given specialized function is insufficient to justify inclusion within the graphics core array 2714. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 2720 and shared among the execution resources within the graphics core array 2714. The precise set of functions that is shared between, and included within, the graphics core array 2714 varies between embodiments.

FIG. 28 is a block diagram of another embodiment of a graphics processor 2800. Elements of FIG. 28 having the same reference numbers (or names) as elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the graphics processor 2800 includes a ring interconnect 2802, a pipeline front end 2804, a media engine 2837, and graphics cores 2880A-2880N. In some embodiments, the ring interconnect 2802 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.

In some embodiments, the graphics processor 2800 receives batches of commands via the ring interconnect 2802. The incoming commands are interpreted by a command streamer 2803 in the pipeline front end 2804. In some embodiments, the graphics processor 2800 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 2880A-2880N.
For 3D geometry processing commands, the command streamer 2803 supplies the commands to a geometry pipeline 2836. For at least some media processing commands, the command streamer 2803 supplies the commands to a video front end 2834, which couples with a media engine 2837. In some embodiments, the media engine 2837 includes a Video Quality Engine (VQE) 2830 for video and image post-processing and a multi-format encode/decode (MFX) 2833 engine to provide hardware-accelerated media data encoding and decoding. In some embodiments, the geometry pipeline 2836 and the media engine 2837 each generate execution threads for the thread execution resources provided by at least one graphics core 2880A.

In some embodiments, the graphics processor 2800 includes scalable thread execution resources featuring modular cores 2880A-2880N (sometimes referred to as core slices), each having multiple sub-cores 2850A-2850N, 2860A-2860N (sometimes referred to as core sub-slices). In some embodiments, the graphics processor 2800 can have any number of graphics cores 2880A through 2880N. In some embodiments, the graphics processor 2800 includes a graphics core 2880A having at least a first sub-core 2850A and a second sub-core 2860A. In other embodiments, the graphics processor is a low-power processor with a single sub-core (e.g., 2850A). In some embodiments, the graphics processor 2800 includes multiple graphics cores 2880A-2880N, each including a set of first sub-cores 2850A-2850N and a set of second sub-cores 2860A-2860N. Each sub-core in the set of first sub-cores 2850A-2850N includes at least a first set of execution units 2852A-2852N and media/texture samplers 2854A-2854N. Each sub-core in the set of second sub-cores 2860A-2860N includes at least a second set of execution units 2862A-2862N and samplers 2864A-2864N. In some embodiments, each sub-core 2850A-2850N, 2860A-2860N shares a set of shared resources 2870A-2870N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.

Additional exemplary execution units

Figure 29 shows thread execution logic 2900 including an array of processing elements employed in some embodiments of a GPE. Elements of FIG. 29 having the same reference numbers (or names) as elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the thread execution logic 2900 includes a shader processor 2902, a thread dispatcher 2904, an instruction cache 2906, a scalable execution unit array including a plurality of execution units 2908A-2908N, a sampler 2910, a data cache 2912, and a data port 2914. In one embodiment, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 2908A, 2908B, 2908C, 2908D through 2908N-1 and 2908N) based on the computational requirements of a workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, the thread execution logic 2900 includes one or more connections to memory (such as system memory or cache memory) through one or more of the instruction cache 2906, the data port 2914, the sampler 2910, and the execution units 2908A-2908N.
In some embodiments, each execution unit (e.g., 2908A) is an independently programmable general-purpose computing unit capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 2908A-2908N is scalable to include any number of individual execution units.

In some embodiments, the execution units 2908A-2908N are primarily used to execute shader programs. The shader processor 2902 can process the various shader programs and dispatch execution threads associated with the shader programs via the thread dispatcher 2904. In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and to instantiate the requested threads on one or more of the execution units 2908A-2908N. For example, the geometry pipeline (e.g., 2836 of FIG. 28) can dispatch vertex, tessellation, or geometry shaders to the thread execution logic 2900 (FIG. 29) for processing. In some embodiments, the thread dispatcher 2904 can also process runtime thread spawning requests from executing shader programs.

In some embodiments, the execution units 2908A-2908N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (for example, Direct3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units 2908A-2908N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single- and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 2908A-2908N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.

Each of the execution units 2908A-2908N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical arithmetic logic units (ALUs) or floating point units (FPUs) for a particular graphics processor. In some embodiments, the execution units 2908A-2908N support integer and floating-point data types.

The execution unit instruction set includes SIMD instructions.
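To make the execution-channel behavior described above concrete, the following is a minimal behavioral sketch in C (an illustration only, not taken from the specification): it emulates a single SIMD add applied across eight 32-bit channels under an execution mask, where masked-off channels are left untouched. The channel count, mask width, and function name are illustrative assumptions.

    #include <stdint.h>

    #define NUM_CHANNELS 8  /* hypothetical execution size: eight 32-bit channels */

    /* Apply a per-channel add across packed data elements, honoring an
     * execution mask: disabled channels keep their previous destination
     * value, modeling masking and flow-control divergence. */
    static void simd_add_masked(uint32_t dst[NUM_CHANNELS],
                                const uint32_t src0[NUM_CHANNELS],
                                const uint32_t src1[NUM_CHANNELS],
                                uint8_t exec_mask)
    {
        for (int ch = 0; ch < NUM_CHANNELS; ch++) {
            if (exec_mask & (1u << ch))  /* channel enabled for this instruction */
                dst[ch] = src0[ch] + src1[ch];
        }
    }

In hardware the channels execute simultaneously rather than in a loop; the loop here only models the per-channel semantics.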
Various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.

One or more internal instruction caches (e.g., 2906) are included in the thread execution logic 2900 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 2912) are included to cache thread data during thread execution. In some embodiments, a sampler 2910 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, the sampler 2910 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to the thread execution logic 2900 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 2902 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, the pixel processor logic within the shader processor 2902 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 2902 dispatches threads to an execution unit (e.g., 2908A) via the thread dispatcher 2904. In some embodiments, the shader processor 2902 uses texture sampling logic in the sampler 2910 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, the data port 2914 provides a memory access mechanism for the thread execution logic 2900 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 2914 includes or couples to one or more cache memories (e.g., data cache 2912) to cache data for memory access via the data port.

FIG. 30 is a block diagram illustrating a graphics processor instruction format 3000 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats.
The solid-line blocks show the components that are generally included in an execution unit instruction, while the dashed-line blocks include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction formats 3000 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 3010. A 64-bit compacted instruction format 3030 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 3010 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 3030. The native instructions available in the 64-bit format 3030 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 3013. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 3010.

For each format, the instruction opcode 3012 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel, each color channel representing a texture element or a picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, an instruction control field 3014 enables control over certain execution options, such as channel selection (e.g., predication) and data channel ordering (e.g., swizzle). For instructions in the 128-bit instruction format 3010, an execution size field 3016 limits the number of data channels that will be executed in parallel. In some embodiments, the execution size field 3016 is not available for use in the 64-bit compacted instruction format 3030.

Some execution unit instructions have up to three operands, including two source operands, src0 3020 and src1 3022, and one destination 3018. In some embodiments, the execution units support dual-destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 3024), where the instruction opcode 3012 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 3010 includes an access/addressing mode field 3026 that specifies, for example, whether a direct register addressing mode or an indirect register addressing mode is used. When a direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

In some embodiments, the 128-bit instruction format 3010 includes an access/addressing mode field 3026 that specifies an addressing mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction.
Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.

In one embodiment, the addressing mode portion of the access/addressing mode field 3026 determines whether the instruction is to use direct or indirect addressing. When a direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When an indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on opcode 3012 bit fields to simplify opcode decode 3040. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely exemplary. In some embodiments, a move and logic opcode group 3042 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, the move and logic group 3042 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 3044 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 3046 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 3048 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 3048 performs the arithmetic operations in parallel across data channels. A vector math group 3050 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic, such as dot product calculations, on vector operands.
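As an illustration of the opcode grouping just described, the following C sketch classifies an 8-bit opcode by its group-selecting high bits. The group values follow the bit patterns given in the text; the enum and function names are hypothetical and not part of the instruction set definition.

    #include <stdint.h>

    /* Coarse opcode groups, following the bit patterns described above. */
    enum opcode_group {
        GROUP_MOVE_LOGIC    = 0x00,  /* 0000xxxxb mov / 0001xxxxb logic */
        GROUP_FLOW_CONTROL  = 0x20,  /* 0010xxxxb: call, jmp            */
        GROUP_MISC          = 0x30,  /* 0011xxxxb: wait, send           */
        GROUP_PARALLEL_MATH = 0x40,  /* 0100xxxxb: add, mul             */
        GROUP_VECTOR_MATH   = 0x50,  /* 0101xxxxb: dp4                  */
    };

    static enum opcode_group classify_opcode(uint8_t opcode)
    {
        uint8_t hi = opcode & 0xF0;    /* the high bits select the group    */
        if (hi == 0x00 || hi == 0x10)  /* mov and logic share the five MSBs */
            return GROUP_MOVE_LOGIC;
        return (enum opcode_group)hi;
    }

For example, classify_opcode(0x20) yields GROUP_FLOW_CONTROL and classify_opcode(0x54) yields GROUP_VECTOR_MATH, mirroring the decode-simplifying grouping of opcode decode 3040.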
Additional exemplary graphics pipeline

FIG. 31 is a block diagram of another embodiment of a graphics processor 3100. The elements of FIG. 31 having the same reference numbers (or names) as the elements of any other figure in this document can operate or function in any manner similar to that described elsewhere in this document, but are not limited to such.

In some embodiments, the graphics processor 3100 includes a graphics pipeline 3120, a media pipeline 3130, a display engine 3140, thread execution logic 3150, and a render output pipeline 3170. In some embodiments, the graphics processor 3100 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to the graphics processor 3100 over a ring interconnect 3102. In some embodiments, the ring interconnect 3102 couples the graphics processor 3100 to other processing components, such as other graphics processors or general-purpose processors.

Commands from the ring interconnect 3102 are interpreted by a command streamer 3103, which supplies instructions to individual components of the graphics pipeline 3120 or the media pipeline 3130.

In some embodiments, the command streamer 3103 directs the operation of a vertex fetcher 3105, which reads vertex data from memory and executes vertex-processing commands provided by the command streamer 3103. In some embodiments, the vertex fetcher 3105 provides vertex data to a vertex shader 3107, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, the vertex fetcher 3105 and the vertex shader 3107 execute vertex-processing instructions by dispatching execution threads to the execution units 3152A-3152B via a thread dispatcher 3131.

In some embodiments, the execution units 3152A-3152B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, the execution units 3152A-3152B have an attached L1 cache 3151 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, the graphics pipeline 3120 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 3111 configures the tessellation operations. A programmable domain shader 3117 provides back-end evaluation of the tessellation output. A tessellator 3113 operates at the direction of the hull shader 3111 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to the graphics pipeline 3120. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 3111, tessellator 3113, and domain shader 3117) can be bypassed.

In some embodiments, complete geometric objects can be processed by a geometry shader 3119 via one or more threads dispatched to the execution units 3152A-3152B, or can proceed directly to a clipper 3129. In some embodiments, the geometry shader operates on entire geometric objects, rather than on vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 3119 receives input from the vertex shader 3107. In some embodiments, the geometry shader 3119 is programmable by a geometry shader program to perform geometry tessellation when the tessellation units are disabled.

Before rasterization, the clipper 3129 processes the vertex data. The clipper 3129 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 3173 in the render output pipeline 3170 dispatches pixel shaders to convert the geometric objects into their per-pixel representations. In some embodiments, pixel shader logic is included in the thread execution logic 3150. In some embodiments, an application can bypass the rasterizer and depth test component 3173 and access un-rasterized vertex data via a stream-out unit 3123.
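The stage ordering and optional bypasses described above can be summarized with a small behavioral sketch in C (a hypothetical illustration only; the stage names follow the text, while the function itself is an assumption):

    /* Geometry pipeline stages in the order described above. */
    enum stage {
        VERTEX_FETCH, VERTEX_SHADER,
        HULL_SHADER, TESSELLATOR, DOMAIN_SHADER,  /* bypassed if unused */
        GEOMETRY_SHADER,                          /* optional           */
        CLIPPER, RASTERIZER
    };

    /* Build the active stage sequence; the tessellation components are
     * bypassed when tessellation is not in use, as noted above. */
    static int build_pipeline(enum stage out[8], int use_tessellation,
                              int use_geometry_shader)
    {
        int n = 0;
        out[n++] = VERTEX_FETCH;
        out[n++] = VERTEX_SHADER;
        if (use_tessellation) {
            out[n++] = HULL_SHADER;
            out[n++] = TESSELLATOR;
            out[n++] = DOMAIN_SHADER;
        }
        if (use_geometry_shader)
            out[n++] = GEOMETRY_SHADER;
        out[n++] = CLIPPER;
        out[n++] = RASTERIZER;
        return n;
    }

With tessellation disabled, the geometry shader receives its input directly from the vertex shader, matching the bypass described above.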
The graphics processor 3100 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and messages to pass among the major components of the processor. In some embodiments, the execution units 3152A-3152B and the associated cache(s) 3151, the texture and media sampler 3154, and the texture/sampler cache 3158 interconnect via a data port 3156 to perform memory access and to communicate with the render output pipeline components of the processor. In some embodiments, the sampler 3154, the caches 3151, 3158, and the execution units 3152A-3152B each have separate memory access paths.

In some embodiments, the render output pipeline 3170 contains the rasterizer and depth test component 3173 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 3178 and depth cache 3179 are also available in some embodiments. A pixel operations component 3177 performs pixel-based operations on the data, although in some instances, pixel operations associated with 2D operations (for example, bit-block image transfers with blending) are performed by the 2D engine 3141, or substituted at display time by the display controller 3143 using overlay display planes. In some embodiments, a shared L3 cache 3175 is available to all graphics components, allowing the sharing of data without the use of main system memory.

In some embodiments, the graphics processor media pipeline 3130 includes a media engine 3137 and a video front end 3134. In some embodiments, the video front end 3134 receives pipeline commands from the command streamer 3103. In some embodiments, the media pipeline 3130 includes a separate command streamer. In some embodiments, the video front end 3134 processes media commands before sending the commands to the media engine 3137. In some embodiments, the media engine 3137 includes thread spawning functionality to spawn threads for dispatch to the thread execution logic 3150 via the thread dispatcher 3131.

In some embodiments, the graphics processor 3100 includes a display engine 3140. In some embodiments, the display engine 3140 is external to the processor 3100 and couples with the graphics processor via the ring interconnect 3102, or some other interconnect bus or fabric. In some embodiments, the display engine 3140 includes a 2D engine 3141 and a display controller 3143. In some embodiments, the display engine 3140 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, the display controller 3143 couples with a display device (not shown), which may be a system-integrated display device (such as in a laptop computer) or an external display device attached via a display device connector.

In some embodiments, the graphics pipeline 3120 and the media pipeline 3130 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute APIs, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from Microsoft Corporation. In some embodiments, a combination of these libraries may be supported.
Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

Graphics pipeline programming

FIG. 32A is a block diagram illustrating a graphics processor command format 3200 according to some embodiments. FIG. 32B is a block diagram illustrating a graphics processor command sequence 3210 according to an embodiment. The solid-line blocks in FIG. 32A show the components that are generally included in a graphics command, while the dashed-line blocks include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 3200 of FIG. 32A includes data fields to identify a target client 3202 of the command, a command operation code (opcode) 3204, and the relevant data 3206 for the command. A sub-opcode 3205 and a command size 3208 are also included in some commands.

In some embodiments, the client 3202 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 3204 and, if present, the sub-opcode 3205 to determine the operation to perform. The client unit performs the command using information in the data field 3206. For some commands, an explicit command size 3208 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.
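A hypothetical C rendering of a command header along the fields named above (client 3202, opcode 3204, sub-opcode 3205, data 3206, command size 3208) may clarify the layout; the bit widths chosen here are illustrative assumptions, not the encoding used by any particular hardware:

    #include <stdint.h>

    /* Hypothetical command header; field widths are illustrative only. */
    struct cmd_header {
        uint32_t client  : 8;   /* target client unit 3202            */
        uint32_t opcode  : 8;   /* command operation code 3204        */
        uint32_t subop   : 8;   /* sub-opcode 3205, when present      */
        uint32_t size_dw : 8;   /* command size 3208, in double words */
    };

    /* A complete command: header plus inline data 3206, with the total
     * length a multiple of a double word, as noted above. */
    struct cmd {
        struct cmd_header hdr;
        uint32_t data[];        /* size_dw - 1 double words of payload */
    };

    /* Advance over one command in a command buffer. A real parser may
     * instead derive the size from the opcode for some commands. */
    static const uint32_t *next_cmd(const uint32_t *p)
    {
        const struct cmd_header *h = (const struct cmd_header *)p;
        return p + (h->size_dw ? h->size_dw : 1);
    }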
The flowchart in FIG. 32B shows an exemplary graphics processor command sequence 3210. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.

In some embodiments, the graphics processor command sequence 3210 may begin with a pipeline flush command 3212 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 3222 and the media pipeline 3224 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated.

Optionally, any data in the render cache that is marked as 'dirty' can be flushed to memory. In some embodiments, the pipeline flush command 3212 can be used for pipeline synchronization or before placing the graphics processor into a low-power state.

In some embodiments, a pipeline select command 3213 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, the pipeline select command 3213 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 3212 is required immediately before a pipeline switch via the pipeline select command 3213.

In some embodiments, a pipeline control command 3214 configures a graphics pipeline for operation and is used to program the 3D pipeline 3222 and the media pipeline 3224. In some embodiments, the pipeline control command 3214 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 3214 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, commands related to the return buffer state 3216 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 3216 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 3220, the command sequence is tailored to the 3D pipeline 3222 beginning with the 3D pipeline state 3230, or the media pipeline 3224 beginning with the media pipeline state 3240.

The commands to configure the 3D pipeline state 3230 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, the 3D pipeline state 3230 commands can also selectively disable or bypass certain pipeline elements if those elements will not be used.

In some embodiments, the 3D primitive 3232 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 3232 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 3232 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 3232 command is used to perform vertex operations on 3D primitives via vertex shaders.
To process vertex shaders, the 3D pipeline 3222 dispatches shader execution threads to the graphics processor execution units.

In some embodiments, the 3D pipeline 3222 is triggered via an execute 3234 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, a pipeline synchronization command is used to trigger command execution to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once the operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.

In some embodiments, the graphics processor command sequence 3210 follows the media pipeline 3224 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 3224 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processing unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using compute shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, the media pipeline 3224 is configured in a similar manner to the 3D pipeline 3222. A set of commands to configure the media pipeline state 3240 are dispatched or placed into a command queue before the media object commands 3242. In some embodiments, the commands for the media pipeline state 3240 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, the commands for the media pipeline state 3240 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

In some embodiments, media object commands 3242 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 3242. Once the pipeline state is configured and the media object commands 3242 are queued, the media pipeline 3224 is triggered via an execute command 3244 or an equivalent execute event (e.g., a register write). Output from the media pipeline 3224 may then be post-processed by operations provided by the 3D pipeline 3222 or the media pipeline 3224. In some embodiments, GPGPU operations are configured and executed in a similar manner to media operations.
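The ordering of the command sequence 3210 can be illustrated with a short C sketch that emits commands in the order described above. The opcode values and the one-double-word encoding are assumptions made for the illustration, not actual encodings:

    #include <stdint.h>

    /* Illustrative opcodes only. */
    enum {
        OP_PIPELINE_FLUSH   = 0x01,  /* pipeline flush 3212      */
        OP_PIPELINE_SELECT  = 0x02,  /* pipeline select 3213     */
        OP_PIPELINE_CONTROL = 0x03,  /* pipeline control 3214    */
        OP_RETURN_BUF_STATE = 0x04,  /* return buffer state 3216 */
        OP_3D_STATE         = 0x10,  /* 3D pipeline state 3230   */
        OP_3D_PRIMITIVE     = 0x11,  /* 3D primitive 3232        */
        OP_EXECUTE          = 0x12,  /* execute 3234             */
    };

    /* Emit a single-double-word command into a buffer. */
    static uint32_t *emit(uint32_t *p, uint8_t opcode)
    {
        *p++ = ((uint32_t)opcode << 16) | 1u;  /* opcode field + size of 1 DW */
        return p;
    }

    /* Build a minimal 3D submission in the order described above. */
    static uint32_t *build_3d_sequence(uint32_t *p)
    {
        p = emit(p, OP_PIPELINE_FLUSH);    /* complete pending commands     */
        p = emit(p, OP_PIPELINE_SELECT);   /* switch to the 3D pipeline     */
        p = emit(p, OP_PIPELINE_CONTROL);  /* configure the active pipeline */
        p = emit(p, OP_RETURN_BUF_STATE);  /* configure return buffers      */
        p = emit(p, OP_3D_STATE);          /* vertex buffer/element state   */
        p = emit(p, OP_3D_PRIMITIVE);      /* submit primitives             */
        p = emit(p, OP_EXECUTE);           /* trigger execution             */
        return p;
    }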
Graphics software architecture

FIG. 33 illustrates an exemplary graphics software architecture for a data processing system 3300 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 3310, an operating system 3320, and at least one processor 3330. In some embodiments, the processor 3330 includes a graphics processor 3332 and one or more general-purpose processor core(s) 3334. The graphics application 3310 and the operating system 3320 each execute in the system memory 3350 of the data processing system.

In some embodiments, the 3D graphics application 3310 contains one or more shader programs including shader instructions 3312. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 3314 in a machine language suitable for execution by the general-purpose processor core 3334. The application also includes graphics objects 3316 defined by vertex data.

In some embodiments, the operating system 3320 is an operating system from Microsoft Corporation, a proprietary UNIX-like operating system, or an open-source UNIX-like operating system using a variant of the Linux kernel. The operating system 3320 can support a graphics API 3322, such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 3320 uses a front-end shader compiler 3324 to compile any shader instructions 3312 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 3310. In some embodiments, the shader instructions 3312 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

In some embodiments, a user-mode graphics driver 3326 contains a back-end shader compiler 3327 to convert the shader instructions 3312 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 3312 in the GLSL high-level language are passed to the user-mode graphics driver 3326 for compilation. In some embodiments, the user-mode graphics driver 3326 uses operating system kernel-mode functions 3328 to communicate with a kernel-mode graphics driver 3329. In some embodiments, the kernel-mode graphics driver 3329 communicates with the graphics processor 3332 to dispatch commands and instructions.

IP core implementation

One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium that represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions that represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit.
The integrated circuit may be fabricated such that the circuit performs the operations described in association with any of the embodiments described herein.

FIG. 34 is a block diagram illustrating an IP core development system 3400 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 3400 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 3430 can generate a software simulation 3410 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 3410 can be used to design, test, and verify the behavior of the IP core using a simulation model 3412. The simulation model 3412 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 3415 can then be created or synthesized from the simulation model 3412. The RTL design 3415 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 3415, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.

The RTL design 3415 or an equivalent may be further synthesized by the design facility into a hardware model 3420, which may be expressed in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. Non-volatile memory 3440 (e.g., a hard disk, flash memory, or any non-volatile storage medium) can be used to store the IP core design for delivery to a third-party fabrication facility 3465. Alternatively, the IP core design may be transmitted over a wired connection 3450 or a wireless connection 3460 (e.g., via the Internet). The fabrication facility 3465 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

Exemplary system-on-chip integrated circuit

FIGS. 35-37 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 35 is a block diagram illustrating an exemplary system-on-chip integrated circuit 3500 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit 3500 includes one or more application processors 3505 (e.g., CPUs), at least one graphics processor 3510, and may additionally include an image processor 3515 and/or a video processor 3520, any of which may be a modular IP core from the same design facility or from multiple different design facilities. The integrated circuit 3500 includes peripheral or bus logic including a USB controller 3525, a UART controller 3530, an SPI/SDIO controller 3535, and an I2S/I2C controller 3540.
Additionally, the integrated circuit can include a display device 3545 coupled to one or more of a high-definition multimedia interface (HDMI) controller 3550 and a mobile industry processor interface (MIPI) display interface 3555. Storage may be provided by a flash memory subsystem 3560 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 3565 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 3570.

FIG. 36 is a block diagram illustrating an exemplary graphics processor 3610 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. The graphics processor 3610 can be a variant of the graphics processor 3510 of FIG. 35. The graphics processor 3610 includes a vertex processor 3605 and one or more fragment processor(s) 3615A-3615N (e.g., 3615A, 3615B, 3615C, 3615D, through 3615N-1 and 3615N). The graphics processor 3610 can execute different shader programs via separate logic, such that the vertex processor 3605 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 3615A-3615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 3605 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 3615A-3615N use the primitive and vertex data generated by the vertex processor 3605 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s) 3615A-3615N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations to pixel shader programs as provided for in the Direct3D API.

The graphics processor 3610 additionally includes one or more memory management units (MMUs) 3620A-3620B, cache(s) 3625A-3625B, and circuit interconnect(s) 3630A-3630B. The one or more MMU(s) 3620A-3620B provide virtual-to-physical address mapping for the graphics processor 3610, including for the vertex processor 3605 and/or the fragment processor(s) 3615A-3615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 3625A-3625B. In one embodiment, the one or more MMU(s) 3620A-3620B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 3505, image processor 3515, and/or video processor 3520 of FIG. 35, such that each processor 3505-3520 can participate in a shared or unified virtual memory system. According to embodiments, the one or more circuit interconnect(s) 3630A-3630B enable the graphics processor 3610 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection.

FIG. 37 is a block diagram illustrating an additional exemplary graphics processor 3710 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. The graphics processor 3710 can be a variant of the graphics processor 3510 of FIG. 35. The graphics processor 3710 includes the one or more MMU(s) 3620A-3620B, caches 3625A-3625B, and circuit interconnects 3630A-3630B of the graphics processor 3610 of FIG. 36.
The graphics processor 3710 includes one or more shader core(s) 3715A-3715N (e.g., 3715A, 3715B, 3715C, 3715D, 3715E, 3715F, through 3715N-1 and 3715N), which provide a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, the graphics processor 3710 includes an inter-core task manager 3705, which acts as a thread dispatcher to dispatch execution threads to the one or more shader core(s) 3715A-3715N, and a tiling unit 3718 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize the use of internal caches.

The present invention also discloses a set of technical solutions, as follows:

1. A machine learning hardware accelerator, including:
a calculation unit with an adder and a multiplier shared between an integer data path and a floating point data path, the multiplier configured to gate the upper bits of the input operands during floating point operation to enable computation of the product of the mantissas of a first operand and a second operand.

2. The machine learning hardware accelerator of technical solution 1, wherein the calculation unit has a mode input to switch the calculation unit between integer operation and floating point operation.

3. The machine learning hardware accelerator of technical solution 2, wherein the calculation unit includes an exponent unit and a mantissa unit, and wherein the exponent unit and the mantissa unit are included in the floating point data path and the integer data path.

4. The machine learning hardware accelerator of technical solution 3, wherein the mode input is to enable a switch to provide the exponents and signs of the first operand and the second operand to the exponent unit for processing during floating point operation.

5. The machine learning hardware accelerator of technical solution 4, wherein the exponent unit includes an incrementer to increment the upper bits of the sum output by the adder during integer operation.

6. The machine learning hardware accelerator of technical solution 1, wherein the calculation unit is configurable to output an integer result during a first cycle and a floating point result during a second cycle.

7. The machine learning hardware accelerator of technical solution 1, wherein the multiplier of the calculation unit performs a multiply operation during a first stage of a fused multiply-accumulate operation and the adder performs an add operation during a second stage of the fused multiply-accumulate operation.

8. The machine learning hardware accelerator of technical solution 7, wherein the first stage of the fused multiply-accumulate operation is to be executed during a first clock cycle, the second stage of the fused multiply-accumulate operation is to be executed during a second clock cycle, and the calculation unit is to output a result during each of the first clock cycle and the second clock cycle.
9. The machine learning hardware accelerator of technical solution 8, wherein the calculation unit is to output a result of the second stage during the first clock cycle.

10. The machine learning hardware accelerator of technical solution 9, wherein the calculation unit is to store intermediate floating point data in a non-IEEE format having a 22-bit mantissa.

11. A method for accelerating machine learning operations, the method comprising:
fetching and decoding a single instruction to perform a combined multiply and add operation on a set of operands;
issuing the single instruction for execution by a dynamically configurable calculation unit;
configuring one or more logic units of the calculation unit to perform operations at the precision and data type of the set of operands; and
executing at least a portion of the single instruction at the dynamically configurable calculation unit to generate an output based on the multiply and add operation.

12. The method of technical solution 11, wherein the combined multiply and add operation is a fused multiply-add or a fused multiply-accumulate operation.

13. The method of technical solution 11, additionally including executing at least a portion of the single instruction via a machine learning accelerator unit.

14. The method of technical solution 13, wherein executing at least a portion of the single instruction includes quantizing an intermediate value having a first precision to a second precision lower than the first precision, the quantizing including stochastically rounding a fractional portion of the intermediate data.

15. The method of technical solution 14, additionally including stochastically rounding the fractional portion of the intermediate data based on a probability distribution associated with the intermediate data.

16. A data processing system, including:
a non-transitory machine-readable medium to store instructions for execution by one or more processors of the data processing system; and
a general-purpose graphics processing unit including a machine learning hardware accelerator and a dynamic precision calculation unit, the machine learning hardware accelerator including hardware logic to perform multiple machine learning compute operations in response to a single instruction.

17. The data processing system of technical solution 16, wherein the dynamic precision calculation unit includes calculation logic having an adder and a multiplier that are shared between an integer data path and a floating point data path, the calculation logic configurable to generate floating point data encoded in a non-standard format.

18. The data processing system of technical solution 17, wherein the multiple machine learning compute operations in response to a single instruction include a first operation to perform a fused multiply-add operation and a second operation to apply an activation function to an output of the fused multiply-add operation.

19. The data processing system of technical solution 18, wherein the activation function is a sigmoid function.

20. The data processing system of technical solution 18, wherein the machine learning hardware accelerator includes a stochastic quantization unit to perform stochastic rounding during the quantization of neural network data during the multiple machine learning compute operations.
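Technical solutions 14, 15, and 20 describe quantization with stochastic rounding. The following C sketch shows one common formulation, rounding a value to a lower-precision grid with probability proportional to the fractional remainder so that the rounding is unbiased in expectation; the RNG and the fixed-point step chosen here are illustrative assumptions, not the accelerator's actual mechanism:

    #include <stdlib.h>

    /* Stochastically round x to an integer multiple of 'step': round up
     * with probability equal to the fractional distance to the next
     * multiple, so that the expected value of the result equals x. */
    static double stochastic_round(double x, double step)
    {
        double scaled = x / step;
        double fl = (double)(long long)scaled;   /* truncate toward zero */
        if (scaled < 0 && scaled != fl)
            fl -= 1.0;                           /* true floor           */
        double frac = scaled - fl;               /* in [0, 1)            */
        double r = (double)rand() / ((double)RAND_MAX + 1.0);
        return (r < frac ? fl + 1.0 : fl) * step;
    }

For example, quantizing 0.123456 to 8 fractional bits (step 1/256) and averaging over many trials converges to 0.123456, whereas deterministic round-to-nearest would leave a fixed error at this precision.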
The embodiments described herein provide a logic unit that includes a merged integer/floating point data path for both multiply-add (e.g., a*b+c) and multiply-accumulate (e.g., c = c+a*b) operations. In one embodiment, the addend for the add operation is based on an accumulation of a previous operation. In one embodiment, the integer data path of the logic unit is merged into a floating point data path having an addend alignment operation in parallel with the multiply operation. In one embodiment, the integer data path is merged into a floating point data path having an addend alignment operation after the multiply operation. The multiply-add and multiply-accumulate data paths described herein can be single-cycle or multi-cycle.

In one embodiment, during a two-cycle floating point multiply-accumulate, the logic unit does not compare the mantissas at the beginning of the second stage (e.g., the adder stage). Instead, the logic unit precomputes the larger (or smaller) mantissa based on the accumulator exponent from the second stage and the multiplier output computed during the first stage.

In one embodiment, the bit width of the mantissa of the accumulator or addend is greater than the bit width of the mantissas of the multiplier inputs. In one embodiment, integer operations are mapped onto the floating point unit. In addition to the mantissa circuits of the floating point unit, some of the integer operations are also mapped onto the existing exponent circuits. In one embodiment, the logic units described herein include a multiplier unit and an adder unit that are shared between, and used to perform, both floating point and integer operations.

The following clauses and/or examples pertain to specific embodiments or examples thereof. The specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined, with some features included and others excluded, to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or acts of an apparatus or system. Various components can be a means for performing the described operations or functions.

One embodiment provides a machine learning hardware accelerator including a calculation unit having an adder and a multiplier that are shared between an integer data path and a floating point data path, with the upper bits of the input operands to the multiplier to be gated during floating point operation. In one embodiment, the adder and multiplier are configurable to perform floating point and integer operations. In one embodiment, the calculation unit performs a multiply-add operation via the multiplier and the adder. In one embodiment, the calculation unit accepts at least two input operands. One embodiment provides a calculation unit to perform a multiply-accumulate operation using two input operands and an accumulated value. One embodiment provides a calculation unit to perform a multiply-add operation using three input operands. In one embodiment, the calculation unit is to perform a single-cycle multiply-accumulate operation or multiply-add operation. In one embodiment, the calculation unit is to perform a two-cycle multiply-accumulate operation or a two-cycle multiply-add operation.
In one embodiment, the multiplier within the calculation unit is to produce an output during a first cycle and the adder is to produce an output during a second cycle. In one embodiment, the calculation unit is to perform a two-cycle multiply-accumulate operation in which the first cycle is associated with a first logic stage and the second cycle is associated with a second logic stage, and the calculation unit includes an exponent unit to precompute the larger mantissa and the alignment shift for the second stage via the accumulator output of a previous cycle and the multiplier output from the first stage.

In one embodiment, the integer data path is merged into a floating point data path having an addend alignment operation in parallel with the multiply operation. In one embodiment, the integer data path is merged into a floating point data path having an addend alignment operation after the multiply operation. The calculation unit can have a mode input to switch the calculation unit between integer operation and floating point operation. In one embodiment, the calculation unit is configurable for 8.8 fixed-point input and 16.0 fixed-point output.

One embodiment provides a data processing system including a non-transitory machine-readable medium to store instructions for execution by one or more processors of the data processing system, and a general-purpose graphics processing unit including a machine learning hardware accelerator and a dynamic precision calculation unit, the machine learning hardware accelerator including hardware logic to perform multiple machine learning compute operations in response to a single instruction. In one embodiment, the dynamic precision calculation unit is switchable between integer operation and floating point operation. In one embodiment, the dynamic precision calculation unit includes an integer data path and a floating point data path that share a multiplier and an adder, wherein the multiplier is to perform multiply operations for both the integer data path and the floating point data path. In one embodiment, the floating point data path includes an addend alignment operation that is performed in parallel with the multiply operation. In one embodiment, the floating point data path includes an addend alignment operation that is performed after the multiply operation. In one embodiment, the dynamic precision calculation unit is configurable for a single-cycle fused multiply-accumulate operation or a two-cycle fused multiply-accumulate operation.

One embodiment provides a method of accelerating machine learning operations, the method comprising: fetching and decoding a single instruction to perform a combined multiply and add operation on a set of operands; issuing the single instruction for execution by a dynamically configurable calculation unit; configuring one or more logic units of the calculation unit to perform operations at the precision and data type of the set of operands; and executing at least a portion of the single instruction at the dynamically configurable calculation unit to generate an output based on the multiply and add operation.
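As a behavioral illustration of the two-cycle multiply-accumulate described above (a sketch only: the real unit shares integer and floating point hardware and operates on separated mantissas and exponents, all of which this model abstracts away), the following C fragment overlaps a multiply stage and an accumulate stage so that one c = c + a*b result completes per cycle once the pipeline is full:

    #include <stdio.h>

    /* Two-stage multiply-accumulate pipeline model: stage 1 multiplies,
     * stage 2 adds the previously latched product into the accumulator. */
    struct mac_pipe {
        float product;     /* latched multiplier output       */
        int   valid;       /* stage-1 output valid this cycle */
        float accumulator; /* running sum updated in stage 2  */
    };

    static void mac_cycle(struct mac_pipe *p, float a, float b, int in_valid)
    {
        if (p->valid)                   /* stage 2: accumulate last product */
            p->accumulator += p->product;
        p->product = a * b;             /* stage 1: compute this product    */
        p->valid = in_valid;
    }

    int main(void)
    {
        struct mac_pipe p = {0};
        const float a[4] = {1, 2, 3, 4}, b[4] = {10, 10, 10, 10};
        for (int i = 0; i < 4; i++)
            mac_cycle(&p, a[i], b[i], 1);
        mac_cycle(&p, 0, 0, 0);         /* drain the pipeline */
        printf("dot product = %f\n", p.accumulator);  /* 100.000000 */
        return 0;
    }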
The embodiments described herein refer to specific configurations of hardware, such as application specific integrated circuits (ASICs), configured to perform certain operations or having predetermined functionality. Such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (such as a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and the other components is typically through one or more buses and bridges (also termed bus controllers). The storage devices and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.

Of course, one or more parts of an embodiment may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without some of these specific details. In certain instances, well-known structures and functions were not described in elaborate detail to avoid obscuring the inventive subject matter of the embodiments. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow. |
Spreading or keep out zones may be formed in integrated circuit packages by altering the roughness of package surfaces. The surface roughness can be altered by applying or growing particles having a dimension of less than 500 nanometers. Hydrophilic surfaces may be made hemi-wicking, and hydrophobic surfaces may be made super-hydrophobic, by particles of the same general characteristics. |
1. A method comprising:
forming a surface of a semiconductor integrated circuit package;
forming particles having a size of less than 500 nanometers on the surface; and
applying a liquid to the surface of the package.
2. The method of claim 1 including forming said particles in two different regions on said surface such that one region is hydrophobic and the other region is hydrophilic.
3. The method of claim 2 including forming the semiconductor integrated circuit package surface in the form of a package substrate.
4. The method of claim 3 including forming said particles in substantially similar sizes.
5. The method of claim 4 including forming said particles by growing particles on said substrate.
6. The method of claim 1 including forming said particles by depositing particles on said surface.
7. The method of claim 4 wherein applying the liquid includes providing a die on the substrate and injecting an underfill between the die and the substrate.
8. The method of claim 7 including defining a keep out region on said substrate using hydrophobic particles surrounding said die and providing a region having hydrophilic particles between said die and said substrate.
9. The method of claim 1 including growing rods on said substrate to form said particles.
10. The method of claim 9 including growing the rods using a glancing angle deposition technique.
11. The method of claim 1 including forming said surface on an integrated circuit die and treating said particles such that said particles are hydrophobic in one region and hydrophilic in another region.
12. The method of claim 11 including providing a die attach on said surface such that said hydrophobic particles reduce spillage of the die attach material.
13. An integrated circuit package comprising:
a package surface;
particles having a size of less than 500 nanometers formed on the surface; and
a liquid coated on the particles.
14. The package of claim 13 wherein said particles are hydrophobic.
15. The package of claim 13 wherein said particles are hydrophilic.
16. The package of claim 13 wherein some of said particles are hydrophilic and some are hydrophobic.
17. The package of claim 13 wherein said liquid is an underfill.
18. The package of claim 13 wherein said liquid is a die attach.
19. The package of claim 13 wherein said surface is a die surface.
20. The package of claim 13 wherein said surface is a package substrate surface.
21. The package of claim 20 wherein hydrophobic particles are formed on said substrate around a die disposed over said substrate, the surface of the substrate below the die has hydrophilic particles formed thereon, and solder balls are provided between the die and the substrate.
22. The package of claim 13 wherein said surface is a die surface and a die attach is attached to said die surface, the contact area of said die attach being hydrophilic while the area around the die attach is hydrophobic.
23. An integrated circuit package comprising:
a package surface having particles with a size of less than 500 nanometers on the surface;
the surface having a surface energy of 70 mN/m or more or 20 mN/m or less.
24. The package of claim 23 wherein said surface comprises both a hydrophilic region and a hydrophobic region.
25. The package of claim 23 wherein said surface is a die surface.
26. The package of claim 23 wherein said surface is a substrate surface.
27. The package of claim 23 wherein said surface is partially covered with an underfill and a die is disposed to sandwich the underfill between said die and said surface.
28. The package of claim 23 wherein said particles are upright rods.
29. The package of claim 28 wherein said rods are hydrofluoric-acid treated.
30. The package of claim 23 including a substrate and a die stacked on said substrate, a die attach being disposed between said substrate and said die, said die attach being surrounded by a hydrophobic region of the surface while the die attach contacts a hydrophilic region of the surface. |
Electronic package with wetted and non-wetting areas

Technical field

The present invention relates to the fabrication of integrated circuit packages for housing integrated circuit chips.

Background technique

In some integrated circuit packages, the substrate may be assembled with one or more integrated circuit chips. There may be an underfill between the substrate and the chip. Advantageously, this material fills the area between the substrate and the chip but does not extend excessively outward. Excessive spreading may adversely affect the operation of the packaged components. For example, when an underfill is injected between a substrate and an integrated circuit, it tends to flow outward, creating a so-called tongue of material that protrudes from under the integrated circuit die.

The underfill may be applied by capillary flow. To achieve short throughput times, the underfill may be formulated to have very low viscosity and good wettability toward the substrate solder resist. Moreover, the underfill may be dispensed at elevated temperatures. The result of all of these factors is that an underfill tongue remains on the underfill dispense side of the package. The tongue effectively increases the footprint of the package.

Summary of the invention

According to a first aspect of the invention, a method is provided comprising: forming a surface of a semiconductor integrated circuit package; forming particles having a size of less than 500 nanometers on the surface; and applying a liquid to the surface of the package.

According to a second aspect of the present invention, an integrated circuit package is provided, comprising: a package surface; particles having a size of less than 500 nanometers formed on the surface; and a liquid coated on the particles.

According to a third aspect of the present invention, an integrated circuit package is provided, comprising: a package surface having particles with a size of less than 500 nanometers on the surface; the surface having a surface energy of 70 mN/m or more or 20 mN/m or less.

Drawings

Figure 1 is an enlarged cross-sectional view of a package in accordance with one embodiment of the present invention;
Figure 2 is a highly magnified cross-sectional view of a portion of the upper surface of the package substrate shown in Figure 1;
Figure 3 is an enlarged cross-sectional view showing another embodiment.

Detailed description

In some semiconductor integrated circuit package applications, it is desirable to have a substrate that has both wetting and non-wetting regions. It is even more desirable to have a substrate with super-wettable and super-non-wettable regions. In other words, the same substrate can have both semi-capillary (hemi-wicking) hydrophilic surface areas and strongly hydrophobic surface areas. Thus, the underfill and other fluids can be precisely controlled to spread out only within a limited area on the substrate.

In some embodiments of the invention, a particulate coating can be applied over the entire surface of the substrate. For example, the coating can be silicon nanorods grown on the substrate and extending to a height of up to 500 nanometers. If the upper surface of the substrate is relatively hydrophilic, the presence of surface-roughening nanoparticles can greatly increase the hydrophilicity of the surface, an effect called hemi-wicking or semi-capillarity. Conversely, if the same roughening is applied to a hydrophobic surface, hemi-wicking does not occur; instead, the surface is made very hydrophobic.

Typically, a hydrophilic surface has a surface energy greater than or equal to 70 mN/m, while a hydrophobic surface has a surface energy of less than or equal to 20 mN/m.
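The amplification of wetting by sub-500-nanometer roughness can be illustrated with the classical Wenzel relation, cos θ* = r · cos θ, where r ≥ 1 is the ratio of actual to projected surface area and θ is the intrinsic contact angle on the smooth material. The patent text does not invoke Wenzel by name, so the following Python sketch is only an illustrative model of why the same roughening drives hydrophilic surfaces toward hemi-wicking and hydrophobic surfaces toward super-hydrophobicity; the roughness values are assumptions chosen for demonstration.

```python
import math

def wenzel_apparent_angle(theta_deg: float, roughness: float) -> float:
    """Apparent contact angle on a rough surface per the Wenzel model.

    theta_deg: intrinsic contact angle on the smooth material (degrees)
    roughness: r = actual area / projected area, r >= 1
    Returns the apparent angle in degrees; r*cos(theta) is clamped to the
    valid cosine range, since values beyond it correspond in practice to
    complete wetting or to the Cassie regime.
    """
    c = roughness * math.cos(math.radians(theta_deg))
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

for theta in (40.0, 110.0):      # hydrophilic vs. hydrophobic material
    for r in (1.0, 1.5, 2.5):    # smooth vs. nanorod-roughened (assumed r)
        print(f"theta={theta:5.1f}  r={r:3.1f}  "
              f"apparent={wenzel_apparent_angle(theta, r):6.1f}")
```

With these assumed values, roughening pushes the 40 degree surface to complete wetting (0 degrees, hemi-wicking) and the 110 degree surface toward roughly 150 degrees, consistent with the super-wetting and super-non-wetting regions described above.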
Referring to FIG. 1, substrate 12 has an integrated circuit die 14 that is flip-chip mounted to the substrate using solder balls 16 to electrically and mechanically connect die 14 to substrate 12. The substrate 12 has interconnects that route signals between the die 14 and external devices.

The upper surface of the substrate 12 can have peripheral regions 22 (e.g., 22a and 22b) that are highly hydrophobic. Conversely, the area 24 under the die, extending slightly beyond the die footprint, is very hydrophilic and semi-capillary. Thus, once the underfill 20 is injected in the direction A, for example, driven by capillary forces, it is repelled by the hydrophobic surfaces 22a and 22b and spreads over the hydrophilic surface 24. Because surfaces 22 and 24 are nanostructured, the conventional wetting and non-wetting effects are enhanced. As a result, the underfill 20 has a reduced tendency to form a tongue extending outward in the direction opposite to arrow A. This can result in a smaller package footprint in some cases because substrate surface area is not consumed by an underfill tongue.

As yet another example, package 30 can include a substrate 36 that includes interconnects 44, such as solder balls, as shown in FIG. 3. Electrical vertical vias 38 in the substrate 36 connect to the horizontal metallization 41 to distribute signals between the outside, coupled through the interconnects 44, and the integrated circuit dies 32a, 32b, and 32c in the package 30. A sealing material 52 can encapsulate the integrated circuit dies 32a, 32b, and 32c.

Die 32a is joined to solder joints 46 on substrate 36 by wire bonds 56. Solder joints 46 are connected to the electrical vertical vias 38 by the horizontal metallization 41 and, ultimately, down to solder joints 43 of the interconnects 44. In this way, communication between external components and the die 32a is established. Likewise, wire bond 48 is connected to die 32b by contact 50. The connection to the die 32c can be provided in a number of different ways. The die 32c is coupled to the die 32b by a die attach adhesive layer 34. Likewise, die 32b can be coupled to die 32a by a die attach adhesive layer 34. However, other techniques can also be used to secure the integrated circuit dies together.

In this case, it is desirable to prevent the adhesive of the die attach 34 from oozing out. If the adhesive oozes out, it will contaminate the areas used for wire bond contacts. Thus, surfaces 54 are treated to be highly hydrophobic. Such surfaces can be provided on both the upper surface of the die 32b and the upper surface of the die 32c.

Referring again to FIG. 2, in some embodiments of the invention, particles 40 are grown on substrate 12. The particles 40 may be, for example, nanorods, spherical particles, or tetrapods, among others. Other compositions and shapes can also be utilized. They may be made of materials including, but not limited to, silica, alumina, zirconia, silicon, or carbon. Generally, it is desirable for these particles 40 to extend 5 to 500 nanometers above the surface of the substrate 12. This effectively enhances the hydrophobic or hydrophilic nature of the resulting surface.

When it is desired to form both hydrophilic and hydrophobic structures on the same surface, the same particle features can be formed everywhere.
That is, particles 40 of similar composition and size are formed over the entire surface, whether the surface is ultimately to be semi-capillary and hydrophilic or strongly hydrophobic. The regions to be made hydrophobic can then be exposed to a hydrofluoric acid treatment, while the regions that are to remain hydrophilic are covered with a suitable, removable mask 42.

Other hydrophobic treatments can also be used. For example, fluorinated silanes are hydrophobic. They can be readily applied to the surface from alcohol solution or by plasma treatment prior to functionalization. For example, the component R3-Si-OH reacts with the HO-terminated substrate solder resist to produce an R3-Si-O-substrate solder resist bond. The component R can be, but is not limited to, an alkane, a vinyl, or a fluorinated group. Alternatively, different treatments can be used to create a hydrophilic surface; for example, amino-terminated silanes are hydrophilic. In addition, alkane silanes are hydrophobic. Moreover, long-chain alkane silanes self-assemble into a monolayer, which presents a very high density of silane on the surface. Such a monolayer can be deposited by a solution route or by vapor deposition. Further, the hydroxyl groups on the surface of the solder resist may be bonded to silanols with suitable branches to impart non-wetting behavior toward the underfill. Particular areas of the surface can be patterned by the silane treatment to give non-wetting areas for the underfill.

In some embodiments of the invention, the structure can be dip-coated to apply the hydrofluoric acid. In some embodiments of the invention, the hydrofluoric acid may be 48 to 51 percent, and the exposure may be for one minute.

The growth of the particles 40 in the form of nanorods can be accomplished by glancing angle deposition. Glancing angle deposition involves physical vapor deposition onto a substrate that is rotated in two different directions. A glancing angle is maintained between the incoming vapor flux and the surface on which the nanorods are to be grown; in some cases, the angle can be from 70 to 90 degrees. A deposition rate of 0.2 nm s⁻¹ and a rotation speed of 0.05 rev s⁻¹ can be used. An electron beam evaporator with a quartz crystal thickness monitor can be used to track the film thickness.

Therefore, the surface can be selectively made highly hydrophilic or highly hydrophobic. In some examples, the hydrophobic regions can act as an effective keep out barrier that prevents intrusion of flux, underfill, or encapsulant. Conversely, the spreading of underfill and molding compound through narrow channels in shrinking packages can be facilitated by the fabrication of a hemi-wicking surface.

At least one dimension of a nanoparticle is typically less than 100 nanometers; however, as used in this application, the particles are particles up to 500 nanometers in size. In some examples, suitable shapes include, but are not limited to, spheres, tetrapods, rods, tubes, and platelets. Suitable materials include, but are not limited to, silica, alumina, titania, zirconia, and carbon.

Instead of growing particles, deposited particles can be utilized. In one embodiment, particles such as microspheres of at least two different sizes are mixed and subsequently deposited. The particles can be protected by a bond coat, but other techniques can also be used.
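As a quick sanity check on the glancing angle deposition parameters quoted above, the time to grow rods of a given height follows directly from the stated rate. The short Python sketch below uses only the rate and rotation speed from the text and assumes, for illustration, that the 0.2 nm s⁻¹ rate applies directly to rod height.

```python
RATE_NM_PER_S = 0.2   # deposition rate from the text
ROT_REV_PER_S = 0.05  # substrate rotation speed from the text

def growth_budget(target_height_nm: float) -> tuple[float, float]:
    """Return (seconds of deposition, substrate revolutions completed)
    needed to reach the target rod height at the stated rate."""
    t = target_height_nm / RATE_NM_PER_S
    return t, t * ROT_REV_PER_S

for h in (100, 500):  # typical and maximum rod heights from the text
    t, revs = growth_budget(h)
    print(f"{h} nm rods: {t:.0f} s (~{t/60:.1f} min), {revs:.0f} revolutions")
```

Under these assumptions, 100 nm rods take about 8 minutes (25 revolutions) and the 500 nm maximum about 42 minutes (125 revolutions), which gives a feel for the throughput cost of taller roughening features.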
References to "one embodiment" or "an embodiment" throughout this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, appearances of the phrases "one embodiment" or "an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms than the specific embodiments described, and all such forms may be encompassed within the claims of the present application.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. |
The invention relates to methods and systems for manufacturing semiconductor devices. A thermocompression bonding (TCB) apparatus can include a wall having a height measured in a first direction and configured to be positioned between a first pressing surface and a second pressing surface of a semiconductor bonding apparatus. The apparatus can include a cavity at least partially surrounded by the wall, the cavity sized to receive a semiconductor substrate and a stack of semiconductor dies positioned between the semiconductor substrate and the first pressing surface, the stack of semiconductor dies and the semiconductor substrate having a combined unpressed stack height as measured in the first direction. In some embodiments, the unpressed stack height is greater than the height of the wall, and the wall is configured to be contacted by the first pressing surface to limit movement of the first pressing surface toward the second pressing surface during a semiconductor bonding process. |
1. A method of manufacturing a semiconductor device, the method comprising:
positioning a stop wall on a wafer substrate between a first platform of a semiconductor bonding apparatus and a second platform of the semiconductor bonding apparatus, the first platform having a first pressing surface and the second platform having a second pressing surface facing the first pressing surface of the first platform, wherein the stop wall and the wafer substrate have a combined height measured from the second pressing surface in a direction perpendicular to the first pressing surface, wherein the stop wall at least partially surrounds a stack of semiconductor dies, wherein the stack of semiconductor dies is positioned between the wafer substrate and the first platform of the semiconductor bonding apparatus, and wherein the stack of semiconductor dies has an unpressed stack height measured from the second pressing surface in a direction perpendicular to the first pressing surface; and
moving one or both of the first platform and the second platform of the semiconductor bonding apparatus toward each other until the first pressing surface contacts the stop wall, thereby compressing the stack of semiconductor dies in the direction perpendicular to the first pressing surface;
wherein the unpressed stack height of the stack of semiconductor dies is greater than the combined height of the stop wall and the wafer substrate; and
wherein, after one or both of the first platform and the second platform of the semiconductor bonding apparatus move toward each other until the first pressing surface contacts the stop wall, the stack of semiconductor dies has a pressed stack height measured from the second pressing surface in a direction perpendicular to the first pressing surface, the pressed stack height being less than or equal to the combined height of the stop wall and the wafer substrate.
2. The method of claim 1, wherein the stop wall is constructed from a rigid material.
3. The method of claim 2, wherein the stop wall is constructed of one or more of silicon, metal, polymer, and glass.
4. The method of claim 1, wherein the stop wall has a diameter smaller than a diameter of the second platform of the semiconductor bonding apparatus.
5. The method of claim 1, further comprising, before one or both of the first platform and the second platform of the semiconductor bonding apparatus move toward each other until the first pressing surface contacts the stop wall, positioning a second stack of semiconductor dies inside the stop wall between the wafer substrate and the first platform of the semiconductor bonding apparatus, the second stack of semiconductor dies having a second unpressed stack height measured from the second pressing surface in a direction perpendicular to the first pressing surface, wherein the second unpressed stack height of the second stack of semiconductor dies is greater than the combined height of the stop wall and the wafer substrate.
6. The method of claim 1, further comprising, before one or both of the first platform and the second platform of the semiconductor bonding apparatus move toward each other until the first pressing surface contacts the stop wall, positioning at least twenty stacks of semiconductor dies inside the stop wall between the wafer substrate and the first platform of the semiconductor bonding apparatus.
7. A thermocompression bonding (TCB) apparatus for manufacturing semiconductor devices, the TCB apparatus comprising:
a wall having a height measured in a first direction, the wall configured to be positioned on a wafer between a first pressing surface and a second pressing surface of a semiconductor bonding apparatus; and
a cavity surrounded by the wall, the cavity being sized to receive a stack of semiconductor dies positioned between the wafer and the first pressing surface, the stack of semiconductor dies having an unpressed stack height measured from the wafer in the first direction;
wherein the unpressed stack height is greater than the height of the wall, and wherein the wall is configured to be contacted by the first pressing surface to limit movement of the first pressing surface toward the second pressing surface during a semiconductor bonding process.
8. The TCB apparatus of claim 7, wherein the TCB apparatus is constructed of a rigid material.
9. The TCB apparatus of claim 7, wherein the TCB apparatus is constructed of silicon.
10. The TCB apparatus of claim 7, wherein the cavity of the TCB apparatus is configured to receive at least twenty stacks of semiconductor dies simultaneously.
11. A semiconductor manufacturing system, comprising:
a first pressing platform having a first pressing surface;
a second pressing platform having a second pressing surface facing the first pressing surface;
a wafer positioned between the first pressing platform and the second pressing platform;
a stop positioned on the wafer between the first pressing platform and the second pressing platform, the stop including at least one lumen and a continuous wall surrounding the at least one lumen; and
a first stack of semiconductor dies connected to the wafer, the first stack of semiconductor dies being positioned at least partially within the at least one lumen of the stop, between the wafer and the first pressing platform;
wherein the stop is configured to limit movement of the first and second pressing surfaces toward each other, thereby limiting compression of the first stack of semiconductor dies to a desired thickness.
12. The semiconductor manufacturing system of claim 11, wherein the first stack of semiconductor dies includes a first die positioned closest to the wafer and a last die positioned furthest from the wafer, and wherein at least a portion of the last die is positioned beyond the stop in a direction toward the first pressing surface before the first and second pressing surfaces move toward each other.
13. The semiconductor manufacturing system of claim 11, further comprising a second stack of semiconductor dies connected to the wafer and positioned at least partially within the at least one lumen of the stop.
14. The semiconductor manufacturing system of claim 13, wherein the second stack of semiconductor dies includes a first die positioned closest to the wafer and a last die positioned furthest from the wafer, and wherein at least a portion of the last die is positioned beyond the stop in a direction toward the first pressing surface before the first and second pressing surfaces move toward each other.
15. The semiconductor manufacturing system of claim 11, wherein the stop has an annular shape.
16. The semiconductor manufacturing system of claim 11, wherein the first stack of semiconductor dies includes at least a first semiconductor die and a second semiconductor die, and wherein the first semiconductor die and the second semiconductor die each include:
a first surface facing the first pressing platform;
a second surface facing the second pressing platform;
at least one through-substrate via (TSV) extending between the first and second surfaces, the TSV having a first end and a second end;
a conductive pad connected to the first end of the at least one TSV; and
a conductive post connected to the second end of the at least one TSV.
17. The semiconductor manufacturing system of claim 16, wherein the conductive pad of the TSV of the first semiconductor die faces, and is spaced apart from, the conductive post of the TSV of the second semiconductor die, and wherein a solder ball is positioned between the conductive pad of the first semiconductor die and the conductive post of the second semiconductor die.
18. The semiconductor manufacturing system of claim 17, wherein, when the first semiconductor die and the second semiconductor die are in an uncompressed configuration, the conductive pad of the TSV of the first semiconductor die is spaced apart from the conductive post of the TSV of the second semiconductor die by a first distance, and wherein, when the first semiconductor die and the second semiconductor die are compressed in response to one or both of the first pressing platform and the second pressing platform moving toward each other, the conductive pad of the TSV of the first semiconductor die and the conductive post of the TSV of the second semiconductor die are spaced apart by a second distance.
19. The semiconductor manufacturing system of claim 18, wherein the second distance is less than the first distance, and wherein the second distance is at least 3 μm.
20. The semiconductor manufacturing system of claim 18, wherein the second distance is less than the first distance, and wherein the second distance is at least 5 μm. |
Methods and systems for manufacturing semiconductor devices

Technical field

The technology relates generally to semiconductor devices and, more particularly, to methods and systems for fabricating semiconductor devices.

Background technique

Packaged semiconductor dies, including memory chips, microprocessor chips, and imager chips, typically include a semiconductor die mounted on a substrate and encased in a protective covering. The semiconductor die contains functional features, such as memory cells, processor circuitry, and imager devices, as well as bond pads electrically connected to the functional features. The bond pads may be electrically connected to terminals outside the protective covering to allow the semiconductor die to be connected to higher level circuitry. Within some packages, semiconductor dies may be stacked on adjacent semiconductor dies and electrically connected to each other through individual interconnects between the adjacent dies. In such packages, each interconnect may include a conductive material (e.g., solder) and a pair of contacts on opposing surfaces of adjacent dies. For example, metal solder can be placed between the contacts and reflowed to form a conductive joint. However, conventional processes can cause solder connection failures.

Contents of the invention

In one aspect, the present application provides a method of manufacturing a semiconductor device, the method comprising: positioning a stop wall on a wafer substrate between a first platform of a semiconductor bonding apparatus and a second platform of the semiconductor bonding apparatus, the first platform having a first pressing surface and the second platform having a second pressing surface facing the first pressing surface of the first platform, wherein the stop wall and the wafer substrate have a combined height measured from the second pressing surface in a direction perpendicular to the first pressing surface, wherein the stop wall at least partially surrounds a stack of semiconductor dies, wherein the stack of semiconductor dies is positioned between the wafer substrate and the first platform of the semiconductor bonding apparatus, and wherein the stack of semiconductor dies has an unpressed stack height measured from the second pressing surface in a direction perpendicular to the first pressing surface; and moving one or both of the first platform and the second platform of the semiconductor bonding apparatus toward each other until the first pressing surface contacts the stop wall, thereby compressing the stack of semiconductor dies in a direction perpendicular to the first pressing surface; wherein the unpressed stack height of the stack of semiconductor dies is greater than the combined height of the stop wall and the wafer substrate; and wherein, after one or both of the first platform and the second platform of the semiconductor bonding apparatus move toward each other until the first pressing surface contacts the stop wall, the stack of semiconductor dies has a pressed stack height that is less than or equal to the combined height of the stop wall and the wafer substrate.

In another aspect, the present application provides a thermocompression bonding (TCB) apparatus for manufacturing a semiconductor device, the TCB apparatus including: a wall having a height measured in a first direction, the wall being configured to be positioned on a wafer between a first pressing surface and a second pressing surface of a semiconductor bonding apparatus; and a
cavity at least partially surrounded by the wall, the cavity being sized to receive a stack of semiconductor dies positioned between the wafer and the first pressing surface, the stack of semiconductor dies having an unpressed stack height as measured from the wafer in the first direction; wherein the unpressed stack height is greater than the height of the wall, and wherein the wall is configured to be contacted by the first pressing surface to limit movement of the first pressing surface toward the second pressing surface during a semiconductor bonding process.

In another aspect, the present application provides a semiconductor manufacturing system, which includes: a first pressing platform having a first pressing surface; a second pressing platform having a second pressing surface facing the first pressing surface; a wafer positioned between the first pressing platform and the second pressing platform; a stop positioned on the wafer between the first pressing platform and the second pressing platform, the stop including at least one lumen; and a first stack of semiconductor dies connected to the wafer, the first stack of semiconductor dies being positioned at least partially within the at least one lumen of the stop, between the wafer and the first pressing platform; wherein the stop is configured to limit movement of the first pressing surface and the second pressing surface toward each other, thereby limiting compression of the first stack of semiconductor dies to a desired thickness.

Description of the drawings

Many aspects of this technology can be better understood by reference to the following drawings. Components in the drawings are not necessarily to scale; instead, the emphasis is on clearly illustrating the principles of the technology.
Figure 1 is a side cross-sectional view of an embodiment of a semiconductor bonding apparatus.
Figure 2 is a side cross-sectional view of an embodiment of a semiconductor die assembly positioned between two platforms of a semiconductor bonding apparatus.
Figure 3 is a side cross-sectional view of the semiconductor die assembly and semiconductor bonding apparatus of FIG. 2 with the die stacks compressed between the two platforms of the semiconductor bonding apparatus.
Figure 4 is a close-up side cross-sectional view of TSV and solder connections of a semiconductor die assembly without a stop.
Figure 5 is a close-up side cross-sectional view of TSV and solder connections of a semiconductor die assembly with a stop.
Figures 6-10 are top plan views of various embodiments of semiconductor die assemblies.
Figure 11 is a side cross-sectional view of another embodiment of a semiconductor die assembly positioned between two platforms of a semiconductor bonding apparatus.
Figure 12 is a side cross-sectional view of the semiconductor die assembly and semiconductor bonding apparatus of FIG. 11 with the die stacks compressed between the two platforms of the semiconductor bonding apparatus.
Figure 13 is a top plan view of an embodiment of a semiconductor die assembly.
Figure 14 is a side cross-sectional view of another embodiment of a semiconductor die assembly positioned between two platforms of a semiconductor bonding apparatus.
Figure 15 is a side cross-sectional view of the semiconductor die assembly and semiconductor bonding apparatus of FIG.
14 with the die stacks compressed between the two platforms of the semiconductor bonding apparatus.
Figure 16 is a side cross-sectional view of another embodiment of a semiconductor die assembly positioned between two platforms of a semiconductor bonding apparatus.
Figure 17 is a side cross-sectional view of the semiconductor die assembly and semiconductor bonding apparatus of FIG. 16 with the die stacks compressed between the two platforms of the semiconductor bonding apparatus.
Figure 18 is a schematic diagram of a system including a semiconductor device configured in accordance with an embodiment of the present technology.

Detailed description

One challenge with conventional semiconductor packaging is controlling compression of the die stack during fabrication. Often, all or part of the die stack is over-compressed during manufacturing. Over-compression of a die stack can cause a variety of problems, including depletion of the solder between a pair of contacts, squeeze-out of non-conductive film around the stack perimeter, and undesired electrical shorting via extruded solder from adjacent pairs of contacts.

The following describes specific details of several embodiments of semiconductor devices, and associated systems and methods, having spacer structures (e.g., stops) or other TCB fixtures for limiting compression of solder or other bonding materials during thermocompression bonding (TCB) operations or other die stacking operations. The structures and methods disclosed herein are also applicable to compression bonding methods other than TCB. Those skilled in the art will recognize that suitable stages of the methods described herein may be performed at the wafer level or at the die level. Thus, depending on the context in which it is used, the term "substrate" may refer to a wafer-level substrate or a singulated die-level substrate. Furthermore, unless the context dictates otherwise, conventional semiconductor fabrication techniques may be used to form the structures disclosed herein. For example, materials may be deposited using chemical vapor deposition, physical vapor deposition, atomic layer deposition, spin coating, and/or other suitable techniques. Similarly, material may be removed using, for example, plasma etching, wet etching, chemical-mechanical planarization, or other suitable techniques. Those skilled in the relevant art will also understand that the technology may have additional embodiments and that the technology may be practiced without several of the details of the embodiments described below with reference to FIGS. 1-18.

In some of the embodiments described below, a semiconductor manufacturing system includes a first pressing platform having a first pressing surface, a second pressing platform having a second pressing surface facing the first pressing surface, and a stop (e.g., a TCB fixture) positioned between the first pressing platform and the second pressing platform. The stop may contain at least one lumen. As shown in various embodiments herein, a semiconductor substrate can be positioned between the first pressing platform and the second pressing platform, and a first stack of semiconductor dies can be connected to the semiconductor substrate.
The first stack of semiconductor dies may be positioned at least partially within the interior cavity of the stop, between the semiconductor substrate and the first pressing platform.

In some embodiments, the stop is configured to limit movement of the first and second pressing surfaces toward each other, thereby controlling compression of the first stack of semiconductor dies to a desired thickness. Limiting compression of the stack of semiconductor dies to a desired thickness reduces the likelihood of extrusion of excess solder or other bonding material from the solder joints. Limiting compression can also reduce squeeze-out of non-conductive film (NCF) or other materials from between individual dies in the stack. These and other advantages of using stops can increase the yield of the TCB process while increasing the speed and reliability of manufacturing hundreds or even thousands of die packages on a single platform.

As used herein, the terms "vertical," "lateral," "upper," and "lower" may refer to the relative orientation or position of features in a semiconductor device with respect to the orientation illustrated in the figures. For example, "upper" or "topmost" may refer to a feature positioned closer to the top of the page than another feature. However, these terms should be construed broadly to include semiconductor devices having other orientations, such as inverted or tilted orientations, where top/bottom, over/under, above/below, up/down, and left/right are interchangeable depending on the orientation. The headings provided herein are for convenience only and should not be construed as limiting the disclosed subject matter.

Figure 1 illustrates an embodiment of a semiconductor manufacturing system 100. As illustrated, manufacturing system 100 may include an upper (e.g., first) pressing platform 102 and a lower (e.g., second) pressing platform 104. The upper pressing platform 102 may include a first pressing surface 106 facing the second pressing platform 104. Similarly, the lower pressing platform 104 may include a second pressing surface 108 facing the upper pressing platform 102. In some embodiments, at least one of the first pressing platform 102 and the second pressing platform 104 is positioned within a cavity of the bonding apparatus. In some embodiments, the first pressing platform 102 and the second pressing platform 104 have substantially equal cross-sectional areas as measured parallel to the first pressing surface 106.

The system 100 may include a stop 110 between the two pressing platforms 102, 104. The stop 110 may include a first side 112 and a second side 114 opposite the first side 112. In the illustrated embodiment, the first side 112 of the stop 110 contacts the second pressing platform 104 (e.g., the second pressing surface 108 of the second pressing platform 104). The reverse arrangement may also be used, where the second side 114 contacts the second pressing platform 104.

At least one of the first side 112 and the second side 114 of the stop 110 may be planar. In some embodiments, at least one of the first side 112 and the second side 114 includes one or more notches, holes, undulations, ribs, slits, protrusions, and/or other surface features. Preferably, the first side 112 and the second side 114 are sized and shaped such that, when a planar rigid structure is disposed on the stop 110 and the stop 110 is disposed on a horizontal surface, the planar rigid structure will rest
in a horizontal plane.

When the stop 110 is located on the second pressing surface 108, the stop 110 may have a height (e.g., a maximum height) H1 as measured from, and perpendicular to, the second pressing surface 108. As illustrated, stop 110 includes at least one lumen 116. The lumen 116 may extend through the height H1 of the stop 110. In some embodiments, as described below, stop 110 includes a plurality of lumens 116.

In some embodiments, stop 110 may be constructed from rigid, semi-rigid, and/or elastic materials. Preferably, the stop 110 is constructed from materials commonly used in TCB operations that are configured to withstand high temperature gradients. One such material is silicon. For example, a silicon wafer can be cut to a desired height and width with one or more desired cavities penetrating the wafer. Other materials, including metals, ceramics, polymers, semiconductors, and/or other materials or combinations of materials, may be used to construct the stop 110.

As illustrated in FIG. 2, semiconductor assembly 120 may be positioned within cavity 116 of stop 110. Semiconductor assembly 120 includes substrate 122. Preferably, when the substrate 122 is positioned within the cavity 116, the cavity 116 is sized and shaped such that movement of the substrate 122 in a direction perpendicular to the height H1 of the stop 110 is inhibited or prevented. For example, cavity 116 may have substantially the same cross-sectional area as substrate 122 when measured in a plane perpendicular to the height H1 of the stop 110. In some embodiments, cavity 116 has substantially the same cross-section as a plurality of substrates 122 to be positioned simultaneously within the cavity 116. Inhibiting or preventing the substrate 122 from moving laterally (e.g., perpendicular to the height H1 of the stop 110) may increase the reliability of the manufacturing process and reduce the likelihood of manufacturing errors due to misalignment or movement of the semiconductor assembly 120.

Substrate 122 may have a first surface 124 and a second surface 126 opposite the first surface 124. Preferably, at least one of the first surface 124 and the second surface 126 of the substrate 122 is planar. In the illustrated embodiment, all or portions of each of the first surface 124 and the second surface 126 of the substrate 122 are parallel to each other.

One or more die stacks may be positioned on the upper surface of the substrate 122. In the illustrated example, first die stack 130a, second die stack 130b, and third die stack 130c (collectively, "die stacks 130") are each positioned on the first surface 124 of the substrate 122. Each die stack 130 may include dies 132 stacked on top of each other and attached together by adhesive 133 (FIG. 4). Adhesive 133 is located between individual dies 132 and between the lowest die 132 and the substrate 122. For example, adhesive 133 may be an uncured or partially cured underfill material. When substrate 122 is positioned on the second pressing surface 108, the die stacks 130 may have an initial (e.g., pre-press) height H2 as measured from, and perpendicular to, the second pressing surface 108. In some embodiments, the initial height H2 of each individual die stack 130 may vary. The initial height H2 of a die stack 130 is generally substantially greater than the height H1 of the stop 110. However, in some embodiments, some of the die stacks 130 may have a height H2 that is equal to or less than the height H1; a simple sketch of this stop-limited geometry follows below.
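The following minimal Python sketch, not taken from the patent itself, models the geometric effect just described: the platen descends until it contacts the stop, so each stack's pressed height is bounded by the stop height. All numeric values are illustrative assumptions.

```python
def pressed_height(unpressed_um: float, stop_um: float) -> float:
    """Height of a die stack after the platen bottoms out on the stop.

    A stack taller than the stop is compressed down to the stop height;
    a stack already at or below the stop height (H2 <= H1) is never
    touched and keeps its original height.
    """
    return min(unpressed_um, stop_um)

H1 = 375.0                        # stop height, illustrative value
for H2 in (390.0, 410.0, 370.0):  # unpressed stack heights, illustrative
    print(f"H2={H2:.0f} um -> H3={pressed_height(H2, H1):.0f} um")
```

Note the key property the stop provides: stacks of varying initial heights all end at the same pressed height H3 = H1, rather than each being driven a fixed displacement.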
Figure 3 illustrates the manufacturing system 100 and the semiconductor assembly 120 after a TCB operation. As illustrated, the first pressing platform 102 moves toward the second pressing platform 104 until the first pressing surface 106 contacts the stop 110. Moving the first pressing platform 102 into contact with the stop 110 compresses the die stacks 130 to a desired (e.g., compressed) height H3. As illustrated, the compressed height H3 of the die stacks 130 is substantially equal to the height of the stop 110.

The stop 110 is expected to provide controlled compression of the die stacks 130, where the level of compression is limited so as to reduce or eliminate over-compression of the die stacks 130, as reflected by the controlled compressed height H3. Avoiding excessive compression of the die stacks 130 may improve the overall semiconductor manufacturing process because some known manufacturing defects may be reduced or eliminated. Additionally, the stop is expected to enable faster compression times and thereby increase throughput in manufacturing packaged semiconductor devices.

Figure 4 illustrates manufacturing defects that may occur in the absence of a stop. More specifically, FIG. 4 illustrates a portion of a die stack 130 that has been compressed via a TCB operation without a stop. The illustrated die stack 130 includes a first semiconductor die 132a, a second semiconductor die 132b adjacent (e.g., stacked on) the first semiconductor die 132a, and a third semiconductor die 132c adjacent (e.g., stacked on) the second semiconductor die 132b. The semiconductor dies (collectively, semiconductor dies 132) each include a first (e.g., upper) surface 134 and a second (e.g., lower) surface 136 opposite the first surface 134. Die stack 130 also includes an array of individual interconnects 140 extending vertically between the first surface 134 of the first semiconductor die 132a and the second surface 136 of the second semiconductor die 132b. One or more of the individual interconnects may include a first conductive feature (e.g., conductive pad 142) on a first end and a second conductive feature (e.g., conductive post 144) on a second end. In the illustrated embodiment, the interconnects 140 each include a conductive pad 142 on the first surface 134 of the first semiconductor die 132a, a conductive post 144 on the second surface 136 of the second semiconductor die 132b, a through-silicon via (TSV) 145 extending through the semiconductor material of the die 132 between the conductive pad 142 and the conductive post 144, and a bonding material 146 (e.g., solder, tin-silver, or another bonding material) joining the conductive post 144 to the conductive pad 142. In some embodiments, die stack 130 may include a smaller or larger number of interconnects 140 than shown in FIG. 4. For example, die stack 130 may include tens, hundreds, thousands, or more interconnects 140 arranged between the semiconductor dies 132.

In some embodiments, an interconnect 140 has an overall height or thickness of between approximately 20 and 35 μm and/or between 5 and 50 μm. In certain embodiments, conductive post 144 has a thickness of between about 4 and 45 μm and/or between about 10 and 30 μm (e.g., about 18 μm), and conductive pad 142 has a thickness of between about 1 and 5 μm (e.g., about 4 μm).

In the configuration illustrated in FIG. 4, the individual semiconductor dies 132a, 132b, 132c are spaced apart from each other by gaps G1, G2 that are substantially filled with adhesive 133. In some embodiments, the gaps G1, G2 between the semiconductor dies 132 are not uniform.
For example, the gap G2 between the first semiconductor die 132a and the second semiconductor die 132b may be larger or smaller than the gap G1 between the second semiconductor die 132b and the third semiconductor die 132c. The adhesive 133, which may be a non-conductive film (NCF) 150, may be distributed in the gaps G1 and G2. In some embodiments, NCF 150 is used to pre-attach one or more of the semiconductor dies 132 to the substrate 122 and to adhere the remaining semiconductor dies 132 to each other.

As illustrated in FIG. 4, without the stop 110, TCB processing can cause excessive compression of the bonding material 146 of the interconnects 140. Such excessive compression may result in excess bonding material 146 being squeezed out from between the conductive posts 144 and the corresponding conductive pads 142 (referred to as "squeeze-out"). Extruded material 146 from one interconnect may spread into contact with extruded material 146 from another, adjacent interconnect. Such contact may result in undesired electrical connections within the die stack 130, such as short circuits. However, the extruded materials need not be in physical contact with each other to impair performance because, in some embodiments, simply being too close to each other may create interference that impairs the electrical operation of the device. In some applications, excessive compression of the die stack 130 may also result in undesirable squeeze-out of NCF 150 from between the individual dies 132 and/or from between the die stack 130 and the substrate 122. Squeeze-out of NCF 150 may increase the footprint of the die stacks 130 on the substrate 122 and reduce the number of die stacks 130 that can be fabricated on a given substrate 122.

Figure 5 illustrates a semiconductor die stack 130 formed by a TCB operation using a stop in accordance with the present technology. As illustrated, the bonding material 146 is not overly compressed when the stop is used during the TCB operation. Rather, a sufficient portion of the bonding material 146 remains between the corresponding conductive post 144 and conductive pad 142, with little or no extrusion. The height of the bonding material 146, as measured parallel to the gaps G3, G4, may be maintained at a minimum value of at least 1 μm, at least 2 μm, at least 3 μm, at least 4 μm, at least 6 μm, and/or at least 8 μm (or 1 μm to 8 μm, or 2 μm to 6 μm, or any value between 3 μm and 5 μm). The gap G3 between the first semiconductor die 132a and the second semiconductor die 132b of FIG. 5 is larger than either of the gaps G1 and G2 between the semiconductor dies in FIG. 4. In some embodiments, the gap G4 between the second semiconductor die 132b and the third semiconductor die 132c of FIG. 5 is also larger than either of the gaps G1 and G2 in FIG. 4. In some embodiments, the height H1 of the stop 110 (FIG. 2) is selected to maintain a desired average gap between the semiconductor dies 132 in a given semiconductor die stack 130. For example, the height H1 of the stop 110 may be approximately equal to the sum of: (a) the height of the substrate 122; (b) the cumulative height of the individual semiconductor dies 132; and (c) the desired gap size times the number of gaps (e.g., one less than the number of semiconductor dies). In some applications, the natural compression resistance of the bonding material 146 and/or the NCF 150 may help maintain relative uniformity of the gaps between the semiconductor dies 132 when the stop 110 is used.
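The stop-height rule in items (a) through (c) above reduces to simple arithmetic. The Python sketch below applies it with illustrative values: the 18 μm post, 4 μm pad, 20-35 μm interconnect range, and the minimum bond-line figures come from the text, while the substrate and die thicknesses are assumptions chosen only for the example.

```python
def stop_height_um(substrate_um: float, die_heights_um: list[float],
                   desired_gap_um: float) -> float:
    """H1 ~= substrate height + sum of die heights + gap * (dies - 1)."""
    n_gaps = len(die_heights_um) - 1
    return substrate_um + sum(die_heights_um) + desired_gap_um * n_gaps

# Illustrative numbers: 100 um substrate, four 50 um dies, and a
# 25 um target gap (within the 20-35 um interconnect range above).
h1 = stop_height_um(100.0, [50.0] * 4, 25.0)
print(f"stop height H1 ~ {h1:.0f} um")    # 100 + 200 + 75 = 375 um

# The same gap budget bounds the solder bond line: with an 18 um post
# and a 4 um pad, a 25 um gap leaves about 3 um of solder, which meets
# the minimum bond-line values quoted above.
gap, post, pad = 25.0, 18.0, 4.0
print(f"remaining solder ~ {gap - post - pad:.0f} um")
```

The second calculation makes the design trade-off explicit: choosing the per-gap budget in step (c) simultaneously fixes how much solder survives between each post and pad pair.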
Maintaining the desired gap width between the semiconductor dies 132, and the desired thickness of the bonding material 146 between the interconnects of the semiconductor dies 132, reduces manufacturing defects associated with extrusion of the bonding material and with misalignment or tilt between the semiconductor dies 132. The absence of excessive compression may also cause less misalignment (e.g., tilt) between individual dies 132 in a die stack 130.

Figures 6-10 illustrate various embodiments of semiconductor die assemblies and semiconductor fabrication assemblies. As illustrated, various semiconductor die assembly shapes and sizes may be used with various stops. For example, as illustrated in Figure 6, stop 210 may include two or more cavities 216a, 216b. The cavities 216a, 216b may be the same size and/or shape. In some embodiments, the first cavity 216a has a larger or smaller cross-sectional area than the second cavity 216b (e.g., the area shown in the plan view of Figure 6). The cavities 216a, 216b may be sized to receive a first substrate 222a and a second substrate 222b, respectively. The substrates 222a, 222b may each be sized and shaped to have a cross-section substantially equal to the cross-section of the respective cavity 216a, 216b.

In some embodiments, as illustrated, two or more substrates may be positioned within a single cavity. For example, the second substrate 222b may actually be two separate substrates (e.g., yielding a third substrate 222c, identified by the dashed line). The combined cross-sectional shape of the two or more substrates positioned in the second cavity 216b may be substantially identical to the cross-sectional shape of the cavity 216b. In some embodiments, as explained above, the cross-sectional shape (or combined cross-sectional shape) of the substrate 222 may be selected to inhibit or prevent rotation of the substrate 222 within the cavity 216, whether the corresponding cross-sectional shapes of the substrate and the cavity differ from each other or are substantially the same. The overall outer shape of a stop 210 may be circular (Figs. 6-9), polygonal (Fig. 10), oval, and/or some combination thereof.

Each of the substrates 222 may be configured to accommodate one or more semiconductor die stacks 230. The stacks 230 may be arranged in rows and/or columns on each of the substrates 222. In some embodiments, each of the substrates 222 may be configured to accommodate the same number of die stacks 230. In some embodiments, at least one of the substrates 222 is configured to accommodate a different number of stacks 230 than one or more of the other substrates 222.

FIG. 7 illustrates an embodiment of a stop 310 having more cavities than the stop 210 of FIG. 6 (e.g., four cavities 316a, 316b, 316c, 316d). The cavities 316 of the stop 310 may be smaller than the cavities 216 of the stop 210. In some embodiments, the cavities 316 of the stop 310 are configured to accommodate smaller substrates 322a, 322b, 322c, 322d than the substrates 222 used with the stop 210. In some embodiments, the cumulative cross-sectional area of the cavities 316 of the stop 310 is approximately equal to the cumulative cross-sectional area of the cavities 216 of the stop 210. In some embodiments, the cumulative cross-sectional area of the cavities 316 of the stop 310 is greater or smaller than the cumulative cross-sectional area of the cavities 216 of the stop 210. In some embodiments, the overall number of semiconductor die stacks 330 configured to be positioned on the substrates 322 in FIG.
7 is equal to the overall number of semiconductor die stacks 230 configured to be positioned on the substrates 222 in FIG. 6. In some embodiments, the overall number of semiconductor die stacks 330 configured to be positioned on the substrates 322 in FIG. 7 is greater or smaller than the overall number of semiconductor die stacks 230 configured to be positioned on the substrates 222 in FIG. 6.

FIG. 8 illustrates an embodiment of a stop 410 having a single cavity 416. The single cavity 416 may be larger than the cavities 216, 316 described above. In some embodiments, the cross-sectional area of the cavity 416 is approximately the same as the cumulative cross-sectional area of the cavities 316 of the stop 310. In some embodiments, the cross-sectional area of the cavity 416 is greater or smaller than the cumulative cross-sectional area of the cavities 316 of the stop 310. The single cavity 416 of the stop 410 may be configured to receive a single larger substrate 422 with at least one semiconductor die stack 430 thereon. In some embodiments, multiple substrates are positioned within the cavity 416.

Figure 9 illustrates an embodiment of a stop 510 that includes cavities that vary in size and shape. For example, the first cavity 516a of the stop 510 may have a lower aspect ratio than the second cavity 516b, as viewed from above. In some embodiments, one or both of the first cavity 516a and the second cavity 516b are smaller than the third cavity 516c.

Figure 10 illustrates an embodiment of a stop 610 having a non-circular outer perimeter (e.g., overall shape). In the illustrated example, stop 610 has a rectangular outer perimeter. Other outer perimeter shapes may be used, including polygons, ovals, circles, or some combination thereof. As illustrated, the cavities 616a, 616b, 616c of the stop 610 may be similar or identical to the cavities 516 of the stop 510. In some embodiments, the cavities 616 of the stop 610 are of a different size and/or shape than the cavities 516 of the stop 510.

Figures 11-13 illustrate an embodiment of a semiconductor manufacturing system 700 similar to the manufacturing system 100 described above. Unless otherwise described, similar reference numbers (e.g., numbers sharing the same last two digits) correspond to structures that are structurally and/or functionally the same or similar between systems 100 and 700 (e.g., upper pressing platform 702 and upper pressing platform 102).

As illustrated, the stop 710 of the semiconductor manufacturing system 700 is located on the substrate 722. More specifically, the stop 710 is located on the first surface 724 of the substrate 722, opposite the second pressing platform 704. Stop 710 may be constructed using the same or similar materials, and in the same or similar manner, as the stop 110 described above. The height of the stop 710 may be selected such that the cumulative height H4 of the stop 710 and the substrate 722 limits the range through which the first pressing platform 702 can move toward the second pressing platform 704, thereby limiting the amount of compression applied to the die stacks 730 during a TCB operation. For example, stop 710 limits the compressed height H6 (FIG. 12) of the die stacks 730 and the substrate 722 to reduce or eliminate the extrusion issues described above.
As illustrated, the uncompressed height H5 of the die stacks 730 is greater than the cumulative height H4 of the stop 710 and the substrate 722, while the compressed height H6 of the die stacks is equal to the height H4.

By positioning the stop 710 on the substrate 722, existing manufacturing systems 700 can easily be retrofitted to use the stop system. For example, because stop 710 is located on substrate 722, pressing platforms 702, 704 whose footprint is limited to receiving the substrate 722 can still be used without changing the size of the pressing platforms 702, 704.

In some embodiments, stop 710 is configured for use with a wafer substrate 722 in the manner illustrated in FIGS. 11-13. Wafer substrate 722 may be constructed of silicon or other materials. Stop 710 may be positioned on the wafer substrate 722 prior to initiating TCB operations and prior to other processing of the wafer. For example, TCB operations using stop 710 may be performed prior to wafer singulation or other wafer processing.

In some embodiments, use of one or more stops 710 on one or more substrates 722 may facilitate control of the compression (e.g., the movement of the pressing platforms 702, 704 toward each other) of die stacks 730 positioned both inside the cavity 716 of the stop 710 and outside the cavity 716. More specifically, the compression control provided by the stop 710 may limit the overall movement of the pressing platforms 702, 704 toward each other, thereby limiting compression of die stacks both within the cavity 716 of the stop 710 and outside the cavity 716 of the stop. In some embodiments, stop 710 may be used to simultaneously limit compression of at least ten die stacks, at least twenty die stacks, at least forty die stacks, at least seventy-five die stacks, and/or at least one hundred fifty die stacks in a single TCB or other compression operation. FIG. 13 provides a top view of a semiconductor manufacturing system utilizing a stop 710 positioned on a substrate 722.

Figures 14-15 illustrate an embodiment of a semiconductor manufacturing system 800 similar to the manufacturing system 100 described above. Unless otherwise described, similar reference numbers (e.g., numbers sharing the same last two digits) correspond to structures that are structurally and/or functionally the same or similar between systems 100 and 800 (e.g., upper pressing platform 102 and upper pressing platform 802). As illustrated, the cavity 816 of the stop 810 may be larger than the substrate 822, such that a gap 870 is provided between the outer edge of the substrate 822 (e.g., as measured parallel to the first surface 824 of the substrate 822) and the inner edge of the cavity 816 (e.g., as measured parallel to the first surface 812 of the stop 810). In some embodiments, stop 810 is used in combination with a substrate 822 (e.g., a wafer) whose outer perimeter is smaller than the inner perimeter of the stop 810. For example, stop 810 may be used with wafers that have not yet been cut to a precise shape or size.

Similar to the above-described embodiments, the die stacks 830 have an initial height H8 that is greater than the height H7 of the stop 810 and a compressed height H9 that is approximately equal to the height H7 of the stop 810.

In some embodiments, as illustrated in Figures 16-17, the stop can be integrally formed with the first and/or second pressing platform. In some such embodiments, stops (e.g., edges, ridges, or other structures) are formed on both the first and second pressing platforms.
In the illustrated embodiment, cavity 916 is formed in second (eg, lower) compression platform 904 . Cavity 916 may have a similar size and shape to the cavities in the stop described above. Cavity 916 may include a floor or surface 960 upon which substrate 922 is configured to rest. Cavity 916 may be surrounded by walls (eg, stops) 910 . In some embodiments, wall 910 is annular and/or continuous around the perimeter of cavity 916 .As illustrated, the original or uncompressed height H11 of the die stack 930 as measured from and perpendicular to the base plate 960 is greater than the height H10 of the wall 910 as measured from the base plate 960 . The compression height H12 (eg, the height at the end of the TCB operation) is equal to or approximately equal to the height H10 of the wall 910 .Any of the semiconductor devices having the features described above (eg, with reference to Figures 1-17) may be incorporated into any of a number of larger and/or more complex systems, representative examples of which is system 1000 shown schematically in Figure 18. System 1000 may include a processor 1002, memory 1004 (eg, SRAM, DRAM, flash memory, and/or other memory devices), input/output devices 1005, and/or other subsystems or components 1008. The semiconductor dies and semiconductor die assemblies described above may be included in any of the elements shown in FIG. 18 . The resulting system 1000 may be configured to perform any of a wide variety of suitable computing, processing, storage, sensing, imaging, and/or other functions. Accordingly, representative examples of system 1000 include, but are not limited to: computers and/or other data processors, such as desktop computers, laptop computers, networked appliances, handheld devices (e.g., palmtop computers, wearable computers, cellular or mobile telephones, personal digital assistants, music players, etc.), tablet computers, multi-processor systems, processor-based or programmable consumer electronic devices, network computers and microcomputers. Additional representative examples of system 1000 include lights, cameras, vehicles, and the like. With regard to these and other examples, system 1000 may be housed in a single unit or distributed over multiple interconnected units, such as via a communications network. Accordingly, components of system 1000 may include local and/or remote memory storage and any of a wide variety of suitable computer-readable media.The above detailed description of embodiments of the technology is not intended to be exhaustive or to limit the technology to the precise forms disclosed. As those skilled in the relevant art will recognize, while specific embodiments and examples of the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology. For example, although the steps are presented in a given order, alternative embodiments may perform the steps in a different order. Additionally, the various embodiments described herein may also be combined to provide additional embodiments. Reference herein to "one embodiment," "an embodiment," or similar expressions means that a particular feature, structure, operation, or characteristic described in connection with the embodiment may be included in at least one embodiment of the technology. 
Therefore, the appearances of such phrases or expressions herein are not necessarily all referring to the same embodiment.

Certain aspects of the technology may take the form of computer-executable instructions comprising routines executed by a controller or other data processor. In some embodiments, a controller or other data processor is specifically programmed, configured, and/or constructed to execute one or more of these computer-executable instructions. Furthermore, some aspects of the present technology may take the form of data (e.g., non-transitory data) stored or distributed on computer-readable media, including magnetically or optically readable and/or removable computer discs as well as media distributed electronically over networks, including the Internet. Accordingly, data structures and transmissions of data specific to aspects of the technology are included within the scope of the technology. The technology also encompasses both methods of programming computer-readable media to perform specific steps and methods of performing those steps.

Furthermore, unless the word "or" is expressly limited to mean only a single item exclusive of the other items in reference to a list of two or more items, the use of "or" in such a list may be understood to include (a) any single item in the list, (b) all of the items in the list, or (c) any combination of items in the list. Where the context permits, singular or plural terms may also include the plural or singular terms, respectively. Additionally, the term "comprising" is used throughout to mean including at least the recited feature(s), such that any greater number of the same features and/or additional types of other features are not precluded. Directional terms such as "upper," "lower," "front," "back," "vertical," and "horizontal" may be used herein to express and clarify the relationship between various elements. It should be understood that such terms do not denote absolute orientation. Additionally, while advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the present disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
Embodiments include semiconductor packages and a method of forming the semiconductor packages. A semiconductor package includes a resist layer disposed on a conductive layer. The semiconductor package also has a bump disposed on the conductive layer. The bump has a top surface and one or more sidewalls. The semiconductor package further includes a surface finish disposed on the top surface and the one or more sidewalls of the bump. The semiconductor package may have the surface finish surround the top surface and sidewalls of the bump to protect the bump from galvanic corrosion. The surface finish may include a nickel-palladium-gold (NiPdAu) surface finish. The semiconductor package may also have a seed disposed on a top surface of the resist layer, and a dielectric disposed on the seed. The dielectric may surround the sidewalls of the bump. In the semiconductor package, the seed may be an electroless copper seed.
1. A semiconductor package comprising:
a resist layer on a conductive layer;
a bump on the conductive layer, wherein the bump has a top surface and one or more sidewalls; and
a surface finish on the top surface and the one or more sidewalls of the bump.
2. The semiconductor package of claim 1, wherein the surface finish surrounds the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.
3. The semiconductor package according to any one of claims 1 to 2, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
4. The semiconductor package according to any one of claims 1 to 3, further comprising:
a seed on a top surface of the resist layer; and
a dielectric on the seed, wherein the dielectric surrounds the one or more sidewalls of the bump.
5. The semiconductor package of any of claims 1 to 4, wherein the seed is an electroless copper seed.
6. The semiconductor package according to any one of claims 1 to 5, further comprising a gap opening formed between the dielectric and the one or more sidewalls of the bump, wherein the one or more sidewalls of the bump are exposed through the gap opening.
7. The semiconductor package of any of claims 1-6, wherein the bump is a copper bump.
8. The semiconductor package of any of claims 1-7, wherein the dielectric comprises a polymeric material.
9. A method of forming a semiconductor package, comprising:
disposing a resist layer on a conductive layer;
disposing a bump on the conductive layer, wherein the bump has a top surface and one or more sidewalls; and
disposing a surface finish on the top surface and the one or more sidewalls of the bump.
10. The method of claim 9, wherein the surface finish surrounds the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.
11. The method according to any one of claims 9 to 10, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
12.
A method according to any one of claims 9-11, further comprising:
disposing a seed on the top surface of the resist layer; and
disposing a dielectric on the seed, wherein the dielectric surrounds the one or more sidewalls of the bump.
13. A method according to any one of claims 9 to 12, wherein the seed is an electroless copper seed.
14. A method according to any one of claims 9-13, further comprising forming a gap opening between the dielectric and the one or more sidewalls of the bump, wherein the one or more sidewalls of the bump are exposed through the gap opening.
15. The method of any of claims 9-14, wherein the bump is a copper bump.
16. The method of any of claims 9-15, wherein the dielectric comprises a polymeric material.
17. A semiconductor package comprising:
an interposer on a substrate;
a die on the interposer; and
a surface finish on a plurality of bumps, wherein the plurality of bumps electrically couple the die to the interposer and electrically couple the interposer to the substrate.
18. The semiconductor package of claim 17, wherein each bump has at least a top surface and one or more sidewalls, wherein the surface finish is disposed on at least one of the top surface and the one or more sidewalls of each bump, and wherein the surface finish surrounds the top surface and the one or more sidewalls of each bump to protect the bump from corrosion.
19. The semiconductor package according to any one of claims 17 to 18, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
20. The semiconductor package of any of claims 17-19, wherein the plurality of bumps comprise a metal material, and wherein the metal material comprises copper.
21. The semiconductor package of any of claims 17-20, further comprising one or more underfill layers surrounding the plurality of bumps.
22. The semiconductor package of any of claims 17-21, wherein the substrate comprises a package and a printed circuit board.
23. The semiconductor package of any of claims 17-22, wherein the die comprises an integrated circuit, a central processing unit, a microprocessor, a platform controller hub, a memory, and a field programmable gate array.
24. The semiconductor package of any of claims 17-23, further comprising a plurality of bumps electrically coupling the substrate to a second substrate.
25. The semiconductor package according to any one of claims 17 to 24, wherein the second substrate comprises a motherboard.
Copper bump sidewall protection against galvanic corrosion and missing bumps

Technical field

Embodiments relate to packaging semiconductor devices. More particularly, embodiments relate to packaged semiconductor devices having bumps (e.g., copper (Cu) bumps/pads) surrounded by a surface finish around each sidewall of the bump.

Background

There are several problems with the packaging of semiconductor devices such as integrated circuits (ICs). One of the major issues involved in packaging ICs is the failure modes at various interconnect levels (e.g., in a first level interconnect (FLI)). These failure modes are typically associated with missing bumps and over-etched bumps at the interconnect level due to galvanic corrosion.

Galvanic corrosion is an electrochemical process in which a metal erodes in the presence of an electrolyte (especially when the metal is in electrical contact with another metal). This corrosion affects, for example, the formation of ICs, such as the fabrication of bumps interconnecting microelectronic devices. The IC is typically formed on a semiconductor wafer made of a material such as silicon. The semiconductor wafer is then processed to form various electronic devices. The wafers are typically diced into semiconductor chips (or dies) that can then be attached to a substrate. The substrate is typically designed to couple the die directly or indirectly to a printed circuit board (PCB), socket, or other connector. The substrate can also perform one or more other functions (such as protecting, isolating, insulating, and/or thermally controlling the die).

A substrate (e.g., an interposer) has traditionally been formed from a core composed of a laminated multilayer structure. Typically, bumps (or microbumps) and other such interconnect structures are formed in or on the structure in various ways to facilitate electrical coupling of the die to one or more other devices (such as in an FLI). As successive generations of fabrication techniques continue to scale, the metallurgical properties of the various materials have a significant impact on the formation and operation of interconnect structures.

There is a growing need for improvements in the fabrication of structures interconnecting microelectronic devices. These improvements are necessary in FLI structures because missing bumps due to galvanic corrosion are a common factor contributing to FLI failure modes. Missing bumps in a Cu FLI usually occur due to the potential difference between Cu and gold (Au) during electroless Cu removal, because the Cu bumps are connected to Au-terminated die-side capacitor (DSC) pads or Au-terminated land-side capacitor (LSC) pads (e.g., the yield loss of the device can increase to approximately 100% depending on the FLI design). Some packaging solutions that reduce this yield loss associated with galvanic corrosion may include optimizing electroless Cu removal conditions or developing new chemistries. However, these packaging solutions require increased assembly costs, uncertainty, and time.
As such, there is a growing need to mitigate galvanic corrosion in interconnect structures in ICs without increasing assembly costs, uncertainty, and time.

DRAWINGS

The embodiments described herein are illustrated by way of example and not limitation in the figures of the accompanying drawings. In addition, some conventional details have been omitted so as not to obscure the inventive concepts described herein.

FIGS. 1A-1D are cross-sectional views of a process flow for forming a semiconductor package having bumps surrounded by a surface finish on and around each sidewall of the bumps, in accordance with some embodiments.

FIG. 2 is a plan view of a semiconductor package having a dielectric layer with a plurality of bumps surrounded by a plurality of gap openings formed between each bump and the dielectric layer, in accordance with an embodiment.

FIG. 3 is a cross-sectional view of a semiconductor package having a plurality of bumps surrounded by a surface finish on and around the sidewalls of the bumps, in accordance with one embodiment.

FIG. 4 is a process flow diagram illustrating a method of forming a semiconductor package including bumps surrounded by a surface finish on and around each sidewall of each bump, in accordance with one embodiment.

FIG. 5 is a schematic block diagram illustrating a computer system utilizing a semiconductor package having bumps surrounded by a surface finish on and around each sidewall of each bump, in accordance with one embodiment.

Detailed description

A system including a semiconductor package with sidewall protection for one or more bumps (or copper (Cu) bumps) and a method of forming such a semiconductor package are described herein. According to some embodiments, a semiconductor package and a method of forming such a semiconductor package as described below include a resist layer on a conductive layer, a bump disposed on the conductive layer (wherein the bump has a top surface and one or more sidewalls), and a surface finish on the top surface and the one or more sidewalls of the bump. For one embodiment, the surface finish may include a nickel-palladium-gold (NiPdAu) surface (or plating) finish surrounding the top surface and sidewalls of the bump, which allows galvanic corrosion protection of the entire surface of the bump (rather than only a surface finish disposed on the top surface of the bump).

In some semiconductor package (e.g., package-on-package (PoP)) substrates, more than one interconnect region (such as PoP interconnects and controlled collapse chip connection (C4) interconnects) is present on the same substrate. A surface finish that is deposited (or disposed) only on the top surface of the interconnect (or bump) does not reduce interconnect failures (e.g., missing bumps). To reduce failures and improve yield of the semiconductor package, the embodiments described herein apply a surface finish (e.g., an electroless NiPdAu surface finish) to one or more interconnect regions by disposing the surface finish on the top surface and sidewalls of each interconnect/bump. According to some embodiments, the surface finish can be used to provide good solder joint reliability (SJR) for one or more interconnects and ultimately mitigate missing bumps due to galvanic corrosion. A new substrate fabrication process (e.g., as shown in FIGS. 1A-1D) can be used to etch the sidewalls of the bumps (i.e., forming a gap between the dielectric and the bumps) and then dispose the surface finish on and around the top surface and sidewalls of the bumps.
Accordingly, the embodiments described herein enable the semiconductor package to have Cu interconnect structures with no exposed Cu on either the top surface or the sidewalls of the interconnect structure.

Embodiments of the semiconductor package enhance packaging solutions by protecting interconnect structures with a NiPdAu surface finish (i.e., with no exposed Cu on the sidewalls of the Cu bumps). For example, NiPdAu surface finishes have proven to be excellent surface finishes for SJR under electrical and thermal aging. Embodiments of the semiconductor package can further reduce yield losses of packages (or devices) associated with one or more missing bumps and/or over-etched bumps. Additionally, embodiments of the semiconductor package help maximize the surface area of the substrate to make room for the die-side capacitor (DSC) and land-side capacitor (LSC) connections on the package. Moreover, embodiments of the semiconductor package as described herein help mitigate galvanic corrosion in the first level interconnect (FLI) to reduce FLI failure modes associated with missing bumps and/or over-etched bumps.

Moreover, the embodiments described herein address the need for improved fabrication of structures that interconnect microelectronic devices by using NiPdAu surface finishes. For example, deposition of a NiPdAu surface finish can achieve reduced yield loss in the Cu FLI structure because missing bumps due to galvanic corrosion are mitigated. Accordingly, the embodiments described herein provide surface finishes (e.g., NiPdAu surface finishes) that prevent (or significantly hinder) galvanic corrosion in interconnect structures on semiconductor packages without increasing assembly costs, uncertainty, and time.

It is noted that for some alternative embodiments, a wide variety of other surface finishes (such as, but not limited to, electrolytic NiPdAu, electrolytic PdAu, electrolytic NiPd, etc.) can also be used. It is also noted that although the NiPdAu process is illustrated herein, a wide variety of other finishes suitable for one or more interconnect structures can be used depending on the application and/or desired package design.

Finally, the embodiments described herein provide techniques and mechanisms for improved protection of interconnect structures (e.g., bumps/pads, microbumps, Cu bumps, etc.). The bumps may include any type of bump including a Cu seed layer, and the bumps may also include a combination of one or more metals (e.g., Cu, Cu-NiPdAu, Cu-OSP, Cu-Sn, etc.). As used herein, "bump" (also referred to herein as "Cu bump," "pad," and/or "microbump") is used in various ways to refer to at least one of a conductive contact (or pad) of a device and a solder joint (or bump/pad) formed on such a conductive contact. Further, "bump" may refer to a solder joint formed by soldering a microbump (wherein such a joint may also be referred to as a "microbump").

The techniques described herein may be implemented in one or more electronic devices. Non-limiting examples of electronic devices that may utilize the techniques described herein include any kind of mobile device and/or stationary device, such as cameras, cellular telephones, computer terminals, desktop computers, e-readers, fax machines, kiosks, netbook computers, laptops, internet appliances, payment terminals, personal digital assistants, media players and/or recorders, servers (e.g., blade servers, rack-mounted servers, combinations of the above, etc.)
, set-top boxes, smart phones, tablet PCs, ultra-mobile PCs, wired phones, combinations of the above, and the like. Such devices may be portable or stationary. In some embodiments, the techniques described herein may be employed in desktop computers, laptops, smart phones, tablets, netbook computers, personal digital assistants, servers, combinations of the above, and the like. More generally, the techniques described herein may be employed in any of a wide variety of electronic devices including a substrate with an interconnect structure that provides connectivity to an integrated circuit.

In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present embodiments may be practiced using only some of the described aspects. Specific quantities, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to those skilled in the art that the present embodiments may be practiced without the specific details. In other instances, well-known features are omitted or simplified so as not to obscure the illustrative implementations.

Various operations are described as multiple discrete operations in a manner that is most helpful in understanding the present embodiments; however, the order of description should not be construed to imply that these operations are necessarily order-dependent. In particular, these operations need not be performed in the order of presentation.

FIGS. 1A-1D are cross-sectional views of a process flow for forming a semiconductor package having bumps surrounded by a surface finish on and around each sidewall of the bumps, in accordance with some embodiments. These embodiments, as shown with respect to FIGS. 1A-1D, provide a fabrication process for sidewall protection of interconnect structures that can be adapted to any desired packaging needs, including, but not limited to, missing bump reduction, over-etched bump reduction, and reduced yield loss and FLI failure modes.

One such embodiment is illustrated and described based on FIGS. 1A-1D, which illustrate cross-sectional views of a package layer 100 (or substrate) for forming Cu bumps surrounded by a surface finish (e.g., NiPdAu). In the illustrated embodiment, deposition of a surface finish on a bump is illustrated; however, it will be appreciated that additional features (such as additional components, bumps, layers, lines, vias, and/or pads) can be formed simultaneously and with the same processing operations, in accordance with embodiments described herein.

Referring now to FIG. 1A, the package layer 100 can include a resist layer 130 disposed over the conductive layer 120 (or an initial top metal layer (e.g., a Cu layer)), wherein the conductive layer 120 can have one or more pads, vias, and/or traces. For one embodiment, the package layer 100 of FIG. 1A illustrates at least one interconnect layer of a semiconductor package (or substrate) having a conductive layer 120 and a resist layer 130.
According to one embodiment, the package layer 100 is shown with a conductive layer 120 having existing pads and vias, but additional pads, vias, and traces may be formed under the initial conductive layer 120 and coupled to the initial conductive layer 120 (omitted for simplicity).

For one embodiment, the resist layer 130 can be a solder resist layer, a solder mask, a solder stop mask, and/or any similar resist layer. The resist layer 130 can be a thin layer of polymer(s) typically applied to the Cu traces (e.g., conductive layer 120) of a package layer (e.g., a printed circuit board (PCB) or substrate) to protect against oxidation and to prevent solder bridges from forming between closely spaced bumps/pads. According to one embodiment, the resist layer 130 may be an epoxy resin, a photoimageable solder mask, a photosensitive ink, or a dry-film photoimageable solder mask.

As shown in FIG. 1A, the seed layer 122 is disposed on the resist layer 130, and the dielectric layer 110 is then disposed on the seed layer 122. For one embodiment, bumps 115 (e.g., Cu bumps) are disposed (or deposited) on conductive layer 120 within openings formed/patterned in dielectric layer 110. According to some embodiments, the seed layer 122 is an electroless Cu seed layer deposited over the top surface of the resist layer 130. For example, electroless plating or physical vapor deposition (i.e., sputtering) techniques can be used to deposit the seed layer 122. By way of illustration and not limitation, electroless plating of pure Cu can be used to form a seed layer 122 having a desired thickness based on the package design/application (e.g., in the range of 0.1 μm to 2.0 μm or any other desired thickness). In another embodiment, a combination of Cu and titanium (Ti) can be sputtered to form seed layer 122. It is noted that in alternative embodiments, the seed layer 122 can have any of a variety of other material compositions and/or thicknesses.

By way of example, the dielectric layer 110 can be a polymeric material such as, for example, dry film resist (DFR), polyimide, epoxy, or build-up film (BF). In one embodiment, the dielectric layer 110 can be one of a stack including a plurality of dielectric layers for forming a stacked structure. As such, the dielectric layer 110 can be formed over another dielectric layer. Additional embodiments may include forming dielectric layer 110 as a first dielectric layer over a core material on which the stack is formed. For one embodiment, the dielectric layer 110 is patterned to provide openings for forming bumps 115 on the conductive layer 120. Thus, the bumps 115 have an exposed top surface 112 (and are thus formed between portions of the dielectric layer 110).

Referring now to FIG. 1B, bumps 115 are etched so that one or more gap openings 113 are provided between dielectric layer 110 and bumps 115, in accordance with some embodiments. For one embodiment, bumps 115 may be etched (or material removed) on both the top surface and the sidewalls using a Cu seed etch process (e.g., a controlled etch process) or the like to expose the one or more sidewalls 114. For some embodiments, the etch process provides a gap opening 113 that may extend vertically from the top surface 112 of the bump 115 to the top surface of the resist layer 130 (i.e., the gap opening 113 extends through the seed layer 122 to the top of the resist layer 130).
It is noted that when the desired bump shape is circular, the bump 115 may have a single sidewall; however, the bump 115 may have two or more sidewalls 114 for different desired shapes (e.g., rectangular pads). Additionally, as shown in the top view of the package layer (similar to package layer 100 of FIG. 1B) in FIG. 2, the package layer can have a dielectric layer (e.g., dielectric layer 110 of FIG. 1B) with a plurality of bumps (e.g., bumps 115 of FIG. 1B), wherein each bump has a gap opening (e.g., gap opening 113 of FIG. 1B) between the sidewall of the bump (e.g., sidewall 114 of FIG. 1B) and the dielectric layer.

Referring now to FIG. 1C, a surface finish 140 is deposited on and around the top surface 112 and the sidewall(s) 114 of the bump 115. For one embodiment, surface finish 140 can include a combination of Ni, Pd, and Au (i.e., a NiPdAu surface finish). The surface finish 140 can be formed by electroless deposition of NiPdAu on the top surface 112 of the bump 115 and in the exposed opening(s) between the dielectric 110 and the sidewall 114 of the bump 115 (i.e., the gap opening 113 as shown in FIG. 1B). It is noted that surface finish 140 is illustrated as having a rectangular shape that surrounds the top surface 112 and sidewalls 114 of the bump 115; however, the surface finish 140 can conform to any desired shape based on the gap opening and bump shape (e.g., the surface finish may have a circular shape or a figure-eight (or tapered) shape).

Referring now to FIG. 1D, the dielectric layer 110 is stripped from the package layer 100 and the subsequently exposed seed layer 122 is removed. According to one embodiment, the seed layer 122 may be removed using a seed etching process that exposes the top surface 130a of the resist layer 130. For an alternate embodiment, bumps 115 with surface finish 140 can be formed prior to forming a second dielectric layer (not shown). For one embodiment, the dielectric removal process can include wet etching, dry etching (e.g., plasma etching), wet blasting, or laser ablation (e.g., by using an excimer laser). According to an additional embodiment, a depth-controlled dielectric removal process can be performed only proximate to bumps 115.

As such, after the removal process illustrated in FIG. 1D, the package layer 100 (or semiconductor package) may include a resist layer 130 disposed on the conductive layer 120. The package layer 100 can also have bumps 115 disposed on the conductive layer 120, wherein the bumps 115 have a top surface 112 and one or more sidewalls 114. The one or more sidewalls 114 may be exposed by forming a gap opening (e.g., the gap opening 113 as shown in FIG. 1B) between the dielectric layer 110 and the bump 115. Additionally, the package layer 100 has a surface finish 140 (e.g., NiPdAu) disposed/deposited on the top surface 112 and the one or more sidewalls 114 of the bump 115.

Thus, the embodiments illustrated in FIGS. 1A-1D enable both the top surface 112 and the sidewall(s) 114 of the Cu bump 115 to be covered (or surrounded) with the NiPdAu surface finish 140. The NiPdAu surface finish 140 thus provides sidewall (and top) protection of the Cu bumps 115, which helps keep the Cu bumps 115 from being exposed and etched during electroless Cu removal (as shown in FIG. 1D). This sidewall protection by surface finish 140 thus allows for the reduction of missing bumps and over-etched bumps associated with galvanic corrosion.
Additionally, these embodiments, as illustrated by the process flow of FIGS. 1A-1D, enable semiconductor packages to have (i) reduced yield loss due to missing bumps (and/or over-etched bumps) and (ii) additional space to route/design DSC/LSC connections.

It is noted that the package layer 100 of FIGS. 1A-1D can include fewer or additional package components based on the desired package design.

FIG. 2 is a plan view of a semiconductor package 200 (or substrate) having a dielectric layer 110 with a plurality of bumps 115 surrounded by gap openings 113, in accordance with one embodiment. It is noted that the semiconductor package 200 of FIG. 2 is similar to the package layer 100 of FIGS. 1A-1D; however, the semiconductor package 200 includes a plurality of bumps 115. As such, the process flow illustrated in FIGS. 1A-1D can be used to form semiconductor package 200.

For one embodiment, a gap opening 113 may be formed between each bump 115 and the dielectric layer 110, wherein the gap opening 113 may be formed using a Cu etching process (as shown in FIG. 1B). According to an embodiment, the semiconductor package 200 may have a dielectric layer 110 formed with a plurality of bumps 115. Dielectric layer 110 may surround each bump 115 and have a gap opening 113 formed between dielectric layer 110 and one or more sidewalls 114 of each bump 115.

It is noted that the semiconductor package 200 can include fewer or additional package components based on the desired package design.

FIG. 3 is a cross-sectional view of a semiconductor package 300 having a plurality of bumps surrounded by a surface finish on and around the sidewalls of the bumps, in accordance with one embodiment. In particular, FIG. 3 illustrates a semiconductor package 300 that includes interconnect structures (e.g., a plurality of bumps disposed under the die 314 and the interposer 312), in accordance with some embodiments. For one embodiment, the semiconductor package 300 can include bumps (or Cu bumps/pads) surrounded by a surface finish (e.g., a NiPdAu surface finish) on the top surface and sidewalls of the bumps (e.g., as shown in FIGS. 1-2).

Semiconductor package 300 is just one example of an embodiment in which integrated circuit die 314 is coupled to a substrate (e.g., an interposer) via one or more bumps/joints formed from respective microbumps. For example, the bumps can be formed in and/or on the substrate 200 (e.g., bumps 115 of FIG. 2) in accordance with the process flow illustrated in FIGS. 1A-1D. As described above, the solder joint formed by soldering the microbumps according to the embodiments may itself be referred to as a "bump" and/or "microbump."

For some embodiments, the semiconductor package 300 can have a die 314 disposed on the interposer 312, wherein both the stacked die 314 and the interposer 312 are disposed on the package substrate 310. According to some embodiments, the package substrate 310 may include, but is not limited to, a package, a substrate, a printed circuit board (PCB), and a motherboard. For one embodiment, the package substrate 310 is a PCB. For one embodiment, the PCB is made of an FR-4 glass epoxy substrate (not shown) with a thin copper foil laminated on both sides. For certain embodiments, a multilayer PCB with prepreg and copper foil (not shown) used to make additional layers can be used. For example, a multilayer PCB can include one or more dielectric layers, wherein each dielectric layer can be a photosensitive dielectric layer (not shown). For some embodiments, holes (not shown) may be drilled in the PCB 310.
For one embodiment, PCB 310 may also include conductive copper traces, metal pads, and holes (not shown).

For one embodiment, die 314 may include, but is not limited to, a semiconductor die, an electronic device (e.g., a wireless device), an integrated circuit, a central processing unit (CPU), a microprocessor, a platform controller hub (PCH), a memory, and a field programmable gate array (FPGA). Die 314 may be formed of a material such as silicon and have circuitry on die 314 that is to be coupled to interposer 312. While some embodiments are not limited in this regard, the package substrate 310 can in turn be coupled to another body (e.g., a computer motherboard (not shown)). One or more connectors between package substrate 310, interposer 312, and die 314 (e.g., including some or all of bumps 316, 318, and 320) may have a surface finish (which may include NiPdAu metallurgy). In some embodiments, these interconnect structures (or connectors) may variously comprise alloys of nickel, palladium, and tin (and, in some embodiments, copper). By way of illustration and not limitation, one or more of the bumps 316, 318, and/or 320 can include a NiPdAu surface finish that covers the top surface and sidewalls of the bump (e.g., as shown in FIG. 1D).

The connector between the package substrate 310 and another body can be fabricated using any suitable structure, such as the illustrative bumps 320 shown. The package substrate 310 can include a wide variety of electronic structures formed thereon or therein. Interposer 312 can also include electronic structures formed thereon or therein that can be used to couple die 314 to package substrate 310. For one embodiment, one or more different materials may be used to form the package substrate and the interposer. In some embodiments, the package substrate 310 is an organic substrate comprised of one or more layers of polymeric matrix material with conductive regions for transmitting signals. In some embodiments, the interposer 312 is comprised of a ceramic matrix material that includes metal regions for transmitting signals. While some embodiments are not limited in this regard, the semiconductor package 300 can include a gap control structure 330 (e.g., positioned between the package substrate 310 and the interposer 312). Such a gap control structure 330 may mitigate changes in the height of the gap between the package substrate 310 and the interposer 312 that might otherwise occur during reflow when the die 314 is attached to the interposer 312. It is noted that the semiconductor package 300 includes an underfill material 328 between the interposer 312 and the die 314 and an underfill material 326 between the package substrate 310 and the interposer 312. The underfill materials (or layers) 326 and 328 can be one or more polymers injected between the layers.

It is noted that the semiconductor package 300 can include fewer or additional package components based on the desired package design.

FIG. 4 is a process flow diagram illustrating a method of forming a semiconductor package including bumps surrounded by a surface finish on and around each sidewall of each bump, in accordance with one embodiment. For one embodiment, process flow 400 includes one or more steps for forming a semiconductor package (e.g., semiconductor package 100 of FIG. 1 and 300 of FIG. 3) as described herein.

At block 405, process flow 400 disposes the resist layer on the conductive layer (e.g., as shown in FIG. 1A). At block 410, process flow 400 disposes the seed layer on the resist layer (e.g., as shown in FIG. 1A).
Additionally, the process flow can also include disposing a dielectric layer on the seed layer. At block 415, process flow 400 disposes the bumps on the conductive layer, wherein the bumps have a top surface and one or more sidewalls (e.g., as shown in FIG. 2). At block 420, process flow 400 recesses (or etches) between the dielectric layer and the bump to expose one or more sidewalls of the bump (e.g., sidewalls 114 as shown in FIGS. 1B and 2). For example, the bumps can be etched on both the sidewalls and the top surface.

At block 425, process flow 400 disposes a surface finish (e.g., a NiPdAu surface finish/layer) on the top surface and one or more sidewalls of the bump (e.g., as shown in FIGS. 1C and 1D). For example, the process flow can include disposing/depositing a NiPdAu surface finish (e.g., surface finish 140) on the top surface and one or more sidewalls of the bump (e.g., top surface 112 and sidewalls 114 of bump 115). Additionally, the process flow can remove/strip the dielectric layer and subsequently remove/strip the seed layer (e.g., using an electroless Cu removal process) to expose the top surface of the resist layer (e.g., top surface 130a of resist layer 130 as shown in FIG. 1D).

For additional embodiments, the process flow can have a surface finish that is an electroless surface finish. The process flow may also include the electroless surface finish being a nickel-palladium-gold (NiPdAu) surface finish (note that the surface finish may include one or more different materials and/or processing techniques). Additionally, the process flow can dispose a seed on the top surface of the resist layer and dispose a dielectric on the seed, wherein the dielectric surrounds one or more sidewalls of the bump (as shown in FIG. 1A). The process flow can include the seed being an electroless copper seed. The process flow can further form a gap opening between the dielectric and one or more sidewalls of the bump, wherein the one or more sidewalls of the bump are exposed through the gap opening (as shown in FIG. 1B). The process flow can include the bumps being copper bumps. Additionally, the process flow can have a dielectric comprising a polymeric material. Additionally, the process flow can use surface finishes (to cover/surround the bumps) on a plurality of bumps that can be used to couple at least two or more of dies, interposers, and substrates.

It is noted that the semiconductor package formed by process flow 400 can include fewer or additional package components based on the desired package design (e.g., as shown in FIGS. 1-3).

FIG. 5 is a schematic block diagram illustrating a computer system utilizing a semiconductor package having bumps surrounded by a surface finish on and around each sidewall of each bump, in accordance with one embodiment. FIG. 5 illustrates an example of a computing device 500. Computing device 500 houses motherboard 502. For one embodiment, the motherboard 502 can be similar to the package substrate of FIG. 3 (e.g., the substrate 310 of FIG. 3). Motherboard 502 can include a number of components including, but not limited to, processor 504, semiconductor package 510, and at least one communication chip 506. Processor 504 is physically and electrically coupled to motherboard 502. For some embodiments, at least one communication chip 506 is also physically and electrically coupled to the motherboard 502.
For other embodiments, at least one communication chip 506 is part of processor 504.

Computing device 500 may include other components that may or may not be physically and electrically coupled to motherboard 502, depending on its application. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth).

At least one communication chip 506 enables wireless communication for the transfer of data to and from computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, and the like, that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The at least one communication chip 506 can implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 500 can include a plurality of communication chips 506. For example, a first communication chip 506 can be dedicated to shorter-range wireless communications (such as Wi-Fi and Bluetooth), while a second communication chip 506 can be dedicated to longer-range wireless communications (such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others).

Processor 504 of computing device 500 includes an integrated circuit die packaged within processor 504. Device package 510 can be, but is not limited to, a package substrate and/or a printed circuit board. Device package 510 may be a semiconductor package of computing device 500 with bumps surrounded by a surface finish on and around each sidewall of the bump (as illustrated in FIGS. 1-3), or any other component from the figures described herein. Moreover, as described herein, device package 510 can help mitigate galvanic corrosion in interconnect structures (e.g., FLI) by utilizing a surface finish (e.g., NiPdAu) surrounding/covering the Cu interconnect structures, thereby reducing the yield loss of the computing device 500 without increasing assembly costs, uncertainty, and time.

It is noted that the device package 510 can be a single component/device, a subset of components, and/or an entire system, as the materials, features, and components may be limited to the device package 510 and/or any other component of the computing device 500 that may need a surface finish layer around a Cu interconnect structure (e.g., Cu bumps, Cu pads, microbumps, bumps/pads, and the like).

For certain embodiments, an integrated circuit die can be packaged with one or more devices on a package substrate that includes a thermally stable RFIC and antenna (used in conjunction with wireless communications) and device packages, as described herein,
to reduce the z-height of the computing device. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that can be stored in registers and/or memory.

The at least one communication chip 506 also includes an integrated circuit die packaged within the communication chip 506. For some embodiments, the integrated circuit die of the communication chip can be packaged with one or more devices on a package substrate including one or more device packages, as described herein.

The present invention provides a set of technical solutions as follows:

1. A semiconductor package comprising:
a resist layer on a conductive layer;
a bump on the conductive layer, wherein the bump has a top surface and one or more sidewalls; and
a surface finish on the top surface and the one or more sidewalls of the bump.
2. The semiconductor package of claim 1, wherein the surface finish surrounds the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.
3. The semiconductor package according to claim 1, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
4. The semiconductor package of claim 1, further comprising:
a seed on a top surface of the resist layer; and
a dielectric on the seed, wherein the dielectric surrounds the one or more sidewalls of the bump.
5. The semiconductor package of claim 4, wherein the seed is an electroless copper seed.
6. The semiconductor package of claim 4, further comprising a gap opening formed between the dielectric and the one or more sidewalls of the bump, wherein the one or more sidewalls of the bump are exposed through the gap opening.
7. The semiconductor package of claim 1, wherein the bump is a copper bump.
8. The semiconductor package of claim 1, wherein the dielectric comprises a polymeric material.
9. A method of forming a semiconductor package, comprising:
disposing a resist layer on a conductive layer;
disposing a bump on the conductive layer, wherein the bump has a top surface and one or more sidewalls; and
disposing a surface finish on the top surface and the one or more sidewalls of the bump.
10. The method of claim 9, wherein the surface finish surrounds the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.
11. The method according to claim 9, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
12. The method according to claim 9, further comprising:
disposing a seed on the top surface of the resist layer; and
disposing a dielectric on the seed, wherein the dielectric surrounds the one or more sidewalls of the bump.
13. The method of claim 12, wherein the seed is an electroless copper seed.
14. The method of claim 12, further comprising forming a gap opening between the dielectric and the one or more sidewalls of the bump, wherein the one or more sidewalls of the bump are exposed through the gap opening.
15. The method of claim 9, wherein the bumps are copper bumps.
16. The method of claim 9, wherein the dielectric comprises a polymeric material.
17. A semiconductor package comprising:
an interposer on a substrate;
a die on the interposer; and
a surface finish on a plurality of bumps, wherein the plurality of bumps electrically couple the die to the interposer and electrically couple the interposer to the substrate.
18.
The semiconductor package of claim 17, wherein each bump has at least a top surface and one or more sidewalls, wherein the surface finish is disposed on at least one of the top surface and the one or more sidewalls of each bump, and wherein the surface finish surrounds the top surface and the one or more sidewalls of each bump to protect the bump from corrosion.
19. The semiconductor package of claim 17, wherein the surface finish is a nickel-palladium-gold (NiPdAu) surface finish.
20. The semiconductor package of claim 17, wherein the plurality of bumps comprise a metal material, and wherein the metal material comprises copper.
21. The semiconductor package of claim 17, further comprising one or more underfill layers surrounding the plurality of bumps.
22. The semiconductor package of claim 19, wherein the substrate comprises a package and a printed circuit board.
23. The semiconductor package of claim 19, wherein the die comprises an integrated circuit, a central processing unit, a microprocessor, a platform controller hub, a memory, and a field programmable gate array.
24. The semiconductor package of claim 19, further comprising a plurality of bumps electrically coupling the substrate to a second substrate.
25. The semiconductor package of claim 24, wherein the second substrate comprises a motherboard.

In the foregoing specification, embodiments have been described with reference to specific exemplary embodiments thereof. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. It will be evident that various modifications may be made to the embodiments without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

The following examples pertain to additional embodiments. The various features of the various embodiments may be combined in various ways, with some features included and others excluded, to suit a wide variety of different applications.

The following examples pertain to additional embodiments:

Example 1 is a semiconductor package comprising a resist layer on a conductive layer; a bump on the conductive layer, the bump having a top surface and one or more sidewalls; and a surface finish on the top surface and the one or more sidewalls of the bump.

In Example 2, the subject matter of Example 1 can optionally include the surface finish surrounding the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.

In Example 3, the subject matter of any of Examples 1-2 can optionally include the surface finish being a nickel-palladium-gold (NiPdAu) surface finish.

In Example 4, the subject matter of any of Examples 1-3 can optionally further include a seed on a top surface of the resist layer, and a dielectric on the seed. The dielectric surrounds the one or more sidewalls of the bump.

In Example 5, the subject matter of any of Examples 1-4 can optionally include the seed being an electroless copper seed.

In Example 6, the subject matter of any of Examples 1-5 can optionally further include a gap opening formed between the dielectric and the one or more sidewalls of the bump.
The one or more sidewalls of the bump are exposed through the gap opening.

In Example 7, the subject matter of any of Examples 1-6 can optionally include the bump being a copper bump.

In Example 8, the subject matter of any of Examples 1-7 can optionally include the dielectric comprising a polymeric material.

Example 9 is a method of forming a semiconductor package, comprising disposing a resist layer on a conductive layer; disposing a bump on the conductive layer, the bump having a top surface and one or more sidewalls; and disposing a surface finish on the top surface and the one or more sidewalls of the bump.

In Example 10, the subject matter of Example 9 can optionally include the surface finish surrounding the top surface and the one or more sidewalls of the bump to protect the bump from corrosion.

In Example 11, the subject matter of any of Examples 9-10 can optionally include the surface finish being a nickel-palladium-gold (NiPdAu) surface finish.

In Example 12, the subject matter of any of Examples 9-11 can optionally further include disposing a seed on a top surface of the resist layer, and disposing a dielectric on the seed. The dielectric surrounds the one or more sidewalls of the bump.

In Example 13, the subject matter of any of Examples 9-12 can optionally include the seed being an electroless copper seed.

In Example 14, the subject matter of any of Examples 9-13 can optionally further include forming a gap opening between the dielectric and the one or more sidewalls of the bump. The one or more sidewalls of the bump are exposed through the gap opening.

In Example 15, the subject matter of any of Examples 9-14 can optionally include the bumps being copper bumps.

In Example 16, the subject matter of any of Examples 9-15 can optionally include the dielectric comprising a polymeric material.

Example 17 is a semiconductor package that includes an interposer on a substrate; a die on the interposer; and a surface finish on a plurality of bumps. The plurality of bumps electrically couple the die to the interposer and electrically couple the interposer to the substrate.

In Example 18, the subject matter of Example 17 can optionally include the surface finish surrounding the top surface and the one or more sidewalls of each bump to protect the bumps from corrosion. Each bump has at least a top surface and one or more sidewalls. The surface finish is disposed on at least one of the top surface and the one or more sidewalls of each bump.

In Example 19, the subject matter of any of Examples 17-18 can optionally include the surface finish being a nickel-palladium-gold (NiPdAu) surface finish.

In Example 20, the subject matter of any of Examples 17-19 can optionally include the plurality of bumps comprising a metal material.
The metal material includes copper.

In Example 21, the subject matter of any of Examples 17-20 can optionally further include one or more underfill layers surrounding the plurality of bumps.

In Example 22, the subject matter of any of Examples 17-21 can optionally include the substrate comprising a package and a printed circuit board.

In Example 23, the subject matter of any of Examples 17-22 can optionally include the die comprising an integrated circuit, a central processing unit, a microprocessor, a platform controller hub, a memory, and a field programmable gate array.

In Example 24, the subject matter of any of Examples 17-23 can optionally further include a plurality of bumps electrically coupling the substrate to a second substrate.

In Example 25, the subject matter of any of Examples 17-24 can optionally include the second substrate comprising a motherboard.

In the foregoing specification, methods and apparatuses have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made to the methods and apparatuses without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Embodiments of systems, apparatuses, and methods for performing a jump instruction in a computer processor are described. In some embodiments, the execution of a jump instruction causes a conditional jump to an address of a target instruction when all of the bits of a writemask are zero, wherein the address of the target instruction is calculated using an instruction pointer of the instruction and a relative offset.
1. A method of executing a jump-near-if-write-mask-is-zero (JKZD) instruction in a computer processor, comprising:
fetching a JKZD instruction, wherein the JKZD instruction includes a write mask operand and a relative offset;
decoding the fetched JKZD instruction; and
executing the fetched JKZD instruction to conditionally jump to an address of a target instruction when all bits of the write mask are zero, wherein the address of the target instruction is calculated using an instruction pointer of the JKZD instruction and the relative offset.
2. The method of claim 1, wherein the write mask is a 16-bit register.
3. The method of claim 1, wherein the relative offset is an 8-bit immediate value.
4. The method of claim 1, wherein the relative offset is a 32-bit immediate value.
5. The method of claim 1, wherein the instruction pointer of the JKZD instruction is stored in an EIP register.
6. The method of claim 1, wherein the instruction pointer of the JKZD instruction is stored in a RIP register.
7. The method of claim 1, wherein the executing further comprises:
generating a temporary instruction pointer that is the instruction pointer of the JKZD instruction plus the relative offset;
setting the temporary instruction pointer to be the address of the target instruction when the temporary instruction pointer is not outside the code segment limit of the program containing the JKZD instruction; and
generating a fault when the temporary instruction pointer that is to be the address of the target instruction is outside the code segment limit of the program containing the JKZD instruction.
8. The method of claim 7, wherein the executing further comprises:
when the temporary instruction pointer is not outside the code segment limit of the program containing the JKZD instruction, clearing the two high bytes of the temporary instruction pointer when the operand size of the JKZD instruction is 16 bits, before the temporary instruction pointer is set to be the address of the target instruction.
9. A method of executing a jump-near-if-write-mask-is-not-zero (JKNZD) instruction in a computer processor, comprising:
fetching a JKNZD instruction, wherein the JKNZD instruction includes a write mask operand and a relative offset;
decoding the fetched JKNZD instruction; and
executing the fetched JKNZD instruction to conditionally jump to an address of a target instruction when at least one bit of the write mask is not zero, wherein the address of the target instruction is calculated using an instruction pointer of the JKNZD instruction and the relative offset.
10. The method of claim 9, wherein the write mask is a 16-bit register.
11. The method of claim 9, wherein the relative offset is an 8-bit immediate value.
12. The method of claim 9, wherein the relative offset is a 32-bit immediate value.
13. The method of claim 9, wherein the instruction pointer of the JKNZD instruction is stored in an EIP register.
14. The method of claim 9, wherein the instruction pointer of the JKNZD instruction is stored in a RIP register.
15. The method of claim 9, wherein the executing further comprises:
generating a temporary instruction pointer that is the instruction pointer of the JKNZD instruction plus the relative offset;
setting the temporary instruction pointer to be the address of the target instruction when the temporary instruction pointer is not outside the code segment limit of the program containing the JKNZD instruction; and
generating a fault when the temporary instruction pointer that is to be the
address of the target instruction is outside the code segment boundary of the program containing the JKNZD instruction.16.The method of claim 15 wherein said performing further comprises:When the temporary instruction pointer is not outside the code segment limit of the program containing the JKNZD instruction, when the operand size of the instruction is 16 bits before the temporary instruction pointer is set to the address of the target instruction, The two high bytes of the temporary instruction pointer are cleared.17.A device comprising:Hardware decoder for decoding:Jump to a nearby (JKZD) instruction if the write mask is zero, the JKNZD instruction including the first write mask operand and the first relative offset, andJumps to the vicinity (JKNZD) if the write mask is not zero, wherein the JKNZD instruction includes a second write mask operand and a second relative offset;Execution logic for executing the decoded JKZD and JKNZD instructions, wherein performing the decoded JKZD instruction causes a conditional jump to the address of the first target instruction when all bits of the first write mask are zero, The address of the first target instruction is calculated using the instruction pointer of the JKZD instruction and the first relative offset, and the execution of the decoded JKNZD instruction causes at least one of the second write mask The bit is conditionally jumped to the address of the second target instruction when the bit is not zero, and the address of the second target instruction is calculated using the instruction pointer of the JKNZD instruction and the second relative offset.18.The apparatus of claim 18 wherein said execution logic comprises vector execution logic.19.The apparatus of claim 18 wherein the write masks of said JKZD and JKNZD are dedicated 16-bit registers.20.The apparatus of claim 18 wherein the instruction pointers of said JKZD and JKNZD instructions are stored in an EIP register. |
System, device and method for jumping using a mask register

Field of the invention

The field of the invention relates generally to computer processor architectures, and more particularly to instructions that, when executed, cause particular results.

Background

There are many occasions when a programmer wants to change the control flow during program execution. Historically, there have been two main types of instructions for specifying control flow changes: branches and jumps. A branch is usually an indication of a short change relative to the current program counter. A jump is usually an indication of a change that is not directly tied to the current program counter (e.g., jumping to an absolute memory location, or jumping using a dynamic or static table), and often has no distance limit from the current program counter.

Brief description of the drawings

The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:

FIG. 1 illustrates an embodiment of a method for executing a JKZD instruction in a processor.
FIG. 2 illustrates another embodiment of executing a JKZD instruction in a processor.
FIG. 3 illustrates an embodiment of a method for executing a JKNZD instruction in a processor.
FIG. 4 illustrates another embodiment of executing a JKNZD instruction in a processor.
FIG. 5 illustrates an embodiment of a method for executing a JKOD instruction in a processor.
FIG. 6 illustrates another embodiment of executing a JKOD instruction in a processor.
FIG. 7 illustrates an embodiment of a method for executing a JKNOD instruction in a processor.
FIG. 8 illustrates another embodiment of executing a JKNOD instruction in a processor.
FIG. 9A is a block diagram illustrating a generic vector friendly instruction format and its class A instruction templates in accordance with an embodiment of the present invention.
FIG. 9B is a block diagram illustrating a generic vector friendly instruction format and its class B instruction templates in accordance with an embodiment of the present invention.
FIGS. 10A-C illustrate an exemplary specific vector friendly instruction format in accordance with an embodiment of the present invention.
FIG. 11 is a block diagram of a register architecture in accordance with one embodiment of the present invention.
FIG. 12A is a block diagram of a single CPU core, along with its connection to an on-chip interconnect network and its local subset of the level 2 (L2) cache, in accordance with an embodiment of the present invention.
FIG. 12B is an exploded view of part of the CPU core of FIG. 12A, in accordance with an embodiment of the present invention.
FIG. 13 is a block diagram illustrating an exemplary out-of-order architecture in accordance with an embodiment of the present invention.
FIG. 14 is a block diagram of a system in accordance with one embodiment of the present invention.
FIG. 15 is a block diagram of a second system in accordance with one embodiment of the present invention.
FIG. 16 is a block diagram of a third system in accordance with one embodiment of the present invention.
FIG. 17 is a block diagram of a SoC in accordance with an embodiment of the present invention.
FIG. 18 is a block diagram of a single core processor and a multi-core processor with an integrated memory controller and graphics in accordance with an embodiment of the present invention.
FIG. 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set in accordance with an embodiment of the present invention.

Detailed description

In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment", "an embodiment", "an example embodiment", etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, particular features, structures, or characteristics are described in conjunction with the embodiments.

Jump instructions

Several embodiments of jump instructions, and embodiments of systems, architectures, instruction formats, etc. for executing such instructions, are detailed below. These jump instructions can be used to conditionally change a program's control flow sequence based on a write mask value contained in the instruction. The instructions use a "write mask" to change the control flow of vectorized code, where each bit of the mask is associated with one instance of the SIMD control flow information, such as one loop iteration. Details of write mask embodiments are described below.

Typical uses of the jump instructions described below include: early exit from a loop with dynamic convergence; iterating until all active elements are done (e.g., motion estimation diamond search and finite difference algorithms); suppression of spurious memory faults when the mask is zero; improving the performance of gather/scatter instructions; and predicated code savings for sparse distributions (e.g., where the compiler cannot afford compression/expansion in memory).

Most instances of write-mask-based control flow are one of the following: a jump when the write mask is all zeros, or a jump when the write mask is not all zeros. A table showing exemplary high level language pseudo code and its pseudo assembly counterpart is shown below. The VCMPPS instruction compares the data elements of source registers ZMM1 and ZMM2 and, where a data element of ZMM1 is smaller than the corresponding data element of ZMM2, stores a "mask" bit in write mask k1. Of course, VCMPPS is not limited to this case and may be based on other conditions (e.g., equal, less than or equal, unordered, not equal, not less than, not less than or equal, or ordered).

Table 1
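As an illustration only (not language from the instruction set definition above), the mask generation performed by a compare such as VCMPPS can be modeled in C roughly as follows; the lane count and element type are assumptions:

    #include <stdint.h>

    #define LANES 16  /* assumed: a 512-bit vector of 32-bit floats */

    /* Models a lane-wise "less than" compare that writes a 16-bit mask:
       bit i of the result is 1 when zmm1[i] < zmm2[i]. */
    uint16_t vcmpps_lt(const float zmm1[LANES], const float zmm2[LANES]) {
        uint16_t k1 = 0;
        for (int i = 0; i < LANES; i++) {
            if (zmm1[i] < zmm2[i])
                k1 |= (uint16_t)(1u << i);
        }
        return k1;
    }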
The JNZ approach to this sequence is relatively slow, requiring two instructions to perform the loop-back jump after the write mask has been generated:

KORTEST k1, k1 // (OR(k1,k1)==0x0) => ZF
JNZ target_addr

The KORTEST instruction ORs the two mask operands and, if the result is zero, sets the zero flag (e.g., ZF in the FLAGS or EFLAGS "condition code" or status register). The JNZ (jump if not zero) instruction looks at this flag and jumps to the target address if the zero flag is not set. Thus, there is an opportunity to improve the throughput and (in the future) the latency of this software sequence.

JKZD - jump near if the write mask is zero

The first instruction to be discussed is jump near if the write mask is zero (JKZD). Execution of this instruction causes the processor to check the value of the source write mask to see whether all of its write mask bits are set to "0" and, if so, to perform a jump to a target instruction, the target instruction being specified at least in part by a destination operand and the current instruction pointer. If not all of the write mask bits are "0" (and therefore the jump condition is not met), no jump is performed and the instruction following the JKZD instruction is executed.

The address of JKZD's target instruction is typically specified by a relative offset operand (a signed offset relative to the current value of the instruction pointer in the EIP register) contained in the instruction. The relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level it is encoded as a signed 8-bit or 32-bit immediate value that is added to the instruction pointer. Instruction encoding is typically most efficient for offsets from -128 to 127. In some embodiments, if the operand size (instruction pointer) is 16 bits, the upper two bytes of the EIP register are not used (they are cleared) when generating the target instruction address. In some embodiments, in 64-bit mode (with the RIP register storing the instruction pointer) with a 64-bit operand size, the target address of a short jump is defined as RIP = RIP + the 8-bit offset sign-extended to 64 bits, and the target address of a near jump is defined as RIP = RIP + the 32-bit offset sign-extended to 64 bits.

An example format for this instruction is "JKZD k1, rel8/32", where k1 is the write mask operand (e.g., similar to the 16-bit write mask register detailed above) and rel8/32 is an 8-bit or 32-bit immediate value. In some embodiments, the write masks are of different sizes (8 bits, 32 bits, etc.). JKZD is the opcode of the instruction. Typically, each operand is explicitly defined in the instruction. In other embodiments, the immediate value has a different size, for example 16 bits.
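Purely as a non-authoritative sketch of the operation just described, the JKZD semantics for the 32-bit EIP case can be written in C as follows; the helper name, types, and fault handling are illustrative assumptions, not the claimed implementation:

    #include <stdint.h>

    /* Assumed stand-in for the "error" described in the text; a real
       processor would raise a fault here. */
    static void signal_error(void) { }

    /* Illustrative model of JKZD k1, rel: jump near if the write mask is zero. */
    void jkzd(uint32_t *eip, uint16_t k1, int32_t rel,
              int operand_size_bits, uint32_t code_segment_limit) {
        if (k1 != 0)
            return;                               /* a bit is 1: fall through */
        uint32_t temp_ip = *eip + (uint32_t)rel;  /* EIP + sign-extended offset */
        if (operand_size_bits == 16)
            temp_ip &= 0x0000FFFFu;               /* clear the two most significant bytes */
        if (temp_ip > code_segment_limit)
            signal_error();                       /* outside the code segment: no jump */
        else
            *eip = temp_ip;                       /* take the jump */
    }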
FIG. 1 illustrates an embodiment of a method of executing a JKZD instruction in a processor. A JKZD instruction including a write mask and a relative offset is fetched at 101. The JKZD instruction is decoded at 103, and the source operand value, such as the write mask, is retrieved at 105.

The decoded JKZD instruction is executed at 107, which causes a conditional jump to the instruction at the address generated from the relative offset and the current instruction pointer when all bits of the write mask are zero or, if at least one bit of the write mask is 1, causes the instruction following the JKZD instruction to be fetched, decoded, etc. The generation of the address can occur during any of the fetch, decode, or execute stages of the method.

FIG. 2 illustrates another embodiment of executing a JKZD instruction in a processor. It is assumed that some of 101-105 have been performed before this method begins; those steps are not shown in order not to obscure the details that follow. At 201, a determination is made as to whether there is any "1" value in the write mask.

If there is a "1" in the write mask (and therefore the write mask is not zero), then at 203 no jump is performed and the next sequential instruction in the program flow is executed. If there is no "1" in the write mask, a temporary instruction pointer is generated at 205. In some embodiments, the temporary instruction pointer is the current instruction pointer plus the sign-extended relative offset. For example, for a 32-bit instruction pointer, the value of the temporary instruction pointer is EIP plus the sign-extended relative offset. The temporary instruction pointer can be stored in a register.

At 207, a determination is made as to whether the operand size attribute is 16 bits. For example, the instruction pointer may be a 16-bit, 32-bit, or 64-bit value. If the operand size attribute is 16 bits, the two upper bytes of the temporary instruction pointer are cleared (set to zero) at 209. The clearing can occur in a number of different ways, but in some embodiments the temporary instruction pointer is logically ANDed with an immediate whose two most significant bytes are "0" bits and whose two least significant bytes are "1" bits (e.g., the immediate is 0x0000FFFF).

If the operand size is not 16 bits, then at 211 a determination is made as to whether the temporary instruction pointer falls within the code segment limit. If it does not, an error is generated at 213 and no jump is performed. The same determination is also made for a temporary instruction pointer whose two most significant bytes have been cleared. In some embodiments in which the instruction does not support far jumps (jumps to other code segments), when the target of the conditional jump is in a different code segment, a condition opposite to the one tested by the JKZD instruction is used, and the target is then reached through an unconditional far jump (JMP instruction) to the other code segment. In embodiments with this jump restriction, if a program needs to jump into a far section of code, the sense of the write mask jump is negated so that the fall-through code makes the "far" jump into that particular code. For example, the following could be illegal:

JKZD FARLABEL;

To accomplish this far jump, the following two instructions can be used instead:

JKNZD BEYOND;
JMP FARLABEL;
BEYOND:

If the temporary instruction pointer falls within the code segment limit, the instruction pointer is set to the temporary instruction pointer at 213. For example, the EIP value can be set to the temporary instruction pointer.
The jump is made at 215.

Finally, in some embodiments, one or more of the foregoing aspects of the method are not performed or are performed in a different order. For example, if the processor does not support a 16-bit operand size (instruction pointer), that determination does not occur.

Table 2 shows the same pseudo code as Table 1, but using the JKNZD instruction and eliminating the need for KORTEST. Similar benefits can be obtained with the instructions that follow.

Table 2

JKNZD - jump near if the write mask is not zero

The second instruction to be discussed is jump near if the write mask is not zero (JKNZD). Execution of this instruction causes the processor to check the value of the source write mask to see whether all of its write mask bits are set to "0" and, if not, to perform a jump to a target instruction, the target instruction being specified at least in part by a destination operand and the current instruction pointer. If all write mask bits are "0" (and therefore the jump condition is not met), no jump is performed and the instruction following the JKNZD instruction is executed.

The address of JKNZD's target instruction is typically specified by a relative offset operand (a signed offset relative to the current value of the instruction pointer in the EIP register) contained in the instruction. The relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level it is encoded as a signed 8-bit or 32-bit immediate value that is added to the instruction pointer. Instruction encoding is typically most efficient for offsets from -128 to 127. In some embodiments, if the operand size (instruction pointer) is 16 bits, the upper two bytes of the EIP register are not used (they are cleared) when generating the target instruction address. In some embodiments, in 64-bit mode (with the RIP register storing the instruction pointer) with a 64-bit operand size, the target address of a short jump is defined as RIP = RIP + the 8-bit offset sign-extended to 64 bits, and the target address of a near jump is defined as RIP = RIP + the 32-bit offset sign-extended to 64 bits.

An example format for this instruction is "JKNZD k1, rel8/32", where k1 is the write mask operand (e.g., similar to the 16-bit write mask register detailed above) and rel8/32 is an 8-bit or 32-bit immediate value. In some embodiments, the write masks are of different sizes (8 bits, 32 bits, etc.). JKNZD is the opcode of the instruction. Typically, each operand is explicitly defined in the instruction. In other embodiments, the immediate value has a different size, for example 16 bits.

FIG. 3 illustrates an embodiment of a method for executing a JKNZD instruction in a processor. A JKNZD instruction including a write mask and a relative offset is fetched at 301. The JKNZD instruction is decoded at 303, and the source operand value, such as the write mask, is retrieved at 305.

The decoded JKNZD instruction is executed at 307, which causes a conditional jump to the instruction at the address generated from the relative offset and the current instruction pointer when at least one bit of the write mask is not zero or, if all bits of the write mask are zero, causes the instruction following the JKNZD instruction to be fetched, decoded, etc.
The generation of the address can occur during any of the fetch, decode, or execute stages of the method.

FIG. 4 illustrates another embodiment of executing a JKNZD instruction in a processor. It is assumed that some of 301-305 have been performed before this method begins; those steps are not shown in order not to obscure the details that follow. At 401, a determination is made as to whether there is any "1" value in the write mask.

If there are only "0"s in the write mask (and therefore the write mask is zero), no jump is performed at 403 and the next sequential instruction in the program flow is executed. If a "1" is present in the write mask, a temporary instruction pointer is generated at 405. In some embodiments, the temporary instruction pointer is the current instruction pointer plus the sign-extended relative offset. For example, for a 32-bit instruction pointer, the value of the temporary instruction pointer is EIP plus the sign-extended relative offset. The temporary instruction pointer can be stored in a register.

At 407, a determination is made as to whether the operand size attribute is 16 bits. For example, the instruction pointer may be a 16-bit, 32-bit, or 64-bit value. If the operand size attribute is 16 bits, then at 409 the two upper bytes of the temporary instruction pointer are cleared (set to zero). The clearing can occur in a number of different ways, but in some embodiments the temporary instruction pointer is logically ANDed with an immediate whose two most significant bytes are "0" bits and whose two least significant bytes are "1" bits (e.g., the immediate is 0x0000FFFF).

If the operand size is not 16 bits, then at 411 a determination is made as to whether the temporary instruction pointer falls within the code segment limit. If it does not, an error is generated at 413 and no jump is performed. The same determination is also made for a temporary instruction pointer whose two most significant bytes have been cleared. In some embodiments in which the instruction does not support far jumps (jumps to other code segments), when the target of the conditional jump is in a different code segment, a condition opposite to the one tested by the JKNZD instruction is used, and the target is then reached through an unconditional far jump (JMP instruction) to the other code segment. For example, the following could be illegal:

JKNZD FARLABEL;

To accomplish this far jump, the following two instructions can be used instead:

JKZD BEYOND;
JMP FARLABEL;
BEYOND:

If the temporary instruction pointer falls within the code segment limit, the instruction pointer is set to the temporary instruction pointer at 413. For example, the EIP value can be set to the temporary instruction pointer. The jump is made at 415.

Finally, in some embodiments, one or more of the foregoing aspects of the method are not performed or are performed in a different order. For example, if the processor does not support a 16-bit operand size (instruction pointer), that determination does not occur.
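To make the "iterate until all active elements are done" use case concrete, the following C sketch models, in scalar form, the role JKNZD plays on a loop back edge; the per-lane computation, convergence test, and lane count are illustrative assumptions only:

    #include <stdint.h>

    #define LANES 16

    /* Scalar model of a vectorized loop with dynamic convergence: each mask
       bit tracks one still-active lane, and the loop's back edge is taken
       while the mask is not zero (the test JKNZD performs in assembly).
       Inputs are assumed positive so each lane converges toward 1.0. */
    void converge(float x[LANES]) {
        uint16_t k1 = 0xFFFF;                        /* all lanes start active */
        while (k1 != 0) {                            /* "JKNZD loop_top" */
            for (int i = 0; i < LANES; i++) {
                if (!(k1 & (1u << i)))
                    continue;                        /* lane already converged */
                x[i] = 0.5f * (x[i] + 1.0f / x[i]);  /* assumed per-lane step */
                if (x[i] > 0.9999f && x[i] < 1.0001f)
                    k1 &= (uint16_t)~(1u << i);      /* deactivate converged lane */
            }
        }
    }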
JKOD - jump near if the write mask is all ones

The third instruction to be discussed is jump near if the write mask is all ones (JKOD). Execution of this instruction causes the processor to check the value of the source write mask to see whether all of its write mask bits are set to "1" and, if so, to perform a jump to a target instruction, the target instruction being specified at least in part by a destination operand and the current instruction pointer. If not all of the write mask bits are "1" (and therefore the jump condition is not met), no jump is performed and the instruction following the JKOD instruction is executed.

The address of JKOD's target instruction is typically specified by a relative offset operand (a signed offset relative to the current value of the instruction pointer in the EIP register) contained in the instruction. The relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level it is encoded as a signed 8-bit or 32-bit immediate value that is added to the instruction pointer. Instruction encoding is typically most efficient for offsets from -128 to 127. In some embodiments, if the operand size (instruction pointer) is 16 bits, the upper two bytes of the EIP register are not used (they are cleared) when generating the target instruction address. In some embodiments, in 64-bit mode (with the RIP register storing the instruction pointer) with a 64-bit operand size, the target address of a short jump is defined as RIP = RIP + the 8-bit offset sign-extended to 64 bits, and the target address of a near jump is defined as RIP = RIP + the 32-bit offset sign-extended to 64 bits.

An example format for this instruction is "JKOD k1, rel8/32", where k1 is the write mask operand (e.g., similar to the 16-bit write mask register detailed above) and rel8/32 is an 8-bit or 32-bit immediate value. In some embodiments, the write masks are of different sizes (8 bits, 32 bits, etc.). JKOD is the opcode of the instruction. Typically, each operand is explicitly defined in the instruction. In other embodiments, the immediate value has a different size, for example 16 bits.

FIG. 5 illustrates an embodiment of a method for executing a JKOD instruction in a processor. A JKOD instruction including a write mask and a relative offset is fetched at 501. The JKOD instruction is decoded at 503, and the source operand value, such as the write mask, is retrieved at 505.

The decoded JKOD instruction is executed at 507, which causes a conditional jump to the instruction at the address generated from the relative offset and the current instruction pointer when all bits of the write mask are 1 or, if at least one bit of the write mask is 0, causes the instruction following the JKOD instruction to be fetched, decoded, etc. The generation of the address can occur during any of the fetch, decode, or execute stages of the method.

FIG. 6 illustrates another embodiment of executing a JKOD instruction in a processor. It is assumed that some of 501-505 have been performed before this method begins; those steps are not shown in order not to obscure the details that follow. At 601, a determination is made as to whether there is any "0" value in the write mask.

If there is a "0" in the write mask (and therefore the write mask is not all ones), then at 603 no jump is performed and the next sequential instruction in the program flow is executed. If there is no "0" in the write mask, a temporary instruction pointer is generated at 605. In some embodiments, the temporary instruction pointer is the current instruction pointer plus the sign-extended relative offset. For example, for a 32-bit instruction pointer, the value of the temporary instruction pointer is EIP plus the sign-extended relative offset. The temporary instruction pointer can be stored in a register.

At 607, a determination is made as to whether the operand size attribute is 16 bits.
For example, the instruction pointer may be a 16-bit, 32-bit, or 64-bit value. If the operand size attribute is 16 bits, the two upper bytes of the temporary instruction pointer are cleared (set to zero) at 609. The clearing can occur in a number of different ways, but in some embodiments the temporary instruction pointer is logically ANDed with an immediate whose two most significant bytes are "0" bits and whose two least significant bytes are "1" bits (e.g., the immediate is 0x0000FFFF).

If the operand size is not 16 bits, then at 611 a determination is made as to whether the temporary instruction pointer falls within the code segment limit. If it does not, an error is generated at 613 and no jump is performed. The same determination is also made for a temporary instruction pointer whose two most significant bytes have been cleared.

If the temporary instruction pointer falls within the code segment limit, the instruction pointer is set to the temporary instruction pointer at 613. For example, the EIP value can be set to the temporary instruction pointer. The jump is made at 615.

Finally, in some embodiments, one or more of the foregoing aspects of the method are not performed or are performed in a different order. For example, if the processor does not support a 16-bit operand size (instruction pointer), that determination does not occur.

JKNOD - jump near if the write mask is not all ones

The last instruction to be discussed is jump near if the write mask is not all ones (JKNOD). Execution of this instruction causes the processor to check the value of the source write mask to see whether at least one of its write mask bits is set to "0" and, if so, to perform a jump to a target instruction, the target instruction being specified at least in part by a destination operand and the current instruction pointer. If no write mask bit is "0" (and therefore the jump condition is not met), no jump is performed and the instruction following the JKNOD instruction is executed.

The address of JKNOD's target instruction is typically specified by a relative offset operand (a signed offset relative to the current value of the instruction pointer in the EIP register) contained in the instruction. The relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level it is encoded as a signed 8-bit or 32-bit immediate value that is added to the instruction pointer. Instruction encoding is typically most efficient for offsets from -128 to 127. In some embodiments, if the operand size (instruction pointer) is 16 bits, the upper two bytes of the EIP register are not used (they are cleared) when generating the target instruction address. In some embodiments, in 64-bit mode (with the RIP register storing the instruction pointer) with a 64-bit operand size, the target address of a short jump is defined as RIP = RIP + the 8-bit offset sign-extended to 64 bits, and the target address of a near jump is defined as RIP = RIP + the 32-bit offset sign-extended to 64 bits.

An example format for this instruction is "JKNOD k1, rel8/32", where k1 is the write mask operand (e.g., similar to the 16-bit write mask register detailed above) and rel8/32 is an 8-bit or 32-bit immediate value. In some embodiments, the write masks are of different sizes (8 bits, 32 bits, etc.). JKNOD is the opcode of the instruction. Typically, each operand is explicitly defined in the instruction.
In other embodiments, the immediate value has a different size, for example 16 bits.

FIG. 7 illustrates an embodiment of a method for executing a JKNOD instruction in a processor. A JKNOD instruction including a write mask and a relative offset is fetched at 701. The JKNOD instruction is decoded at 703, and the source operand value, such as the write mask, is retrieved at 705.

The decoded JKNOD instruction is executed at 707, which causes a conditional jump to the instruction at the address generated from the relative offset and the current instruction pointer when at least one bit of the write mask is not 1 or, if all bits of the write mask are 1, causes the instruction following the JKNOD instruction to be fetched, decoded, etc. The generation of the address can occur during any of the fetch, decode, or execute stages of the method.

FIG. 8 illustrates another embodiment of executing a JKNOD instruction in a processor. It is assumed that some of 701-705 have been performed before this method begins; those steps are not shown in order not to obscure the details that follow. At 801, a determination is made as to whether there is any "0" value in the write mask.

If no "0" is present in the write mask (and therefore the write mask is all ones), no jump is performed at 803 and the next sequential instruction in the program flow is executed. If a "0" is present in the write mask, a temporary instruction pointer is generated at 805. In some embodiments, the temporary instruction pointer is the current instruction pointer plus the sign-extended relative offset. For example, for a 32-bit instruction pointer, the value of the temporary instruction pointer is EIP plus the sign-extended relative offset. The temporary instruction pointer can be stored in a register.

At 807, a determination is made as to whether the operand size attribute is 16 bits. For example, the instruction pointer may be a 16-bit, 32-bit, or 64-bit value. If the operand size attribute is 16 bits, the two upper bytes of the temporary instruction pointer are cleared (set to zero) at 809. The clearing can occur in a number of different ways, but in some embodiments the temporary instruction pointer is logically ANDed with an immediate whose two most significant bytes are "0" bits and whose two least significant bytes are "1" bits (e.g., the immediate is 0x0000FFFF).

If the operand size is not 16 bits, then at 811 a determination is made as to whether the temporary instruction pointer falls within the code segment limit. If it does not, an error is generated at 813 and no jump is performed. The same determination is also made for a temporary instruction pointer whose two most significant bytes have been cleared.

If the temporary instruction pointer falls within the code segment limit, the instruction pointer is set to the temporary instruction pointer at 813. For example, the EIP value can be set to the temporary instruction pointer. The jump is made at 815.

Finally, in some embodiments, one or more of the foregoing aspects of the method are not performed or are performed in a different order. For example, if the processor does not support a 16-bit operand size (instruction pointer), that determination does not occur.
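For reference only, the four jump conditions detailed above reduce to simple predicates on the mask value; the following C sketch (assuming a 16-bit write mask) summarizes when each jump is taken:

    #include <stdbool.h>
    #include <stdint.h>

    enum mask_jump { JKZD, JKNZD, JKOD, JKNOD };

    /* Whether each mask-based jump described above is taken, for an
       assumed 16-bit write mask k. */
    bool jump_taken(enum mask_jump kind, uint16_t k) {
        switch (kind) {
        case JKZD:  return k == 0;       /* all bits are zero     */
        case JKNZD: return k != 0;       /* at least one bit is 1 */
        case JKOD:  return k == 0xFFFF;  /* all bits are one      */
        case JKNOD: return k != 0xFFFF;  /* at least one bit is 0 */
        }
        return false;
    }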
The embodiments of the instructions detailed above may be embodied in the "generic vector friendly instruction format" detailed below. In other embodiments, this format is not utilized and another instruction format is used; however, the description below of the write mask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. generally applies to the description of the embodiments of the instructions above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instructions above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Exemplary generic vector friendly instruction format - FIGS. 9A-9B

FIGS. 9A-9B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof in accordance with an embodiment of the present invention. FIG. 9A is a block diagram illustrating a generic vector friendly instruction format and its class A instruction templates in accordance with an embodiment of the present invention, while FIG. 9B is a block diagram illustrating the generic vector friendly instruction format and its class B instruction templates in accordance with an embodiment of the present invention. Specifically, a generic vector friendly instruction format 900 is shown for which class A and class B instruction templates are defined, both of which include no memory access 905 instruction templates and memory access 920 instruction templates. The term "generic" in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set. While embodiments will be described in which instructions in the vector friendly instruction format operate on vectors sourced from registers (no memory access 905 instruction templates) or from registers/memory (memory access 920 instruction templates), other embodiments of the invention may support only one of these. Also, while embodiments of the invention will be described in which there are load and store instructions in the vector instruction format, other embodiments instead or additionally have instructions in a different instruction format that move vectors into and out of registers (e.g., from memory into a register, from a register into memory, or between registers).
In addition, while embodiments of the invention supporting two classes of instruction templates will be described, other embodiments may support only one of them, or more than two.

While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus a 64-byte vector consists of either 16 doubleword-size data elements or 8 quadword-size data elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); other embodiments may support more, fewer, and/or different data element widths (e.g., 128-bit (16-byte) data element widths) and more, fewer, and/or different vector operand sizes (e.g., 256-byte vector operands).

The class A instruction templates in FIG. 9A include: 1) within the no memory access 905 instruction templates, there are shown a no memory access, full round control type operation 910 instruction template and a no memory access, data conversion type operation 915 instruction template; and 2) within the memory access 920 instruction templates, there are shown a memory access, temporal 925 instruction template and a memory access, non-temporal 930 instruction template. The class B instruction templates in FIG. 9B include: 1) within the no memory access 905 instruction templates, there are shown a no memory access, write mask control, partial round control type operation 912 instruction template and a no memory access, write mask control, VSIZE type operation 917 instruction template; and 2) within the memory access 920 instruction templates, there is shown a memory access, write mask control 927 instruction template.

Format

The generic vector friendly instruction format 900 includes the following fields listed below in the order illustrated in FIGS. 9A-9B.

Format field 940 - a specific value in this field (an instruction format identifier value) uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. Thus, the content of the format field 940 distinguishes occurrences of instructions in the first instruction format from occurrences of instructions in other instruction formats, thereby allowing the vector friendly instruction format to be introduced into an instruction set that has other instruction formats. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 942 - its content distinguishes different base operations. The base operation field 942 may include an opcode field and/or be part of an opcode field, as described later herein.

Register index field 944 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512) register file.
While in one embodiment N may be up to three source registers and one destination register, other embodiments may support more or fewer source and destination registers (e.g., up to two sources, where one of these sources also acts as the destination; up to three sources, where one of these sources also acts as the destination; or up to two sources and one destination). While in one embodiment P = 32, other embodiments may support more or fewer registers (e.g., 16). While in one embodiment Q = 512 bits, other embodiments may support more or fewer bits (e.g., 128, 1024 bits).

Modifier field 946 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between the no memory access 905 instruction templates and the memory access 920 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory-access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects among three different ways to perform memory address calculations, other embodiments may support more, fewer, or different ways to perform memory address calculations.

Incremental operation field 950 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 968, an alpha field 952, and a beta field 954. The incremental operation field allows common groups of operations to be performed in a single instruction rather than in 2, 3, or 4 instructions. Below are some examples of instructions that use the incremental field 950 to reduce the number of required instructions (the nomenclature is described in more detail later herein), where [rax] is the base pointer to be used for address generation and {} indicates the conversion operation specified by the data manipulation field (described in more detail later herein).

Scaling field 960 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 962A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 962B (note that the juxtaposition of the displacement field 962A directly over the displacement factor field 962B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size (N) of a memory access, where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the memory operands' total size (N) in order to generate the final displacement to be used in calculating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 974 (described later herein) and the data manipulation field 954C (described later herein).
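As an illustration of the scaled (compressed) displacement just described, the effective-address computation can be sketched in C as follows; the function and parameter names are assumptions for illustration, not encoding details from the format itself:

    #include <stdint.h>

    /* Effective address with a displacement factor: the stored one-byte
       displacement is multiplied by the memory access size N instead of
       being used directly, extending the reach of an 8-bit displacement. */
    uint64_t effective_address(uint64_t base, uint64_t index,
                               unsigned scale_log2, int8_t disp_factor,
                               unsigned n_bytes) {
        return base + (index << scale_log2) + (int64_t)disp_factor * n_bytes;
    }

For example, with N = 64 (a full 64-byte vector access), a stored factor of 1 yields a final displacement of 64 bytes.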
The displacement field 962A and the displacement factor field 962B are optional in the sense that they are not used for the no memory access 905 instruction templates, and/or different embodiments may implement only one or neither of the two.

Data element width field 964 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or if data element widths are supported using some aspect of the opcodes.

Write mask field 970 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the incremental operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the incremental operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the incremental operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements being modified be consecutive. Thus, the write mask field 970 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. Additionally, this masking can be used for fault suppression (i.e., by masking the destination's data element positions to prevent receipt of the result of any operation that may/will cause a fault - e.g., assume that a vector in memory crosses a page boundary and that the first page but not the second page would cause a page fault; the page fault can be ignored if all data elements of the vector that lie on the first page are masked by the write mask). Further, write masks allow for "vectorizing loops" that contain certain types of conditional statements. While embodiments of the invention are described in which the write mask field's 970 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 970 content indirectly identifies the masking to be performed), other embodiments instead or additionally allow the write mask field's 970 content to directly specify the masking to be performed. In addition, zeroing allows for performance improvements when: 1) register renaming is used on instructions whose destination operand is not also a source (also called non-ternary instructions), because during the register renaming pipeline stage the destination is no longer an implicit source (no data elements from the current destination register need to be copied to the renamed destination register or somehow carried along with the operation, because any data element that is not the result of the operation (any masked data element) will be zeroed); and 2) during the write back stage, because zeros are being written.
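Merging versus zeroing as just described can be sketched per element in C; the element type, lane count, and function name are illustrative assumptions:

    #include <stdint.h>

    #define LANES 16

    /* Per-element write-masked update of a destination vector: merging
       keeps the old destination value where the mask bit is 0, while
       zeroing writes 0 there instead. */
    void masked_writeback(int32_t dst[LANES], const int32_t result[LANES],
                          uint16_t k, int zeroing) {
        for (int i = 0; i < LANES; i++) {
            if (k & (1u << i))
                dst[i] = result[i];  /* active lane: take the operation's result */
            else if (zeroing)
                dst[i] = 0;          /* zeroing-writemasking */
            /* else: merging-writemasking leaves dst[i] unchanged */
        }
    }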
Immediate field 972 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in implementations of the generic vector friendly format that do not support immediates and that it is not present in instructions that do not use an immediate.

Instruction template class selection

Class field 968 - its content distinguishes between different classes of instructions. Referring to FIGS. 9A-B, the content of this field selects between class A and class B instructions. In FIGS. 9A-B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 968A and class B 968B for the class field 968, respectively, in FIGS. 9A-B).

Class A no memory access instruction templates

In the case of the no memory access 905 instruction templates of class A, the alpha field 952 is interpreted as an RS field 952A whose content distinguishes which one of the different incremental operation types is to be performed (e.g., rounding 952A.1 and data conversion 952A.2 are specified respectively for the no memory access, rounding type operation 910 and the no memory access, data conversion type operation 915 instruction templates), while the beta field 954 distinguishes which of the operations of the specified type is to be performed. In FIG. 9A, rounded corner blocks are used to indicate the presence of a specific value (e.g., no memory access 946A in the modifier field 946; rounding 952A.1 and data conversion 952A.2 for the alpha field 952/RS field 952A). In the no memory access 905 instruction templates, the scaling field 960, the displacement field 962A, and the displacement scaling field 962B are not present.

No memory access instruction templates - full round control type operation

In the no memory access, full round control type operation 910 instruction template, the beta field 954 is interpreted as a rounding control field 954A whose content provides static rounding. While in the described embodiments of the invention the rounding control field 954A includes a suppress all floating point exceptions (SAE) field 956 and a rounding operation control field 958, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the rounding operation control field 958).

SAE field 956 - its content distinguishes whether or not to disable exception event reporting; when the SAE field's 956 content indicates that suppression is enabled, a given instruction does not report any kind of floating point exception flag and does not invoke any floating point exception handler.

Rounding operation control field 958 - its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the rounding operation control field 958 allows the rounding mode to be changed on a per-instruction basis, and is therefore particularly useful when this is required.
In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the rounding operation control field's 950 content overrides that register value (being able to choose the rounding mode without having to perform a save-modify-restore on such a control register is advantageous).

No memory access instruction templates - data conversion type operation

In the no memory access, data conversion type operation 915 instruction template, the beta field 954 is interpreted as a data conversion field 954B whose content distinguishes which one of a number of data conversions is to be performed (e.g., no data conversion, swizzle, broadcast).

Class A memory access instruction templates

In the case of the memory access 920 instruction templates of class A, the alpha field 952 is interpreted as an eviction hint field 952B whose content distinguishes which one of the eviction hints is to be used (in FIG. 9A, temporal 952B.1 and non-temporal 952B.2 are specified respectively for the memory access, temporal 925 instruction template and the memory access, non-temporal 930 instruction template), while the beta field 954 is interpreted as a data manipulation field 954C whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation, broadcast, up conversion of a source, and down conversion of a destination). The memory access 920 instruction templates include the scaling field 960 and, optionally, the displacement field 962A or the displacement scaling field 962B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask. In FIG. 9A, rounded corner squares are used to indicate the presence of a specific value in a field (e.g., memory access 946B for the modifier field 946; temporal 952B.1 and non-temporal 952B.2 for the alpha field 952/eviction hint field 952B).

Memory access instruction templates - temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory access instruction templates - non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Class B instruction templates

In the case of the instruction templates of class B, the alpha field 952 is interpreted as a write mask control (Z) field 952C whose content distinguishes whether the write masking controlled by the write mask field 970 should be a merging or a zeroing.
Class B no memory access instruction templates

In the case of the no memory access 905 instruction templates of class B, part of the beta field 954 is interpreted as an RL field 957A whose content distinguishes which one of the different incremental operation types is to be performed (e.g., rounding 957A.1 and vector length (VSIZE) 957A.2 are specified respectively for the no memory access, write mask control, partial round control type operation 912 instruction template and the no memory access, write mask control, VSIZE type operation 917 instruction template), while the rest of the beta field 954 distinguishes which of the operations of the specified type is to be performed. In FIG. 9B, rounded corner blocks are used to indicate the presence of a specific value (e.g., no memory access 946A in the modifier field 946; rounding 957A.1 and VSIZE 957A.2 for the RL field 957A). In the no memory access 905 instruction templates, the scaling field 960, the displacement field 962A, and the displacement scaling field 962B are not present.

No memory access instruction templates - write mask control, partial round control type operation

In the no memory access, write mask control, partial round control type operation 912 instruction template, the rest of the beta field 954 is interpreted as a rounding operation field 959A, and exception event reporting is disabled (a given instruction does not report any kind of floating point exception flag and does not invoke any floating point exception handler).

Rounding operation control field 959A - just as with the rounding operation control field 958, its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the rounding operation control field 959A allows the rounding mode to be changed on a per-instruction basis, and is therefore particularly useful when this is required. In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the rounding operation control field's 950 content overrides that register value (being able to choose the rounding mode without having to perform a save-modify-restore on such a control register is advantageous).

No memory access instruction templates - write mask control, VSIZE type operation

In the no memory access, write mask control, VSIZE type operation 917 instruction template, the rest of the beta field 954 is interpreted as a vector length field 959B whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bytes).

Class B memory access instruction templates

In the case of the memory access 920 instruction templates of class B, part of the beta field 954 is interpreted as a broadcast field 957B whose content distinguishes whether or not a broadcast type data manipulation operation is to be performed, while the rest of the beta field 954 is interpreted as the vector length field 959B. The memory access 920 instruction templates include the scaling field 960 and, optionally, the displacement field 962A or the displacement scaling field 962B.

Additional comments regarding the fields

With regard to the generic vector friendly instruction format 900, the full opcode field 974 is illustrated as including the format field 940, the base operation field 942, and the data element width field 964. While one embodiment is shown in which the full opcode field 974 includes all of these fields, in embodiments that do not support all of them the full opcode field 974 includes fewer than all of these fields.
The full opcode field 974 provides the operation code.

The incremental operation field 950, the data element width field 964, and the write mask field 970 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format.

The combination of the write mask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The instruction format requires a relatively small number of bits because it reuses different fields for different purposes based on the contents of other fields. For example, one perspective is that the modifier field's content chooses between the no memory access 905 instruction templates of FIGS. 9A-B and the memory access 920 instruction templates of FIGS. 9A-B, while the class field's 968 content chooses within those no memory access 905 instruction templates between the instruction templates 910/915 of FIG. 9A and 912/917 of FIG. 9B, and within those memory access 920 instruction templates between the instruction templates 925/930 of FIG. 9A and 927 of FIG. 9B. From another perspective, the class field's 968 content chooses between the class A and class B instruction templates of FIGS. 9A and 9B respectively, while the modifier field's content chooses within those class A instruction templates between the instruction templates 905 and 920 of FIG. 9A, and within those class B instruction templates between the instruction templates 905 and 920 of FIG. 9B. In the case where the class field's content indicates a class A instruction template, the content of the modifier field 946 chooses the interpretation of the alpha field 952 (between the rs field 952A and the EH field 952B). In a related manner, the contents of the modifier field 946 and the class field 968 choose whether the alpha field is interpreted as the rs field 952A, the EH field 952B, or the write mask control (Z) field 952C. In the case where the class and modifier fields indicate a class A no memory access operation, the interpretation of the incremental field's beta field changes based on the rs field's content, while in the case where the class and modifier fields indicate a class B no memory access operation, the interpretation of the beta field depends on the contents of the RL field. In the case where the class and modifier fields indicate a class A memory access operation, the interpretation of the incremental field's beta field changes based on the base operation field's content, while in the case where the class and modifier fields indicate a class B memory access operation, the interpretation of the broadcast field 957B within the incremental field's beta field changes based on the base operation field's contents. Thus, the combination of the base operation field, the modifier field, and the incremental operation field allows for an even wider variety of incremental operations to be specified.
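The alpha-field reuse just described can be summarized in a small C sketch; the enum and function names are assumptions used only to restate the selection logic above:

    /* Interpretation of the alpha field 952 as described above:
       class A, no memory access -> rs field 952A
       class A, memory access    -> eviction hint (EH) field 952B
       class B (either template) -> write mask control (Z) field 952C */
    enum alpha_interpretation { RS_FIELD_952A, EH_FIELD_952B, Z_FIELD_952C };

    enum alpha_interpretation interpret_alpha(int is_class_b, int is_memory_access) {
        if (is_class_b)
            return Z_FIELD_952C;
        return is_memory_access ? EH_FIELD_952B : RS_FIELD_952A;
    }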
The various instruction templates found within class A and class B are beneficial in different situations. Class A is useful when zeroing write masking or smaller vector lengths are desired for performance reasons. For example, zeroing allows avoiding fake dependencies when renaming is used, since we no longer need to artificially merge with the destination; as another example, vector length control eases store-load forwarding issues when emulating shorter vector sizes with the vector mask. Class B is useful when it is desirable to: 1) allow floating point exceptions while using rounding mode controls at the same time (e.g., when the content of the SAE field indicates no); 2) be able to use upconversion, swizzling, swap, and/or downconversion; 3) operate on the graphics data type. For instance, upconversion, swizzling, swap, downconversion, and the graphics data type reduce the number of instructions required when working with sources in a different format; as another example, the ability to allow exceptions provides full IEEE compliance with directed rounding modes.

Exemplary specific vector friendly instruction format

FIGS. 10A-C illustrate an exemplary specific vector friendly instruction format according to embodiments of the invention. FIGS. 10A-C show a specific vector friendly instruction format 1000 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1000 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions. The fields from FIG. 9 into which the fields from FIGS. 10A-C map are illustrated.

It should be understood that although embodiments of the invention are described with reference to the specific vector friendly instruction format 1000 in the context of the generic vector friendly instruction format 900, the invention is not limited to the specific vector friendly instruction format 1000 except where otherwise stated. For example, the generic vector friendly instruction format 900 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1000 is shown as having fields of specific sizes. By way of specific example, while the data element width field 964 is illustrated as a one-bit field in the specific vector friendly instruction format 1000, the invention is not so limited (that is, the generic vector friendly instruction format 900 contemplates other sizes of the data element width field 964).

Format - FIGS. 10A-C

The generic vector friendly instruction format 900 includes the following fields listed below in the order illustrated in FIGS. 10A-C.

EVEX prefix (bytes 0-3)

EVEX prefix 1002 - is encoded in a four-byte form.

Format field 940 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 940, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 1005 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7]-R), an EVEX.X bit field (EVEX byte 1, bit [6]-X), and an EVEX.B bit field (EVEX byte 1, bit [5]-B).
The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1010 - this is the first part of the REX' field 1010 and is the EVEX.R' bit field (EVEX byte 1, bit [4]-R') that is used to encode either the upper 16 or the lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field in the MOD R/M field (described below); alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1015 (EVEX byte 1, bits [3:0]-mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

Data element width field 964 (EVEX byte 2, bit [7]-W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 1020 (EVEX byte 2, bits [6:3]-vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form and valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand; the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 1020 encodes the four low-order bits of the first source register specifier stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 968 class field (EVEX byte 2, bit [2]-U) - if EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B or EVEX.U1.

Prefix encoding field 1025 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency, but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.
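The byte layout described so far (the format byte, then the REX bits and opcode map in byte 1, then W/vvvv/U/pp in byte 2) can be made concrete with a small decoder sketch. This is a minimal illustration that follows only the field positions given in the text above; the struct and function names are invented, byte 3 is ignored, and no claim is made that this matches any shipping decoder:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     valid;        /* byte 0 must be 0x62 (format field 940) */
    unsigned R, X, B, Rp;  /* REX/REX' bits, stored inverted in the prefix */
    unsigned mmmm;         /* opcode map field 1015 */
    unsigned W;            /* data element width field 964 */
    unsigned vvvv;         /* source specifier, stored in 1's complement */
    unsigned U;            /* class field 968: 0 = class A, 1 = class B */
    unsigned pp;           /* compressed SIMD prefix: 00/66/F3/F2 */
} EvexPrefix;

static EvexPrefix decode_evex_bytes_0_to_2(const uint8_t p[3]) {
    EvexPrefix e = {0};
    e.valid = (p[0] == 0x62);
    e.R    = ((p[1] >> 7) & 1) ^ 1;       /* un-invert the stored bit */
    e.X    = ((p[1] >> 6) & 1) ^ 1;
    e.B    = ((p[1] >> 5) & 1) ^ 1;
    e.Rp   = ((p[1] >> 4) & 1) ^ 1;
    e.mmmm =  p[1] & 0x0F;
    e.W    = (p[2] >> 7) & 1;
    e.vvvv = ((p[2] >> 3) & 0x0F) ^ 0x0F; /* 1's complement encoding */
    e.U    = (p[2] >> 2) & 1;
    e.pp   =  p[2] & 0x3;
    return e;
}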
Alpha field 952 (EVEX byte 3, bit [7]-EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with alpha) - as previously described, this field is context specific. Additional description is provided below.

Beta field 954 (EVEX byte 3, bits [6:4]-SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific. Additional description is provided below.

REX' field 1010 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3]-V') that may be used to encode either the upper 16 or the lower 16 of the extended 32 register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 970 (EVEX byte 3, bits [2:0]-kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real opcode field 1030 (byte 4)

This is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 1040 (byte 5)

Modifier field 946 (MODR/M.MOD, bits [7-6]-MOD field 1042) - as previously described, the MOD field's 1042 content distinguishes between memory access and no memory access operations. This field will be further described below.

MODR/M.reg field 1044, bits [5-3] - the role of the ModR/M.reg field can be summarized in two situations: ModR/M.reg encodes either the destination register operand or a source register operand, or ModR/M.reg is treated as an opcode extension and not used to encode any instruction operand.

MODR/M.r/m field 1046, bits [2-0] - the role of the ModR/M.r/m field may include the following: ModR/M.r/m encodes the instruction operand that references a memory address, or ModR/M.r/m encodes either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6)

Scale field 960 (SIB.SS, bits [7-6]) - as previously described, the scale field's 960 content is used for memory address generation. This field will be further described below.

SIB.xxx 1054 (bits [5-3]) and SIB.bbb 1056 (bits [2-0]) - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement byte(s) (byte 7 or bytes 7-10)

Displacement field 962A (bytes 7-10) - when the MOD field 1042 contains 10, bytes 7-10 are the displacement field 962A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 962B (byte 7) - when the MOD field 1042 contains 01, byte 7 is the displacement factor field 962B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 962B is a reinterpretation of disp8; when using the displacement factor field 962B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 962B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 962B is encoded the same way as an x86 instruction set 8-bit displacement (so there are no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).
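To make the disp8*N arithmetic concrete, the following self-contained C sketch (illustrative only; the function name is invented) computes the byte offset from the stored 8-bit value and the memory operand size N, and shows the range gain over a plain disp8:

#include <stdint.h>
#include <stdio.h>

/* The stored byte is sign extended and then scaled by N, the size in
 * bytes of the memory operand access. */
static int32_t disp8_times_n(int8_t stored, int32_t n) {
    return (int32_t)stored * n;
}

int main(void) {
    /* With 64-byte operands (N = 64), the single byte 0x01 reaches
     * offset +64, and 0x80 (-128) reaches -8192 -- far beyond the
     * -128..+127 range of an unscaled 8-bit displacement. */
    printf("%d\n", (int)disp8_times_n((int8_t)0x01, 64));  /* 64 */
    printf("%d\n", (int)disp8_times_n((int8_t)0x80, 64));  /* -8192 */
    return 0;
}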
Immediate

The immediate field 972 operates as previously described.

Exemplary register architecture - FIG. 11

FIG. 11 is a block diagram of a register architecture 1100 according to one embodiment of the invention. The register files and registers of this register architecture are listed below:

Vector register file 1110 - in the embodiment illustrated, there are 32 vector registers that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 1000 operates on this overlaid register file as follows (the template reference numerals follow FIGS. 9A-B):

Instruction templates that do not include the vector length field 959B: class A (FIG. 9A; U=0), operations 910, 915, 925, 930 - zmm registers (the vector length is 64 bytes); class B (FIG. 9B; U=1), operation 912 - zmm registers (the vector length is 64 bytes).

Instruction templates that do include the vector length field 959B: class B (FIG. 9B; U=1), operations 917 and 927 - zmm, ymm, or xmm registers (the vector length is 64 bytes, 32 bytes, or 16 bytes) depending on the vector length field 959B.

In other words, the vector length field 959B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 959B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1000 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 1115 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. As previously described, in one embodiment of the invention the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.
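Since the write mask registers have just been introduced, it is worth illustrating the per-element semantics that the write mask control (Z) field selects between. The sketch below is a minimal model in C (the names are invented and not tied to any real ISA helper) of merging versus zeroing behavior for a single destination element:

#include <stdint.h>
#include <stdbool.h>

/* Under merging write masking, an element whose mask bit is 0 keeps the
 * old destination value; under zeroing write masking it is cleared. */
static int32_t masked_element(bool mask_bit, bool zeroing,
                              int32_t result, int32_t old_dest) {
    if (mask_bit) return result;    /* element is written normally */
    return zeroing ? 0 : old_dest;  /* element is masked off */
}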
Multimedia extensions control status register (MXCSR) 1120 - in the embodiment illustrated, this 32-bit register provides status and control bits used in floating-point operations.

General purpose registers 1125 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Extended flags (EFLAGS) register 1130 - in the embodiment illustrated, this 32-bit register is used to record the results of many instructions.

Floating point control word (FCW) register 1135 and floating point status word (FSW) register 1140 - in the embodiment illustrated, these registers are used by the x87 instruction set extensions to set rounding modes, exception masks and flags in the case of the FCW, and to keep track of exceptions in the case of the FSW.

Scalar floating point stack register file (x87 stack) 1145, on which is aliased the MMX packed integer flat register file 1150 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Segment registers 1155 - in the illustrated embodiment, there are six 16-bit registers used to store data used for segmented address generation.

RIP register 1165 - in the illustrated embodiment, this 64-bit register stores the instruction pointer.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary in-order processor architecture - FIGS. 12A-12B

FIGS. 12A-B illustrate block diagrams of an exemplary in-order processor architecture. These exemplary embodiments are designed around multiple instantiations of an in-order CPU core that is augmented with a wide vector processor (VPU). Cores communicate through a high-bandwidth interconnect network with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application. For example, an implementation of this embodiment as a stand-alone CPU would typically include a PCIe bus.

FIG. 12A is a block diagram of a single CPU core, along with its connection to the on-die interconnect network 1202 and with its local subset of the level 2 (L2) cache 1204, according to embodiments of the invention. An instruction decoder 1200 supports the x86 instruction set with an extension including the specific vector instruction format 1000. While in one embodiment of the invention (to simplify the design) a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1206, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The L1 cache 1206 allows low-latency accesses to cache memory into the scalar and vector units.
Together with load-op instructions in the vector friendly instruction format, this means that the L1 cache 1206 can be treated somewhat like an extended register file. This significantly improves the performance of many algorithms, especially with the eviction hint field 952B.

The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per CPU core. Each CPU has a direct access path to its own local subset of the L2 cache 1204. Data read by a CPU core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other CPUs accessing their own local L2 cache subsets. Data written by a CPU core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.

FIG. 12B is an exploded view of part of the CPU core in FIG. 12A according to embodiments of the invention. FIG. 12B includes an L1 data cache 1206A (part of the L1 cache 1206), as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication with replication unit 1224 on the memory input. Write mask registers 1226 allow predicating the resulting vector writes.

Register data can be swizzled in a variety of ways, e.g., to support matrix multiplication. Data from memory can be replicated across the VPU lanes. This is a common operation in both graphics and non-graphics parallel data processing, which significantly increases cache efficiency.

The ring network is bi-directional to allow agents such as CPU cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data-path is 512 bits wide per direction.

An exemplary out-of-order architecture - FIG. 13

FIG. 13 is a block diagram illustrating an exemplary out-of-order architecture according to embodiments of the invention. Specifically, FIG. 13 illustrates a well-known exemplary out-of-order architecture that has been modified to incorporate the vector friendly instruction format and execution thereof. In FIG. 13, arrows denote a coupling between two or more units, and the direction of the arrow indicates a direction of data flow between those units. FIG. 13 includes a front end unit 1305 coupled to an execution engine unit 1310 and a memory unit 1315; the execution engine unit 1310 is further coupled to the memory unit 1315.

The front end unit 1305 includes a level 1 (L1) branch prediction unit 1320 coupled to a level 2 (L2) branch prediction unit 1322. The L1 and L2 branch prediction units 1320 and 1322 are coupled to an L1 instruction cache unit 1324. The L1 instruction cache unit 1324 is coupled to an instruction translation lookaside buffer (TLB) 1326, which is further coupled to an instruction fetch and predecode unit 1328. The instruction fetch and predecode unit 1328 is coupled to an instruction queue unit 1330, which is further coupled to a decode unit 1332. The decode unit 1332 comprises a complex decoder unit 1334 and three simple decoder units 1336, 1338, and 1340. The decode unit 1332 includes a micro-code ROM unit 1342.
The decode unit 1332 may operate as previously described above in the decode stage. The L1 instruction cache unit 1324 is further coupled to an L2 cache unit 1348 in the memory unit 1315. The instruction TLB unit 1326 is further coupled to a second level TLB unit 1346 in the memory unit 1315. The decode unit 1332, the micro-code ROM unit 1342, and a loop stream detector unit 1344 are each coupled to a rename/allocator unit 1356 in the execution engine unit 1310.

The execution engine unit 1310 includes the rename/allocator unit 1356, which is coupled to a retirement unit 1374 and a unified scheduler unit 1358. The retirement unit 1374 is further coupled to execution units 1360 and includes a reorder buffer unit 1378. The unified scheduler unit 1358 is further coupled to a physical register files unit 1376, which is coupled to the execution units 1360. The physical register files unit 1376 comprises a vector registers unit 1377A, a write mask registers unit 1377B, and a scalar registers unit 1377C; these register units may provide the vector registers 1110, the vector mask registers 1115, and the general purpose registers 1125, and the physical register files unit 1376 may include additional register files not shown (e.g., the scalar floating point stack register file 1145 aliased on the MMX packed integer flat register file 1150). The execution units 1360 include three mixed scalar and vector units 1362, 1364, and 1372; a load unit 1366; a store address unit 1368; and a store data unit 1370. The load unit 1366, the store address unit 1368, and the store data unit 1370 are each coupled further to a data TLB unit 1352 in the memory unit 1315.

The memory unit 1315 includes the second level TLB unit 1346, which is coupled to the data TLB unit 1352. The data TLB unit 1352 is coupled to an L1 data cache unit 1354. The L1 data cache unit 1354 is further coupled to the L2 cache unit 1348. In some embodiments, the L2 cache unit 1348 is further coupled to L3 and higher cache units 1350 inside and/or outside of the memory unit 1315.

By way of example, the exemplary out-of-order architecture may implement a process pipeline as follows: 1) the instruction fetch and predecode unit 1328 performs the fetch and length decode stages; 2) the decode unit 1332 performs the decode stage; 3) the rename/allocator unit 1356 performs the allocation stage and the renaming stage; 4) the unified scheduler 1358 performs the schedule stage; 5) the physical register files unit 1376, the reorder buffer unit 1378, and the memory unit 1315 perform the register read/memory read stage; the execution units 1360 perform the execute/data transform stage; 6) the memory unit 1315 and the reorder buffer unit 1378 perform the write back/memory write stage; 7) the retirement unit 1374 performs the ROB read stage; 8) various units may be involved in the exception handling stage; and 9) the retirement unit 1374 and the physical register files unit 1376 perform the commit stage.
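The stage-to-unit mapping above can be restated as a simple table in code. The enum below is only an illustrative model (the identifiers are invented); the stage order and the units named in the comments are taken directly from the preceding paragraph:

/* Pipeline stages of the exemplary out-of-order architecture. */
typedef enum {
    STAGE_FETCH_AND_LENGTH_DECODE, /* instruction fetch and predecode unit 1328 */
    STAGE_DECODE,                  /* decode unit 1332 */
    STAGE_ALLOCATE_RENAME,         /* rename/allocator unit 1356 */
    STAGE_SCHEDULE,                /* unified scheduler 1358 */
    STAGE_REG_READ_MEM_READ,       /* register files 1376, ROB 1378, memory unit 1315 */
    STAGE_EXECUTE_DATA_TRANSFORM,  /* execution units 1360 */
    STAGE_WRITE_BACK_MEM_WRITE,    /* memory unit 1315 and ROB 1378 */
    STAGE_ROB_READ,                /* retirement unit 1374 */
    STAGE_EXCEPTION_HANDLING,      /* various units */
    STAGE_COMMIT                   /* retirement unit 1374 and register files 1376 */
} PipelineStage;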
Exemplary single core and multicore processors - FIG. 18

FIG. 18 is a block diagram of a single core processor and a multicore processor 1800 with integrated memory controller and graphics according to embodiments of the invention. The solid lined boxes in FIG. 18 illustrate a processor 1800 with a single core 1802A, a system agent 1810, and a set of one or more bus controller units 1816, while the optional addition of the dashed lined boxes illustrates an alternative processor 1800 with multiple cores 1802A-N, a set of one or more integrated memory controller units 1814 in the system agent unit 1810, and integrated graphics logic 1808.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1806, and external memory (not shown) coupled to the set of integrated memory controller units 1814. The set of shared cache units 1806 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1812 interconnects the integrated graphics logic 1808, the set of shared cache units 1806, and the system agent unit 1810, alternative embodiments may use any number of well-known techniques for interconnecting such units.

In some embodiments, one or more of the cores 1802A-N are capable of multi-threading. The system agent 1810 includes those components coordinating and operating the cores 1802A-N. The system agent unit 1810 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include the logic and components needed for regulating the power state of the cores 1802A-N and the integrated graphics logic 1808. The display unit is for driving one or more externally connected displays.

The cores 1802A-N may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 1802A-N may be in-order (such as those shown in FIGS. 12A and 12B) while others are out-of-order (such as those shown in FIG. 13). As another example, two or more of the cores 1802A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. At least one of the cores is capable of executing the vector friendly instruction format described herein.

The processor may be a general-purpose processor, such as a CoreTM i3, i5, i7, 2 Duo and Quad, XeonTM, or ItaniumTM processor, which are available from Intel Corporation of Santa Clara, Calif. Alternatively, the processor may be from another company. The processor may be a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a co-processor, an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1800 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

Exemplary computer systems and processors - FIGS. 14-17

FIGS. 14-16 are exemplary systems suitable for including the processor 1800, while FIG. 17 is an exemplary system on a chip (SoC) that may include one or more of the cores 1802. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a graphics memory controller hub (GMCH) 1420. The optional nature of additional processors 1415 is denoted in FIG. 14 with broken lines.

Each processor 1410, 1415 may be some version of the processor 1800. It should be noted, however, that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 1410, 1415.

FIG. 14 illustrates that the GMCH 1420 may be coupled to a memory 1440 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache.

The GMCH 1420 may be a chipset, or a portion of a chipset. The GMCH 1420 may communicate with the processors 1410, 1415 and control interaction between the processors 1410, 1415 and the memory 1440. The GMCH 1420 may also act as an accelerated bus interface between the processors 1410, 1415 and other elements of the system 1400. For at least one embodiment, the GMCH 1420 communicates with the processors 1410, 1415 via a multi-drop bus, such as a frontside bus (FSB) 1495.

Furthermore, the GMCH 1420 is coupled to a display 1445 (such as a flat panel display). The GMCH 1420 may include an integrated graphics accelerator. The GMCH 1420 is further coupled to an input/output (I/O) controller hub (ICH) 1450, which may be used to couple various peripheral devices to the system 1400. Shown for example in the embodiment of FIG. 14 is an external graphics device 1460, which may be a discrete graphics device coupled to the ICH 1450, along with another peripheral device 1470.

Alternatively, additional or different processors may also be present in the system 1400. For example, the additional processors 1415 may include additional processors that are the same as the processor 1410, additional processors that are heterogeneous or asymmetric to the processor 1410, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the physical resources 1410, 1415 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1410, 1415. For at least one embodiment, the various processing elements 1410, 1415 may reside in the same die package.

Referring now to FIG. 15, shown is a block diagram of a second system 1500 in accordance with an embodiment of the present invention. As shown in FIG. 15, multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. As shown in FIG. 15, each of the processors 1570 and 1580 may be some version of the processor 1800.

Alternatively, one or more of the processors 1570, 1580 may be an element other than a processor, such as an accelerator or a field programmable gate array.

While shown with only two processors 1570, 1580, it is to be understood that the scope of the present invention is not so limited.
In other embodiments, one or more additional processing elements may be present in a given processor.

Processor 1570 may further include an integrated memory controller hub (IMC) 1572 and point-to-point (P-P) interfaces 1576 and 1578. Similarly, second processor 1580 may include an IMC 1582 and P-P interfaces 1586 and 1588. Processors 1570, 1580 may exchange data via a point-to-point (PtP) interface 1550 using PtP interface circuits 1578, 1588. As shown in FIG. 15, IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1542 and a memory 1544, which may be portions of main memory locally attached to the respective processors.

Processors 1570, 1580 may each exchange data with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, 1598. Chipset 1590 may also exchange data with a high-performance graphics circuit 1538 via a high-performance graphics interface 1539.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 15, various I/O devices 1514 may be coupled to first bus 1516, along with a bus bridge 1518 which couples first bus 1516 to a second bus 1520. In one embodiment, second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 1520 including, for example, a keyboard/mouse 1522, communication devices 1526, and a data storage unit 1528 such as a disk drive or other mass storage device which may include code 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 16, shown is a block diagram of a third system 1600 in accordance with an embodiment of the present invention. Like elements in FIGS. 15 and 16 bear like reference numerals, and certain aspects of FIG. 15 have been omitted from FIG. 16 in order to avoid obscuring other aspects of FIG. 16.

FIG. 16 illustrates that the processing elements 1570, 1580 may include integrated memory and I/O control logic (CL) 1572 and 1582, respectively. For at least one embodiment, the CL 1572, 1582 may include integrated memory controller hub (IMC) logic such as that described above in connection with FIGS. 14 and 15. In addition, CL 1572, 1582 may also include I/O control logic. FIG. 16 illustrates that not only are the memories 1542, 1544 coupled to the CL 1572, 1582, but also that I/O devices 1614 are also coupled to the control logic 1572, 1582. Legacy I/O devices 1615 are coupled to the chipset 1590.

Referring now to FIG. 17, shown is a block diagram of a SoC 1700 in accordance with an embodiment of the present invention. Similar elements bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In FIG. 17, an interconnect unit 1702 is coupled to: an application processor 1710, which includes a set of one or more cores 1802A-N and shared cache unit(s) 1806; a system agent unit 1810; bus controller unit(s) 1816; integrated memory controller unit(s) 1814; a set of one or more media processors 1720, which may include integrated graphics logic 1808, an image processor 1724 for providing still and/or video camera functionality, an audio processor 1726 for providing hardware audio acceleration, and a video processor 1728 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions in the vector friendly instruction format or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 19 shows a program in a high level language 1902 that may be compiled using an x86 compiler 1904 to generate x86 binary code 1906 that may be natively executed by a processor with at least one x86 instruction set core 1916 (it is assumed that some of the instructions that were compiled are in the vector friendly instruction format). The processor with at least one x86 instruction set core 1916 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing: 1) a substantial portion of the instruction set of the Intel x86 instruction set core, or 2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 1904 represents a compiler that is operable to generate x86 binary code 1906 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1916. Similarly, FIG. 19 shows the program in the high level language 1902 that may be compiled using an alternative instruction set compiler 1908 to generate alternative instruction set binary code 1910 that may be natively executed by a processor without at least one x86 instruction set core 1914 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). The instruction converter 1912 is used to convert the x86 binary code 1906 into code that may be natively executed by the processor without an x86 instruction set core 1914. This converted code is not likely to be the same as the alternative instruction set binary code 1910, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1912 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1906.

Certain operations of the instruction(s) in the vector friendly instruction format disclosed herein may be performed by hardware components and may be embodied in machine-executable instructions that are used to cause, or at least result in, a circuit or other hardware component programmed with the instructions performing the operations. The circuit may include a general-purpose or special-purpose processor, or logic circuit, to name just a few examples. The operations may also optionally be performed by a combination of hardware and software. Execution logic and/or a processor may include specific or particular circuitry or other logic responsive to a machine instruction, or one or more control signals derived from the machine instruction, to store an instruction-specified result operand. For example, embodiments of the instruction(s) disclosed herein may be executed in one or more of the systems of FIGS. 14-17, and embodiments of the instruction(s) in the vector friendly instruction format may be stored in program code to be executed in the systems. Additionally, the processing elements of these figures may utilize one of the detailed pipelines and/or architectures (e.g., the in-order and out-of-order architectures) detailed herein. For example, the decode unit of the out-of-order architecture may decode the instruction(s), pass the decoded instruction to a vector or scalar unit, etc.

The above description is intended to illustrate preferred embodiments of the present invention. From the above discussion it should also be apparent that the invention may be modified in arrangement and detail by those skilled in the art without departing from the principles of the present invention, within the scope of the appended claims and their equivalents.
For example, one or more operations of a method may be combined or further broken apart.

Alternative embodiments

While embodiments have been described which would natively execute the vector friendly instruction format, alternative embodiments of the invention may implement the vector friendly instruction format through a processor that executes a different instruction set (e.g., a processor that executes the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif., or a processor that executes the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). Also, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).

In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that one or more other embodiments may be practiced without some of these specific details. The particular embodiments described are not provided to limit the invention but to illustrate embodiments of the invention. The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. |
A manufacturing method for a MirrorBit(R) Flash memory includes providing a semiconductor substrate and depositing a charge-trapping dielectric material. First and second bitlines are implanted and a wordline material is deposited. A hard mask material is deposited over the wordline material. The hard mask material is of a material having the characteristic of being deposited rather than grown. A photoresist material is deposited over the wordline material and is patterned; the hard mask material is processed using the patterned photoresist material to form a patterned hard mask. The patterned photoresist material is removed. The wordline material is processed using the patterned hard mask to form a wordline. The patterned hard mask material is removed. |
The invention claimed is:

1. A method of manufacturing an integrated circuit comprising: depositing a charge-trapping dielectric material over a semiconductor substrate; forming first and second bitlines in the semiconductor substrate; depositing a wordline material over the charge-trapping dielectric material; depositing a hard mask material over the wordline material, the hard mask material being a deposited oxide; depositing a photoresist material over the wordline material; processing the photoresist material to form a patterned photomask material; processing the hard mask material using the patterned photomask material to form a patterned hard mask material; removing the photomask material; processing the wordline material using the patterned hard mask material to form a wordline; and removing the patterned hard mask material.

2. The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the hard mask material uses a process selected from a group consisting of: High Temperature Deposition; Low Pressure Chemical Vapor Deposition; Plasma Enhanced Chemical Vapor Deposition; Oxygen Rich Silicon Deposition; and Tetraethylorthosilicate Oxide Deposition.

3. The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the hard mask material deposits a thickness thereof thinner than the thickness of the charge-trapping dielectric material.

4. The method of manufacturing an integrated circuit as claimed in claim 1 including: depositing an anti-reflective coating material after depositing the hard mask material; using the anti-reflective coating material to form the patterned photoresist material; forming a patterned anti-reflective coating material; and removing the patterned anti-reflective coating material.

5. The method of manufacturing an integrated circuit as claimed in claim 1 including: depositing an anti-reflective coating material after depositing the hard mask material; using the anti-reflective coating material to form the patterned photoresist material; forming a patterned anti-reflective coating material; and removing the patterned anti-reflective coating material before removing the patterned hard mask material.

6. The method of manufacturing an integrated circuit as claimed in claim 1 including performing a pre-saliciding deposition strip.

7. The method of manufacturing an integrated circuit as claimed in claim 1 including: depositing a wordline spacer material; forming wordline spacers around the wordline; and growing a salicide material on the wordline.

8. The method of manufacturing an integrated circuit as claimed in claim 1 including: depositing a spacer material; forming spacers around the wordline; and growing a salicide material on the wordline.

9. The method of manufacturing an integrated circuit as claimed in claim 1 including implanting a threshold adjustment implant into the semiconductor substrate.

10. The method of manufacturing an integrated circuit as claimed in claim 1 wherein the charge-trapping dielectric material is composed of: a first dielectric material; a charge-trapping material over the first dielectric material; and a second dielectric material over the charge-trapping material.
11. A method of manufacturing an integrated circuit comprising: providing a silicon substrate; depositing a charge-trapping dielectric layer over the silicon substrate; implanting first and second bitlines in the silicon substrate; depositing a polysilicon wordline layer over the charge-trapping dielectric layer; depositing an oxide hard mask layer over the polysilicon wordline layer, the oxide hard mask layer being a deposited oxide; depositing a photoresist layer over the polysilicon wordline layer; patterning the photoresist layer; processing the oxide hard mask layer using the patterned photoresist layer to form a patterned oxide hard mask layer; removing the patterned photoresist layer; processing the polysilicon wordline layer using the patterned oxide hard mask layer to form a polysilicon wordline; removing the patterned oxide hard mask layer without damaging the charge-trapping dielectric layer and the polysilicon wordline; and growing a salicide layer without short-circuiting the first and second n-type bitlines.

12. The method of manufacturing an integrated circuit as claimed in claim 11 wherein depositing the oxide hard mask layer uses a process selected from a group consisting of: High Temperature Deposition; Low Pressure Chemical Vapor Deposition; Plasma Enhanced Chemical Vapor Deposition; Oxygen Rich Silicon Deposition; and Tetraethylorthosilicate Oxide Deposition.

13. The method of manufacturing an integrated circuit as claimed in claim 11 wherein depositing the oxide hard mask layer deposits a thickness thereof thinner than the thickness of the charge-trapping dielectric layer.

14. The method of manufacturing an integrated circuit as claimed in claim 11 including: depositing an inorganic anti-reflective coating layer after depositing the oxide hard mask layer; patterning the inorganic anti-reflective coating layer to form a patterned inorganic anti-reflective coating layer; using the patterned inorganic anti-reflective coating layer to form the patterned photoresist layer; and removing the patterned inorganic anti-reflective coating layer.

15. The method of manufacturing an integrated circuit as claimed in claim 11 including: depositing an inorganic anti-reflective coating layer after depositing the oxide hard mask layer; using the inorganic anti-reflective coating layer to form the patterned photoresist layer; forming a patterned inorganic anti-reflective coating layer; and removing the patterned inorganic anti-reflective coating layer before removing the patterned oxide hard mask layer.

16. The method of manufacturing an integrated circuit as claimed in claim 11 including performing a pre-saliciding deposition strip.

17. The method of manufacturing an integrated circuit as claimed in claim 11 wherein growing the salicide layer includes growing a metal silicide selected from a group of metals consisting of cobalt, titanium, and nickel.

18. The method of manufacturing an integrated circuit as claimed in claim 11 including depositing an inorganic spacer layer and forming inorganic spacers around the polysilicon wordline before growing the salicide layer on the polysilicon wordline.

19. The method of manufacturing an integrated circuit as claimed in claim 11 wherein providing the silicon substrate provides a p-doped silicon substrate; and including implanting a p-type threshold adjustment implant into the p-doped silicon substrate.
20. The method of manufacturing an integrated circuit as claimed in claim 11 wherein the charge-trapping dielectric layer is composed of: a first oxide layer; a nitride layer over the first oxide layer; and a second oxide layer over the nitride layer. |
BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates generally to semiconductor technology and more specifically to manufacturing semiconductor memory.

2. Background Art

Various types of memories have been developed in the past as electronic memory media for computers and similar systems. Such memories include electrically erasable programmable read only memory (EEPROM) and electrically programmable read only memory (EPROM). Each type of memory has advantages and disadvantages. EEPROM can be easily erased without extra exterior equipment but has reduced data storage density, lower speed, and higher cost. EPROM, in contrast, is less expensive and has greater density but lacks erasability.

A newer type of memory called "Flash" EEPROM, or Flash memory, has become extremely popular because it combines the advantages of the high density and low cost of EPROM with the electrical erasability of EEPROM. Flash memory can be rewritten and can hold its contents without power. It is used in many portable electronic products, such as cell phones, portable computers, voice recorders, etc., as well as in many larger electronic systems, such as cars, planes, industrial control systems, etc.

In Flash memory, bits of information are programmed individually as in the older types of memory, such as dynamic random access memory (DRAM) and static random access memory (SRAM) memory chips. However, unlike DRAMs and SRAMs, where individual bits can be erased one at a time, Flash memory must currently be erased in fixed multi-bit blocks or sectors.

Conventionally, Flash memory is constructed of many Flash memory cells where a single bit is stored in each memory cell and the cells are programmed by hot electron injection and erased by Fowler-Nordheim tunneling. However, increased market demand has driven the development of Flash memory cells to increase both the speed and the density. Newer Flash memory cells have been developed that allow more than a single bit to be stored in each cell.

One memory cell structure involves the storage of more than one level of charge in a memory cell, with each level representative of a bit. This structure is referred to as a multi-level storage (MLS) architecture. Unfortunately, this structure inherently requires a great deal of precision in both programming and reading the differences in the levels to be able to distinguish the bits. If a memory cell using the MLS architecture is overcharged, even by a small amount, the only way to correct the bit error would be to erase the memory cell and totally reprogram the memory cell. The need in the MLS architecture to precisely control the amount of charge in a memory cell while programming also makes the technology slower and the data less reliable. It also takes longer to access or "read" precise amounts of charge. Thus, both speed and reliability are sacrificed in order to improve memory cell density.

An even newer technology allowing multiple bits to be stored in a single cell, known as "MirrorBit(R)" Flash memory, has been developed. In this technology, a memory cell is essentially split into two identical (mirrored) parts, each of which is formulated for storing one of two independent bits. Each MirrorBit Flash memory cell, like a traditional Flash cell, has a gate with a source and a drain.
However, unlike a traditional Flash cell in which the source is always connected to an electrical source and the drain is always connected to an electrical drain, each MirrorBit Flash memory cell can have the connections of the source and drain reversed during operation to permit the storing of two bits.

The MirrorBit Flash memory cell has a semiconductor substrate with implanted conductive bitlines. A multilayer charge storage layer, referred to as a "charge-trapping dielectric layer", is formed over the semiconductor substrate. The charge-trapping dielectric layer can generally be composed of three separate layers: a first insulating layer, a charge-trapping layer, and a second insulating layer. Wordlines are formed over the charge-trapping dielectric layer perpendicular to the bitlines. Programming circuitry controls two bits per cell by applying a signal to the wordline, which acts as a control gate, and changing bitline connections such that one bit is stored by the source and drain being connected in one arrangement and a complementary bit is stored by the source and drain being interchanged in another arrangement.

Programming of the cell is accomplished in one direction and reading is accomplished in a direction opposite that in which it is programmed.

A major problem with the MirrorBit architecture has been discovered in forming uniform wordlines by processes compatible with the materials used.

A solution to this problem has been long sought but has long eluded those skilled in the art.

DISCLOSURE OF THE INVENTION

The present invention provides a manufacturing method for semiconductor devices, which includes providing a semiconductor substrate and depositing a charge-trapping dielectric layer. First and second bitlines are implanted and a wordline layer is deposited. A hard mask layer is deposited over the wordline layer. The hard mask is of an oxide material having the characteristic of being deposited rather than grown, such as a deposited oxide. A photoresist layer is deposited over the wordline layer, patterned, and used to form a patterned hard mask layer. The photoresist layer is removed. The wordline layer is processed using the patterned hard mask layer to form a uniform wordline, and the patterned hard mask layer is removed. A salicide is grown without short-circuiting the first and second bitlines.

The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 (PRIOR ART) is a plan view of a conventional MirrorBit Flash EEPROM;

FIG. 2 (PRIOR ART) is a circuit schematic of a portion of one of the M*N array cores of FIG. 1 (PRIOR ART);

FIG. 3 (PRIOR ART) is a plan view of a portion of one of the M*N array cores 104 of FIG. 1 (PRIOR ART);

FIG. 4 (PRIOR ART) is a cross-sectional isometric view of a typical MirrorBit Flash memory cell along the line 4-4 of FIG. 3 (PRIOR ART);

FIG. 5 is a cross-sectional view of a partially processed memory cell similar to a cross-sectional view along line 5-5 in FIG. 3 (PRIOR ART);

FIG. 6 is the structure of FIG. 5 after formation of a hard mask and removal of the photoresist layer and the optional ARC layer;

FIG. 7 is the structure of FIG. 6 after processing using the hard mask to form wordlines;

FIG. 8 is the structure of FIG. 7 after deposition of a spacer material;

FIG. 9 is the structure of FIG. 8 with saliciding; and
FIG. 10 is a simplified process chart of the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

Referring now to FIG. 1 (PRIOR ART), therein is shown a plan view of a MirrorBit(R) Flash EEPROM 100, which commonly includes a semiconductor substrate 102 in which one or more high-density core regions and one or more low-density peripheral portions are formed. High-density core regions typically include one or more M*N array cores 104 of individually addressable, substantially identical MirrorBit Flash memory cells. Low-density peripheral portions typically include input/output (I/O) circuitry and programming circuitry for selectively addressing the individual memory cells. The programming circuitry is represented in part by and includes one or more x-decoders 108 and y-decoders 110, cooperating with I/O circuitry 106 for connecting the source, gate, and drain of selected addressed memory cells to predetermined voltages or impedances to effect designated operations on the memory cell, e.g., programming, reading, and erasing, and deriving necessary voltages to effect such operations.

The term "horizontal" as used herein is defined as a plane parallel to the conventional plane or surface of the semiconductor substrate 102 regardless of its orientation. The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms such as "on", "above", "below", "higher", "lower", "over", "under", "side", and "beside" are defined with respect to these horizontal and vertical planes. The term "processed" as used herein is defined to include one or more of the following: depositing or growing semiconductor materials, masking, patterning, photolithography, etching, implanting, removal, and/or stripping.

Referring now to FIG. 2 (PRIOR ART), therein is shown a circuit schematic of a portion of one of the M*N array cores 104 of FIG. 1 (PRIOR ART). The circuit schematic shows a line of memory cells 200, which includes memory cells 201 through 204 and which together can form an 8-bit word. Each of the memory cells 201 through 204 is connected to a wordline 206, which acts as a control gate. Each of the memory cells 201 through 204 has two associated bitlines, with most of the memory cells sharing a common bitline. The memory cell 201 has associated bitlines 208 and 209; the memory cell 202 has associated bitlines 209 and 210; the memory cell 203 has associated bitlines 210 and 211; and the memory cell 204 has associated bitlines 211 and 212.

Depending upon a signal on the wordline and the connection of the bitlines in a memory cell to an electrical source or drain, the memory cells 201 through 204 are capable of writing, reading, and erasing bits at locations 215 through 222. For example, control of the bit at location 215 is achieved through connection of the drain to the bitline 208 and the source to the bitline 209. Similarly, control of the bit at location 216 is achieved through connection of the drain to the bitline 209 and the source to the bitline 208. Although adjacent memory cells share common bitlines, the adjacent memory cells do not interfere with each other because the memory cells are programmed one at a time and only one memory cell is active at a time while programming.

Referring now to FIG. 3 (PRIOR ART), therein is shown a plan view of a portion of one of the M*N array cores 104 of FIG. 1 (PRIOR ART).
The semiconductor substrate 102 has a plurality of implanted bitlines 304 extending in parallel, and a plurality of formed wordlines 302 extending in parallel with one another and at right angles to the implanted bitlines 304. The wordlines 302 and bitlines 304 have contacts and interconnections (not shown) to the programming circuitry represented in part by the x-decoders 108 and y-decoders 110 of FIG. 1 (PRIOR ART).

Referring now to FIG. 4 (PRIOR ART), therein is shown a cross-sectional isometric view of a typical MirrorBit Flash memory cell along the line 4-4 of FIG. 3 (PRIOR ART), such as a memory cell 400. The semiconductor substrate 102 is a p-doped silicon substrate with a threshold adjustment implant 402 of a p-type material, such as boron. The threshold adjustment implant 402 provides a region that is more heavily doped than the semiconductor substrate 102 itself and assists in the control of the threshold voltage of the memory cell 400.

A charge-trapping dielectric layer 404 is deposited over the semiconductor substrate 102. The charge-trapping dielectric layer 404 generally can be composed of three separate layers: a first insulating layer 406, a charge-trapping layer 408, and a second insulating layer 410. The first and second insulating layers 406 and 410 are of an oxide dielectric material such as silicon dioxide (SiO2) and the charge-trapping layer 408 is of a nitride dielectric material such as silicon nitride (SixNy). The oxide-nitride-oxide configuration is frequently referred to, as a matter of convenience, as an "ONO layer".

The bitlines 304 of FIG. 3 (PRIOR ART) are implanted under the charge-trapping dielectric layer 404 in the semiconductor substrate 102, as typified by first and second conductive bitlines 412 and 414. They are typically of an implanted n-type material, such as arsenic, and can include an oxide portion (not shown) in some embodiments. The first and second conductive bitlines 412 and 414 are spaced apart and define, with the threshold adjustment implant 402, a volume between them that forms a channel 416.

A material, such as polysilicon, is deposited over the charge-trapping dielectric layer 404, patterned, etched, and stripped, resulting in a wordline 418. The wordline 418 is one of the wordlines 302 in FIG. 3 (PRIOR ART). It is understood that the implementation of each step in manufacturing has associated processing steps.

The locations 420 through 422 indicate where bits can be stored in the memory cell 400, and locations 424 and 426 are adjacent locations, which are independent of the memory cell 400.

Referring now to FIG. 5, therein is shown a cross-sectional view of a partially processed memory cell 500 similar to a cross-sectional view along line 5-5 in FIG. 3 (PRIOR ART). A p-type silicon substrate 501 has been implanted or processed with a p-type threshold adjustment implant 502.

A charge-trapping dielectric layer 504 is deposited over the silicon substrate 501. The charge-trapping dielectric layer 504 generally can be composed of three separate layers: a first insulating layer 506, a charge-trapping layer 508, and a second insulating layer 510. The first and second insulating layers 506 and 510 may be of an oxide dielectric material such as silicon dioxide (SiO2) and the charge-trapping layer 508 may be of a nitride dielectric material such as silicon nitride (SixNy) to form an ONO layer.
It will be noted that the present invention is not limited to specific dielectric or charge-trapping materials.

The bitlines, as typified by a first n-type bitline 512, are implanted under the charge-trapping dielectric layer 504 in the silicon substrate 501, and a wordline layer 515, of a material such as polysilicon, has been deposited over the charge-trapping dielectric layer 504. Again, it will be noted that the present invention is not limited to specific bitline or gate materials. For example, NPN structures are shown, but the structures can also be PNP.

A hard mask layer 516 has been deposited over the wordline layer 515 and has not been processed. The hard mask layer 516 can act as an anti-reflective coating (ARC) layer, or an inorganic ARC layer can be deposited as a separate layer, such as an optional ARC layer 517. More importantly, the hard mask layer 516 is formulated to be a material that can be stripped off the wordline layer 515 without the stripping process damaging any exposed portion of the charge-trapping dielectric layer 504 at the same time.

In order to be strippable without damaging the charge-trapping dielectric layer 504, the hard mask layer 516 should be properly formulated so as not to affect the middle layer of the charge-trapping dielectric layer 504. The hard mask layer 516 is of a material having the characteristic of being a "deposited oxide", which is defined as being a direct deposition of an oxide material on another material, as distinguished from a "grown oxide", which is defined as being formed by oxidation of another material. For example, where the charge-trapping dielectric layer 504 is an ONO layer, a deposited oxide such as deposited silicon dioxide or deposited silicon oxynitride is used. Further, it has been discovered that the silicon dioxide should be deposited rather than grown because grown silicon oxide forms integrally with the underlying polysilicon. This makes grown silicon oxide difficult to remove, and its removal damages the underlying polysilicon.

More particularly, it has been discovered that the following deposition processes will provide a deposited oxide having the above characteristics:

High Temperature Deposition (deposited from 750° F. to 800° F.)
Low Pressure Chemical Vapor Deposition
Plasma Enhanced Chemical Vapor Deposition
Oxygen Rich Silicon Deposition
Tetraethylorthosilicate Oxide (TEOS) Deposition

In addition, the hard mask layer 516 is made thinner than the thickness of the second insulating layer 510 and the charge-trapping layer 508 of the charge-trapping dielectric layer 504. This assures that the hard mask layer 516 is removed without damaging the charge-trapping layer 508 by the formation of holes.

A photoresist layer 518, generally of an organic photoresist material, has been deposited over the hard mask layer 516 or the optional ARC layer 517. The ARC layer 517, the hard mask layer 516, and the photoresist layer 518 have been processed to form openings 521 through 523 to expose the wordline layer 515. In FIG. 5, both the photoresist layer 518 and the ARC layer 517 have been processed (i.e., the materials have been deposited, masked, patterned, exposed, and etched) for processing the hard mask layer 516.

Referring now to FIG. 6, therein is shown the structure of FIG. 5 after formation of a patterned hard mask layer 519 and removal of the patterned photoresist layer 518 and the patterned ARC layer 517. The patterned hard mask layer 519 alone is used to create the structure of FIG. 7.
It should be noted that, in the past, the patterned photoresist layer would be used to create the wordlines 525 through 528 (without the hard mask layer 516) of FIG. 7, so the ONO layer would be exposed between the wordlines and the problems noted above would occur.

Referring now to FIG. 7, therein is shown the structure of FIG. 6 after processing using the patterned hard mask layer 519 to form wordlines 525 through 528. The processing using the patterned hard mask layer 519 exposes the charge-trapping dielectric layer 504 at exposed areas 530 through 532. However, since the hard mask layer 516 material is specifically formulated so as not to damage the charge-trapping dielectric layer 504 during removal, the charge-trapping dielectric layer 504 will not be damaged at the exposed areas 530 through 532 when the patterned hard mask layer 519 is removed.

For example, where the patterned hard mask layer 519 is of a material such as silicon oxide, its removal would only cause openings in the top oxide layer of the ONO layers and not in the nitride layer. Thus, the subsequent pre-metal deposition oxide strip and oxide spacer etch photoresist layer strip would not penetrate the nitride layer. This would leave the nitride layer and the bottom oxide layer to protect the semiconductor substrate. With no access for metal to the semiconductor substrate, there will be no short-circuiting of the bitlines.

Also, at the end of the removal of the patterned hard mask layer 519, the wordlines 525 through 528 will not be damaged or reduced in size because of the clear demarcation between the deposited oxide and the underlying polysilicon. A grown oxide would be integral with the native oxide of the polysilicon, and the polysilicon would be damaged and reduced in size at the end of the removal process.

Referring now to FIG. 8, therein is shown the structure of FIG. 7 after removal of the patterned hard mask layer 519. An inorganic spacer layer 534 has been deposited of a material such as silicon nitride or silicon oxynitride.

Referring now to FIG. 9, therein is shown the structure of FIG. 8 after etching of the spacer layer 534 to form spacers 535 through 538 around the respective wordlines 525 through 528. If the spacers 535 through 538 are not formed, an additional masking step of the entire core, or additional processing steps, would be required to provide access to the bitlines. The memory cell 500 is also shown after application of the saliciding process to grow metal salicides 540 through 543, such as cobalt silicide, titanium silicide, or nickel silicide contacts, on top of the respective wordlines 525 through 528. Since the metal silicide will not form on the exposed ONO layer or the nitride spacers, which do not contain silicon, the metal silicide will be self-aligned on the tops of the polysilicon wordlines; i.e., salicide will be grown.

Referring now to FIG. 10, therein is shown a simplified process chart 600 of the present invention, which includes: providing semiconductor substrate 602; implanting threshold adjustment implant 604; depositing charge-trapping dielectric layer 606; implanting bitlines 608; depositing wordline layer 610; depositing hard mask layer 612; depositing ARC layer 614; depositing photoresist layer 616; forming oxide hard mask 618; removing photoresist layer (and optional ARC layer) 620; forming wordline 622; removing oxide hard mask 624; forming spacer 626; and growing salicide 628.
Various alternative sequences, additions, and deletions to this process chart would be obvious to those skilled in the art from a detailed reading of the present disclosure.

Various implementations of the method may be used in different electronic devices, and especially the dual bit memory cell architecture may be achieved according to one or more aspects of the present invention. In particular, the invention is applicable to memory devices wherein both bits in a dual bit cell are used for data or information storage.

While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hithertofore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense. |
A method and system for dynamic control of shared memory resources within a portable computing device ("PCD") are disclosed. A limit request of an unacceptable deadline miss ("UDM") engine of the portable computing device may be determined with a limit request sensor within the UDM element. Next, a memory management unit modifies a shared memory resource arbitration policy in view of the limit request. By modifying the shared memory resource arbitration policy, the memory management unit may smartly allocate resources to service translation requests separately queued based on having emanated from either a flooding engine or a non-flooding engine. |
CLAIMS

What is claimed is:

1. A method for dynamic control of shared memory resources within a portable computing device, the method comprising:
classifying each of a plurality of traffic engines as either a flooding engine or a non-flooding engine, wherein a flooding engine processes workloads that are capable of generating bursts of high bandwidth within short periods of time at low priority in Quality of Service ("QoS") schemes relative to workloads processed by a non-flooding engine;
for each non-flooding engine, identifying those having an unacceptable deadline miss status, wherein missing a deadline for servicing a translation request emanating from a non-flooding engine having an unacceptable deadline miss status detrimentally impacts QoS and produces user-noticeable degradation in device performance;
queuing translation requests from flooding engines in a flooding engine queue and translation requests from non-flooding engines in a non-flooding engine queue, wherein the flooding engine queue and the non-flooding engine queue are separate;
processing translation requests from flooding engines and translation requests from non-flooding engines according to a default memory resource arbitration policy;
receiving one or more limit requests from one or more of the engines having an unacceptable deadline status; and
based on the one or more limit requests, modifying the default memory address translation resource arbitration policy such that there is an increase in allocation of memory address translation resources to translation requests in the non-flooding engine queue.

2. The method of claim 1, wherein the flooding engine queue and the non-flooding engine queue are physically separate.

3. The method of claim 1, wherein the flooding engine queue and the non-flooding engine queue are logically separate.

4. The method of claim 1, wherein the one or more limit requests are associated with a FIFO buffer level.

5. The method of claim 1, wherein the one or more limit requests are associated with an average round trip latency for servicing a translation request.

6. The method of claim 1, wherein the one or more limit requests are associated with a workload completion percentage rate.

7. The method of claim 1, wherein a non-flooding engine having an unacceptable deadline miss status comprises at least one of a processing core, a display engine, a camera controller, a graphical processing unit, a modem, and software or firmware running on a programmable computing engine.

8. The method of claim 1, wherein the portable computing device comprises at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
9. A system for dynamic control of shared memory resources within a portable computing device, the system comprising:
a memory management unit configured to:
classify each of a plurality of traffic engines as either a flooding engine or a non-flooding engine, wherein a flooding engine processes workloads that are capable of generating bursts of high bandwidth within short periods of time at low priority in Quality of Service ("QoS") schemes relative to workloads processed by a non-flooding engine;
for each non-flooding engine, identify those having an unacceptable deadline miss status, wherein missing a deadline for servicing a translation request emanating from a non-flooding engine having an unacceptable deadline miss status detrimentally impacts QoS;
queue translation requests from flooding engines in a flooding engine queue and translation requests from non-flooding engines in a non-flooding engine queue, wherein the flooding engine queue and the non-flooding engine queue are separate;
process address translation requests from flooding engines and address translation requests from non-flooding engines according to a default memory resource arbitration policy;
receive one or more limit requests from one or more of the engines having an unacceptable deadline status; and
based on the one or more limit requests, modify the default memory address translation resource arbitration policy such that there is an increase in allocation of memory address translation resources to translation requests in the non-flooding engine queue.

10. The system of claim 9, wherein the flooding engine queue and the non-flooding engine queue are physically separate.

11. The system of claim 9, wherein the flooding engine queue and the non-flooding engine queue are logically separate.

12. The system of claim 9, wherein the one or more limit requests are associated with a FIFO buffer level.

13. The system of claim 9, wherein the one or more limit requests are associated with an average round trip latency for servicing a translation request.

14. The system of claim 9, wherein the one or more limit requests are associated with a workload completion percentage rate.

15. The system of claim 9, wherein a non-flooding engine having an unacceptable deadline miss status comprises at least one of a processing core, a display engine, a camera controller, a graphical processing unit, a modem, and software or firmware running on a programmable computing engine.

16. The system of claim 9, wherein the portable computing device comprises at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
17. A system for dynamic control of shared memory resources within a portable computing device, the system comprising:
means for classifying each of a plurality of traffic engines as either a flooding engine or a non-flooding engine, wherein a flooding engine processes workloads that are capable of generating bursts of high bandwidth within short periods of time at low priority in Quality of Service ("QoS") schemes relative to workloads processed by a non-flooding engine;
for each non-flooding engine, means for identifying those having an unacceptable deadline miss status, wherein missing a deadline for servicing a translation request emanating from a non-flooding engine having an unacceptable deadline miss status detrimentally impacts QoS;
means for queuing translation requests from flooding engines in a flooding engine queue and translation requests from non-flooding engines in a non-flooding engine queue, wherein the flooding engine queue and the non-flooding engine queue are separate;
means for processing translation requests from flooding engines and translation requests from non-flooding engines according to a default memory resource arbitration policy;
means for receiving one or more limit requests from one or more of the engines having an unacceptable deadline status; and
means for, based on the one or more limit requests, modifying the default memory address translation resource arbitration policy such that there is an increase in allocation of memory address translation resources to translation requests in the non-flooding engine queue.

18. The system of claim 17, wherein the flooding engine queue and the non-flooding engine queue are physically separate.

19. The system of claim 17, wherein the flooding engine queue and the non-flooding engine queue are logically separate.

20. The system of claim 17, wherein the one or more limit requests are associated with a FIFO buffer level.

21. The system of claim 17, wherein the one or more limit requests are associated with an average round trip latency for servicing a translation request.

22. The system of claim 17, wherein the one or more limit requests are associated with a workload completion percentage rate.

23. The system of claim 17, wherein a non-flooding engine having an unacceptable deadline miss status comprises at least one of a processing core, a display engine, a camera controller, a graphical processing unit, a modem, and software or firmware running on a programmable computing engine.
24. A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for dynamic control of shared memory resources within a portable computing device, said method comprising:
classifying each of a plurality of traffic engines as either a flooding engine or a non-flooding engine, wherein a flooding engine processes workloads that are capable of generating bursts of high bandwidth within short periods of time at low priority in Quality of Service ("QoS") schemes relative to workloads processed by a non-flooding engine;
for each non-flooding engine, identifying those having an unacceptable deadline miss status, wherein missing a deadline for servicing a translation request emanating from a non-flooding engine having an unacceptable deadline miss status detrimentally impacts QoS;
queuing translation requests from flooding engines in a flooding engine queue and translation requests from non-flooding engines in a non-flooding engine queue, wherein the flooding engine queue and the non-flooding engine queue are separate;
processing translation requests from flooding engines and translation requests from non-flooding engines according to a default memory resource arbitration policy;
receiving one or more limit requests from one or more of the engines having an unacceptable deadline status; and
based on the one or more limit requests, modifying the default memory address translation resource arbitration policy such that there is an increase in allocation of memory address translation resources to translation requests in the non-flooding engine queue.

25. The computer program product of claim 24, wherein the flooding engine queue and the non-flooding engine queue are physically separate.

26. The computer program product of claim 24, wherein the flooding engine queue and the non-flooding engine queue are logically separate.

27. The computer program product of claim 24, wherein the one or more limit requests are associated with a FIFO buffer level.

28. The computer program product of claim 24, wherein the one or more limit requests are associated with an average round trip latency for servicing a translation request.

29. The computer program product of claim 24, wherein the one or more limit requests are associated with a workload completion percentage rate.

30. The computer program product of claim 24, wherein a non-flooding engine having an unacceptable deadline miss status comprises at least one of a processing core, a display engine, a camera controller, a graphical processing unit, a modem, and software or firmware running on a programmable computing engine. |
SYSTEM AND METHOD FOR DYNAMIC CONTROL OF SHARED MEMORY MANAGEMENT RESOURCES

DESCRIPTION OF THE RELATED ART

[0001] Portable computing devices ("PCDs") are powerful devices that are becoming necessities for people on personal and professional levels. Examples of PCDs may include cellular telephones, portable digital assistants ("PDAs"), portable game consoles, palmtop computers, and other portable electronic devices.

[0002] PCDs typically employ systems-on-chips ("SOCs"). Each SOC may contain multiple processing cores that have deadlines that, if missed, may cause detectable/visible failures that are not acceptable during operation of a PCD. Deadlines for hardware elements, such as cores, are usually driven by the amount of bandwidth ("BW") a core receives over a short period of time from shared resources, such as memory or buses, like dynamic random access memory ("DRAM"), internal static random access memory ("IMEM"), or other memory such as Peripheral Component Interconnect Express ("PCI-e") external transport links. What is, or is not, a short period of time depends on the particular type of processing core, but is usually in the range of about 10 seconds to about 100 milliseconds.

[0003] When certain processing cores do not receive a required memory BW over a specified period of time, or experience excessive transaction latency due to overburdened resources in the memory system (such as hardware table walkers), failures that directly and visibly impact user experience may occur. For example, consider a display engine for a PCD: it reads data from a memory element (usually DRAM) and outputs data to a display panel/device for a user to view. If the display engine is not able to read enough data from DRAM within a fixed period of time, then such an issue may cause the display engine to "run out" of application data and be forced to render a fixed, solid color (usually blue or black) on the display due to the lack of display data available to the display engine. This error condition is often referred to in the art as "Display Underflow" or "Display Under Run" or "Display Tearing," as understood by one of ordinary skill in the art.

[0004] As another example of potential failures when a hardware element does not receive sufficient throughput or bandwidth from a memory element, a camera in a PCD may receive data from a sensor and write that data to the DRAM. If a sufficient amount of data is not written to DRAM within a fixed period of time, then this may cause the camera engine to lose input camera data. Such an error condition is often referred to in the art as "Camera Overflow" or "Camera Image Corruption," as understood by one of ordinary skill in the art.

[0005] Another example of a potential failure is a modem core not being able to read/write enough data from/to DRAM over a fixed period to complete critical tasks. If critical tasks are not completed within their deadlines, modem firmware may crash: voice or data calls of a PCD are lost for a period of time, or an Internet connection may appear sluggish (i.e., stuttering during an Internet connection).

[0006] Accordingly, there is a need in the art for a system and method that dynamically controls access to, and allocation of, shared memory resources.
More specifically, there is a need in the art for a system and method that dynamically modifies arbitration policies for shared memory resources such that transactions associated with unacceptable deadline miss ("UDM") engines are prioritized over low priority transactions emanating from "flooder" engines.

SUMMARY OF THE DISCLOSURE

[0007] A method and system for dynamic control of shared memory resources within a portable computing device ("PCD") are disclosed. An exemplary embodiment of the solution begins by classifying each of a plurality of traffic engines in the PCD as either a flooding engine or a non-flooding engine. As would be understood by one of ordinary skill in the art, a flooding engine processes workloads that have a relatively high effect on a Quality of Service ("QoS") level relative to workloads processed by a non-flooding engine. Next, for each non-flooding engine, the exemplary embodiment identifies those having an unacceptable deadline miss status. As would be understood by those of ordinary skill in the art, missing a deadline for servicing a translation request emanating from a non-flooding engine having an unacceptable deadline miss status detrimentally impacts QoS.

[0008] Translation requests emanating from flooding engines are queued in a flooding engine queue and translation requests emanating from non-flooding engines are queued in a non-flooding engine queue. The flooding engine queue and the non-flooding engine queue are separate queues that, depending on the embodiment, may be physically separate and/or logically separate. The method then processes translation requests from flooding engines and translation requests from non-flooding engines according to a default memory resource arbitration policy unless and until one or more limit requests are received from one or more of the non-flooding engines having an unacceptable deadline status. In response to the one or more limit requests, the exemplary method modifies the default memory resource arbitration policy such that there is an increase in allocation of memory resources to translation requests in the non-flooding engine queue.

[0009] The one or more limit requests transmitted to the memory management unit by one or more non-flooding engines processing workloads subject to an unacceptable deadline miss requirement may be based on, inter alia, a FIFO buffer level, an average round trip latency for servicing a translation request, or a workload completion percentage rate. Examples of non-flooding engines having an unacceptable deadline miss status include, but are not limited to, a processing core, a display engine, a camera controller, a graphical processing unit, a modem, and software or firmware running on a programmable computing engine.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.
[0011] FIG. 1 is a functional block diagram of an exemplary system within a portable computing device ("PCD") for dynamic control of shared memory resources based on danger signals monitored from one or more unacceptable deadline miss ("UDM") elements;

[0012] FIG. 2 is a functional block diagram of an exemplary limit request sensor for an unacceptable deadline miss ("UDM") traffic engine, such as a core of a multicore processor;

[0013] FIG. 3 is a logical flowchart illustrating in more detail the exemplary method for FIFO-level based failure proximity detection described relative to the FIG. 2 limit request sensor;

[0014] FIG. 4 is a logical flowchart illustrating in more detail the exemplary method for latency based failure proximity detection described relative to the FIG. 2 limit request sensor;

[0015] FIG. 5 is a logical flowchart illustrating in more detail the exemplary method for software deadline based failure proximity detection described relative to the FIG. 2 limit request sensor;

[0016] FIG. 6 is a logical flowchart illustrating in more detail the exemplary method for hardware deadline based failure proximity detection described relative to the FIG. 2 limit request sensor;

[0017] FIG. 7 is a logical flowchart illustrating an exemplary method for dynamic control of shared memory resources;

[0018] FIG. 8 is a functional block diagram of an exemplary, non-limiting aspect of a PCD in the form of a wireless telephone for implementing methods and systems for dynamic control of shared memory resources; and

[0019] FIG. 9 is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 8 for executing methodologies for dynamic control of shared memory resources.

DETAILED DESCRIPTION

[0020] The word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred, or advantageous over other aspects.

[0021] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0022] As used in this description, the terms "component," "database," "module," "system," "processing component," "engine," "client," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon.
The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).

[0023] In this description, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," and "chip" are used interchangeably. Moreover, a CPU, DSP, or chip may be comprised of one or more distinct processing components generally referred to herein as "core(s)."

[0024] In this description, unacceptable deadline miss ("UDM") elements or engines are those hardware and/or software elements that may cause significant or catastrophic failures of a PCD, as described in the background section above. Specifically, UDM engines are those elements which may cause exemplary error conditions such as, but not limited to, "Display Underflows," "Display Under Runs," "Display Tearing," "Camera Overflows," "Camera Image Corruptions," dropped telephone calls, sluggish Internet connections, etc., as understood by one of ordinary skill in the art. Any hardware and/or software element of a PCD may be characterized and treated as a UDM engine depending on the particular embodiment of the solution.

[0025] In this description, the terms "workload," "process load," and "process workload" are used interchangeably and are generally directed toward the processing burden, or percentage of processing burden, associated with a given processing component in a given embodiment. Further to that which is defined above, a "processing component" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, a client, etc., or any component residing within, or external to, an integrated circuit within a portable computing device.

[0026] In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G"), fourth generation ("4G"), and fifth generation ("5G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, a notebook computer, an ultrabook computer, a tablet personal computer ("PC"), among others. Notably, however, even though exemplary embodiments of the solutions are described herein within the context of a PCD, the scope of the solutions is not limited to application in PCDs as defined above. For instance, the system described herein could be implemented in a typical portable computer, such as a laptop or notebook computer.

[0027] Embodiments of the solution configure the memory management unit ("MMU") on an a-priori basis to recognize which one or more of a plurality of clients interfacing with the MMU is characterized as a "flooding" client or potential "aggressor" client and which one or more of the plurality of clients interfacing with the MMU is characterized as a "non-flooding" client or potential "victim" client.
Depending on the embodiment of the solution, the MMU may be an aggregated MMU or a distributed MMU. Advantageously, the MMU may leverage separate queuing structures for translation requests emanating from flooding clients versus translation requests emanating from non-flooding clients. It is envisioned that the separate queuing structures may be instantiated in physically separate memory components or may be instantiated in separate areas of a single memory component. Regardless, so long as the queuing structure for translation requests from flooding clients is separate and distinguishable from the queuing structure for non-flooding clients, embodiments of the solution may be able to dynamically control access to, and allocation of, memory resources of the MMU that are shared by both classes of clients. In this way, it is an advantage of the solution that non-flooding clients processing workloads subject to unacceptable deadline misses may be given dynamic priority to shared memory resources over flooding clients such that the QoS experienced by a user is optimized.

[0028] To optimize QoS, an MMU according to embodiments of the solution may dynamically control the number of hardware table walker components ("HTWs") that are simultaneously occupied with servicing translation requests associated with flooding clients. As would be understood by one of ordinary skill in the art, an HTW may be leveraged to retrieve data or responses from long-term memory if/when a translation request cannot be serviced from cache, i.e., a "cache miss." In the event of a cache miss, the client from which the translation request emanated has to wait until an HTW is able to search through long-term memory and respond with the "answer" to the translation request. As would be understood by one of ordinary skill in the art, the latency for return of a response from long-term memory is necessarily increased when compared to the low latency for return of a response from cache. Moreover, latency is further increased, sometimes exponentially, if in the event of a cache miss the translation request must be queued until an HTW component becomes available. Such scenarios that increase latency for responses to translation requests may be detrimental, if not fatal, to a non-flooding client working subject to an unacceptable deadline miss ("UDM client" or "UDM engine").

[0029] According to embodiments of the solution, victim clients such as UDM clients may be configured to monitor critical failure indicators, such as buffer fill levels, workload completion times, translation request latencies, etc. If a victim client determines from the monitoring of its critical failure indicator(s) that it is nearing a failure point, i.e., it is in danger of experiencing a deadline miss that is unacceptable, it may signal to the MMU to adjust its arbitration policies as applied to flooding clients. That is, the MMU may, among other things, respond to a signal from the victim client by limiting service to known flooder clients. It is envisioned that the limit signal sent by a victim UDM client may be binary in nature or, depending on the embodiment, may be a numeric indication of the relative proximity of the UDM client to experiencing a failure (e.g., a "higher" signal level indicates that the UDM client is relatively closer to experiencing a failure than when a "lower" signal level is transmitted).
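Although the disclosure provides no source code, the separate queuing structures described above can be pictured with a short sketch. The following C fragment is a minimal, illustrative sketch only; every identifier in it (req_queue_t, submit, QUEUE_DEPTH, and so on) is a hypothetical name chosen for the example and is not taken from the disclosure.

    /* Minimal sketch of separate flooder / non-flooder request queues,
     * assuming a hypothetical fixed-capacity ring buffer per class.   */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 64

    typedef enum { ENGINE_FLOODING, ENGINE_NON_FLOODING } engine_class_t;

    typedef struct {
        unsigned engine_id;   /* which traffic engine issued the request */
        unsigned long vaddr;  /* virtual address awaiting translation    */
    } translation_req_t;

    typedef struct {
        translation_req_t slots[QUEUE_DEPTH];
        size_t head, tail, count;
    } req_queue_t;

    /* Two physically (or logically) separate queues, so flooder traffic
     * cannot cause head-of-line blocking for non-flooder traffic.      */
    static req_queue_t flooder_q, non_flooder_q;

    static bool enqueue(req_queue_t *q, translation_req_t r)
    {
        if (q->count == QUEUE_DEPTH)
            return false;                 /* back-pressure the engine   */
        q->slots[q->tail] = r;
        q->tail = (q->tail + 1) % QUEUE_DEPTH;
        q->count++;
        return true;
    }

    /* Route each incoming request by the a-priori classification of
     * the engine it emanated from.                                     */
    static bool submit(engine_class_t cls, translation_req_t r)
    {
        return enqueue(cls == ENGINE_FLOODING ? &flooder_q : &non_flooder_q, r);
    }

    int main(void)
    {
        submit(ENGINE_FLOODING,     (translation_req_t){ .engine_id = 7, .vaddr = 0x1000 });
        submit(ENGINE_NON_FLOODING, (translation_req_t){ .engine_id = 2, .vaddr = 0x2000 });
        printf("flooder queue: %zu, non-flooder queue: %zu\n",
               flooder_q.count, non_flooder_q.count);
        return 0;
    }

Because the two queues are independent, a burst of flooder requests can only fill flooder_q; entries in non_flooder_q are never queued behind it, which is the head-of-line-blocking property discussed later in connection with FIG. 1.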
Further, depending on the embodiment of the solution, a victim UDM client may continue to transmit a limit request signal to the MMU unless and until it determines that it is no longer in critical danger of experiencing an unacceptable deadline miss.

[0030] An MMU according to the solution may combine multiple incoming limit requests from multiple UDM clients into one or more aggregated limit indicators ("ALIs"). In view of the ALI level, the MMU may respond by restricting access of translation requests from one or more flooder clients to hardware page table walkers ("HTWs") by dynamically adjusting the maximum number of HTWs available to those flooders. In doing so, the MMU may "free up" one or more HTWs for servicing translation requests emanating from UDM clients while translation requests emanating from flooder clients are queued up pending the availability of an HTW eligible to service a translation request from a flooder client.

[0031] It is envisioned that dynamically adjusting the maximum number of HTWs available for servicing flooder client requests may include pre-empting any translation from flooder clients already in the process of being serviced by an HTW. In this way, if/when the MMU adjusts the maximum number of HTWs that may be used for servicing flooder client translation requests in response to an ALI level, embodiments of the solution may ensure that the newly adjusted maximum number of HTWs is not exceeded due to more than the maximum number of HTWs already being occupied with flooder requests. Simply put, it is envisioned that embodiments of the solution may terminate an ongoing table walk associated with a flooder client request in the event that allowing the table walk to continue would cause the maximum number of HTWs allocable to flooder clients to be exceeded.

[0032] It is also envisioned that in some embodiments the level or amount of HTW limitation may be a function of the relative intensity of the ALI level indication (e.g., dependent upon the number of victim UDM clients signaling for flooder limits or the numeric indication of proximity to failure from each victim UDM client). The appropriate amount or level of HTW limitation for flooder client requests in view of a given ALI signal may be determined from a look-up table or, in some embodiments, may be the output of a predefined formula or function, as would be understood by one of ordinary skill in the art.

[0033] Notably, the ALI level, which is a direct reflection of the number and/or intensity of limit requests coming from victim UDM clients, may be leveraged by embodiments of the solution to dynamically control or adjust allocation of shared memory management resources, such as HTWs, to servicing translation requests emanating from flooder clients. Advantageously, because translation requests from flooder clients are queued separately from translation requests from non-flooder clients, such as UDM clients, embodiments of the solution may smartly and dynamically allocate shared memory resources in view of the ALI level.
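The aggregation of limit requests into an ALI and the look-up-table mapping from ALI level to a maximum flooder HTW count, as described in paragraphs [0030] and [0032], might be sketched as follows. This is an illustrative sketch under assumed parameters only (four UDM clients, eight HTWs, a saturating-sum aggregation, and an arbitrary cap table); none of these values, names, or formulas are specified by the disclosure.

    /* Illustrative-only sketch: aggregate per-client limit requests
     * into an ALI and map the ALI to a flooder HTW cap via a LUT.   */
    #include <stdio.h>

    #define NUM_UDM_CLIENTS 4
    #define NUM_HTWS        8

    /* Each UDM client reports 0 (no danger) up to 3 (imminent failure). */
    static unsigned limit_request[NUM_UDM_CLIENTS];

    /* Aggregate incoming limit requests into a single ALI level; here
     * a simple saturating sum, though a real design might weight each
     * client differently, as the disclosure suggests.                  */
    static unsigned aggregate_ali(void)
    {
        unsigned ali = 0;
        for (int i = 0; i < NUM_UDM_CLIENTS; i++)
            ali += limit_request[i];
        return ali > 12 ? 12 : ali;
    }

    /* Look-up table mapping ALI level to the maximum number of HTWs
     * that may simultaneously service flooder translation requests.    */
    static unsigned flooder_htw_cap(unsigned ali)
    {
        static const unsigned cap_by_ali[13] = {
            8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0
        };
        return cap_by_ali[ali];
    }

    int main(void)
    {
        limit_request[0] = 2;   /* e.g., display engine nearing underflow */
        limit_request[3] = 3;   /* e.g., camera engine nearing overflow   */
        unsigned ali = aggregate_ali();
        printf("ALI=%u -> flooders limited to %u of %d HTWs\n",
               ali, flooder_htw_cap(ali), NUM_HTWS);
        return 0;
    }

When all limit requests drop to zero, the cap returns to the full walker count, which corresponds to the reversion to the default arbitration policy described in paragraph [0034] below.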
The ALI level dictates how the MMU may adjust its arbitration policies for shared memory resources between flooder and non-flooder clients, while the separate queuing of translation requests from flooder and non-flooder clients enables the MMU to allocate access according to the adjusted arbitration policies.

[0034] Embodiments of the solution may provide for the MMU to revert back to a default policy for shared memory resource allocation, such as a "first in first out" or FIFO policy, if/when the ALI level reaches or nears zero or some other predefined low threshold level.

[0035] Referring now to the figures, exemplary embodiments and exemplary aspects of the solution are described in more detail.

[0036] FIG. 1 is a functional block diagram of an exemplary system 102 within a portable computing device ("PCD") 100 (see FIG. 8) for dynamic control of shared memory resources 204 based on limit requests monitored from one or more unacceptable deadline miss ("UDM") elements 203.

[0037] Each UDM element 203, such as UDM engines 203A-203n, may comprise a limit request module (not illustrated) that produces a limit request signal (depicted with dashed line arrows) that is received and monitored by the limit aggregation module 114. Further details of an exemplary limit request module that produces limit request signals will be described below in connection with FIG. 2.

[0038] Other hardware elements, such as Non-UDM engines 202A-n, may be part of the PCD 100 and the system 102. The Non-UDM engines 202A-n may not comprise or include limit request modules. Alternatively, in other exemplary embodiments, it is possible for Non-UDM engines 202A-n to have limit request modules; however, such limit request modules of Non-UDM hardware engines 202 are either not coupled to the limit aggregation module 114 or, alternatively, a switch (not illustrated) has turned these limit request modules to an "off" position such that the limit aggregation module 114 does not receive any limit request signals from these designated/assigned Non-UDM hardware engines 202.

[0039] Each of the Flooding Traffic Engines 201, UDM Traffic Engines 203, and Non-UDM Traffic Engines 202 may be coupled to the Memory Management Unit ("MMU") 204 via one or more interconnects. Similarly, the MMU 204 may be coupled, via an interconnect, to one or more memory controllers (not illustrated) coupled to memory 112 (see FIG. 8). As described above, it is envisioned that the MMU may be an aggregated MMU or a distributed MMU, depending on the embodiment. Memory 112 may include, but is not limited to, dynamic random access memory ("DRAM"). An interconnect, as would be understood by one of ordinary skill in the art, may comprise one or more switch fabrics, rings, crossbars, buses, etc. Moreover, an interconnect may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, an interconnect may include address, control, and/or data connections to enable appropriate communications among its aforementioned components.

[0040] Each UDM engine 203 has a limit request sensor that monitors proximity to failure and produces a limit request signal to the limit aggregation module 114.
The limit request signal operates as a request by a given UDM engine 203 for the MMU 204 to adjust its resource arbitration policy to adjust access and/or allocation of MMU resources, such as hardware table walkers ("HTW") 215, for one or more flooding traffic engines 201.

[0041] Limit request signals may comprise information indicating levels or degrees at which a UDM engine 203 believes that it is in danger of not meeting a deadline and/or is in danger of a failure. The failure may comprise one or more error conditions described above in the background section for hardware devices such as, but not limited to, a display engine, a camera, and a modem. As such, each limit request signal may be unique relative to a respective hardware element type. In other words, the limit request signal produced by first UDM core 203A may be different relative to the limit request signal produced by second UDM core 203n. For example, the limit request signal produced by the first UDM core 203A may have a magnitude or scale of five units while the limit request signal produced by the second UDM core 203n may have a magnitude or scale of three units. The differences are not limited to magnitude or scale: other differences may exist for each unique UDM hardware element, as understood by one of ordinary skill in the art. Each limit request signal, however, generally corresponds to a time-to-failure or probability of failure value.

[0042] The limit aggregation module 114 monitors the limit request signals (dashed line arrows) that are sent to it from the respective UDM engines 203. Based on the limit request signals, the limit aggregation module 114 determines a relative limit request level that it signals to the MMU manager 101 (dashed line arrow from module 114 to module 101). It is envisioned that the limit aggregation module 114 may aggregate all incoming limit requests into one or more aggregated limit indicator ("ALI") signals that are transmitted to the MMU manager 101, as illustrated in FIG. 1 by the dashed line signal arrow from the limit aggregation module 114 to the MMU manager 101.

[0043] The MMU manager 101, in view of the ALI signals received from the limit aggregation module 114, dynamically controls assignment of HTW resources 215 for servicing translation requests. That is, the MMU manager 101 uses the ALI signal(s) to dynamically adjust the maximum allowable number of HTW resources 215 that may be simultaneously allocated to translation requests associated with flooding traffic engines 201. As such, changes in the ALI signal(s) may cause the MMU manager 101 to dynamically adjust the maximum allowable number of HTW resources 215 that may be simultaneously allocated to translation requests associated with flooding traffic engines 201.

[0044] In embodiments of the solution, and as can be understood from the exemplary FIG. 1 illustration of system 102, traffic emanating from flooding traffic engines 201 is scheduled and multiplexed ("muxed") separately from traffic emanating from non-flooding clients (e.g., UDM engines 203 and Non-UDM engines 202). In this way, embodiments of the solution ensure that translation requests associated with flooding traffic engines 201 do not cause head-of-line-blocking for translation requests associated with non-flooding engines 202-203.
Moreover, although the FIG. 1 illustration depicts a single physical buffer 207 for all translation traffic from all flooder engines 201, it is envisioned that buffer 207 may be comprised of logical buffers and/or may comprise multiple buffers for separation of engines 201 or subgroups of engines 201.

[0045] Referring back to the FIG. 1 illustration, translation request traffic emanating from flooding traffic engines 201, Non-UDM engines 202, and UDM engines 203 is directed over buses to certain input transaction buffers uniquely associated with the respective engines 201, 202, 203. Notably, while Non-UDM engines 202 and UDM engines 203 may be associated with a common input transaction buffer (and, by extension, a common output transaction buffer), flooding traffic engines 201 are associated with input and output transaction buffers that are dedicated to flooding traffic engines 201.

[0046] All address translation requests from the input buffers are forwarded to translation schedulers and buffers 207, 209. For those translation requests associated with flooding traffic engines 201, the translation requests are pushed to flooder input address translation scheduler and buffers 207. For those translation requests associated with Non-UDM engines 202 and UDM engines 203, the translation requests are pushed to non-flooder input address translation scheduler and buffer(s) 209. For advantageous reasons described above, scheduler and buffer(s) 207 associated with flooder translation requests are physically and/or logically separate from scheduler and buffer(s) 209 associated with non-flooder translation requests.

[0047] The MMU manager 101 may coordinate to satisfy address translation requests from modules 207 and 209 by querying the shared translation cache and/or assigning the translation requests to a hardware table walker component 215. The order or priority in which the translation requests are addressed by the MMU manager 101, whether addressed via query of the shared translation cache 104 or allocation to an HTW 215, is determined in view of the ALI signals received from the limit aggregation module 114.

[0048] A response to a translation request satisfied from the shared translation cache is supplied by the MMU manager 101 back to the respective scheduler 207, 209 which, in turn, forwards the response back to the appropriate input transaction buffer via the output address translation scheduler and demux module 213. A response from an HTW 215 is forwarded to the MMU manager 101, which may then cache the result in the shared translation cache 104 and provide it to the appropriate input transaction buffer via the output address translation scheduler and demux module 213.

[0049] As described above, allocation of shared memory resources, such as HTWs 215, is made by the MMU manager 101 in view of the ALI signal transmitted from the limit aggregation module 114. Depending on the ALI signal, the MMU manager 101 may adjust the number of HTWs 215 that are eligible to respond to a translation request associated with a flooder engine 201. Depending on the embodiment, adjusting the number of HTWs 215 that are eligible to respond to a translation request associated with a flooder engine 201 may comprise terminating an ongoing page walk so that an otherwise occupied HTW 215 becomes available and earmarked for translation requests associated with a non-flooder engine such as a UDM engine.
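A minimal sketch of the HTW allocation and pre-emption behavior just described may help make it concrete. The walker states, the cap-enforcement loop, and the grant routine below are hypothetical simplifications invented for illustration (the disclosure does not define them); in particular, the re-queuing of a pre-empted flooder request is only noted in a comment.

    /* Illustrative sketch: enforce a dynamic flooder HTW cap and grant
     * walkers, pre-empting flooder walks when the cap is lowered.     */
    #include <stdio.h>

    #define NUM_HTWS 8

    typedef enum { HTW_IDLE, HTW_FLOODER, HTW_NON_FLOODER } htw_state_t;

    static htw_state_t htw[NUM_HTWS];

    static unsigned count_state(htw_state_t s)
    {
        unsigned n = 0;
        for (int i = 0; i < NUM_HTWS; i++)
            if (htw[i] == s) n++;
        return n;
    }

    /* Enforce a newly lowered flooder cap: terminate ongoing flooder
     * walks until no more than `cap` HTWs are occupied by flooders,
     * freeing walkers for UDM (non-flooder) translation requests.     */
    static void enforce_flooder_cap(unsigned cap)
    {
        for (int i = 0; i < NUM_HTWS && count_state(HTW_FLOODER) > cap; i++)
            if (htw[i] == HTW_FLOODER)
                htw[i] = HTW_IDLE;   /* pre-empted walk is re-queued */
    }

    /* On a cache miss, grant an HTW: non-flooder requests may take any
     * idle walker; flooder requests only up to the dynamic cap.       */
    static int grant_htw(int is_flooder, unsigned cap)
    {
        if (is_flooder && count_state(HTW_FLOODER) >= cap)
            return -1;               /* stays queued until a walker frees */
        for (int i = 0; i < NUM_HTWS; i++)
            if (htw[i] == HTW_IDLE) {
                htw[i] = is_flooder ? HTW_FLOODER : HTW_NON_FLOODER;
                return i;
            }
        return -1;
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++) grant_htw(1, 8);  /* five flooder walks   */
        enforce_flooder_cap(2);                       /* ALI rose; cap dropped */
        printf("flooder walks now active: %u\n", count_state(HTW_FLOODER));
        printf("non-flooder grant -> HTW %d\n", grant_htw(0, 2));
        return 0;
    }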
Further, depending on the embodiment, adjusting the number of HTWs 215 that are eligible to respond to a translation request associated with a flooder engine 201 may comprise deprioritizing translation requests associated with a certain one or more flooder engines 201 that are associated with a certain one or more UDM engines 203 making limit requests (e.g., a display may cause throttling to a GPU and DSP but not a CPU). Also, as described previously, the number of HTWs 215 that an MMU manager 101 earmarks for non-flooder clients versus flooder clients may be a function of the ALI signal generated by the limit aggregation module 114. In this way, the ALI may be weighted depending on the particular UDM engine 203 sending a limit request to the limit aggregation module 114. Moreover, depending on the embodiment, there may be one or more ALI signals from the module 114 to the MMU manager 101 per group of flooder engines 201. Each ALI signal may have a value assigned to it as a function of the number and intensity of active limit requests transmitted from the associated UDM cores 203 to the module 114.

[0050] It is further envisioned that the ALI signal from the limit aggregation module 114 may be used by some embodiments of the solution as a threshold to prevent allocation of flooder engine 201 translations into the shared cache, thereby mitigating or preventing flooder clients 201 from overwriting translations for non-flooder clients 202, 203 that may be nearing failure.

[0051] Referring now to FIG. 2, this figure is a functional block diagram of an exemplary limit request sensor for an unacceptable deadline miss ("UDM") traffic engine 203A, such as a display core, for example. The limit request sensor operates to detect a UDM traffic engine 203's proximity to failure. The limit request sensor may comprise a first-in, first-out (FIFO) data buffer 302 and a FIFO level danger mapping table 306. Each FIFO data buffer 302 may comprise a set of read and write pointers, storage, and control logic. Storage may be static random access memory ("SRAM"), flip-flops, latches, or any other suitable form of storage.

[0052] According to one exemplary embodiment, each FIFO data buffer 302 may track data that is received by the UDM traffic engine 203A. For example, suppose that the UDM traffic engine 203A comprises a display engine. The display engine 203A or a display controller 128 (see FIG. 8) would read from DRAM memory 112 display data that would be stored in the FIFO data buffer 302. The display engine 203A (or display controller 128 of FIG. 8) would then take the display data from the FIFO data buffer 302 and send it to a display or touchscreen 132 (see FIG. 8).

[0053] The FIFO data buffer 302 has a fill level 304 which may be tracked with a danger mapping table 306. As the fill level 304 for the FIFO data buffer 302 decreases in value, the limit request tracked by the danger mapping table 306 would increase, because if the FIFO data buffer 302 becomes empty or does not have any data to send to the display or touchscreen 132, then the error conditions described above as "Display Underflow" or "Display Under Run" or "Display Tearing" may occur. The output of the danger mapping table 306 is the limit request signal that is sent to the limit aggregation module 114 as described above.

[0054] According to another exemplary embodiment, suppose the UDM traffic engine 203A of FIG. 2 comprises a camera controller.
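The danger mapping table of paragraphs [0052] and [0053] amounts to a mapping from FIFO fill level to limit-request level. The sketch below covers both the display (underflow-sensitive) direction just described and the camera (overflow-sensitive) direction described in the next paragraph; the threshold percentages and the four-level encoding are illustrative assumptions, not values from the disclosure.

    /* Illustrative sketch: map a FIFO fill percentage to a limit level. */
    #include <stdio.h>

    typedef enum { SENSE_UNDERFLOW, SENSE_OVERFLOW } fifo_sense_t;

    /* A display-style engine fails on underflow (danger grows as the
     * FIFO drains); a camera-style engine fails on overflow (danger
     * grows as the FIFO fills).                                        */
    static unsigned fifo_limit_level(fifo_sense_t sense, unsigned fill_pct)
    {
        unsigned danger_pct = (sense == SENSE_UNDERFLOW)
                                  ? 100 - fill_pct   /* emptier = worse */
                                  : fill_pct;        /* fuller  = worse */
        if (danger_pct >= 90) return 3;   /* failure imminent */
        if (danger_pct >= 75) return 2;
        if (danger_pct >= 60) return 1;
        return 0;                          /* no limit request */
    }

    int main(void)
    {
        printf("display at 15%% full -> level %u\n",
               fifo_limit_level(SENSE_UNDERFLOW, 15));
        printf("camera  at 80%% full -> level %u\n",
               fifo_limit_level(SENSE_OVERFLOW, 80));
        return 0;
    }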
[0054] According to another exemplary embodiment, suppose the UDM traffic engine 203A of FIG. 2 comprises a camera controller. The camera controller (not illustrated) within the SoC 102 reads data from the camera sensor 148 (see FIG. 8) and stores it within the FIFO data buffer 302. The camera controller then outputs the camera data from the FIFO data buffer 302 to DRAM memory 112. In this example embodiment, if the FIFO data buffer 302 overflows from the camera data, then some camera data may be lost and the error conditions of "Camera overflow" or "Camera image corruption" may occur. So according to this exemplary embodiment, as the FIFO fill level 304 increases, the limit request output signal also increases. This limit request behavior for the camera sensor 148 is opposite to that of the display embodiment described previously.[0055] According to another exemplary embodiment, suppose the UDM traffic engine 203A of FIG. 2 comprises a modem or analog signal processor 126 (see FIG. 8) or a graphical processing unit ("GPU") 182 (see FIG. 8). According to such embodiments, the UDM traffic engine 203A may monitor the round-trip latency of all its transactions which are sent to the DRAM memory 112. The UDM traffic engine 203A may calculate an average and/or peak round-trip DRAM latency over a fixed or a sliding time window. A limit request signal output may be generated in proportion to the average and/or peak latency observed by the UDM traffic engine 203A: for low latency transactions the limit request may be characterized as "low," while for transactions in which latency increases, the limit request may be characterized as "high."[0056] According to other exemplary embodiments, the UDM traffic engine 203A of FIG. 2 and its respective limit request sensor may comprise a software-based deadline projection module (not illustrated in FIG. 2). The software may be executed by a CPU 110 or a digital signal processor. Alternatively, the UDM traffic engine 203A may comprise firmware running on a programmable computing engine that continuously tracks the completion of tasks as well as the fraction of tasks already completed and the elapsed time since each task was commenced by the UDM traffic engine 203A. The software and/or firmware of the UDM traffic engine 203A may estimate the completion time for each task and compare that completion time to a target or maximum deadline to complete one or more tasks as specified by a user and/or an application program.[0057] According to this firmware/software exemplary embodiment for the UDM traffic engine 203A, the limit request signal output is determined and generated based on a look-up-table or a formula that uses one or more variables as input. Those one or more variables may include, but are not limited to, elapsed time, fraction of completed task, maximum deadline completion time, and/or concurrent total load on the computing engine.
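The deadline projection described in the two preceding paragraphs may be illustrated with a simple linear estimate: assuming progress continues at its observed rate, the projected total time is the elapsed time divided by the fraction of the task already completed. The function and parameter names in this sketch are hypothetical:

```c
#include <stdbool.h>

/* Hypothetical sketch: project task completion time from elapsed time and
 * the fraction of the task already completed, then compare to the deadline. */
bool deadline_at_risk(double fraction_done,
                      double elapsed_ms, double deadline_ms)
{
    if (fraction_done <= 0.0)
        return true;                  /* no progress observed: assume risk */
    /* Linear projection: total time ~ elapsed time / fraction completed. */
    double projected_total_ms = elapsed_ms / fraction_done;
    return projected_total_ms > deadline_ms;
}
```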
[0058] According to another exemplary embodiment, the UDM traffic engine 203A may comprise a hardware element that has a deadline projection mechanism. For example, such a UDM traffic engine 203A may comprise a video encoder 134 (see FIG. 8) or a video codec. The video encoder 134 or video codec may comprise a fixed function computing engine that may continuously check fractions of tasks already completed as well as elapsed times since individual tasks have started. Such dedicated hardware may estimate the completion time for each task in comparison to a maximum deadline completion time that may be specified by a user and/or an application program. A video codec may comprise hardware that logs a percentage of video frames that are encoded or decoded at any given time.[0059] The limit request signal output for such a video-oriented UDM traffic engine 203A would be determined and generated based on a table or formula that may use, but is not limited to using, one or more of the following variables as input: elapsed time, fraction of completed task, maximum deadline for completion time, and the concurrent load on the fixed function engine.[0060] FIG. 3 is a logical flowchart illustrating in more detail the exemplary method 300 for FIFO-level based failure proximity detection described relative to the FIG. 2 limit request sensor. A limit request sensor configured for FIFO-level based failure proximity detection may be comprised within a UDM traffic engine 203A such as, but not limited to, a display engine or a camera engine.[0061] Beginning with block 305, a latency FIFO buffer level may be monitored. At block 310, the monitored level in the FIFO buffer may be compared to a predefined proximity to failure threshold. If the UDM traffic engine comprising the limit request sensor is a display engine, for example, the predefined proximity to failure threshold may be a low threshold that, if reached, indicates that the FIFO buffer is nearing empty (thereby risking that there will be no data available for rendering on the display panel). As such, for a UDM traffic engine 203A in the form of a display engine, the proximity to failure level monitored by the sensor at block 305 increases as the FIFO fill level decreases. By contrast, if the UDM traffic engine comprising the limit request sensor is a camera engine, for example, the predefined proximity to failure threshold may be a high threshold that, if reached, indicates that the FIFO buffer is nearing full (thereby risking that camera data may be lost before being written to the DRAM 112). As such, for a UDM traffic engine 203A in the form of a camera engine, the proximity to failure level monitored by the sensor at block 305 increases as the FIFO fill level increases.[0062] Returning to the method 300, at decision block 315 the method determines whether there has been a violation of the predefined proximity to failure threshold. If not, then the "NO" branch is followed to block 317 and the output limit request to the MMU is removed before the method 300 loops back to block 305 and the limit request sensor associated with the UDM engine 203A continues to monitor the FIFO level. If the predefined proximity to failure threshold has been violated, then the "YES" branch is followed to block 320 and a limit request for flooders is output to the limit aggregation module 114 of the MMU 204.
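A minimal sketch of one pass through method 300 (blocks 305 through 320) follows. The externally supplied read_fifo_level() and set_limit_request() functions, and the polarity flag distinguishing the display case from the camera case, are hypothetical stand-ins for the hardware described above:

```c
#include <stdbool.h>

extern unsigned read_fifo_level(void);         /* block 305 analogue */
extern void set_limit_request(bool asserted);  /* blocks 317/320 analogue */

void fifo_limit_sensor_poll(unsigned threshold, bool danger_when_low)
{
    unsigned level = read_fifo_level();
    /* Blocks 310/315: compare against the proximity-to-failure threshold. */
    bool violated = danger_when_low ? (level <= threshold)   /* display */
                                    : (level >= threshold);  /* camera  */
    set_limit_request(violated);  /* block 320 if violated, block 317 if not */
}
```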
[0063] FIG. 4 is a logical flowchart illustrating in more detail the exemplary method 400 for latency based failure proximity detection described relative to the FIG. 2 limit request sensor. A limit request sensor configured for latency based failure proximity detection may be comprised within a UDM traffic engine 203A such as, but not limited to, a modem or a graphical processing unit ("GPU").[0064] Beginning with block 405, a latency calculation may be monitored. The average and/or peak round trip latency of transactions emanating from the UDM traffic engine 203A over a predefined time window may be monitored. At blocks 410 and 415, the average and/or peak latency may be calculated and compared to a predefined proximity to failure threshold. The predefined proximity to failure threshold may be set relatively high, as a low average and/or peak latency calculation would indicate a low or nonexistent risk of failure. By contrast, the higher the average and/or peak latency calculation, the higher the risk of failure by the UDM engine 203A to meet its QoS demands.[0065] Returning to the method 400, at decision block 420 the method determines whether there has been a violation of the predefined proximity to failure threshold. If not, then the "NO" branch is followed to block 417 and the output limit request to the MMU is removed before the method 400 loops back to block 405 and the limit request sensor associated with the UDM engine 203A continues to monitor the round trip latencies of requests. If the predefined proximity to failure threshold has been violated, then the "YES" branch is followed to block 425 and a limit request for flooders is output to the limit aggregation module 114 of the MMU 204.[0066] FIG. 5 is a logical flowchart illustrating in more detail the exemplary method 500 for software deadline based failure proximity detection described relative to the FIG. 2 limit request sensor. A limit request sensor configured for software deadline based failure proximity detection may be comprised within a UDM traffic engine 203A such as, but not limited to, a central processing unit ("CPU") or a digital signal processor ("DSP").[0067] Beginning with block 505, a workload completion percentage rate may be monitored. At blocks 510 and 515, the time for completion of the remainder of the workload not yet processed may be estimated and compared to a predefined proximity to failure threshold. The predefined proximity to failure threshold may be set according to a target or maximum deadline to complete a workload. Therefore, the higher the workload completion percentage rate calculation, the more likely it is that the remainder of the workload will be completed before the deadline and, as such, the lower the risk of failure by the UDM engine 203A to meet its QoS demands.[0068] Returning to the method 500, at decision block 520 the method determines whether there has been a violation of the predefined proximity to failure threshold. If not, then the "NO" branch is followed to block 517 and the output limit request to the MMU is removed before the method 500 loops back to block 505 and the limit request sensor associated with the UDM engine 203A continues to monitor the fraction of completion for the workload relative to an elapsed amount of time since the UDM engine 203A began processing the workload. If the predefined proximity to failure threshold has been violated, then the "YES" branch is followed to block 525 and a limit request for flooders is output to the limit aggregation module 114 of the MMU 204. It is envisioned that, depending on the percentage of workload that has been processed over a given period of time, the magnitude of the limit request sent to the limit aggregation module 114 may vary.
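Methods 400 and 500 above share the same monitor-compare-signal structure as method 300. A sketch of the latency tracking of method 400 over a sliding window, with both the average and the peak checked against the threshold (blocks 410 through 420), is given below; the window size and all names are illustrative assumptions:

```c
#include <stdbool.h>

#define WINDOW 32  /* illustrative sliding-window size */

typedef struct {
    unsigned sample[WINDOW];  /* round-trip latencies, e.g., in microseconds */
    unsigned idx;             /* next slot to overwrite */
    unsigned count;           /* samples collected so far, capped at WINDOW */
} latency_window;

void record_latency(latency_window *w, unsigned latency_us)
{
    w->sample[w->idx] = latency_us;
    w->idx = (w->idx + 1) % WINDOW;
    if (w->count < WINDOW)
        w->count++;
}

/* Decision block 420 analogue: true when the average or the peak latency
 * violates the threshold, i.e., a limit request should be output (block 425). */
bool latency_threshold_violated(const latency_window *w, unsigned threshold_us)
{
    if (w->count == 0)
        return false;
    unsigned long long sum = 0;
    unsigned peak = 0;
    for (unsigned i = 0; i < w->count; i++) {
        sum += w->sample[i];
        if (w->sample[i] > peak)
            peak = w->sample[i];
    }
    unsigned avg = (unsigned)(sum / w->count);
    return avg > threshold_us || peak > threshold_us;
}
```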
[0069] FIG. 6 is a logical flowchart illustrating in more detail the exemplary method 600 for hardware deadline based failure proximity detection described relative to the FIG. 2 limit request sensor. A limit request sensor configured for hardware deadline based failure proximity detection may be comprised within a UDM traffic engine 203A such as, but not limited to, a video codec, an image signal processor, or a "fixed function" engine.[0070] Beginning with block 605, an estimated workload completion time may be monitored via dedicated hardware, the arrangement of which would be understood by one of ordinary skill in the art. For example, if the UDM traffic engine 203A were in the form of a video codec, the dedicated hardware may be comprised within the video codec and configured to log the percentage of a frame that has been encoded or decoded at a given point in time. At blocks 610 and 615, the time for completion of the remainder of the workload not yet processed may be estimated and compared to a predefined proximity to failure threshold. The proximity to failure level may be defined, determined and signaled based on a table or formula that considers variables such as, but not limited to, elapsed time, fraction of task completed, maximum deadline for full completion of task, and concurrent workload on the fixed function UDM engine 203A.[0071] Returning to the method 600, at decision block 620 the method determines whether there has been a violation of the predefined proximity to failure threshold. If not, then the "NO" branch is followed to block 617 and the output limit request to the MMU is removed before the method 600 loops back to block 605 and the limit request sensor associated with the UDM engine 203A continues to monitor the fraction of completion for the workload relative to an elapsed amount of time since the UDM engine 203A began processing the workload. If the predefined proximity to failure threshold has been violated, then the "YES" branch is followed to block 625 and a limit request for flooders is output to the limit aggregation module 114 of the MMU 204. It is envisioned that, depending on the percentage of workload that has been processed over a given period of time, the magnitude of the limit request sent to the limit aggregation module 114 may vary.[0072] FIG. 7 is a logical flowchart illustrating an exemplary method 700 for dynamic control of shared memory resources. Beginning at block 705, each traffic engine with access to the shared memory resource may be defined or classified as either a flooding engine or a non-flooding engine. UDM engines 203 may be classified as non-flooding engines at block 710. Next, at block 715, translation requests emanating from flooding engines may be separately queued from translation requests emanating from non-flooding engines. The method 700 then proceeds to block 720.[0073] At block 720, for each cache miss from flooding engines 201 and non-flooding engines 202, 203 the method 700 may assign a shared memory resource, such as a hardware table walker ("HTW"), to respond to the translation request. At block 720, the method 700 may be allocating the shared memory resources according to a default allocation policy without regard for the classification of the engine from which a given translation request emanated. The method 700 continues to decision block 725 to determine if a UDM engine(s) 203 has issued a limit request for the MMU 204 to limit access for flooder engines 201 to memory resources; the decision outcomes are described next, and a sketch of the resulting arbitration loop follows.
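A minimal sketch of that arbitration loop, together with one possible (purely illustrative) linear back-off mapping an ALI value to the number of HTWs left eligible for flooder requests, is shown below. All names are invented; the patent implements this arbitration in MMU hardware and/or firmware:

```c
#include <stdbool.h>

extern bool any_udm_limit_request(void);        /* decision blocks 725/735 */
extern void apply_default_htw_policy(void);     /* block 720 analogue */
extern void restrict_flooder_htw_policy(void);  /* block 730 analogue */

/* One iteration of the method-700 loop. */
void mmu_arbitration_step(void)
{
    if (any_udm_limit_request())
        restrict_flooder_htw_policy();  /* free walkers for UDM requests */
    else
        apply_default_htw_policy();     /* no engine near failure */
}

#define NUM_HTWS 8  /* illustrative pool size */

/* Higher aggregated limit indicator (ALI) -> fewer walkers eligible for
 * flooder requests; a linear back-off is assumed here for illustration. */
unsigned flooder_htw_quota(unsigned ali, unsigned ali_max)
{
    if (ali >= ali_max)
        return 0;                       /* fully throttle flooder walks */
    return NUM_HTWS - (NUM_HTWS * ali) / ali_max;
}
```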
[0074] If no UDM engine 203 limit requests have been received, the "NO" branch may be followed from decision block 725 and the method 700 may continue to allocate shared memory resources according to a default allocation policy. If, however, one or more limit requests have been received by the MMU 204, the "YES" branch may be followed from decision block 725 to process block 730. At process block 730, the method 700 may modify the translation request arbitration policy according to one or more factors previously described such that one or more shared memory resources, such as HTWs, are freed up for servicing UDM engine 203 translation requests. The method continues to decision block 735 and, if the UDM engine 203 limit requests are cleared (i.e., no UDM engine 203 is in danger of failure), the method loops back to block 720 where the default arbitration policy is resumed. Otherwise the method 700 follows the "NO" branch from decision block 735 back to process block 730 where arbitration of shared memory resources is dynamically adjusted to ensure that translation requests emanating from UDM engines 203 are timely serviced.[0075] Referring now to FIG. 8, this figure is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for dynamic control of shared memory resources. As shown, the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art. Further, instead of a CPU 110, a digital signal processor ("DSP") may also be employed as understood by one of ordinary skill.[0076] In general, memory management unit 204 may be formed from hardware and/or firmware and may be responsible for dynamically controlling allocation of shared memory resources among and between flooding engines and non-flooding engines. As illustrated in FIG. 8, a display controller 128 and a touch screen controller 130 are coupled to the digital signal processor 110. A touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 130. PCD 100 may further include a video encoder 134, e.g., a phase-alternating line ("PAL") encoder, a sequential couleur avec memoire ("SECAM") encoder, a national television system(s) committee ("NTSC") encoder or any other type of video encoder 134. The video encoder 134 is coupled to the multi-core CPU 110. A video amplifier 136 is coupled to the video encoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 8, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140.[0077] A memory 112, which may include a PoP memory, a cache, a mask ROM / Boot ROM, a boot OTP memory, or a DDR-type DRAM memory, may also be coupled to the CPU 110. A subscriber identity module ("SIM") card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 8, a digital camera 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.[0078] As further illustrated in FIG. 8, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152.
FIG. 8 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.[0079] FIG. 8 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 8, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 8 also shows that a power supply 188, for example a battery, is coupled to the on-chip system 102 through a power management integrated circuit ("PMIC") 180. In a particular aspect, the power supply 188 includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source.[0080] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157B. The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on a vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157B may comprise one or more thermistors. The thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller (not shown). However, other types of thermal sensors 157 may be employed.[0081] The touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157B, the PMIC 180 and the power supply 188 are external to the on-chip system 102. It will be understood, however, that one or more of these devices depicted as external to the on-chip system 102 in the exemplary embodiment of a PCD 100 in FIG. 8 may reside on chip 102 in other exemplary embodiments.[0082] In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 or that form part of the MMU 204. Further, the MMU 204, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.[0083] FIG. 9 is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 8 for executing methodologies for dynamic control of shared memory resources. As illustrated in FIG. 9, the CPU or digital signal processor 110 is coupled to the memory 112 via the MMU 204 and main bus 211. The CPU 110, as noted above, is a multiple-core processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230.
As is known to one of ordinary skill in the art, each of the first core 222, the second core 224 and the Nth core 230 is available for supporting a dedicated application or program. Alternatively, one or more applications or programs may be distributed for processing across two or more of the available cores.[0084] The CPU 110 may receive commands from the MMU 204 that may comprise software and/or hardware. If embodied as software, the MMU 204 comprises instructions that are executed by the CPU 110 that issues commands to other application programs being executed by the CPU 110 and other processors.[0085] The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches, and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies.[0086] Bus 211 may include multiple communication paths via one or more wired or wireless connections, as is known in the art and described above in the definitions. The bus 211 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the bus 211 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.[0087] When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 9, it should be noted that one or more of startup logic 250, management logic 260, MMU interface logic 270, applications in application store 280 and portions of the file system 290 may be stored on any computer-readable medium for use by, or in connection with, any computer-related system or method.[0088] In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that may contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.[0089] The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CD-ROM) (optical).
Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.[0090] In an alternative embodiment, where one or more of the startup logic 250, management logic 260 and perhaps the MMU interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.[0091] The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor 110 (or additional processor cores).[0092] The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for dynamic control of shared memory resources. The startup logic 250 may identify, load and execute a select program. An exemplary select program may be found in the program store 296 of the embedded file system 290. The exemplary select program, when executed by one or more of the core processors in the CPU 110, may operate in accordance with one or more signals provided by the MMU 204 to implement methodologies for dynamic control of shared memory resources.[0093] The management logic 260 includes one or more executable instructions for terminating a program on one or more of the respective processor cores, as well as selectively identifying, loading, and executing a more suitable replacement program. The management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device. A replacement program may be found in the program store 296 of the embedded file system 290.[0094] The interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of, one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to a given proximity to failure threshold for a certain type of engine designated as a UDM engine.[0095] The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100.
When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280 or information in the embedded file system 290 may be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280 and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.[0096] The embedded file system 290 includes a hierarchically arranged memory management store 292. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various memory management and resource arbitration algorithms used by the PCD 100.[0097] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel (substantially simultaneously) with other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", "subsequently", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.[0098] The various operations and/or methods described above may be performed by various hardware and/or software component(s) and/or module(s), and such component(s) and/or module(s) may provide the means to perform such operations and/or methods. Generally, where there are methods illustrated in Figures having corresponding counterpart means-plus-function Figures, the operation blocks correspond to means-plus-function blocks with similar numbering. For example, blocks 805 through 845 illustrated in FIG. 8 correspond to means-plus-functions that may be recited in the claims.[0099] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows.[00100] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium.
Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.[00101] Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.[00102] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[00103] The methods or systems, or portions of the system and methods, may be implemented in hardware or software. If implemented in hardware, the devices can include any, or a combination of, the following technologies, which are all well known in the art: discrete electronic components, an integrated circuit, an application-specific integrated circuit having appropriately configured semiconductor devices and resistive elements, etc. Any of these hardware devices, whether acting alone or with other devices or other components such as a memory, may also form or comprise components or means for performing various operations or steps of the disclosed methods.[00104] The software and data used in representing various elements can be stored in a memory and executed by a suitable instruction execution system (microprocessor). The software may comprise an ordered listing of executable instructions for implementing logical functions, and can be embodied in any "processor-readable medium" for use by or in connection with an instruction execution system, apparatus, or device, such as a single or multiple-core processor or processor-containing system. Such systems will generally access the instructions from the instruction execution system, apparatus, or device and execute the instructions.[00105] Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
Methods, systems, and devices for controlled and mode-dependent heating of a memory device are described. In various examples, a memory device or an apparatus that includes the memory device may have circuitry configured to heat the memory device. The circuitry configured to heat the memory device may be activated, deactivated, or otherwise operated based on an indication of a temperature (e.g., of the memory device). In some examples, activating or otherwise operating the circuitry configured to heat the memory device may be based on an operating mode (e.g., of the memory device), which may be associated with certain access operations or operational states (e.g., of the memory device). Various operations or operating modes (e.g., of the memory device) may also be based on indications of the temperature (e.g., of the memory device). |
1.A method including:Determining the temperature of a memory device including a cell having a capacitive storage element;Comparing the temperature of the memory device with a threshold; andA circuit configured to heat the memory device is activated based at least in part on the comparison of the temperature and the threshold.2.The method of claim 1, further comprising:After activating the circuit configured to heat the memory device, determining a second temperature of the memory device;Determining that the second temperature of the memory device satisfies a second threshold; andThe circuit configured to heat the memory device is deactivated based at least in part on the determination that the second temperature meets the second threshold.3.The method of claim 2, further comprising:After deactivating the circuit configured to heat the memory device, determining a third temperature of the memory device;Comparing the third temperature of the memory device with a third threshold that is higher than the threshold and lower than the second threshold; andThe circuit configured to heat the memory device is activated based at least in part on comparing the third temperature to the third threshold.4.The method of claim 2, wherein the second threshold is higher than the threshold.5.The method of claim 1, further comprising:An indication that the access operation of the memory device is restricted is transmitted to the host device based at least in part on the comparison of the temperature and the threshold.6.The method according to claim 5, wherein transmitting the indication that the access operation of the memory device is restricted comprises:Instructing to disable at least one of a read operation or a write operation for the memory device.7.The method of claim 5, further comprising:The memory device is initialized, wherein transmitting the indication that the access operation of the memory device is restricted is based at least in part on the initialization.8.The method of claim 1, further comprising:A voltage source is coupled with one or more resistive components in the memory device configured to heat the memory device, wherein activating the circuit configured to heat the memory device is based at least in part on the coupling.9.The method of claim 1, further comprising:Applying a signal to one or more driver components of the memory device, wherein activating the circuit configured to heat the memory device is based at least in part on the application.10.A device including:A memory device, which includes a unit having a capacitive storage element;A temperature sensor coupled with the memory device and configured to generate an indication of the temperature of the memory device; andA circuit coupled with the memory device and configured to heat the memory device based at least in part on the indication generated by the temperature sensor.11.The device of claim 10, further comprising:The controller of the memory device, which is configured to cause the device to perform the following operations:Identifying the temperature of the memory device based at least in part on the indication generated by the temperature sensor; andThe circuit configured to heat the memory device is enabled based at least in part on the comparison of the temperature of the memory device with a threshold value.12.The apparatus of claim 11, wherein the controller of the memory device is further configured to cause the apparatus to perform the following operations:Comparing the temperature of the memory device with a 
second threshold; andThe circuit configured to heat the memory device is adjusted based at least in part on the comparison of the temperature of the memory device with the second threshold.13.The apparatus of claim 11, wherein the controller of the memory device is further configured to cause the apparatus to perform the following operations:The threshold value is identified based at least in part on accessing a non-volatile storage component of the memory device that is configured to store the indication of the threshold value.14.The apparatus of claim 13, wherein the threshold is associated with a configuration of the memory device, and wherein the controller of the memory device is further configured to cause the apparatus to perform the following operations:Identifying the configuration of the memory device based at least in part on accessing the non-volatile storage component; andThe threshold is determined based at least in part on the configuration.15.The apparatus of claim 11, wherein the controller of the memory device is further configured to cause the apparatus to perform the following operations:Identifying the operating mode of the memory device; andThe threshold is determined based at least in part on the mode of operation.16.The apparatus of claim 11, wherein the controller of the memory device is further configured to cause the apparatus to perform the following operations:Detecting the initialization of the memory device; andSignaling is transmitted to the host device based at least in part on the initialization and the comparison of the temperature of the memory device with the threshold, the signaling indicating that access to the memory device is restricted.17.The apparatus of claim 16, wherein the signaling indicates that the memory device is not available for read or write operations.18.The apparatus of claim 16, wherein the signaling indicates the temperature of the memory device.19.The apparatus of claim 10, wherein the circuit configured to heat the memory device comprises:A voltage source;One or more resistive elements; andOne or more switch components configured to selectively couple the voltage source and the one or more resistive elements.20.The apparatus of claim 10, wherein the circuit configured to heat the memory device comprises:One or more drivers, which are coupled to a load; andA switch component configured to selectively apply a signal to the one or more drivers coupled with the load.21.A method including:Receiving, at a host device, an indication of the temperature of a memory device coupled with the host device, the memory device including a unit having a capacitive storage element;Evaluating the temperature of the memory device with respect to a threshold; andA command to access the memory device is inhibited by the host device based at least in part on determining the temperature of the memory device with respect to the threshold.22.The method of claim 21, further comprising:Receiving an indication of a second temperature of the memory device at the host device after inhibiting the command to access the memory device; andThe command to access the memory device is issued to the memory device based at least in part on the indication of the second temperature of the memory device.23.The method of claim 21, further comprising:Receiving an indication that the memory device is available at the host device after inhibiting the command to access the memory device; andThe command to access the memory device is issued to the memory device based at least in
part on the indication that the memory device is available.24.The method of claim 21, further comprising:The memory device is initialized, wherein inhibiting the command to access the memory device is based at least in part on the initialization.25.The method of claim 24, further comprising:Issuing a command to the memory device that provides the indication of the temperature of the memory device based at least in part on the initialization. |
Controlled heating of a memory device Cross Reference This patent application claims priority to U.S. Patent Application No. 16/579,437 by Mayer et al., entitled "CONTROLLED HEATING OF A MEMORY DEVICE," filed on September 23, 2019, and to U.S. Provisional Patent Application No. 62/749,441 by Mayer et al., entitled "CONTROLLED HEATING OF A MEMORY DEVICE," filed on October 23, 2018, each of which is assigned to the assignee hereof, and each of which is expressly incorporated herein by reference in its entirety. Background The following generally relates to a system including at least one memory device, and more specifically, to the controlled and mode-dependent heating of a memory device.Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, and digital displays. Information is stored by programming different states of the memory device. For example, binary devices most often store one of two states usually represented by a logic 1 or a logic 0. In other devices, more than two states can be stored. In order to access the stored information, a component of the device can read or sense the stored state of at least one memory cell. To store information, a component of the device can write or program the state in the memory device.There are various types of memory devices, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices can be volatile or non-volatile. Non-volatile memories such as PCM and FeRAM can maintain the stored logic state for a long period of time even when there is no external power supply. Volatile memory devices (e.g., DRAM) may lose stored logic states over time unless they are periodically refreshed by a power source. In some cases, non-volatile memory can use a device architecture similar to that of volatile memory, but can have non-volatile properties by using physical phenomena such as ferroelectric capacitors or different material phases.In some applications, the memory device may be included as part of the host device, or otherwise associated with the host device (e.g., coupled to and controlled by it).
The host device may be configured for operation in an environment associated with an ambient temperature range, and at least some operations of the memory device may be temperature sensitive. Brief Description of the Drawings Figure 1 illustrates an example of a system that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figure 2 illustrates an example of a memory die that supports controlled and mode-dependent heating of a memory device in accordance with aspects disclosed herein. Figure 3 illustrates an example of a system that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figure 4 illustrates an example of a temperature profile associated with controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figure 5 illustrates an example of a temperature profile associated with controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figures 6A and 6B illustrate an example of a memory heater that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figure 7 shows a block diagram of a device that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figure 8 shows a block diagram of a device that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. Figures 9-12 show flowcharts illustrating one or more methods of supporting controlled and mode-dependent heating of a memory device in accordance with aspects disclosed herein. Detailed Description A system or host device including a memory device may be designed or configured to operate within an ambient temperature range that is different from the operating temperature range designed or configured for the memory device. For example, automotive systems (e.g., vehicles, vehicle components, vehicle processors or controllers), networked systems (e.g., wireless base stations), or mobile devices can be designed to operate at relatively low ambient temperatures (e.g., ambient temperatures as low as -40℃, or an ambient temperature range of -40℃ to 105℃ or 115℃), which can be lower than the operating temperature range that the memory device is designed to support (e.g., temperatures as low as 0℃, supported with one or more guaranteed or otherwise specified performance characteristics).One or more aspects of memory device operation may be temperature-dependent, and it may be necessary to ensure that the memory device meets operating parameters within the ambient temperature range expected by the system or host device. In various examples of the described technology, a memory device or an apparatus or system including the memory device may include circuits or other components configured to heat the memory device. A circuit or other component configured to heat the memory device can be activated, deactivated, or otherwise operated based on an indication of the temperature of the memory device (for example, an indication of the overall temperature of the memory device, an average temperature of the memory device, or an aggregate temperature of the memory device),
which in some instances can reduce the operating temperature range of the memory device to be narrower than the ambient temperature range of the system or host device containing the memory device.Controlled memory heating according to the described technology can advantageously enable the memory device to meet a relatively wide ambient temperature range parameter of the system or host device, while operating the memory device within a relatively narrow operating temperature range. The relatively narrow operating temperature range can further support the improvement and optimization of the operating parameters (for example, voltage or timing parameters) of the memory device.In some instances, activating, deactivating, or otherwise operating a circuit or other component configured to heat the memory device may be based on the target (desired) operating mode of the memory device, which may be associated with specific access operations or operating states of the memory device. For example, a relatively low temperature may be beneficial for certain operations or modes of operation (e.g., a self-refresh operation or mode, a power-off or standby mode), while a relatively high temperature may be beneficial for other operations or modes of operation (e.g., read or write operations, or related modes that support memory access). In addition, various operations or operating modes of the memory device may be enabled or disabled based on the indication of the temperature of the memory device. For example, some operations or modes can be enabled (e.g., activated, available, supported) or disabled (e.g., deactivated, unavailable, restricted) based on whether the indicated temperature of the memory device is within the corresponding (e.g., suitable, desired, target) temperature range. Power can be saved advantageously by activating, deactivating, or otherwise operating circuits or other components configured to heat the memory device based on the operating mode of the memory device (e.g., by not heating the memory device when the memory device is in the self-refresh mode).
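As a non-limiting illustration of the temperature- and mode-dependent control just described, the following sketch activates the heating circuit below a low threshold, deactivates it above a higher threshold (providing hysteresis), and holds it off in low-power modes such as self-refresh. The thresholds, names, and mode check are assumptions made for illustration and do not represent any particular disclosed implementation:

```c
#include <stdbool.h>

enum mem_mode { MODE_ACTIVE, MODE_SELF_REFRESH, MODE_STANDBY };

extern int  read_temperature_c(void);  /* temperature sensor indication */
extern void set_heater(bool on);       /* enables the heating circuitry */

#define T_ON_C    0   /* activate heating at or below this temperature */
#define T_OFF_C  10   /* deactivate heating at or above this temperature */

static bool heater_on;

void heater_control_step(enum mem_mode mode)
{
    if (mode != MODE_ACTIVE) {
        heater_on = false;       /* e.g., save power during self-refresh */
    } else {
        int t = read_temperature_c();
        if (t <= T_ON_C)
            heater_on = true;    /* too cold for reliable access operations */
        else if (t >= T_OFF_C)
            heater_on = false;   /* warm enough; stop heating */
        /* Between the thresholds the previous state is kept (hysteresis). */
    }
    set_heater(heater_on);
}
```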
The features of the present disclosure are first described in the context of the memory system and memory die as described with reference to FIGS. 1 and 2. The features of the present disclosure are further described in the context of a system and temperature profiles for operating a memory device with controlled and mode-dependent heating of the memory device as described with reference to FIGS. 3 to 6B. These and other features of the present disclosure are further illustrated by and with reference to equipment diagrams, system diagrams, and flowcharts related to the controlled and mode-dependent heating of the memory device as described with reference to FIGS. 7 to 12.Figure 1 illustrates an example of a system 100 that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. The system 100 may include an external memory controller 105, a memory device 110, and multiple channels 115 that couple the external memory controller 105 and the memory device 110. The system 100 may include one or more memory devices, but for ease of description, the one or more memory devices may be described as a single memory device 110.The system 100 may include various aspects of an electronic device, such as a computing device, a mobile computing device, a wireless device, or a graphics processing device. The system 100 may be an example of a portable electronic device. The system 100 may be an example of a computer, a laptop computer, a tablet computer, a smart phone, a cellular phone, a wearable device, an Internet connected device, and so on. The memory device 110 may be a component of the system configured to store data for one or more other components of the system 100. In some examples, the system 100 is configured for two-way wireless communication with other systems or devices using base stations or access points. In some instances, the system 100 can perform machine-type communication (MTC), machine-to-machine (M2M) communication, or device-to-device (D2D) communication.At least part of the system 100 may be an example of a host device. Such a host device may be an example of a device that uses a memory to execute a process, such as a computing device, a mobile computing device, a wireless device, a graphics processing device (for example, a graphics processing unit (GPU)), a computer, a laptop computer, a tablet computer, a smart phone, a cellular phone, a wearable device, an Internet connected device, some other fixed or portable electronic device, etc. In some cases, the host device may refer to hardware, firmware, software, or a combination thereof that implements the functions of the external memory controller 105. In some cases, the external memory controller 105 may be referred to as a host or a host device. In some examples, system 100 is a graphics card.In some cases, the memory device 110 may be an independent device or component that is configured to communicate with other components of the system 100 and provide a physical memory address/space that the system 100 can use or reference. In some examples, the memory device 110 may be configurable to work with one or more different types of systems 100. The signaling between the components of the system 100 and the memory device 110 may be operable to support the modulation scheme used to modulate the signals, the different pin designs used to communicate the signals, the different packaging of the system 100 and the memory device 110, clock signaling and synchronization between the system 100 and the memory device 110, timing conventions, and/or other factors.The memory device 110 may be configured to store data for the components of the system 100. In some cases, the memory device 110 may act as a slave device of the system 100 (e.g., respond to and execute commands provided by the external memory controller 105 of the system 100). Such commands may include access commands for access operations, such as write commands for write operations, read commands for read operations, refresh commands for refresh operations, or other commands. The memory device 110 may include two or more memory dies 160 (e.g., memory chips) supporting a required or specified capacity for data storage. A memory device 110 that includes two or more memory dies may be referred to as a multi-die memory or package (also referred to as a multi-chip memory or package).The system 100 may further include a processor 120, a basic input/output system (BIOS) component 125, one or more peripheral components 130, and an input/output (I/O) controller 135. The components of the system 100 may be coupled to each other or communicate electronically using the bus 140.The processor 120 may be configured to control at least part of the system 100.
The processor 120 may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these types of components. In these cases, the processor 120 may be an instance of a central processing unit (CPU), a GPU, a general-purpose GPU (GPGPU), or a system on a chip (SoC), among other instances.The BIOS component 125 may be a software component including a BIOS operating as firmware, which can initialize and run various hardware components of the system 100. The BIOS component 125 can also manage the data flow between the processor 120 and various components of the system 100 (for example, the peripheral components 130, the I/O controller 135, etc.). The BIOS component 125 may include programs or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.The peripheral components 130 may be any input device or output device, or an interface for such devices, that may be integrated into or with the system 100. Examples can include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or peripheral card slots, such as peripheral component interconnect (PCI) or accelerated graphics port (AGP) slots. A peripheral component 130 may be any other component understood by those of ordinary skill in the art as a peripheral device.The I/O controller 135 can manage data communication between the processor 120 and the peripheral components 130, the input device 145, or the output device 150. The I/O controller 135 may manage peripheral devices that are not integrated into or with the system 100. In some cases, the I/O controller 135 may represent a physical connection or port to external peripheral components.The input 145 can represent a device or signal that is external to the system 100 and can provide information, signals, or data to the system 100 or its components. This may include a user interface or an interface with or between other devices. In some cases, the input 145 may be a peripheral device that interfaces with the system 100 via one or more peripheral components 130, or may be managed by the I/O controller 135.The output 150 may represent a device or signal external to the system 100 that is configured to receive output from the system 100 or any of its components. Examples of the output 150 may include a display, audio speakers, a printing device, or another processor on a printed circuit board, and so on. In some cases, the output 150 may be a peripheral device that interfaces with the system 100 via one or more peripheral components 130, or may be managed by the I/O controller 135.The components of the system 100 may be composed of general-purpose or special-purpose circuits designed to perform their functions. This may include output driver circuits and various other circuit elements configured to perform the functions described herein, such as conductive wires, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements.
For example, the system 100 may include one or more temperature sensors, which may be included in the memory device, the external memory controller 105, or other aspects of the system 100 or otherwise interact with the memory device, the external memory controller 105, or the system 100. Other aspects of coupling. As another example, the system 100 may include circuits configured to heat the memory device 110, and such circuits may be included in or otherwise coupled with the memory device or other aspects of the system 100.The memory device 110 may include a device memory controller 155 and one or more memory dies 160. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, and/or local memory controller 165-N) and a memory array 170 (e.g., memory Array 170-a, memory array 170-b, and/or memory array 170-N). The memory array 170 may be a collection (e.g., a grid) of memory cells, where each memory cell is configured to store at least one bit of digital data. The features of the memory array 170 and/or memory cells are further described with reference to FIG. 2.The memory array 170 may be an example of a two-dimensional (2D) memory cell array or may be an example of a three-dimensional (3D) memory cell array. For example, a 2D memory device may include a single memory die 160. A 3D memory device may include two or more memory die 160 (e.g., memory die 160-a, memory die 160-b, and/or any The number of memory dies 160-N). In a 3D memory device, multiple memory dies 160-N may be stacked on top of each other. In some cases, the memory die 160-N in a 3D memory device may be referred to as a stack, hierarchy, layer, or die. A 3D memory device may include any number of stacked memory die 160-N (e.g., two high, three high, four high, five high, six high, seven high, eight high). Compared to a single 2D memory device, this can increase the number of memory cells that can be positioned on the substrate, which in turn can reduce production costs, improve the performance of the memory array, or both. In some 3D memory devices, different stacks can share at least one common access line so that some stacks can share at least one of word lines, digit lines, and/or plate lines.The device memory controller 155 may include circuits or components configured to control the operation of the memory device 110. Therefore, the device memory controller 155 may include hardware, firmware, and software that enable the memory device 110 to execute commands, and may be configured to receive, transmit, or execute commands, data, or control information about the memory device 110. The device memory controller 155 may be configured to communicate with the external memory controller 105, one or more memory dies 160, or the processor 120. In some cases, the memory device 110 may receive data and/or commands from the external memory controller 105.For example, the memory device 110 may receive a write command that instructs the memory device 110 to store certain data on behalf of a component of the system 100 (e.g., the processor 120), or instructs the memory device 110 to store data stored in the memory die 160 Certain data provides read commands to components of the system 100 (e.g., the processor 120). In some cases, the device memory controller 155 may control the operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160. 
Examples of components included in the device memory controller 155 and/or the local memory controller 165 may include a receiver for demodulating the signal received from the external memory controller 105, for modulating the signal and transmitting the signal to the external memory The decoder, logic, decoder, amplifier, filter, etc. of the controller 105.The local memory controller 165 (eg, local to the memory die 160) may be configured to control the operation of the memory die 160. In addition, the local memory controller 165 may be configured to communicate with the device memory controller 155 (e.g., receive and transmit data and/or commands). The local memory controller 165 may support the device memory controller 155 to control the operation of the memory device 110 as described herein. In some cases, the memory device 110 does not include the device memory controller 155, and the local memory controller 165 or the external memory controller 105 may perform various functions described herein. Therefore, the local memory controller 165 may be configured to communicate with the device memory controller 155, communicate with other local memory controllers 165, or directly communicate with the external memory controller 105 or the processor 120. Therefore, in some cases, the device memory controller 155 or one or more local memory controllers 165 may support the operation of the circuitry configured to heat the memory device 110 as described herein.The external memory controller 105 may be configured to implement the communication of information, data, and/or commands between the components of the system 100 (for example, the processor 120) and the memory device 110. The external memory controller 105 can act as a liaison between the components of the system 100 and the memory device 110, so that the components of the system 100 do not need to know the operation details of the memory device. The components of the system 100 may present to the external memory controller 105 a request (for example, a read command or a write command) satisfied by the external memory controller 105. The external memory controller 105 can convert or translate the communication exchanged between the components of the system 100 and the memory device 110. In some cases, the external memory controller 105 may include a system clock that generates a common (source) system clock signal. In some cases, the external memory controller 105 may include a common data clock that generates a common (source) data clock signal. Therefore, in some cases, the external memory controller 105 may support the operation of a circuit configured to heat the memory device 110 as described herein.In some cases, the external memory controller 105 or other components of the system 100 or its functions described herein may be implemented by the processor 120. For example, the external memory controller 105 may be hardware, firmware, or software implemented by the processor 120 or other components of the system 100, or some combination thereof. Although the external memory controller 105 is depicted as being external to the memory device 110, in some cases, the external memory controller 105 or its functions described herein may be implemented by the memory device 110. For example, the external memory controller 105 may be hardware, firmware, or software implemented by the device memory controller 155 or one or more local memory controllers 165, or some combination thereof. 
In some cases, the external memory controller 105 may be distributed across the processor 120 and the memory device 110, such that part of the external memory controller 105 is implemented by the processor 120, and other parts are implemented by the device memory controller 155 or the local memory controller 165. Implement. Similarly, in some cases, one or more functions attributed to the device memory controller 155 or the local memory controller 165 herein may be provided by the external memory controller 105 (separate from the processor 120 or included in the processor 120) in some cases. 120) execution.The components of the system 100 can exchange information with the memory device 110 using multiple channels 115. In some examples, the channel 115 may enable communication between the external memory controller 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission media (e.g., conductors) between terminals associated with the components of the system 100. For example, the channel 115 may include a first terminal including one or more pins or pads at the external memory controller 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and the pin may be configured to act as part of a channel. In some cases, the pins or pads of the terminals may be part of the signal path of the channel 115. Additional signal paths can be coupled with the terminals of the channel to route signals within the components of the system 100. For example, the memory device 110 may include various components that route signals from the terminals of the channel 115 to the memory device 110 (e.g., device memory controller 155, memory die 160, local memory controller 165, memory array 170). A signal path (e.g., a signal path inside the memory device 110 or its components, such as a signal path inside the memory die 160).Channel 115 (and associated signal paths and terminals) may be dedicated to conveying specific types of information. In some cases, the channel 115 may be an aggregated channel, and thus may include multiple individual channels. For example, the data channel 190 may be x4 (for example, including four signal paths), x8 (for example, including eight signal paths), x16 (including sixteen signal paths), and so on.In some cases, channel 115 may include one or more command and address (CA) channels 186. The CA channel 186 may be configured to transfer commands between the external memory controller 105 and the memory device 110, including control information (e.g., address information) associated with the commands. For example, the CA channel 186 may contain a read command with an address of the required data. In some cases, the CA channel 186 may be registered on a rising clock signal edge and/or a falling clock signal edge. In some cases, the CA channel 186 may include eight or nine signal paths.In some cases, the channel 115 may include one or more clock signal (CK) channels 188. The CK channel 188 may be configured to communicate one or more common clock signals between the external memory controller 105 and the memory device 110. Each clock signal can be configured to adjust (eg, oscillate) between a high state and a low state and coordinate the actions of the external memory controller 105 and the memory device 110. 
In some cases, the clock signal may be a differential output (eg, CK_t signal and CK_c signal), and the signal path of the CK channel 188 may be configured accordingly. In some cases, the clock signal can be single-ended. In some cases, the clock signal may be a 1.5 GHz signal. The CK channel 188 may contain any number of signal paths. In some cases, the clock signal CK (eg, CK_t signal and CK_c signal) may provide a timing reference for command and addressing operations of the memory device 110 or other system-wide operations of the memory device 110. The clock signal CK can therefore be referred to as a control clock signal CK, a command clock signal CK or a system clock signal CK differently. The system clock signal CK may be generated by a system clock, and the system clock may include one or more hardware components (for example, an oscillator, a crystal, a logic gate, a transistor, etc.).In some cases, the channel 115 may include one or more data (DQ) channels 190. For example, channel 115 may include data channels 190-1 to 190-n. Each data channel may be associated with or include one or more transmission lines. The data channel 190 may be configured to communicate data and/or control information between the external memory controller 105 and the memory device 110. For example, the data channel 190 may convey information to be written to the memory device 110 (e.g., two-way) or information read from the memory device 110. The data channel 190 may convey signals modulated using a variety of different modulation schemes (e.g., NRZ, PAM4).In some cases, channel 115 may include one or more other channels 192 that may be dedicated for other purposes. These other channels 192 may contain any number of signal paths.In some cases, other channels 192 may include one or more write clock signal (WCK) channels. Although the'W' in WCK can represent "write" nominally, the write clock signal WCK (e.g., WCK_t signal and WCK_c signal) can provide a timing reference commonly used for access operations of the memory device 110 (e.g., Timing reference for both read and write operations). Therefore, the write clock signal WCK may also be referred to as a data clock signal WCK. The WCK channel may be configured to communicate a common data clock signal between the external memory controller 105 and the memory device 110. The data clock signal may be configured to coordinate access operations (for example, write operations or read operations) of the external memory controller 105 and the memory device 110. In some cases, the write clock signal can be a differential output (eg, WCK_t signal and WCK_c signal), and the signal path of the WCK channel can be configured accordingly. The WCK channel can contain any number of signal paths. The data clock signal WCK may be generated by a data clock, and the data clock may include one or more hardware components (for example, an oscillator, a crystal, a logic gate, a transistor, etc.).In some cases, other channels 192 may include one or more error detection code (EDC) channels. The EDC channel can be configured to convey error detection signals, such as checksums, to improve system reliability. The EDC channel can contain any number of signal paths.The channel 115 can couple the external memory controller 105 with the memory device 110 using a variety of different architectures. 
Examples of various architectures may include buses, point-to-point connections, crossbar switches, high-density interposers such as silicon interposers, or channels formed in organic substrates, or some combination thereof. For example, in some cases, the signal path may at least partially include high-density interposers, such as silicon interposers or glass interposers.The signal communicated via channel 115 can be modulated using a variety of different modulation schemes. In some cases, a binary symbol (or binary level) modulation scheme may be used to modulate the signal communicated between the external memory controller 105 and the memory device 110. The binary symbol modulation scheme may be an example of an M-ary modulation scheme in which M is equal to two. Each symbol of the binary symbol modulation scheme can be configured to represent one bit of digital data (e.g., the symbol can represent a logic 1 or a logic 0). Examples of binary symbol modulation schemes include, but are not limited to, non-return-to-zero (NRZ), unipolar encoding, bipolar encoding, Manchester encoding, pulse amplitude modulation (PAM) with two symbols (for example, PAM2), PAM4, etc. .In some cases, a multi-symbol (or multi-level) modulation scheme may be used to modulate the signal communicated between the external memory controller 105 and the memory device 110. The multi-symbol modulation scheme may be an example of an M-ary modulation scheme in which M is greater than or equal to three. Each symbol of the multi-symbol modulation scheme may be configured to represent more than one bit of digital data (for example, the PAM4 symbol may represent logic 00, logic 01, logic 10, or logic 11). Examples of multi-symbol modulation schemes include, but are not limited to, PAM4, PAM8, quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK), and the like. A multi-symbol signal (for example, a PAM4 signal) may be a signal modulated using a modulation scheme including at least three levels to encode information of more than one bit. Multi-symbol modulation schemes and symbols may alternatively be referred to as non-binary, multi-bit, or higher-order modulation schemes and symbols.According to the described technology, the system 100 may include a body configured to heat the memory device 110 (eg, heat the memory device 110, the memory die 160, or the body of the memory array 170, or heat the memory device 110, the memory die 160, or the memory in general). The mass or volume of the array 170) circuit or other components. The circuitry or other components configured to heat the memory device 110 may be based on an indication of the temperature of the memory device 110 (e.g., from being associated with the external memory controller 105, with the memory device 110, or with some other aspect of the system 100 (e.g., The temperature sensor contained in or coupled thereto generates and receives) and is activated, deactivated, or otherwise operated (for example, by the external memory controller 105, the device memory controller 155, or the local memory controller 165). In some examples, activating or otherwise operating a circuit or other component configured to heat the memory device 110 may be based on the operating mode of the memory device 110, which may be associated with a particular access operation or operating state of the memory device. 
The various operations or operation modes of the memory device 110 may also be based on an indication of the temperature of the memory device 110.Figure 2 illustrates an example of a memory die 160-b that supports controlled and mode-dependent heating of a memory device in accordance with aspects disclosed herein. The memory die 200 may be an example of the memory die 160 described with reference to FIG. 1. In some cases, the memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory device. The memory die 200 may include one or more memory cells 205 that are programmable to store different logic states. Each memory cell 205 may be programmable to store two or more states. For example, the memory unit 205 may be configured to store one bit of digital logic (e.g., logic 0 and logic 1) at a time. In some cases, a single memory cell 205 (e.g., a multi-level memory cell) may be configured to store more than one bit of digital logic (e.g., logic 00, logic 01, logic 10, or logic 11) at a time.The memory unit 205 may store the charge representing the programmable state in the capacitor. In a DRAM architecture, the memory cell 205 may include a capacitor that includes a dielectric material to store charge representing a programmable state. In other memory architectures, other storage devices and components are possible. For example, nonlinear dielectric materials can be used.Operations such as reading and writing can be performed on the memory cell 205 by activating or selecting access lines such as the word line 210 and/or the digit line 215. In some cases, the digit line 215 may also be referred to as a bit line. References to access lines, word lines, and digit lines or the like are interchangeable without affecting understanding or operation. Activating or selecting the word line 210 or the digit line 215 may include applying a voltage to the corresponding line.The memory die 200 may include access lines (eg, word lines 210 and digit lines 215) arranged in a grid-like pattern. The memory cell 205 can be positioned at the intersection of the word line 210 and the digit line 215. By biasing the word line 210 and the digit line 215 (for example, applying a voltage to the word line 210 or the digit line 215), a single memory cell 205 can be accessed at their intersection.Access to the memory unit 205 can be controlled by the row decoder 220 or the column decoder 225. For example, the row decoder 220 may receive a row address from the local memory controller 260 and activate the word line 210 based on the received row address. The column decoder 225 may receive a column address from the local memory controller 260 and may activate the digital line 215 based on the received column address. For example, the memory die 200 may include a plurality of word lines 210 labeled WL_1 to WL_M and a plurality of digital lines 215 labeled DL_1 to DL_N, where M and N depend on the size of the memory array. Therefore, by activating the word line 210 and the digit line 215, such as WL_1 and DL_3, the memory cell 205 at the intersection point thereof can be accessed. The intersection of the word line 210 and the digit line 215 in a two-dimensional or three-dimensional configuration may be referred to as the address of the memory cell 205.The memory unit 205 may include logic storage components, such as a capacitor 230 and a switch component 235. The capacitor 230 may be an example of a dielectric capacitor or a ferroelectric capacitor. 
The first node of the capacitor 230 may be coupled with the switch component 235 and the second node of the capacitor 230 may be coupled with the voltage source 240. In some cases, the voltage source 240 is ground, such as Vss. In some cases, the voltage source 240 may be an example of a plate line coupled with a plate line driver. The switching component 235 may be an example of a transistor or any other type of switching device that selectively establishes or cancels the establishment of electronic communication between two components.The selection or deselection of the memory unit 205 can be realized by activating or deactivating the switch component 235. The capacitor 230 may electronically communicate with the digital line 215 using the switch assembly 235. For example, when the switch component 235 is deactivated, the capacitor 230 can be isolated from the digital line 215, and when the switch component 235 is activated, the capacitor 230 can be coupled with the digital line 215. In some cases, the switch component 235 may be or include a transistor and its operation may be controlled by applying a voltage to the transistor gate, where the voltage difference between the transistor gate and the transistor source may be greater or less than the threshold voltage of the transistor. In some cases, the switch component 235 may be or include a p-type transistor or an n-type transistor. The word line 210 may be in electronic communication with the gate of the switching element 235, and the switching element 235 may be activated/deactivated based on the voltage applied to the word line 210.The word line 210 may be a conductive line that communicates electronically with the memory cell 205 that can be used to perform access operations on the memory cell 205. In some architectures, the word line 210 may be in electronic communication with the gate of the switching element 235 of the memory cell 205 and may be configured to control the switching element 235 of the memory cell. In some architectures, the word line 210 may electronically communicate with the node of the capacitor of the memory cell 205, and the memory cell 205 may not include a switching component.The digital line 215 may be a conductive line connecting the memory unit 205 and the sensing component 245. In some architectures, the memory cell 205 may be selectively coupled with the digit line 215 during part of the access operation. For example, the word line 210 and the switching component 235 of the memory cell 205 may be configured to couple and/or isolate the capacitor 230 and the digit line 215 of the memory cell 205. In some architectures, the memory unit 205 may electronically communicate with the digital line 215.The sensing component 245 can be configured to detect the state (eg, charge) stored on the capacitor 230 of the memory cell 205, and determine the logical state of the memory cell 205 based on the stored state. In some cases, the charge stored by the memory cell 205 may be small. Therefore, the sensing component 245 may include one or more sense amplifiers to amplify the signal output by the memory unit 205. The sense amplifier can detect a small change in the charge of the digital line 215 during the read operation, and can generate a signal corresponding to the logic state 0 or the logic state 1 based on the detected charge.During the read operation, the capacitor 230 of the memory cell 205 may output a signal (for example, to discharge the charge) to its corresponding digital line 215. 
The signal can change the voltage of the digital line 215. The sensing component 245 can be configured to compare the signal received from the memory cell 205 across the digital line 215 with a reference signal 250 (e.g., a reference voltage). The sensing component 245 can determine the stored state of the memory unit 205 based on the comparison. For example, in binary signaling, if the digital line 215 has a higher voltage than the reference signal 250, the sensing component 245 can determine that the stored state of the memory cell 205 is logic 1, and if the digital line 215 has a higher voltage than the reference signal 250 250 is low, then the sensing component 245 can determine that the stored state of the memory cell 205 is logic 0.The sensing component 245 may include various transistors or amplifiers to detect and amplify signal differences. In some cases, the sensing component 245 may be part of another component (e.g., column decoder 225, row decoder 220). In some cases, the sensing component 245 may electronically communicate with the row decoder 220 or the column decoder 225.The detected logic state of the memory cell 205 as determined by the sensing component 245 can be output as an output 255 through the column decoder 225. The output 255 may pass the detected logic state to one or more intermediate components (e.g., a local memory controller) for transmission on one or more channels (e.g., for transmission on one or more transmission lines). Therefore, the detected logic state of the memory unit 205 can be delivered to a device or component outside the memory die 200.The local memory controller 260 may control the operation of the memory unit 205 through various components (for example, the row decoder 220, the column decoder 225, and the sensing component 245). The local memory controller 260 may be an example of the local memory controller 165 described with reference to FIG. 1. In some cases, one or more of the row decoder 220, the column decoder 225, and the sensing component 245 may be co-located with the local memory controller 260. The local memory controller 260 may be configured to receive commands and/or data from the external memory controller 105 (or the device memory controller 155 described with reference to FIG. 1), and convert the commands and/or data into a memory die 200 usable Information, perform one or more operations on the memory die 200, and communicate data from the memory die 200 to the external memory controller 105 (or device memory controller 155) in response to the execution of the one or more operations.The local memory controller 260 may generate row and column address signals to activate the target word line 210 and the target digital line 215. The local memory controller 260 can also generate and control various voltages or currents used during the operation of the memory die 200. In general, the amplitude, shape, or duration of the applied voltage or current discussed herein can be adjusted or changed, and can be different for the various operations discussed when operating the memory die 200.In some cases, the local memory controller 260 may be configured to perform write operations (e.g., programming operations) on one or more memory cells 205 of the memory die 200. The write operation can be used for data received from an external device. During a write operation, the memory cell 205 of the memory die 200 can be programmed to store the desired logic state. 
In some cases, multiple memory cells 205 can be programmed during a single write operation. The local memory controller 260 can identify the target memory unit 205 where the write operation will be performed. The local memory controller 260 may identify the target word line 210 and the target digital line 215 (eg, the address of the target memory unit 205) in electronic communication with the target memory unit 205. The local memory controller 260 may activate the target word line 210 and the target digital line 215 (for example, apply a voltage to the word line 210 or the digital line 215) to access the target memory cell 205. The local memory controller 260 may apply the first signal (e.g., voltage) to the digital line 215 during the write operation to store the first state (e.g., charge) in the capacitor 230 of the memory cell 205, and the first state ( For example, charge) can indicate the desired logic state.In some cases, the local memory controller 260 may be configured to perform read operations (e.g., sensing operations) on one or more memory cells 205 of the memory die 200. The read operation can be used for data requested by the external device or intended for the external device. During a read operation, the logic state stored in the memory cell 205 of the memory die 200 can be determined. In some cases, multiple memory cells 205 may be sensed during a single read operation. The local memory controller 260 can identify the target memory unit 205 where the read operation will be performed. The local memory controller 260 may identify the target word line 210 and the target digital line 215 (eg, the address of the target memory unit 205) in electronic communication with the target memory unit 205. The local memory controller 260 may activate the target word line 210 and the target digital line 215 (for example, apply a voltage to the word line 210 or the digital line 215) to access the target memory cell 205.The target memory cell 205 may transmit a signal to the sensing component 245 in response to the biased access line. The sensing component 245 can amplify the signal. The local memory controller 260 may activate the sensing component 245 (for example, a latch sensing component), and thereby compare the signal received from the memory unit 205 with the reference signal 250. Based on the comparison, the sensing component 245 can determine the logic state stored on the memory unit 205. As part of the read operation, the local memory controller 260 may communicate the logic state stored on the memory unit 205 to the external memory controller 105 (or device memory controller 155).In some memory architectures, accessing the memory unit 205 can degrade or destroy the logic state stored in the memory unit 205. For example, a read operation performed in a DRAM architecture can partially or completely discharge the capacitor of the target memory cell. The local memory controller 260 may perform a rewrite operation or a refresh operation to restore the memory cell to its original logic state. The local memory controller 260 may rewrite the logic state to the target memory cell after the read operation. In some cases, the rewrite operation can be regarded as part of the read operation. In addition, activating a single access line (e.g., word line 210) can interfere with the state stored in some memory cells in electronic communication with the access line. 
Therefore, a rewrite operation or a refresh operation can be performed on one or more memory cells that may not have been accessed yet.The memory die 200 illustrates a two-dimensional (2D) memory cell array. In some cases, the memory device may include a three-dimensional (3D) array or memory cells. The 3D memory array may include two or more 2D memory arrays stacked on top of each other. In some cases, the 2D memory array in the 3D memory array may be referred to as a stack, hierarchy, layer, or die. The 3D memory array may include any number of stacked 2D memory arrays (e.g., two high, three high, four high, five high, six high, seven high, eight high). Compared to a single 2D memory array, this can increase the number of memory cells that can be positioned on a single die or substrate, which in turn can reduce production costs, improve the performance of the memory array, or both. In some 3D memory arrays, different stacks can share at least one common access line, so that some stacks can share at least one of the word line 210 or the digit line 215.System 100 or external memory controller 105 that includes memory die 200 or is otherwise associated with memory die 200 (e.g., includes memory device 110 that includes memory die 200 or is otherwise associated with memory die 200) It may be designed or configured to operate in an ambient temperature range that is different from the operating temperature range originally designed or configured for the memory die 200. For example, when the system 100 or the external memory controller 105 is included in a vehicle, the vehicle may be designed to operate at a relatively low temperature that may be lower than the designed operating temperature of the memory die 200 (eg, as low as 0°C). Operate at an ambient temperature (for example, as low as -40°C).One or more operational aspects of the memory die 200 may be temperature-sensitive, and the memory die 200 or a system including the memory die (e.g., a system such as the system 100) may be configured such that the memory device is not included in the memory die. The chip 200 or the system 100 or the external memory controller 105 that is otherwise associated with the memory die 200 satisfies the operating parameters within the expected ambient temperature range. For example, the memory die 200 or the device or system 100 that includes the memory die 200 may include circuits or other components configured to heat the memory die 200 (e.g., configured to heat the memory die 200 or include the memory die 200 The circuit of the memory device 110 of the 200 is configured to heat the circuit of the memory array included in the memory die 200, and the circuit is configured to heat all the memory cells 205 of the memory die 200). The circuitry or other components configured to heat the memory die 200 may be activated, deactivated, or otherwise operated based on an indication of the temperature of the memory device 110 or its components (e.g., based on a determination to raise the temperature of the memory device 110) . In some examples, activating, deactivating, or otherwise operating a circuit or other component configured to heat the memory die 200 may be based on the operating mode of the memory device 110 that includes the memory die 200, which may be consistent with the specificity of the memory device 110. The access operation or operation status is associated. 
The various operations or modes of operation of the memory device 110 may also be based on an indication of the temperature of the memory device 110 or its components.Figure 3 illustrates an example 300 of a system 100-c that supports controlled and mode-dependent heating of a memory device in accordance with aspects disclosed herein. The system 100-c may include a host device 305 and a memory device 110-c, which may be examples of corresponding components described with reference to FIGS. 1 and 2. Although the system 100-c is illustrated as having one memory device 110 (e.g., memory device 110-c), the components and techniques described herein may be described as including one memory device 110 or a group of memory devices 110 (e.g., more than A memory device 110) system 100.The system 100-c may operate in an environment 302 having an ambient temperature (for example, TA), which may refer to the ambient temperature or an ambient temperature range in which the system 100-c is designed to operate (for example, outdoor temperature, containing the system 100 -c the temperature of the shell or its interior). In some examples, the environment 302 may be associated with an ambient temperature range that is different from the operating temperature range associated with the memory device 110-c, or different from what would be configured for the memory device 110-c. Operating temperature range. For example, the system 100-c or the host device 305 may represent a vehicle or a vehicle component (e.g., a vehicle controller, a vehicle processing unit, or an external memory controller 105 included in the vehicle), and the environment 302 (e.g., outdoor environment , Vehicle environment, engine compartment environment, vehicle interior environment) can be associated with an ambient temperature range of -40°C to 100°C or some other temperature range. In some examples, the memory device 110-c may be designed or otherwise configured for operating temperatures between 0°C and 100°C. According to aspects disclosed herein, the system 100-c may include circuitry configured to heat the memory device 110-c (e.g., based on a determination associated with increasing the temperature of the memory device 110-c) such that even When the ambient temperature of the environment 302 is lower than the operating temperature (for example, as low as -40°C), the memory device 110-c may be within a designed or configured operating temperature range (for example, between 0°C and 100°C) The operation is performed at a temperature of the same temperature.The system 100-c may include various temperature sensors for measuring or indicating the temperature of the memory device 110-c. In some examples, the system 100-c may include a memory device temperature sensor 320, which may be a component of the memory device 110-c. The memory device temperature sensor 320 may be embedded in any of the device memory controller 155, the memory die 160, the local memory controller 165, the memory array 170, or any other components included in the memory device 110-c (e.g., As an integral component thereof), or coupled to any of the device memory controller 155, the memory die 160, the local memory controller 165, the memory array 170, or any other components included in the memory device 110-c. 
Although shown within the illustrative boundaries of the memory device 110-c, the memory device temperature sensor 320 may also be coupled to (eg, fused, fastened, soldered to) the external package of the memory device 110-c, which may include, for example, Thermally conductive coupling of thermal paste or other coupling between thermally conductive materials. The memory device temperature sensor 320 may provide a relatively direct measurement or indication of the temperature of the memory device 110-c or its components (e.g., temperature T1). In some examples, the acquisition rate associated with the memory device temperature sensor 320 (e.g., the rate at which the temperature indication is determined) may be linked to operations of the memory device 110-c, such as refresh or auto-refresh (AREF) commands, And can occur according to a configured interval (e.g., every 1.9 μs).Additionally or alternatively, the system 100-c may include a host device temperature sensor 330, which may be a component of the host device 305. The host device temperature sensor 330 may be embedded in the external memory controller 105 (for example, as an integral component thereof), or coupled to the external memory controller 105, or if such components are included in the host device 305, the host device temperature The sensor 330 may be embedded in the processor 120, the BIOS component 125, the peripheral component 130, or the I/O controller 135 or be coupled to the processor 120, the BIOS component 125, the peripheral component 130, or the I/O controller 135. Although shown within the illustrative boundaries of the host device 305, the host device temperature sensor 330 may also be coupled to (e.g., fused to, fastened to, soldered to) an external package of the host device 305, which may include thermal conductivity such as thermal paste Coupling or other coupling between thermally conductive materials. The host device temperature sensor 330 may provide a relatively direct measurement or indication of the temperature of the host device 305 or its components (for example, temperature T2), and in some instances or conditions, may provide a memory device 110 for supporting the technology described herein -A suitable measurement or indication of the temperature of c (for example, a relatively indirect measurement or indication).In some instances, the host device 305 and the memory device 110-c may be coupled via the coupling component 310, but in various instances of the system 100-c, the coupling component 310 or the functions described by it may be included in the system 100-c Or omitted from system 100-c. The coupling component 310 may be a physical component of the system 100-c that provides a coupling between the host device 305 and the memory device 110-c. The described coupling may include a thermal coupling that transports thermal energy between the host device 305 and the memory device 110-c. For example, the coupling component 310 may have relatively high thermal conductivity (e.g., low heat resistance), which may facilitate communication between the host device 305 and the memory device 305 under a relatively small temperature difference between the host device 305 and the memory device 110-c. Transfer of thermal energy between devices 110-c. 
In other words, the coupling component 310 can support the host device 305 and the memory device 110-c at relatively similar temperatures (for example, via relatively strong thermal coupling).By coupling the memory device 110-c with the host device 305 via the coupling component 310 (eg, via thermal coupling), the host device temperature sensor 330 can provide a more accurate measurement of the temperature of the memory device 110-c than when the coupling component 310 is omitted. instruct. For example, when the system 100-c includes the coupling component 310, the host device temperature sensor 330 can provide the memory device during thermal transients or when the internal heat generation of the memory device 110-c is different from the internal heat generation of the host device 305 More accurate indication of temperature. However, in some examples, the coupling component 310 may be omitted from the system 100-c, and the host device temperature sensor 330 may be adapted to support the techniques described herein.In some examples, the coupling component 310 may be specifically configured to reduce the temperature difference between the host device temperature sensor 330 and the memory device 110-c. For example, the coupling component 310 may be a specifically designed thermal bridge or connection between the host device temperature sensor 330 or the host device 305 and the memory device 110-c, such as a thermally conductive trace or pad of the substrate (for example, with the memory device 110-c). The conductive portion of the printed circuit board to which both the device 110-c and the host device 305 are coupled). In some instances, the coupling component 310 may be configured for other purposes, but additionally supports heat conduction between the memory device 110-c and the host device 305. For example, the coupling component 310 may be a heat sink or cooling fin configured to draw heat from the memory device 110-c or the host device 305, and may additionally limit the temperature between the memory device 110-c and the host device 305 Poor (e.g., as a secondary or additional purpose of coupling component 310). In some examples, the coupling component 310 can also refer to conductive traces of a printed circuit board or other interfacing components configured to communicate signals between the memory device 110-c and the host device 305 (e.g., with one or more The signal path associated with channel 115).Although the coupling component 310 is illustrated as a component separate from the memory device 110-c and the host device 305, in various examples of the system 100-c, the coupling component 310 or its described characteristics may be included in the memory device 110-c or the host device. One or both of the devices 305. For example, the memory device 110-c may include one or more memory dies 160 mounted to a printed circuit board or other substrate, and the printed circuit board of the memory device 110-c may include a thermally conductive portion through The thermal energy exchange between the memory device 110-c and the host device 305 is configured or otherwise supported, thereby reducing the temperature difference between the memory device 110-c and the host device temperature sensor 330-c. 
Additionally or alternatively, the host device 305 may include a printed circuit board, and the printed circuit board of the host device 305 may include a thermally conductive portion that is configured or otherwise supports the connection between the host device 305 and the memory device 110-c The heat energy is exchanged, thereby limiting the temperature difference between the memory device 110-c and the host device temperature sensor 330-c.For the system 100-c including the coupling component 310, the system 100-c may include the coupling component temperature sensor 340 (for example, in addition to or as one of the memory device temperature sensor 320 or the host device temperature sensor 330, or both. One or an alternative to both), which may be a component of the coupling component 310. The coupling component temperature sensor 340 may be embedded in the coupling component 310 (for example, as an integral component thereof), or otherwise coupled to the coupling component 310, which may include thermally conductive coupling of thermal paste or other couplings between thermally conductive materials, for example. The coupling component temperature sensor 340 can provide a relatively direct measurement or indication of the temperature of the coupling component 310 (for example, temperature T3). In some instances or conditions, it can provide one of the temperature of the memory device 110-c or the temperature of the host device 305. Appropriate measurement or indication of one or both (e.g., relatively indirect measurement or indication). In various examples, the coupling component temperature sensor 340 can communicate with the memory device 110-c or the host device 305 or both (e.g., can provide temperature indications thereto).Although illustrated as a single component, any one or more of the memory device temperature sensor 320, the host device temperature sensor 330, or the coupled component temperature sensor 340 may be repeated in the system 100-c. For example, the memory device 110-c may include a set of memory device temperature sensors 320 distributed across multiple memory dies 160 or otherwise distributed across different locations of the memory device 110-c. Additionally or alternatively, the host device 305 may include a set of host device temperature sensors 330 distributed across various components of the host device 305 or otherwise distributed across different locations of the host device 305. 
Additionally or alternatively, the coupling component 310 may include a set of coupling component temperature sensors 340 distributed across various components of the coupling component 310 or otherwise distributed across different positions of the coupling component 310.In various examples, multiple temperature sensors may be used in the system 100-c to provide aggregate indications (e.g., an average or otherwise aggregated temperature indication of a particular memory device 110-c, a group of memory devices 110-c Average or otherwise aggregated temperature, system 100-c average or otherwise aggregated temperature), minimum or maximum indication (e.g., minimum or maximum temperature of a particular memory device 110-c, a group of memory devices 110-c The minimum or maximum temperature of the system 100-c, the minimum or maximum temperature of the system 100-c), or a reasonable indication (for example, an indication that can be used to detect whether a temperature sensor fails or otherwise provide an unreasonable temperature indication), or each Kind of combination.In various examples, multiple temperature sensors may be used in the system 100-c to support offset determination, cross-calibration, or other processing of the indication of one temperature sensor based on the indication of another temperature sensor. In one example, the host device 305 or the memory device 110-c (for example, the external memory controller 105 or the device memory controller 155) can identify the temperature sensors of the same device (for example, the temperature sensors of the host device 330, the memory The offset or scaling difference between the device temperature sensors 320), and the offset or scaling can be applied to the temperature sensor of the same device (for example, as the addition or subtraction of the identified offset or scaling difference, as the identified offset The addition or subtraction of a certain ratio or weighted amount of shifting or scaling difference), which can be a cross calibration of the temperature sensor in the same device of the system 100-c. In another example, the host device 305 or the memory device 110-c (for example, the external memory controller 105 or the device memory controller 155) can identify the offset or scaling between the memory device temperature sensor 320 and the host device temperature sensor 330 Difference, and the offset or scaling difference can be applied to one or both of the indication from the memory device temperature sensor 320 or the indication from the host device temperature sensor 330 (eg, as an addition to the identified offset or scaling difference Or subtraction, as the addition or subtraction of a certain ratio or weighted amount of the identified offset or scaling difference), which may be an example of cross calibration of temperature sensors across different devices of the system 100-c.In some examples, the host device 305 or the memory device 110-c (for example, the external memory controller 105 or the device memory controller 155) can identify a temperature between the indicated temperature of the memory device 110-c and the indicated temperature of the host device 305 The host device 305 or the memory device 110-c may perform the described operation or exchange commands or signaling based on the identified difference. 
For example, the host device 305 can identify the difference (e.g., offset) between the memory device temperature sensor 320 and the host device temperature sensor 330, and the host device 305 can apply the identified difference (e.g., by addition or subtraction) to The host device temperature sensor 330 is later instructed to estimate the instruction of the memory device temperature sensor 320, and an operation is performed based on the estimated instruction of the memory device temperature sensor 320.In some cases, between the temperature of one device and the temperature of another device (for example, between the temperature of the host device 305 and the temperature of the memory device 110-c, or between the temperature of the coupling component 310 and the temperature of the memory device 110-c) The offset can be pre-configured at the host device 305 or the memory device 110-c (for example, stored in one or more fuses or anti-fuses), and the host device 305 or the memory device 110-c can use such This pre-configured offset is identified as described herein.The memory device temperature sensor 320, the host device temperature sensor 330, or the coupling component temperature sensor 340 may include different types of components that provide indications of temperature, and such indications may be transmitted, signaled, compared or compared in the digital domain or the analog domain. Deal with it in other ways. For example, any one or more of the memory device temperature sensor 320, the host device temperature sensor 330, or the coupling component temperature sensor 340 may include thermocouples, thermistors, semiconductor temperature sensors, and resistance temperature detectors. , RTD) or some other type of sensor.In some examples, the set of temperature sensors of a particular component of the system 100-c may be the same type of sensor. For example, each of the set of memory device temperature sensors 320 of memory device 110-c may be a semiconductor temperature sensor. In some examples, the components of the system 100-c may have multiple types of temperature sensors, which may support different temperature ranges, different operating conditions (eg, different operating modes, different power consumption, different parts of the powered components) , Redundancy or reasonableness testing. For example, the memory device temperature sensor 320 of the memory device 110-c may include a set of thermocouples and one or more RTDs.In various examples, the components of the system 100-c may use the same or different types of temperature sensors. For example, the memory device temperature sensor 320, the memory device 110-c may include a thermocouple, and the host device temperature sensor 330 of the host device 305 may include a thermocouple or an RTD or both. According to the described technology, various other combinations of temperature sensor types can be used in the memory device temperature sensor 320, the host device temperature sensor 330, or the coupling component temperature sensor 340.The system 100-c may also include various circuits or components configured to heat the memory device 110-c. In some examples, the system 100-c may include a memory device memory heater 350, which may be a component of the memory device 110-c. 
The memory device memory heater 350 may be embedded in any of the device memory controller 155, the memory die 160, the local memory controller 165, the memory array 170, or any other components included in the memory device 110-c (e.g., , As an integral component thereof), or coupled to any of the device memory controller 155, the memory die 160, the local memory controller 165, the memory array 170, or any other components included in the memory device 110-c. For example, the memory device memory heater 350 may include resistive elements or resistive paths (e.g., traces, wires, or electrodes) that convert electrical energy into thermal energy (e.g., via ohmic heating). In some examples, the memory device memory heater 350 may include a switch component configured to couple a voltage source to ground, chassis ground, or some other voltage source that supports current. Such resistive elements, resistive paths, grounding, voltage sources, or switching components can be associated with one or more various components of the memory device 110-c (e.g., device memory controller 155, memory die 160, local memory controller 165, The memory array 170) is associated.In some examples, a switch component included in a circuit configured to heat the memory device 110-c may be configured to selectively couple (e.g., via a resistive element, via a resistive path, via a short circuit) associated with the memory array 170 Two access lines. In some examples, the memory device 110-c may be configured to perform virtual operations (eg, virtual access operations, access operations not associated with exchanging information with the host device 305) that are configured to heat the memory Device 110-c (e.g., configured to increase the overall temperature of the memory array 170, configured to increase the temperature of the plurality of memory cells 205 of the memory array 170, in response to increasing the temperature of the memory device 110-c Deterministic operation), in this case, the memory device memory heater 350 may include parts of the local memory controller 165, the memory array 170, or both. In some examples, the memory device memory heater 350 may include circuitry configured to heat the memory device 110-c that is not used for access operations (e.g., configured to heat the memory device 110-c that is not in the access operation Used memory device 110-c components).Although shown within the illustrative boundaries of the memory device 110-c, the memory device memory heater 350 may also be coupled to (eg, fused to, fastened to, soldered to) the external package of the memory device 110-c, which may include For example, thermally conductive coupling of thermal paste or other couplings between thermally conductive materials. The memory device memory heater 350 may provide relatively direct heating of the memory device 110-c.Additionally or alternatively, the system 100-c may include a host device memory heater 360, which may be a component of the host device 305. The host device memory heater 360 may be embedded in the external memory controller 105 (for example, as an integral component thereof), or coupled to the external memory controller 105, or when such components are included in the host device 305, the host device The memory heater 360 may be embedded in the processor 120, the BIOS component 125, the peripheral component 130, or the I/O controller 135 or coupled to the processor 120, the BIOS component 125, the peripheral component 130, or the I/O controller 135. 
Additionally or alternatively, the system 100-c may include a host device memory heater 360, which may be a component of the host device 305. The host device memory heater 360 may be embedded in the external memory controller 105 (for example, as an integral component thereof), or coupled to the external memory controller 105, or, when such components are included in the host device 305, the host device memory heater 360 may be embedded in or coupled to the processor 120, the BIOS component 125, the peripheral component 130, or the I/O controller 135. In some examples, the host device memory heater 360 may include resistive elements or resistive paths (e.g., traces, wires, or electrodes) that convert electrical energy into thermal energy (e.g., via ohmic heating). Although shown within the illustrative boundaries of the host device 305, the host device memory heater 360 may also be coupled to (e.g., fused to, fastened to, soldered to) the external package of the host device 305, which may include thermally conductive coupling such as thermal paste or other couplings between thermally conductive materials. The host device memory heater 360 can provide relatively direct heating of the host device 305 and, in some instances or conditions, can provide heating suitable for raising the temperature of the memory device 110-c to support the techniques described herein (e.g., relatively indirect heating). For a system 100-c that includes the coupling component 310, the system 100-c may include a coupling component memory heater 370 (e.g., in addition to, or as an alternative to, one or both of the memory device memory heater 350 or the host device memory heater 360), which may be a component of the coupling component 310. The coupling component memory heater 370 may be embedded in the coupling component 310 (e.g., as an integral component thereof), or otherwise coupled to the coupling component 310, which may include thermally conductive coupling such as thermal paste or other couplings between thermally conductive materials. In some examples, the coupling component memory heater 370 may include a resistive element or resistive path that converts electrical energy into thermal energy (e.g., via ohmic heating). In some examples, the coupling component memory heater 370 may refer to a circulating fluid or fluid path normally associated with a cooling function (for example, when the coupling component 310 includes a radiator or manifold associated with a liquid cooling system), but which may be configured to provide heating to the memory device 110-c under certain conditions (e.g., when the fluid from a fluid source has a higher temperature than the memory device 110-c). The coupling component memory heater 370 may provide relatively direct heating of the coupling component 310 and, in some instances or conditions, may provide heating suitable for increasing the temperature of the memory device 110-c or the host device 305 (for example, relatively indirect heating). In various examples, the coupling component memory heater 370 may communicate with the memory device 110-c or the host device 305 or both (e.g., receive control commands, activation commands, or deactivation commands therefrom). Although illustrated as a single component, any one or more of the memory device memory heater 350, the host device memory heater 360, or the coupling component memory heater 370 may be repeated in the system 100-c. For example, the memory device 110-c may include a set of memory device memory heaters 350 distributed across multiple memory dies 160, within each memory die 160, or otherwise distributed across different locations of the memory device 110-c. Additionally or alternatively, the host device 305 may include a set of host device memory heaters 360 distributed across various components of the host device 305 or otherwise distributed across different locations of the host device 305.
Additionally or alternatively, the coupling component 310 may include a set of coupling component storage heaters 370 distributed across various subcomponents of the coupling component 310 or otherwise distributed across different locations of the coupling component 310. In various examples, multiple storage heaters can be used to support relatively uniform heating (e.g., distributed heat flow), relatively uniform component temperature (e.g., minimize or reduce hot or cold spots across or within components), Specific heating of certain components or sub-components (eg, heating of the active portion of the memory device 110, heating of the active or target memory device 110), or various combinations thereof.The system 100-c may also include various signaling between the memory device 110-c and the host device 305 (for example, the external memory controller 105 of the host device 305), which can support the memory device 110-c and the host device 305 Various operations or operations in between. For example, the system 100-c may support data signaling 380, temperature signaling 385, initialization signaling 390, mode signaling 395, or various combinations thereof. Each of the described signaling may be conveyed via channel 115 (e.g., those channels described with reference to system 100 of FIG. 1).The data signaling 380 may include two-way data exchange, such as data delivered as part of reading or writing to the memory unit 205 of the memory device 110-c. The data signaling 380 may be transported, for example, via the data channel 190 or some other operating channel or line between the memory device 110-c and the host device 305 as described with reference to the system 100 of FIG.The temperature signaling 385 may include various indications of the temperature communicated between the memory device 110-c or the host device 305, and may be via a data channel 190 or an EDC pin associated with another channel 192 or a joint test action team ( JTAG) signals (such as those described with reference to the system 100 of FIG. 1) or some other temperature feedback channel or wire transport. For example, the memory device 110-c and the host device 305 may exchange explicit indications of temperature (for example, a digital value expressing the temperature in Fahrenheit or Celsius) or an implicit indication of temperature (for example, the voltage of a thermocouple) , Or across the RTD's voltage or current originally associated with a specific temperature in Fahrenheit or Celsius). For example, the memory device 110-c may provide an indication of the temperature of the memory device 110-c (eg, from the memory device temperature sensor 320) to the host device 305 via the temperature signaling 385. For example, the host device 305 may provide an indication of the temperature of the host device 305 (eg, from the host device temperature sensor 330) to the memory device 110-c via the temperature signaling 385. This type of temperature signaling can be used to support various examples of the described techniques for controlled and mode-dependent heating of memory devices.The initialization signaling 390 may include various indications of initialization operations or trigger events for initialization performed by the memory device 110-c or the host device 305, and may be via the data channel 190 or with another channel 192 (eg, power channel) The associated EDC pins or JTAG signals (such as those described with reference to the system 100 of FIG. 
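As a sketch of converting such an implicit indication into an explicit digital one, the following assumes a linear RTD model (a common first-order approximation); the PT100 constants are standard nominal values but are illustrative here, not taken from the source:

# Hypothetical conversion of an implicit temperature indication (an RTD
# resistance) into an explicit digital value in degrees Celsius, using the
# first-order model R(T) = R0 * (1 + alpha * T).
R0_OHMS = 100.0      # nominal PT100 resistance at 0 C (illustrative)
ALPHA = 0.00385      # typical PT100 temperature coefficient, 1/C

def rtd_resistance_ohms(v_volts: float, i_amps: float) -> float:
    """Resistance from the measured voltage across and current through the RTD."""
    return v_volts / i_amps

def rtd_temp_c(r_ohms: float) -> float:
    """Invert R(T) = R0 * (1 + alpha * T) for T."""
    return (r_ohms / R0_OHMS - 1.0) / ALPHA

r = rtd_resistance_ohms(v_volts=0.10770, i_amps=0.001)  # 1 mA excitation
print(f"{rtd_temp_c(r):.1f} C")                          # -> 20.0 C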
The initialization signaling 390 may include various indications of initialization operations performed by the memory device 110-c or the host device 305, or of trigger events for initialization, and may be conveyed via the data channel 190, an EDC pin or JTAG signal associated with another channel 192 (e.g., a power channel, such as those described with reference to the system 100 of FIG. 1), or some other initialization feedback channel or line. For example, initialization may be triggered by power being applied or otherwise provided to the system 100-c (e.g., via the input 145) or the host device 305, or by the memory device 110-c otherwise being activated or enabled. In some examples, the initialization signaling 390 may include the provision of power to the memory device 110-c (e.g., via a power channel), or may include an explicit command for the memory device 110-c to perform initialization, either or both of which can trigger the initialization of the memory device 110-c. In some examples, after power is provided to the memory device 110-c, the memory device 110-c may perform initialization without signaling from the host device 305, but the memory device 110-c may provide an indication that the initialization operation is being performed (for example, via the initialization signaling 390) to the host device 305. In some examples, initialization may be performed by the memory device 110-c or the host device 305 from an idle state or an inactive state (for example, when exiting the idle state or the inactive state), and the initialization may be triggered by the initialization signaling 390, whether or not there is a transition of power supplied to the memory device 110-c or the host device 305. The mode signaling 395 may include various indications of the operating mode in which the memory device 110-c or the host device 305 operates, and may be conveyed via the data channel 190, an EDC pin or JTAG signal associated with another channel 192 (e.g., those described with reference to the system 100 of FIG. 1), or some other mode feedback channel or line. A first operating mode (e.g., a refresh mode, a self-refresh mode) may be associated with refresh operations or self-refresh operations of the memory device 110-c, which may include periodic refresh of the logic states stored by the memory cells 205 of the memory device 110-c. During the first mode, the memory device 110-c may not perform, or may not be available for performing, read or write operations. Therefore, the first mode may be associated with a lack of data transfer between the memory device 110-c and the host device 305 (e.g., an operating mode associated with an absence of data signaling 380). In some instances, operation in the first mode may be triggered by an indication of the temperature of the memory device 110-c (e.g., when the first mode is associated with a relatively low temperature mode or a relatively high temperature mode), or the first mode may be associated with operations configured based on an indication of the temperature of the memory device 110-c (e.g., when the first mode is associated with relatively low temperature operation or relatively high temperature operation). For example, the first mode may be associated with a relatively lower temperature mode and may include or support refresh or self-refresh operations, because leakage rates may be reduced (slower) at lower temperatures (e.g., the memory cells 205 may exhibit a longer retention time of stored logic states).
In various examples, the mode signaling 395 may include the memory device 110-c indicating to the host device 305 that the memory device 110-c is operating in the first mode (e.g., that the memory device 110-c is not available for access commands or operations), or the mode signaling 395 may include the host device 305 instructing the memory device 110-c to enter the first mode (e.g., a command to operate according to the first mode). A second operating mode (e.g., a read/write mode) may be associated with relatively higher temperature operation, and may include or support read operations of the memory device 110-c, write operations of the memory device 110-c, or both. Therefore, the second mode may be associated with the presence of data transfer between the memory device 110-c and the host device 305 (e.g., an operating mode associated with the presence of data signaling 380). In some instances, operation in the second mode may be triggered by an indication of the temperature of the memory device 110-c (e.g., when the second mode is associated with a relatively low temperature mode or a relatively high temperature mode), or the second mode may be associated with operations configured based on an indication of the temperature of the memory device 110-c (e.g., when the second mode is associated with relatively low temperature operation or relatively high temperature operation). For example, when the second mode is associated with particular access operations (e.g., read operations, write operations), such operations may be performed more quickly, efficiently, or reliably when the memory device 110-c is at a relatively higher temperature (e.g., at a temperature within an operating temperature range that may be higher than the ambient temperature of the environment 302). Therefore, operating in the second mode may be associated with a temperature reached based at least in part on the heating of the memory device 110-c under certain conditions. In various examples, the mode signaling 395 may include the memory device 110-c indicating to the host device 305 that the memory device 110-c is operating in the second mode (e.g., that the memory device 110-c is available for access commands or operations), or the mode signaling 395 may include the host device 305 commanding the memory device 110-c to enter the second mode, which may include a request to access the memory device 110-c. In some examples, when operating in the second mode, the memory device 110-c may provide a "ready to operate" signal via the mode signaling 395. Although described with reference to a first mode and a second mode, the techniques described herein can be applied to any number or type of modes, such as low or high power modes, idle modes, standby modes, high performance modes, energy saving modes, and so on. In other words, the controlled and mode-dependent heating of the memory device 110-c can generally support a memory temperature aligned with the access type, such as allowing or achieving a relatively low temperature for refresh operations with lower power consumption, or a relatively high temperature for read or write operations with higher performance (for example, increased data throughput, efficiency, or reliability). In various examples, operating modes may be selected, activated (e.g., allowed, available, supported), or deactivated (e.g., not allowed, unavailable, restricted) based on the indicated temperature (e.g., the temperature before, during, or after heating the memory device 110-c).
Therefore, the system 100-c may be configured to activate, deactivate, or otherwise control one or more of the memory device memory heater 350, the host device memory heater 360, or the coupling component memory heater 370 based on various indications of the temperature of the memory device 110-c (e.g., from one or more of the memory device temperature sensor 320, the host device temperature sensor 330, the coupling component temperature sensor 340, or a combination thereof). In some examples, this control of memory heaters by the system 100-c may be based at least in part on one or more of the data signaling 380, the temperature signaling 385, the initialization signaling 390, or the mode signaling 395, or some combination thereof, or this control of memory heaters by the system 100-c may additionally be accompanied by such signaling. Figure 4 illustrates an example of a temperature profile 400 associated with the controlled and mode-dependent heating of a memory device 110 in accordance with aspects disclosed herein. The temperature profile 400 may illustrate an example of an indicated temperature 405 of the memory device 110-c when performing controlled and mode-dependent heating in the system 100-c described with reference to FIG. 3. In various examples, the indicated temperature 405 may describe the temperature indicated at the memory device 110-c (e.g., T1, as indicated by one or more memory device temperature sensors 320), the temperature indicated at the host device 305 (e.g., T2, as indicated by one or more host device temperature sensors 330), the temperature indicated at the coupling component 310 (e.g., T3, as indicated by one or more coupling component temperature sensors 340), or some combination thereof. In some examples, the indicated temperature 405 may indicate the average indicated temperature of a set of temperature sensors, the minimum indicated temperature of a set of temperature sensors, the maximum indicated temperature of a set of sensors, or some other combination of or operation on indications from a set of temperature sensors.
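As a minimal sketch of the aggregation just described (illustrative only; the choice of aggregation is an assumption, not prescribed by the source):

# Hypothetical aggregation of a set of sensor indications into a single
# indicated temperature; "min", "max", and "mean" mirror the combinations
# described above.
from statistics import mean

def indicated_temperature(readings_c: list[float], how: str = "mean") -> float:
    if how == "min":
        return min(readings_c)
    if how == "max":
        return max(readings_c)
    return mean(readings_c)

sensors = [1.5, 2.0, 0.5]                       # e.g., three on-die sensors
print(indicated_temperature(sensors, "min"))    # -> 0.5 (conservative choice)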
At t0, the system 100-c, the host device 305, or the memory device 110-c may be in a standby, idle, or powered-off state or mode. In some instances, at t0, the memory device 110-c may not perform read operations, write operations, or other access operations. In some examples, the system 100-c, the host device 305, or the memory device 110-c may not be receiving power (e.g., from a power supply of the system 100-c or the host device 305, or from the input 145). In a case where the system 100-c or the host device 305 is a vehicle, the vehicle may have been turned off before t0, in a mode where the ignition system or other electrical or propulsion systems are disabled or otherwise restricted, or in some other mode of operation in which the vehicle or a certain subsystem of the vehicle is powered down or in a low-power or idle state. The indicated temperature 405 at t0 may be equal to the ambient temperature TA associated with the environment 302. In other words, the system 100-c, the host device 305, or the memory device 110-c may have reached thermal equilibrium with the environment 302, where the indicated temperature of the memory device 110-c is equal to TA. In other examples, the indicated temperature 405 may not have reached equilibrium with the environment 302, but the indicated temperature 405 may otherwise have reached a relatively low temperature (e.g., below a threshold, outside an operating temperature range). The indicated temperature 405 at t0 may be lower than, or outside of, the operating temperature range associated with the memory device 110-c. For example, the temperature Tth,1 may represent a first threshold associated with the memory device 110-c (e.g., a lower threshold or limit of the operating temperature range). In some examples, the first threshold Tth,1 may be 0°C, but other examples may include the first threshold at a different temperature (for example, a temperature higher or lower than 0°C). At t1, the indicated temperature 405 may be determined by one or more components of the system 100-c, and since the indicated temperature 405 is at or below or otherwise meets the first threshold Tth,1, the system 100-c may initiate heating of the memory device 110-c according to various techniques, which in some examples may include the system 100-c preheating the memory device 110-c before performing a particular access operation. In other words, heating of the memory device 110-c may be initiated in response to a determination that the temperature of the memory device 110-c (e.g., an overall temperature, an aggregate temperature) should increase. Therefore, after one or more of the operations at t1, the indicated temperature 405 may rise. In some examples, the system 100-c, the host device 305, or the memory device 110-c may change to a different operating mode or state as part of the operations at t1, which may be associated with initialization. For example, any one or more of the system 100-c, the host device 305, or the memory device 110-c may receive power (e.g., via a power channel), or may receive some other signaling commanding or triggering a change of operating mode or state (e.g., via the initialization signaling 390 or the mode signaling 395), and a circuit configured to heat the memory device 110-c may be activated in response. In an example where the indicated temperature 405 at t1 is not below the first threshold Tth,1, the system 100-c can proceed with access operations without activating the circuit configured to heat the memory device 110-c, but may activate the circuit configured to heat the memory device 110-c when the indicated temperature 405 decreases (e.g., based on a comparison between the indicated temperature 405 and the third threshold Tth,3, such as the operations described with reference to t4). In a first instance of t1, the memory device 110-c may receive power, receive an initialization command, or both. For example, when the system 100-c or the host device 305 is a vehicle, t1 may correspond to the activation or initialization of the ignition system, the propulsion system, or some other system of the vehicle. In response, the memory device 110-c may initialize itself (e.g., the memory device 110-c may perform an initialization operation).
As part of the initialization at t1, or otherwise based on a change in the mode or state of the memory device 110-c (e.g., based on the memory device 110-c detecting initialization), the memory device 110-c may determine the indicated temperature 405 (e.g., from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 405 against the first threshold Tth,1 (for example, determine that the indicated temperature 405 is at or below or otherwise meets the first threshold Tth,1). Based on the comparison or evaluation, the memory device 110-c may activate, enable, or otherwise control circuits or other components configured to heat the memory device 110-c (e.g., activate one or more memory device memory heaters 350, or send a command to activate one or more host device memory heaters 360 or one or more coupling component memory heaters 370). In some examples, based on one or more of the operations at t1 (e.g., based on detecting initialization), the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is in an initialization mode or that the memory device 110-c is being heated, or may indicate a restriction on access operations of the memory device 110-c (for example, that the memory device 110-c is not available for read or write commands, or that read or write operations are disabled for the memory device 110-c). In some examples, these indications may be conveyed via the initialization signaling 390, the mode signaling 395, or some other signaling such as EDC or JTAG signals or data lines. In a second instance of t1, the host device 305 may receive power, receive an initialization command, or both. In response, the host device 305 may perform an initialization operation of the host device 305, which may include initializing the memory device 110-c (e.g., via the initialization signaling 390, via the mode signaling 395). As part of the initialization at t1, or otherwise based on a change in the mode or state of the host device 305, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 405 (e.g., from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 405 against the first threshold Tth,1 (for example, determine that the indicated temperature 405 is at or below or otherwise meets the first threshold Tth,1). In other examples, the host device 305 may otherwise identify or determine that the temperature of the memory device 110-c is at or below or otherwise meets the first threshold Tth,1 (for example, via initialization signaling 390 or mode signaling 395 indicating that the temperature of the memory device 110-c is at or below or otherwise meets the threshold). Based on the comparison or other identification or determination at t1, the host device 305 may activate, enable, or otherwise control the circuits or other components configured to heat the memory device 110-c (e.g., activate one or more host device memory heaters 360, or send a command to activate one or more memory device memory heaters 350 or one or more coupling component memory heaters 370).
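A minimal sketch of the initialization-time check described for t1 follows; all names are hypothetical, and the threshold corresponds to Tth,1 from the profile:

# Hypothetical device-side flow at initialization (t1): read the indicated
# temperature, compare it against the lower threshold Tth,1, and activate
# heating while reporting that access operations are restricted.
T_TH1_C = 0.0  # first threshold (lower operating limit), illustrative value

def on_initialization(read_temp_c, heater_on, signal_host) -> None:
    temp = read_temp_c()
    if temp <= T_TH1_C:
        heater_on()                       # activate memory device heater(s)
        signal_host("initializing/heating; reads and writes disabled")
    else:
        signal_host("ready to operate")   # no preheating needed

# Example wiring with stand-in callables:
on_initialization(
    read_temp_c=lambda: -12.0,
    heater_on=lambda: print("heater activated"),
    signal_host=lambda msg: print("to host:", msg),
)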
In an example where the system 100-c includes multiple memory devices 110, the host device may activate circuits configured to heat all of the memory devices 110, or may activate circuits configured to heat a subset of the memory devices 110, where such a subset is selected based on operating conditions, the type of memory device 110, the type of data to be exchanged with the memory device 110, the type of access operation, or other considerations. In some instances (e.g., according to the second instance of t1), based on one or more of the operations at t1, the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is in an initialization mode or that the memory device 110-c is being heated, or may indicate a restriction on access operations of the memory device 110-c (for example, that the memory device 110-c is not available for read or write commands, or that read or write operations are disabled for the memory device 110-c). In some instances, the host device 305 may otherwise understand or recognize that the memory is not available for access operations (e.g., based on the temperature signaling 385 from the memory device 110-c, based on having sent an initialization command via the initialization signaling 390, or based on not having received a "ready to operate" signal). Therefore, based on one or more of the operations at t1, the host device may suppress one or more commands to access the memory device after t1, as illustrated in the sketch following this paragraph. At t2, the indicated temperature 405 may exceed the first threshold Tth,1 (e.g., based on heating or operation of the memory device 110-c). In some examples, the first threshold Tth,1 may be the lower operating threshold of the memory device 110-c. The indicated temperature 405 may be determined by one or more components of the system 100-c at t2, and because the indicated temperature 405 is at or above or otherwise meets the first threshold Tth,1, the memory device 110-c may become available for access operations (for example, read operations, write operations). Although the indicated temperature of the memory device 110-c may be at or above or otherwise meet the first threshold Tth,1, the heating of the memory device 110-c may continue after t2 (for example, when a different threshold is used for deactivating or otherwise controlling the memory heating).
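A sketch of the host-side gating described above (hypothetical names; queuing commands until a "ready to operate" indication is received is one possible way to suppress access commands, not the only one):

# Hypothetical host-side gating: access commands are queued while the memory
# device reports that it is initializing/heating, and issued once a
# "ready to operate" indication is received (e.g., via mode signaling).
from collections import deque

class HostCommandGate:
    def __init__(self) -> None:
        self.ready = False
        self.pending: deque[str] = deque()

    def submit(self, command: str) -> None:
        if self.ready:
            self._issue(command)
        else:
            self.pending.append(command)   # suppress until the device is ready

    def on_ready_to_operate(self) -> None:
        self.ready = True
        while self.pending:
            self._issue(self.pending.popleft())

    def _issue(self, command: str) -> None:
        print("issuing:", command)

gate = HostCommandGate()
gate.submit("READ 0x1000")       # queued; device still heating
gate.on_ready_to_operate()       # -> issuing: READ 0x1000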
In a first example of t2, the memory device 110-c may determine the indicated temperature 405 (for example, from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 405 against the first threshold Tth,1 (e.g., determine that the indicated temperature 405 is at or above or otherwise meets the first threshold Tth,1). Based on the comparison or evaluation at t2, the memory device 110-c may transition to an active or enabled state (e.g., stop restricting access operations), which may be accompanied by the memory device 110-c signaling (e.g., to the host device 305, via the mode signaling 395) that the memory device 110-c is available for access operations (e.g., via a "ready to operate" signal). In a second instance of t2, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 405 (for example, from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 405 against the first threshold Tth,1 (for example, determine that the indicated temperature 405 is at or above or otherwise meets the first threshold Tth,1). In other examples, the host device 305 may otherwise identify or determine that the temperature of the memory device 110-c is at or above or otherwise meets the first threshold Tth,1 (for example, via initialization signaling 390 or mode signaling 395 indicating that the temperature of the memory device 110-c is at or above or otherwise meets the threshold). Based on the comparison or other identification or determination at t2 (for example, based on an indication that the memory device is available, based on receiving a "ready to operate" signal), the host device 305 may proceed to issue or transmit commands (for example, via the data signaling 380) for accessing the memory device. At t3, the indicated temperature 405 may exceed the second threshold Tth,2 (e.g., based on heating or operation of the memory device 110-c). In some examples, the second threshold Tth,2 may be a threshold of the memory device 110-c that is different from the lower threshold of the operating temperature range of the memory device 110-c (for example, different from Tth,1, greater than Tth,1). In some examples, the second threshold Tth,2 may be 10°C, but other examples may include the second threshold at a different temperature (for example, a temperature higher or lower than 10°C). In some examples, the second threshold Tth,2 may be configured, set, or selected to reduce the rate or duty cycle of activating and deactivating memory heating (e.g., when the second threshold Tth,2 is different from the first threshold Tth,1). Accordingly, the difference between the first threshold Tth,1 and the second threshold Tth,2, or the band between the first threshold Tth,1 and the second threshold Tth,2, may be referred to as a temperature hysteresis range or hysteresis band associated with heating the memory device 110-c. Although the second threshold Tth,2 is described as different from the first threshold Tth,1, in some examples, the second threshold Tth,2 may be the same as the first threshold Tth,1 (for example, when the system 100-c is configured with a single threshold for memory heating), and activating, deactivating, or otherwise controlling the heating of the memory device 110-c may be based on the relationship between the indicated temperature 405 and the single threshold (e.g., whether the indicated temperature 405 is above or below the threshold).
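A minimal sketch of the hysteresis behavior just described (hypothetical names; heating deactivates at an upper threshold, cf. Tth,2, and reactivates at a lower threshold, cf. the third threshold Tth,3 described at t4 below):

# Hypothetical hysteresis control of a memory heater: deactivate once the
# indicated temperature meets an upper threshold and reactivate once it falls
# to a lower threshold, reducing the on/off duty cycle.
class HysteresisHeaterControl:
    def __init__(self, t_on_c: float, t_off_c: float) -> None:
        assert t_on_c <= t_off_c          # lower/upper bounds of the band
        self.t_on_c = t_on_c              # reactivate at or below this
        self.t_off_c = t_off_c            # deactivate at or above this
        self.heating = False

    def update(self, indicated_temp_c: float) -> bool:
        if self.heating and indicated_temp_c >= self.t_off_c:
            self.heating = False
        elif not self.heating and indicated_temp_c <= self.t_on_c:
            self.heating = True
        return self.heating

ctrl = HysteresisHeaterControl(t_on_c=5.0, t_off_c=10.0)
for t in (-3.0, 4.0, 9.0, 10.5, 7.0, 4.5):
    print(t, ctrl.update(t))   # heats until 10.5 is reached, resumes at 4.5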
The indicated temperature 405 may be determined by one or more components of the system 100-c at t3, and because the indicated temperature 405 is at or above or otherwise meets the second threshold Tth,2, the system 100-c may disable, deactivate, or otherwise adjust the heating of the memory device 110-c according to various techniques. Therefore, after one or more of the operations at t3, the indicated temperature 405 may drop (e.g., when the ambient temperature TA is lower than the indicated temperature 405, when heat loss from cooling is greater than heat generated by operating the memory device 110-c), which may or may not follow an overshoot of the indicated temperature 405 beyond the second threshold Tth,2 after t3 (for example, due to thermal diffusion across components, due to a delay between the application of heat and the temperature rise at a temperature sensor, or due to signaling delay or processing delay). The memory device 110-c may remain available for access operations (e.g., read operations, write operations) through t3. In a first example of t3, the memory device 110-c may determine the indicated temperature 405 (for example, from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 405 against the second threshold Tth,2 (e.g., determine that the indicated temperature 405 is at or above or otherwise meets the second threshold Tth,2). Based on the comparison or evaluation, the memory device 110-c may disable, deactivate, or otherwise adjust the circuits or other components configured to heat the memory device 110-c (e.g., deactivate one or more memory device memory heaters 350, or send a command to deactivate one or more host device memory heaters 360 or one or more coupling component memory heaters 370). In some examples, based on one or more of the operations at t3, the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is not being heated. In some examples, these indications may be conveyed via the initialization signaling 390, the mode signaling 395, or some other signaling such as EDC or JTAG signals or data lines. In a second instance of t3, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 405 (for example, from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 405 against the second threshold Tth,2 (for example, determine that the indicated temperature 405 is at or above or otherwise meets the second threshold Tth,2). Based on the comparison or other identification or determination at t3, the host device 305 may disable, deactivate, or otherwise adjust or control the circuits or other components configured to heat the memory device 110-c (e.g., deactivate one or more host device memory heaters 360, or send a command to deactivate one or more memory device memory heaters 350 or one or more coupling component memory heaters 370). The host device can continue to transmit or issue access commands to access the memory device 110-c through t3. At t4, the indicated temperature 405 may fall to or below the third threshold Tth,3 (e.g., based on cooling of the memory device 110-c).
In some examples, the third threshold Tth,3 may be a threshold of the memory device 110-c that is different from the lower threshold of the operating temperature range of the memory device 110-c (for example, different from Tth,1, greater than Tth,1), or different from the threshold associated with disabling or deactivating the heating of the memory device 110-c (e.g., different from Tth,2, less than Tth,2). In some examples, the third threshold Tth,3 may be 5°C, but other examples may include the third threshold at a different temperature (for example, a temperature higher or lower than 5°C). Although the third threshold Tth,3 is described as different from the first threshold Tth,1 and the second threshold Tth,2, in some examples, the third threshold Tth,3 may be the same as the first threshold Tth,1, the second threshold Tth,2, or both (for example, when the system 100-c is configured with a single threshold for memory heating), and activating, deactivating, or otherwise controlling the heating of the memory device 110-c may be based on the relationship between the indicated temperature 405 and the single threshold (e.g., whether the indicated temperature 405 is above the threshold or below the threshold). The indicated temperature 405 may be determined by one or more components of the system 100-c at t4, and because the indicated temperature 405 is at or below or otherwise meets the third threshold Tth,3, the system 100-c may again initiate the heating of the memory device 110-c according to various techniques. Therefore, after one or more of the operations at t4, the indicated temperature 405 may rise. The memory device 110-c may remain available for access operations (e.g., read operations, write operations) through t4. In a first example of t4, the memory device 110-c may determine the indicated temperature 405 (e.g., from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 405 against the third threshold Tth,3 (e.g., determine that the indicated temperature 405 is at or below or otherwise meets the third threshold Tth,3). Based on the comparison or evaluation, the memory device 110-c may activate, enable, or otherwise control circuits or other components configured to heat the memory device 110-c (e.g., activate one or more memory device memory heaters 350, or send a command to activate one or more host device memory heaters 360 or one or more coupling component memory heaters 370). In some examples, based on one or more of the operations at t4, the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is being heated. In some examples, these indications may be conveyed via the initialization signaling 390, the mode signaling 395, or some other signaling such as EDC or JTAG signals or data lines. In a second instance of t4, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 405 (for example, from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 405 against the third threshold Tth,3 (for example, determine that the indicated temperature 405 is at or below or otherwise meets the third threshold Tth,3).
In other examples, the host device 305 may otherwise identify or determine that the temperature of the memory device 110-c is at or below or otherwise meets the third threshold Tth,3 (e.g., via initialization signaling 390 or mode signaling 395 indicating that the temperature of the memory device 110-c is at or below or otherwise meets the threshold). Based on the comparison or other identification or determination at t4, the host device 305 may activate, enable, or otherwise control the circuits or other components configured to heat the memory device 110-c (e.g., activate one or more host device memory heaters 360, or send a command to activate one or more memory device memory heaters 350 or one or more coupling component memory heaters 370). The host device can continue to transmit or issue access commands to access the memory device 110-c through t4. At t5, the indicated temperature 405 may again exceed the second threshold Tth,2 (e.g., based on heating or operation of the memory device 110-c). The indicated temperature 405 may be determined by one or more components of the system 100-c at t5, and because the indicated temperature 405 is at or above or otherwise meets the second threshold Tth,2, the system 100-c may disable, deactivate, or otherwise adjust the heating of the memory device 110-c according to various techniques (e.g., similar to those described with reference to the indicated temperature 405 exceeding the second threshold Tth,2 at t3). Therefore, after one or more of the operations at t5, the indicated temperature 405 may drop again (for example, when the ambient temperature TA is lower than the indicated temperature 405, when heat loss from cooling is greater than heat generated by operating the memory device 110-c), which may or may not follow an overshoot of the indicated temperature 405 beyond the second threshold Tth,2 after t5 (for example, due to thermal diffusion across components, due to a delay between the application of heat and the temperature rise at a temperature sensor, or due to signaling delay or processing delay). The memory device 110-c can remain available for access operations (e.g., read operations, write operations) through t5, and the system 100-c can continue to perform various operations, including the described techniques for controlled memory heating. In various examples of the techniques described with reference to FIGS. 3 to 5, any one or more of the described thresholds (for example, the first threshold Tth,1, the second threshold Tth,2, the third threshold Tth,3, the fourth threshold Tth,4) may be configured, identified, or determined according to various techniques. For example, any one or more of the thresholds may be configured at a device (e.g., as a static value or level or a set of static values or levels at the memory device 110-c, or as a static value or level or a set of static values or levels at the host device 305), which may be stored in a mode register, trim parameters, or one or more non-volatile storage elements (for example, fuses, anti-fuses) of the corresponding device, the non-volatile storage elements being configured to store an indication of one or more configurations or thresholds of the corresponding device. In various examples, the memory device 110-c or the host device 305 can identify the configuration (e.g., a configured threshold) by accessing these non-volatile storage elements.
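As a sketch of such a configuration lookup (purely illustrative; the register layout, field order, and one-signed-byte-per-threshold encoding are assumptions, not anything specified in the source):

# Hypothetical decoding of configured temperature thresholds from a mode
# register or fuse map; the layout (one signed byte per threshold, in degrees
# Celsius) is an assumption for illustration.
import struct

def decode_thresholds(raw: bytes) -> dict[str, int]:
    tth1, tth2, tth3, tth4 = struct.unpack("4b", raw)  # four signed 8-bit values
    return {"Tth,1": tth1, "Tth,2": tth2, "Tth,3": tth3, "Tth,4": tth4}

# e.g., fuse/mode-register contents for 0, 10, 5, and 12 degrees C:
print(decode_thresholds(bytes([0, 10, 5, 12])))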
Additionally or alternatively, any one or more of the thresholds may be determined or identified at a device based at least in part on the operating mode of the device (e.g., a refresh mode, an access mode, a read/write mode, an idle mode, an active mode) or the operating mode of a different device (e.g., based on signaling of the operating mode of another device, such as the mode signaling 395). In some examples, any one or more of the thresholds may be determined or identified at a device based at least in part on the operating conditions of the device. For example, when the indicated temperature 405 experiences rapid fluctuations (for example, when the ambient temperature TA of the environment 302 is particularly low), the second threshold Tth,2 can be set relatively high (for example, for a wider hysteresis band), or the third threshold Tth,3 can be set relatively high (e.g., to limit overshoot of the indicated temperature 405 beyond the operating temperature range of the memory device 110-c), or both. The described comparisons or evaluations of an indicated temperature (e.g., the indicated temperature 405 or 505) against various thresholds may be performed by one or both of the memory device 110-c or the host device 305 according to various techniques, so the described techniques may include operations performed at the device memory controller 155 or the external memory controller 105. For example, when the indicated temperature is represented in the digital domain at the memory device 110-c or the host device 305, these comparisons may be performed in the digital domain at a processor or digital comparator (e.g., as a comparison of binary values, as a comparison of integer values, as a comparison of floating-point values). When the indicated temperature is represented in the analog domain at the memory device 110-c or the host device 305 (for example, as the voltage of a thermocouple, or as a voltage or current across an RTD), these comparisons may be performed in the analog domain at a processor, a comparator, a transistor (for example, between a gate and a source or drain node), or other circuitry (for example, as a comparison of a voltage with a reference voltage indicating a threshold, or as a comparison of a current with a reference current indicating a threshold). In addition, although the operations described with reference to FIGS. 3 to 5 are described as including activating and deactivating a circuit configured to heat the memory device 110-c, more complex forms of control may be applied. For example, the degree of heating (for example, the amount of heat flux) can be controlled, adjusted, or otherwise modulated by various control techniques, such as proportional-integral-derivative (PID) control, pulse-width modulation (PWM), and other techniques. In some examples, temperature thresholds or levels can be applied to these control techniques, such as target temperatures, dead bands, gain scheduling, and other techniques.
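As a sketch of one such technique, the following shows proportional control of heater duty cycle suitable for PWM; it is a degenerate (P-only) case of PID, and the gain, target, and clamping are illustrative assumptions rather than anything specified in the source:

# Hypothetical proportional control of heater duty cycle: the further the
# indicated temperature is below the target, the larger the PWM duty cycle,
# clamped to [0, 1].
def heater_duty_cycle(indicated_c: float, target_c: float,
                      kp: float = 0.1) -> float:
    error_c = target_c - indicated_c          # positive when too cold
    return min(1.0, max(0.0, kp * error_c))   # duty cycle in [0, 1]

for temp in (-10.0, 0.0, 4.0, 5.0, 8.0):
    print(temp, heater_duty_cycle(temp, target_c=5.0))
# -10.0 -> 1.0 (full heating), 4.0 -> 0.1, 5.0 and above -> 0.0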
Figure 5 illustrates an example of a temperature profile 500 associated with the controlled and mode-dependent heating of a memory device 110 in accordance with aspects disclosed herein. The temperature profile 500 may illustrate an example of an indicated temperature 505 of the memory device 110-c when performing mode-dependent heating in the system 100-c described with reference to FIG. 3. In various examples, the indicated temperature 505 may describe the temperature indicated at the memory device 110-c (e.g., T1, as indicated by one or more memory device temperature sensors 320), the temperature indicated at the host device 305 (e.g., T2, as indicated by one or more host device temperature sensors 330), the temperature indicated at the coupling component 310 (e.g., T3, as indicated by one or more coupling component temperature sensors 340), or some combination thereof. In some examples, the indicated temperature 505 may indicate the average indicated temperature of a set of temperature sensors, the minimum indicated temperature of a set of temperature sensors, the maximum indicated temperature of a set of sensors, or some other combination of or operation on indications from a set of temperature sensors. At t0, the memory device 110-c may operate in a first mode of the memory device 110-c. In various examples, the first mode may be a refresh mode, a self-refresh mode, a standby mode, a memory device idle mode, a host device idle mode, or another mode in which data is not exchanged between the memory device 110-c and another component of the system (for example, data is not exchanged with the host device 305). Thus, at t0, the memory device 110-c may operate in a mode associated with refreshing one or more memory cells 205 of the memory device 110-c (e.g., based at least in part on the memory device 110-c operating in the first mode of the memory device 110-c). In some examples, a power source of the memory device 110-c may be determined (e.g., by the memory device 110-c, by the host device 305, via the mode signaling 395), and operating the memory device 110-c in the first mode (e.g., an operation of the memory device 110-c, a command of the host device 305) may be based at least in part on the determined power source. For example, in a vehicle application, operating in the first mode may be based on a determination (e.g., by the host device 305, by the memory device 110-c) that the memory device 110-c is operating on battery power rather than an alternator or generator. In some instances, the memory device 110-c may receive power, but the memory device 110-c may not be available for read operations or write operations (e.g., due to restrictions or other operating conditions associated with the first mode of the memory device 110-c). In some examples, operating in the first mode of the memory device 110-c may be based on a comparison of the indicated temperature 505 with a threshold (e.g., the third threshold Tth,3), which may be a comparison or evaluation performed by either or both of the memory device 110-c or the host device 305. In other words, the first mode of the memory device 110-c may be triggered based on the indicated temperature 505 being at or below the third threshold Tth,3, or various operations associated with the first mode of the memory device 110-c may be based at least in part on the indicated temperature 505 being at or below the third threshold Tth,3.
Therefore, the first mode may be a low-temperature refresh mode or a low-temperature self-refresh mode of the memory device 110-c. In a first example of t0, the memory device 110-c may determine the indicated temperature 505 (e.g., from one or more memory device temperature sensors 320) and compare or otherwise evaluate the indicated temperature 505 against the third threshold Tth,3 (e.g., determine that the indicated temperature 505 is at or below or otherwise meets the third threshold Tth,3). Based at least in part on the comparison or evaluation, the memory device 110-c may operate, or determine to operate, in the first mode of the memory device 110-c. In some examples, the memory device 110-c may indicate (e.g., to the host device 305) restrictions on access operations while the memory device 110-c is operating in the first mode (e.g., that the memory device 110-c is not available for read or write commands, or that read or write operations are disabled for the memory device 110-c). In some instances, these indications may be conveyed via the mode signaling 395 or some other signaling such as EDC or JTAG signals or data lines. In a second instance of t0, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 505 (e.g., from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 505 against the third threshold Tth,3 (for example, determine that the indicated temperature 505 is at or below or otherwise meets the third threshold Tth,3). In other examples, the host device 305 may otherwise identify or determine that the temperature of the memory device 110-c is at or below or otherwise meets the third threshold Tth,3 (e.g., via mode signaling 395 associated with the temperature of the memory device 110-c and the first mode of the memory device 110-c). Based on the comparison or other identification or determination at t0 (for example, a determination by the host device 305 that the indicated temperature 505 is associated with the first mode), the host device 305 may operate the memory device 110-c according to the first mode, which in some examples may include the host device 305 refraining from access operations (for example, read operations, write operations). In some examples, the host device 305 may otherwise understand that the memory device 110-c is operating in the first mode (e.g., via mode signaling 395 associated with the first mode), and the host device may refrain from access operations (for example, read operations, write operations) of the memory device 110-c based on that understanding. In other examples, the host device 305 may not have a reason to perform or request an access operation at t0, which may, for example, support operation of the host device 305 in a host device idle mode. In some examples, the memory device 110-c may operate in the first mode based on a command from the host device 305 (e.g., via the mode signaling 395), which may be based on the comparison or other identification or determination at t0. At t1, it may be desirable to transition to a second mode of the memory device 110-c.
For example, the host device 305 may have information to be written to the memory device 110-c, or may wish to retrieve data from the memory device 110-c, or the system 100-c may have some other condition associated with transitioning to the second mode (e.g., activation or enabling of a portion of the system 100-c, a change in the power source associated with the system 100-c, the ignition of a vehicle being triggered). In some examples, the second mode of the memory device 110-c may be associated with particular access operations, such as a mode associated with read operations or write operations (e.g., a read mode, a write mode, a read/write mode, a mode associated with exchanging information at the memory device 110-c). In some examples, the second mode of the memory device 110-c may be associated with a different (e.g., higher) temperature or temperature range than the first mode of the memory device 110-c (e.g., an operating temperature range higher than the temperature at t1). Therefore, to support the transition to the second mode of the memory device 110-c, a circuit configured to heat the memory device 110-c can be activated at t1. In a first instance of t1, the host device 305 may transmit signaling associated with the second mode of the memory device 110-c to the memory device 110-c. This signaling may include an instruction to switch to the second mode of the memory device 110-c (for example, for the memory device 110-c), or a command to access the memory device 110-c (for example, an access command, a read command, a write command). Accordingly, the memory device 110-c may receive the signaling associated with the second mode, and, in response, the memory device 110-c may activate, enable, or otherwise control a circuit or other components configured to heat the memory device 110-c (e.g., activate one or more memory device memory heaters 350, or send a command to activate one or more host device memory heaters 360 or one or more coupling component memory heaters 370). Therefore, after t1, the indicated temperature 505 may rise (e.g., based on the memory device 110-c activating a circuit or other component configured to heat the memory device 110-c). In some examples, based on one or more of the operations at t1 (e.g., based on receiving the signaling associated with the second mode), the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is being heated, or may indicate a restriction on access operations of the memory device 110-c (for example, that the memory device 110-c is not available for read or write commands, or that read or write operations are disabled for the memory device 110-c). In some examples, these indications may be conveyed via the initialization signaling 390, the mode signaling 395, or some other signaling such as EDC or JTAG signals or data lines. In some examples, the memory device 110-c may determine the indicated temperature 505 (e.g., from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 505 against the second threshold Tth,2 (e.g., determine that the indicated temperature 505 is at or below or otherwise meets the second threshold Tth,2). In some examples, a circuit or other component configured to heat the memory device 110-c can be activated, enabled, or controlled based at least in part on the comparison or evaluation.
Although the second threshold Tth,2 and the third threshold Tth,3 are illustrated as having different values, in some examples, the second threshold Tth,2 and the third threshold Tth,3 may have the same value (for example, the same threshold used for operation in the first mode and for activating memory heating when receiving signaling associated with the second mode). In a second instance of t1, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 505 (e.g., from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 505 against the second threshold Tth,2 (for example, determine that the indicated temperature 505 is at or below or otherwise meets the second threshold Tth,2). In other examples, the host device 305 may otherwise identify or determine that the memory device 110-c is in the first mode (e.g., via the mode signaling 395), and, therefore, that the memory device 110-c is at a temperature associated with the first mode (for example, a temperature below the temperature or temperature range associated with the second mode). Based on the comparison or other identification or determination at t1, the host device 305 may activate, enable, or otherwise control the circuits or other components configured to heat the memory device 110-c (e.g., activate one or more host device memory heaters 360, or send a command to activate one or more memory device memory heaters 350 or one or more coupling component memory heaters 370). In an example where the system 100-c includes multiple memory devices 110, the host device may activate circuits configured to heat all of the memory devices 110, or may activate circuits configured to heat a subset of the memory devices 110, where such a subset is selected based on operating conditions, the type of memory device 110, the type of data to be exchanged with the memory device 110, the type of access operation, or other considerations. Therefore, after t1, the indicated temperature 505 may rise (e.g., based on the host device 305 activating a circuit or other component configured to heat the memory device 110-c).
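A minimal sketch of the t1 transition just described, from the memory device's perspective (hypothetical names; the "heat until Tth,1, then signal ready" flow mirrors the t1 to t2 sequence of FIG. 5):

# Hypothetical device-side handling of a second-mode (read/write) request
# received while in the low-temperature first mode: activate heating and
# report readiness only once the indicated temperature reaches Tth,1.
T_TH1_C = 0.0   # lower threshold for second-mode operation (illustrative)

class MemoryDeviceModeControl:
    def __init__(self) -> None:
        self.mode = "self_refresh"   # first mode
        self.heater_on = False

    def on_second_mode_request(self, indicated_temp_c: float) -> str:
        if indicated_temp_c < T_TH1_C:
            self.heater_on = True    # heat before allowing reads/writes
            return "heating; access restricted"
        self.mode = "read_write"     # transition to the second mode
        return "ready to operate"

dev = MemoryDeviceModeControl()
print(dev.on_second_mode_request(-5.0))  # -> heating; access restricted
print(dev.on_second_mode_request(1.0))   # -> ready to operate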
At t2, the indicated temperature 505 may exceed the first threshold Tth,1 (e.g., based on heating or operation of the memory device 110-c). In some examples, the first threshold Tth,1 may be a lower threshold temperature associated with the second mode of the memory device 110-c, or a lower limit temperature of the operating range of the memory device 110-c. The indicated temperature 505 may be determined by one or more components of the system 100-c at t2, and because the indicated temperature 505 is at or above or otherwise meets the first threshold Tth,1, operation of the memory device 110-c may transition to the second mode. In other words, the memory device 110-c may operate in the second mode based at least in part on activating a circuit or other component configured to heat the memory device. In some instances, at t2 (e.g., based on the transition to the second mode), the memory device 110-c may become available for access operations (e.g., read operations, write operations). Thus, at t2, the memory device 110-c may operate in a mode associated with accessing one or more memory cells 205 of the memory device 110-c (e.g., based at least in part on the memory device 110-c operating in the second mode of the memory device 110-c). In some examples, based on the operations at t2, the memory device 110-c may exchange information with the host device (e.g., via the data signaling 380), and the access may be based at least in part on the information exchanged with the host device. In some instances, operating the memory device 110-c in the second mode may be associated with higher power consumption than operating the memory device 110-c in the first mode (e.g., where access operations such as read operations and write operations are associated with higher power than refresh or self-refresh operations). Although the indicated temperature of the memory device 110-c may be at or above or otherwise meet the first threshold Tth,1, the heating of the memory device 110-c may continue after t2 (for example, when a different threshold is used for deactivating or otherwise controlling the memory heating). In a first example of t2, the memory device 110-c may determine the indicated temperature 505 (for example, from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 505 against the first threshold Tth,1 (e.g., determine that the indicated temperature 505 is at or above or otherwise meets the first threshold Tth,1). Based on the comparison or evaluation at t2, the memory device 110-c may transition to operating in the second mode (for example, stop restricting access operations, transition from a refresh mode to an access mode), which may be accompanied by the memory device 110-c signaling (e.g., to the host device 305, via the mode signaling 395 or some other signaling such as EDC or JTAG signals or data lines) that the memory device 110-c is operating in the second mode or that the memory device 110-c is available for access operations (for example, via a "ready to operate" signal). In a second instance of t2, the host device 305 may determine, or otherwise receive signaling associated with, the indicated temperature 505 (e.g., from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 505 against the first threshold Tth,1 (for example, determine that the indicated temperature 505 is at or above or otherwise meets the first threshold Tth,1). In other examples, the host device 305 may otherwise identify or determine that the temperature of the memory device 110-c is at or above or otherwise meets the first threshold Tth,1 (for example, via mode signaling 395 indicating that the temperature of the memory device 110-c is at or above or otherwise meets the threshold).
Based on the comparison or other identification or determination at t2 (for example, based on an indication that the memory device is available for access operations, or based on receiving a "ready to operate" signal), the host device 305 may transition to operating the memory device 110-c in the second mode of the memory device 110-c, which may include issuing or transmitting commands (for example, via data signaling 380) to access the memory device 110-c, or otherwise performing access operations associated with the memory device 110-c being in the second mode.

At t3, the indicated temperature 505 may exceed the fourth threshold Tth,4 (e.g., based on heating or operation of the memory device 110-c). In some examples, the fourth threshold Tth,4 may be a threshold of the memory device 110-c that is different from the lower threshold of the operating temperature range of the memory device 110-c, different from the threshold associated with activating the heating of the memory device 110-c (e.g., different from Tth,2, such as greater than Tth,2), or different from the threshold associated with transitioning to the second mode (e.g., different from Tth,1, such as greater than Tth,1). In some examples, the second threshold Tth,2 may be 10°C, but other examples may include the second threshold at a different temperature (for example, a temperature higher or lower than 10°C).

In some examples, the fourth threshold Tth,4 may be configured, set, or selected to reduce the rate or duty cycle of activating and deactivating memory heating (for example, when the fourth threshold Tth,4 is different from the second threshold Tth,2). Therefore, the difference between the fourth threshold Tth,4 and the second threshold Tth,2, or the band between the fourth threshold Tth,4 and the second threshold Tth,2, may be referred to as a temperature hysteresis range or hysteresis band associated with heating the memory device 110-c. Although the fourth threshold Tth,4 is described as different from the second threshold Tth,2, in some examples, the fourth threshold Tth,4 may be the same as the second threshold Tth,2 (for example, when the system 100-c is configured with a single threshold for memory heating), and activating, deactivating, or otherwise controlling the heating of the memory device 110-c may be based on the relationship between the indicated temperature 505 and the single threshold (for example, whether the indicated temperature 505 is above or below the threshold). In addition, although the fourth threshold Tth,4 is described as different from the first threshold Tth,1, in some examples, the fourth threshold Tth,4 may be the same as the first threshold Tth,1 (for example, when the system 100-c is configured with a threshold shared for memory heating and mode switching), and activating, deactivating, or otherwise controlling the heating of the memory device 110-c may be based on whether the memory device 110-c is operating in the first mode or the second mode, or whether the transition from the first mode to the second mode has been completed.

The indicated temperature 505 may be determined by one or more components of the system 100-c at t3, and because the indicated temperature 505 is at or above or otherwise meets the fourth threshold Tth,4, the system 100-c may disable, deactivate, or otherwise adjust the heating of the memory device 110-c according to various techniques.
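To make the threshold behavior concrete, the following is a minimal Python sketch of the activation, deactivation, and mode-transition logic described above. It is illustrative only and not part of the described apparatus: the class and attribute names (e.g., HeaterController) and the numeric threshold values are hypothetical placeholders, with the band between Tth,2 and Tth,4 acting as the hysteresis band.

```python
# Illustrative sketch only: models the threshold/hysteresis behavior described
# above. Threshold values and class/attribute names are hypothetical.

from dataclasses import dataclass

@dataclass
class Thresholds:
    t1: float = 0.0    # Tth,1: lower bound for second-mode (access) operation
    t2: float = -5.0   # Tth,2: activate heating at or below this temperature
    t3: float = 5.0    # Tth,3: return to first mode at or below this temperature
    t4: float = 10.0   # Tth,4: deactivate heating at or above this temperature

class HeaterController:
    def __init__(self, th: Thresholds):
        self.th = th
        self.heater_on = False
        self.mode = "first"   # e.g., a refresh or self-refresh mode

    def step(self, indicated_temp: float) -> None:
        # Activate heating when the indicated temperature meets Tth,2;
        # deactivate when it meets Tth,4. The Tth,2..Tth,4 band prevents
        # rapid on/off cycling (the hysteresis band described above).
        if not self.heater_on and indicated_temp <= self.th.t2:
            self.heater_on = True
        elif self.heater_on and indicated_temp >= self.th.t4:
            self.heater_on = False
        # Transition to the second (access) mode once Tth,1 is met.
        if self.mode == "first" and indicated_temp >= self.th.t1:
            self.mode = "second"

ctrl = HeaterController(Thresholds())
for temp in (-10.0, -2.0, 1.0, 8.0, 12.0):   # roughly t1 through t3
    ctrl.step(temp)
    print(temp, ctrl.heater_on, ctrl.mode)
```

Note that the heater stays on across the middle samples even though they exceed Tth,2; only reaching Tth,4 turns it off, which is the duty-cycle reduction attributed to the hysteresis band above.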
Therefore, after one or more of the operations at t3, the indicated temperature 505 may drop (e.g., when the ambient temperature TA is lower than the indicated temperature 505, or when the heat loss from cooling is greater than the heat gained from operating the memory device 110-c), which may or may not follow an overshoot of the indicated temperature 505 beyond the fourth threshold Tth,4 after t3 (for example, due to thermal diffusion across components, due to a delay between the application of heat and the temperature rise at the temperature sensor, or due to signaling delay or processing delay). The memory device 110-c may remain available for access operations (e.g., read operations, write operations) after t3.

In the first example of t3, the memory device 110-c may determine the indicated temperature 505 (for example, from one or more memory device temperature sensors 320), and compare or otherwise evaluate the indicated temperature 505 with the fourth threshold Tth,4 (e.g., determine that the indicated temperature 505 is at or above or otherwise meets the fourth threshold Tth,4). Based on the comparison or evaluation, the memory device 110-c may disable, deactivate, or otherwise adjust the circuitry or other components configured to heat the memory device 110-c (e.g., deactivate one or more memory device memory heaters 350, or send a command to deactivate one or more host device memory heaters 360 or one or more coupled component memory heaters 370). In some examples, based on one or more of the operations at t3, the memory device 110-c may indicate (e.g., to the host device 305) that the memory device 110-c is no longer being heated. In some examples, these indications may be conveyed via initialization signaling 390, mode signaling 395, or some other signaling such as EDC or JTAG signals or data lines.

In the second instance of t3, the host device 305 may determine or otherwise receive signaling associated with the indicated temperature 505 (e.g., from one or more host device temperature sensors 330 or coupling component temperature sensors 340, or via the temperature signaling 385 indicating the temperature of the memory device 110-c), and compare or otherwise evaluate the indicated temperature 505 with the fourth threshold Tth,4 (for example, determine that the indicated temperature 505 is at or above or otherwise meets the fourth threshold Tth,4). Based on the comparison or other identification or determination at t3, the host device 305 may disable, deactivate, or otherwise adjust or control the circuits or other components configured to heat the memory device 110-c (e.g., deactivate one or more host device memory heaters 360, or send a command to deactivate one or more memory device memory heaters 350 or one or more coupled component memory heaters 370). The host device may continue to transmit or issue access commands to the memory device 110-c after t3.

At t4, operation of the memory device 110-c may transition from operating in the second mode of the memory device 110-c, which may include transitioning to the first mode of the memory device 110-c or to some other mode of the memory device 110-c different from the second mode. In some examples, the transition from the second mode may be based on a lack of access operations (e.g., a lack or absence of data signaling 380) associated with the exchange of data between the host device 305 and the memory device 110-c.
In some examples, the transition at t4 may be associated with a transition to a refresh mode, a self-refresh mode, a standby mode, a memory device idle mode, or a host device idle mode, and the mode may be the same mode as at t0 or a different mode.

In the first instance of t4, the memory device 110-c may identify a lack of data to be exchanged with the host device 305 (for example, a lack of data signaling 380), the completion of access operations with the host device 305, or some other determination for changing the mode of the memory device 110-c. In some instances, as part of the determination to change the mode of the memory device 110-c, the memory device 110-c may compare or otherwise evaluate the indicated temperature 505 against a threshold. Therefore, according to various examples, the determination to change the mode at t4 may be performed at the memory device 110-c.

In some examples, the transition may be based at least in part on a comparison of the indicated temperature 505 with the third threshold Tth,3. According to the indicated temperature 505-a, for example, the transition to operating in the first mode of the memory device 110-c may occur directly after determining that access operations with the host device 305 are complete, because the indicated temperature 505-a is at or below the third threshold Tth,3 (for example, when the first mode or the third threshold Tth,3 is associated with a low-temperature refresh or self-refresh mode). According to the indicated temperature 505-b, for example, the transition to operating in the first mode of the memory device 110-c may occur directly after determining that access operations with the host device 305 are complete, even though the indicated temperature 505-b is above the third threshold Tth,3 (for example, when the first mode is a refresh mode or self-refresh mode that is not associated with low-temperature operation, when the indicated temperature 505-b has dropped to a range still associated with a low-temperature refresh or self-refresh mode, or when the first mode refers generally to a refresh or self-refresh mode).

In another example, the transition to operating in the first mode of the memory device 110-c may not occur directly after determining that access operations with the host device 305 are complete, but may occur after the indicated temperature 505 drops to or below the third threshold Tth,3, and may include an intermediate mode before the indicated temperature 505 drops to or below the third threshold Tth,3. In some examples, the determination of whether the host device 305 is performing an access operation may be made by the memory device 110-c after a comparison or other evaluation of the indicated temperature 505 with a threshold. In some examples, the memory device 110-c may determine the elapsed time since an access operation, and switching from operating in the second mode of the memory device 110-c to operating in the first mode of the memory device 110-c may be based at least in part on the elapsed time, and may or may not also be based on the indicated temperature 505.

In the second example of t4, the host device 305 may identify a lack of data to be exchanged with the memory device 110-c, the completion of access operations with the memory device 110-c, or some other determination for changing the mode of the memory device 110-c.
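Before detailing the host-device-side determination, the memory-device-side t4 decision just described can be summarized in a brief sketch. This is illustrative only: the function name, the elapsed-time limit, and the Tth,3 value are hypothetical assumptions, not values from the described system.

```python
# Illustrative sketch only: a memory-device-side version of the t4 decision
# described above. Names and numeric values are hypothetical placeholders.

import time

TTH_3 = 5.0            # Tth,3: bound for low-temperature first-mode entry
IDLE_LIMIT_S = 1.0e-3  # hypothetical elapsed-time criterion since last access

def next_mode(indicated_temp: float, last_access_s: float,
              first_mode_is_low_temp: bool) -> str:
    """Decide the mode after access operations complete (t4)."""
    idle = (time.monotonic() - last_access_s) >= IDLE_LIMIT_S
    if not idle:
        return "second"            # access operations may still be pending
    if not first_mode_is_low_temp:
        return "first"             # e.g., per indicated temperature 505-b
    if indicated_temp <= TTH_3:
        return "first"             # e.g., per indicated temperature 505-a
    return "intermediate"          # wait for the temperature to reach Tth,3

last = time.monotonic() - 1.0      # pretend the last access was 1 s ago
print(next_mode(2.0, last, True))  # -> "first"
print(next_mode(9.0, last, True))  # -> "intermediate"
```

The host-device-side variant that follows relies on the same comparisons, differing mainly in where the determination is performed and how it is signaled.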
In some instances, as part of the determination to change the mode of operation of the memory device 110-c, the host device 305 may compare or otherwise evaluate the indicated temperature 505 against a threshold. Therefore, according to various examples, the determination to change the mode at t4 may be performed at the host device 305.

In some examples, the host device 305 may transmit, to the memory device 110-c, an indication to switch from the second mode of the memory device 110-c (e.g., via mode signaling 395). In some examples, the indication may be an explicit indication that the memory device 110-c is to transition to the first mode. In some examples, the indication may be an indication otherwise associated with the first mode, and the memory device 110-c may use the indication to determine whether to transition to the first mode (e.g., in combination with the memory device 110-c evaluating the indicated temperature 505 against a threshold, for example, according to the indicated temperature 505-a or 505-b).

In one example, the memory device 110-c may determine the transition to the first mode. According to the indicated temperature 505-a, the transition to operating in the first mode of the memory device 110-c may occur directly after receiving the indication from the host device 305, because the indicated temperature 505-a is at or below the third threshold Tth,3 (for example, when the first mode or the third threshold Tth,3 is associated with a low-temperature refresh or self-refresh mode). According to the indicated temperature 505-b, the transition to operating in the first mode of the memory device 110-c may occur directly after receiving the indication from the host device 305, even though the indicated temperature 505-b is above the third threshold Tth,3 (for example, when the first mode is a refresh mode or self-refresh mode that is not associated with low-temperature operation, when the indicated temperature 505-b has dropped to a range still associated with a low-temperature refresh or self-refresh mode, or when the first mode refers generally to a refresh or self-refresh mode). In another example, the transition to operating in the first mode of the memory device 110-c may not occur directly after receiving the indication from the host device 305, but may occur after the indicated temperature 505 drops to or below the third threshold Tth,3, and may include an intermediate mode before the indicated temperature 505 drops to or below the third threshold Tth,3.

According to the various techniques described herein, operations of the memory device 110-c may continue according to the first mode of the memory device 110-c, the second mode of the memory device 110-c, or other modes of the memory device 110-c. Under various conditions (for example, according to different modes), the memory device 110-c may or may not be available for various access operations with the host device 305 at a given time, but the system 100-c may include activating circuits or other components configured to heat the memory device 110-c to support the various modes of operation. Therefore, the mode-dependent heating of the memory device 110-c described with reference to FIG. 5 may illustrate an example of aligning the memory temperature with the access type.

In various examples of the techniques described with reference to FIGS.
3 to 5, any one or more of the described thresholds (for example, the first threshold Tth,1, the second threshold Tth,2, the third threshold Tth,3, the fourth threshold Tth,4) may be configured, identified, or determined according to various techniques. For example, any one or more of the thresholds may be configured at a device (e.g., as a static value or level or a set of static values or levels at the memory device 110-c, or as a static value or level or a set of static values or levels at the host device 305), which may be stored in a mode register, in trim parameters, or in one or more non-volatile storage elements (for example, fuses, antifuses) of the corresponding device, where the non-volatile storage elements are configured to store an indication of one or more configurations or thresholds of the corresponding device. In various examples, the memory device 110-c or the host device 305 may identify the configuration (e.g., a configured threshold) by accessing these non-volatile storage elements.

Additionally or alternatively, any one or more of the thresholds may be determined or identified at a device based at least in part on the operating mode of the device (e.g., a refresh mode, an access mode, a read/write mode, an idle mode, an active mode) or the operating mode of a different device (e.g., based on signaling of the operating mode of another device, such as mode signaling 395). In some examples, any one or more of the thresholds may be determined or identified at the device based at least in part on the operating conditions of the device. For example, when the indicated temperature 405 experiences rapid fluctuations (for example, when the ambient temperature TA of the environment 302 is particularly low), the second threshold Tth,2 may be set relatively high (for example, a wider hysteresis band), or the third threshold Tth,3 may be set relatively high (e.g., to limit the overshoot of the indicated temperature 405 beyond the operating temperature range of the memory device 110-c), or both.

The described comparisons or evaluations of the indicated temperature (e.g., the indicated temperature 405 or 505) with various thresholds may be performed by one or both of the memory device 110-c or the host device 305 according to various techniques, and the described techniques may include operations performed at the device memory controller 155, the local memory controller 165, or the external memory controller 105. For example, when the indicated temperature is represented in the digital domain at the memory device 110-c or the host device 305, these comparisons may be performed in the digital domain at a processor or digital comparator (e.g., as a comparison of binary values, as a comparison of integer values, or as a comparison of floating-point values). When the indicated temperature is represented in the analog domain at the memory device 110-c or the host device 305 (for example, as a voltage of a thermocouple, or as a voltage or current across an RTD), these comparisons may be performed in the analog domain at a processor, a comparator, a transistor (for example, between the gate and a source or drain node), or other circuits (for example, as a comparison of a voltage with a reference voltage indicating a threshold, or as a comparison of a current with a reference current indicating a threshold).

In addition, although the operations described with reference to FIGS.
3 to 5 are described as including activating and deactivating a circuit configured to heat the memory device 110-c, more complex forms of control may be applied. For example, the degree of heating (for example, the amount of heat flux) may be controlled, adjusted, or otherwise modulated by various control techniques, such as proportional-integral-derivative (PID) control, pulse-width modulation (PWM), and other techniques. In some examples, temperature thresholds or levels may be applied to these control techniques, such as target temperatures, dead bands, gain scheduling, and other techniques.

FIG. 6A illustrates an example 600-a of a memory heater 605-a that supports controlled and mode-dependent heating of the memory device 110 in accordance with aspects disclosed herein. In various examples of the system 100, the memory heater 605-a may illustrate any of the memory device memory heater 350, the host device memory heater 360, or the coupled component memory heater 370.

The memory heater 605-a may include a heating resistor 610 (e.g., a resistive component that may be configured to heat the memory device 110), which may represent any component or circuit that exhibits electrical resistance and may therefore convert electrical energy into thermal energy (e.g., heat). In various examples, the heating resistor 610 may be a dedicated component used to provide heating, or may be another controllable circuit integrated into a related component (for example, the memory device 110, the external memory controller 105, or the coupling component 310). In examples where the memory heater 605-a is part of the memory device 110, the heating resistor 610 may be a component of the memory die 160, which may include a component of the local memory controller 165, the memory array 170, or some other component (for example, integrated into the memory die 160 or coupled with the memory die 160). The memory heater 605-a may also include a switch component 615, which may be a component configured to selectively activate or deactivate the memory heater 605-a (e.g., based on the input signal SW1). The switch component 615 may be an n-type or p-type transistor, and the input signal SW1 may be applied to the gate of the transistor. In various examples, the input signal SW1 may be a logic value (e.g., a digital signal) generated by a memory controller, or may be an analog signal provided from a temperature sensor (e.g., a voltage provided directly from a temperature sensor, or a voltage or other signal from a temperature sensor that has been amplified or otherwise converted).

In some examples, the memory heater 605-a may be coupled to or between a first voltage source 620-a and a second voltage source 620-b. In some examples, the first voltage source 620-a may represent a ground or chassis ground voltage source, and the second voltage source 620-b may represent some other voltage (e.g., a relatively higher-voltage supply or rail). More generally, the voltage V0 of the first voltage source 620-a may be any voltage different from the voltage V1 of the second voltage source 620-b. Activating the switch component 615 may permit current to flow through the heating resistor 610 between the first voltage source 620-a and the second voltage source 620-b. Therefore, activating the switch component may achieve heat generation by the memory heater 605-a (for example, ohmic heating via the heating resistor 610).
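As a rough illustration of the ohmic heating of the memory heater 605-a, and of the PWM-style modulation mentioned above, consider the following sketch. The resistor and supply values are hypothetical, and the model ignores switch resistance and thermal dynamics.

```python
# Illustrative sketch only: approximates the ohmic heating of FIG. 6A and
# PWM-style modulation of SW1. Component values are hypothetical.

R_HEAT = 50.0       # heating resistor 610, ohms (assumed)
V0, V1 = 0.0, 1.2   # voltage sources 620-a and 620-b, volts (assumed)

def heater_power(sw1: bool) -> float:
    """Ohmic power dissipated in the heating resistor when SW1 closes the
    current path between the two voltage sources: P = (V1 - V0)^2 / R."""
    return ((V1 - V0) ** 2) / R_HEAT if sw1 else 0.0

def average_power(duty_cycle: float) -> float:
    """Mean heat flux under PWM control of SW1 (duty_cycle in [0, 1])."""
    return duty_cycle * heater_power(True)

print(heater_power(True))    # ~0.0288 W with the switch component enabled
print(average_power(0.25))   # ~0.0072 W at a 25% PWM duty cycle
```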
Deactivating the switch component 615 may prevent current from flowing through the heating resistor 610 between the first voltage source 620-a and the second voltage source 620-b. Therefore, deactivating the switch component may inhibit heat generation by the memory heater 605-a.

FIG. 6B illustrates an example 600-b of a memory heater 605-b that supports controlled and mode-dependent heating of the memory device 110 in accordance with the aspects disclosed herein. In various examples of the system 100, the memory heater 605-b may illustrate any of the memory device memory heater 350, the host device memory heater 360, or the coupled component memory heater 370.

The memory heater 605-b may include one or more driver stages 630 (e.g., driver components), which may be coupled with a capacitive load (e.g., a capacitor 635, an oscillator, a resonator). In some examples, the output of the driver stage 630 may be coupled with a first terminal or plate of the capacitor 635, and a voltage source 620-c (e.g., a ground or chassis ground voltage source) may be coupled with a second terminal or plate of the capacitor 635. The input terminal of the driver stage 630 may be coupled with an AND gate 640. In various examples, the memory heater 605-b may include a single driver stage 630, or any number of driver stages 630 (e.g., illustrated in a series arrangement in the example of the memory heater 605-b).

The AND gate 640 may represent a circuit configured to provide an output signal (e.g., an output signal that may be applied to one or more driver components) when each of a plurality of input signals is in a relatively high or otherwise enabled state or voltage. In the example of the memory heater 605-b, the inputs to the AND gate 640 may include a clock signal 650 and a heater on/off signal 660. The clock signal 650 may represent any clock signal, which may be sourced from the same component that includes the memory heater 605-b, or from a different component (e.g., as conveyed by the clock channel 188). Therefore, the AND gate 640 may receive, as one input, an oscillating signal that oscillates between a relatively high value and a relatively low value. As another input, the AND gate 640 may also receive a signal that is activated (e.g., a relatively high state or voltage) when the memory heater 605-b is activated or otherwise configured for heat generation, or deactivated (e.g., a relatively low state or voltage) when the memory heater 605-b is deactivated or otherwise not configured for heat generation.

When the heater on/off signal is enabled, the AND gate 640 may output an oscillating signal to the one or more driver stages 630, which may support current flowing into and out of the driver stage 630 and the capacitor 635. Therefore, heat may be generated by ohmic heating within the driver stage 630, within the capacitor 635, or along conductors or other signal paths between these components, the AND gate 640, and the voltage sources 620. When the heater on/off signal is deactivated, the AND gate 640 may not output any signal, thereby deactivating the heating of the memory heater 605-b.
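The gating behavior of the AND gate 640 and a first-order estimate of the resulting heat can be sketched as follows. This is a simplified model under assumed values for the capacitor 635, the supply voltage, and the clock frequency; it approximates the switching power as C·V²·f and is not a characterization of any particular implementation.

```python
# Illustrative sketch only: models the AND-gate gating of FIG. 6B and a
# first-order estimate of the resulting heat. All values are hypothetical.

C_LOAD = 2e-12   # capacitor 635, farads (assumed)
V_DD = 1.2       # driver supply, volts (assumed)
F_CLK = 1e9      # clock signal 650, hertz (assumed)

def gate_output(clock_high: bool, heater_on: bool) -> bool:
    """AND gate 640: the clock propagates to the driver stages only when the
    heater on/off signal 660 is enabled."""
    return clock_high and heater_on

def dynamic_heat_w(heater_on: bool) -> float:
    """Approximate switching power of the driver/capacitor path, P ~ C*V^2*f,
    dissipated as heat when the oscillating signal is passed."""
    return C_LOAD * V_DD ** 2 * F_CLK if heater_on else 0.0

print(gate_output(True, True))   # True: driver stages toggle, heat is produced
print(dynamic_heat_w(True))      # ~2.88e-3 W under these assumed values
print(dynamic_heat_w(False))     # 0.0 W when the heater on/off signal is low
```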
In various examples, the heater on/off signal 660 may be a logic value (e.g., a digital signal) generated by a memory controller, or may be an analog signal provided from a temperature sensor (e.g., a voltage provided directly from the temperature sensor, or a voltage amplified or otherwise converted).

FIG. 7 shows a block diagram 700 of a device 705 that supports controlled and mode-dependent heating of a memory device according to aspects disclosed herein. According to various examples of the described techniques, the device 705 may be an example of aspects of the system 100, the external memory controller 105, the memory device 110, or the device memory controller 155 as described with reference to FIGS. 1 to 6. The device 705 may include a temperature component 710, a comparison component 715, a heating control component 720, a signaling component 725, a heating component 730, an operation control component 735, and a data exchange component 740. Each of these components may communicate with the others directly or indirectly (e.g., via one or more buses).

The temperature component 710 may determine the temperature of the memory device. In some examples, the temperature component 710 may determine a second temperature of the memory device after activating a circuit configured to heat the memory device. In some examples, the temperature component 710 may determine a third temperature of the memory device after deactivating a circuit configured to heat the memory device.

The comparison component 715 may compare the temperature of the memory device with a threshold. In some examples, the comparison component 715 may determine that the second temperature of the memory device meets a second threshold. In some cases, the second threshold is higher than the threshold. In some examples, the comparison component 715 may compare the third temperature of the memory device with a third threshold that is higher than the threshold and lower than the second threshold.

In some examples, the comparison component 715 may compare the temperature of the memory device with a third threshold, and operating the memory device in the first mode may be based on the comparison with the third threshold. In some examples, the comparison component 715 may determine that the temperature of the memory device meets a threshold after activating a circuit configured to heat the memory device, and operating the memory device in the second mode may be based on the determination that the temperature of the memory device meets the threshold. In some examples, the comparison component 715 may compare the temperature of the memory device with a second threshold that is less than the threshold, and activating a circuit configured to heat the memory device may be based on the comparison with the second threshold. In some examples, the comparison component 715 may determine that the temperature of the memory device meets a fourth threshold.

The heating control component 720 may activate a circuit or other component configured to heat the memory device based on the comparison of the temperature with a threshold. In some examples, the heating control component 720 may activate a circuit or other component configured to heat the memory device based on receiving the signaling associated with the second mode.
In some examples, the heating control component 720 may deactivate a circuit or other component configured to heat the memory device based on the determination that the second temperature meets the second threshold. In some examples, the heating control component 720 may activate a circuit or other component configured to heat the memory device based at least in part on comparing the third temperature with a third threshold. In some examples, the heating control component 720 may deactivate a circuit or other component configured to heat the memory device based on a determination that the temperature of the memory device meets the fourth threshold.

The signaling component 725 may receive signaling associated with the second mode of the memory device (e.g., from the host device). In some cases, the signaling associated with the second mode contains an indication to switch to the second mode. In some cases, the signaling associated with the second mode includes commands to access the memory device. In some examples, the signaling component 725 may transmit an indication that the memory device is operating in the second mode (e.g., to the host device). In some cases, the indication that the memory device is operating in the second mode includes an indication that the memory device is available for read operations, write operations, or a combination thereof.

In some examples, the signaling component 725 may transmit an indication that access operations of the memory device are restricted (e.g., to the host device) based on the comparison of the temperature with the threshold. In some examples, the signaling component 725 may indicate that at least one of a read operation or a write operation is disabled for the memory device. In some examples, the signaling component 725 may initialize the memory device, and transmitting the indication that access operations of the memory device are restricted may be based on the initialization. In some examples, the signaling component 725 may determine the power source of the memory device, and operating the memory device in the first mode may be based on the power source of the memory device. In some examples, the signaling component 725 may receive an indication to switch to the first mode (e.g., from the host device).

The heating component 730 may couple a voltage source with one or more resistive components in the memory device that are configured to heat the memory device, and activating a circuit or other component configured to heat the memory device may be based on the coupling. In some examples, the heating component 730 may apply a signal to one or more driver components of the memory device, and activating a circuit or other component configured to heat the memory device may be based on the application.

The operation control component 735 may operate the memory device in the first mode of the memory device. In some examples, the operation control component 735 may refresh memory cells of the memory device based on operating the memory device in the first mode. In some examples, operating the memory device in the first mode is associated with a first power consumption.

In some examples, the operation control component 735 may operate the memory device in the second mode based on activating a circuit or other component configured to heat the memory device. In some examples, the operation control component 735 may access memory cells of the memory device based on operating the memory device in the second mode.
In some examples, operating the memory device in the second mode is associated with a second power consumption that is greater than the first power consumption.

In some examples, the operation control component 735 may switch from operating the memory device in the second mode to operating the memory device in the first mode based on an indication to switch to the first mode. In some examples, the operation control component 735 may determine the elapsed time since an access operation, and switch from operating the memory device in the second mode to operating the memory device in the first mode based on the elapsed time.

The data exchange component 740 may exchange information with the host device. In some examples, accessing the memory device may be based on the information exchanged with the host device.

FIG. 8 shows a block diagram 800 of a device 805 that supports controlled and mode-dependent heating of a memory device in accordance with aspects disclosed herein. According to various examples of the described techniques, the device 805 may be an example of aspects of the system 100 or the external memory controller 105 described with reference to FIGS. 1 to 6. The device 805 may include a temperature component 810, a temperature evaluation component 815, an access component 820, a signaling component 825, and an initialization component 830. Each of these components may communicate with the others directly or indirectly (e.g., via one or more buses).

The temperature component 810 may receive (e.g., at the host device) an indication of the temperature of a memory device coupled with the host device. In some examples, the temperature component 810 may receive (e.g., at the host device) an indication of the temperature of the memory device, and the temperature may be associated with the first mode of the memory device. In some cases, the first mode includes a refresh mode. In some examples, the temperature component 810 may receive (e.g., at the host device after inhibiting a command to access the memory device) an indication of a second temperature of the memory device.

The temperature evaluation component 815 may evaluate the temperature of the memory device relative to a threshold.

The access component 820 may inhibit (e.g., by the host device) a command to access the memory device based on evaluating the temperature of the memory device relative to the threshold. In some examples, the access component 820 may perform an access operation associated with the memory device being in the second mode based on receiving signaling indicating that the memory device is in the second mode.

In some examples, the access component 820 may issue, to the memory device, a command to access the memory device based on the indication of the second temperature of the memory device. In some examples, the access component 820 may issue, to the memory device, a command to access the memory device based on an indication that the memory device is available. In some examples, the access component 820 may issue, to the memory device, a command to provide an indication of the temperature of the memory device based on the initialization.

The signaling component 825 may transmit (e.g., from the host device) signaling associated with the second mode of the memory device. In some cases, the signaling associated with the second mode may include an indication to switch to the second mode.
In some cases, the signaling associated with the second mode may include a command to access the memory device.

In some examples, the signaling component 825 may receive (e.g., at the host device) signaling indicating that the memory device is in the second mode. In some cases, the signaling indicating that the memory device is in the second mode may include an indication that the memory device is available for access operations. In some cases, the signaling indicating that the memory device is in the second mode may include an indication of a second temperature of the memory device. In some examples, the signaling component 825 may receive (e.g., at the host device after inhibiting a command to access the memory device) an indication that the memory device is available.

The initialization component 830 may initialize the memory device (e.g., by the host device). In some examples, inhibiting the command to access the memory device (e.g., by the host device) may be based on the initialization.

FIG. 9 shows a flowchart of a method 900 of supporting controlled and mode-dependent heating of a memory device according to aspects disclosed herein. The operations of the method 900 may be implemented by the memory device 110, the external memory controller 105, or the system 100, or by various components of the memory device 110, the external memory controller 105, or the system 100 as described with reference to FIGS. 1 to 8. For example, the operations of the method 900 may be performed by the device 705 as described with reference to FIG. 7. In some examples, the memory device 110, the external memory controller 105, or the system 100 may execute a set of instructions to control the functional elements of the memory device 110, the external memory controller 105, or the system 100 to perform the described functions. Additionally or alternatively, the memory device 110, the external memory controller 105, or the system 100 may use dedicated hardware or circuitry to perform aspects of the described functions.

At 905, the method 900 may include determining the temperature of the memory device. The operations of 905 may be performed according to the techniques described herein. In some examples, aspects of the operations of 905 may be performed by the temperature component 710 as described with reference to FIG. 7.

At 910, the method 900 may include comparing the temperature of the memory device with a threshold. The operations of 910 may be performed according to the techniques described herein. In some examples, aspects of the operations of 910 may be performed by the comparison component 715 as described with reference to FIG. 7.

At 915, the method 900 may include activating a circuit configured to heat the memory device based on the comparison of the temperature with the threshold. The operations of 915 may be performed according to the techniques described herein. In some examples, aspects of the operations of 915 may be performed by the heating control component 720 as described with reference to FIG. 7.

An apparatus for performing controlled and mode-dependent heating of a memory device is described. The apparatus may include: means for determining the temperature of the memory device, means for comparing the temperature of the memory device with a threshold, and means for activating a circuit configured to heat the memory device based on the comparison of the temperature with the threshold.

Another apparatus for performing controlled and mode-dependent heating of a memory device is described.
The apparatus may include a controller or circuit configured to determine the temperature of the memory device, compare the temperature of the memory device with a threshold, and activate a circuit configured to heat the memory device based on the comparison of the temperature with the threshold.

In some examples of the methods or apparatuses, the memory device may include cells with capacitive or ferroelectric storage elements.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: determining a second temperature of the memory device after activating the circuit configured to heat the memory device; determining that the second temperature of the memory device meets a second threshold; and deactivating the circuit configured to heat the memory device based on the determination that the second temperature meets the second threshold. In some examples of the methods or apparatuses, the second threshold is higher than the threshold.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: determining a third temperature of the memory device after deactivating the circuit configured to heat the memory device; comparing the third temperature of the memory device with a third threshold that is above the threshold and below the second threshold; and activating the circuit configured to heat the memory device based at least in part on comparing the third temperature with the third threshold.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for transmitting, to the host device, an indication that access operations of the memory device are restricted based on the comparison of the temperature with the threshold. In some examples of the methods or apparatuses, transmitting the indication that access operations of the memory device are restricted may include operations, features, devices, or instructions for indicating that at least one of a read operation or a write operation is disabled for the memory device.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for initializing the memory device, and transmitting the indication that access operations of the memory device are restricted may be based on the initialization.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for coupling a voltage source with one or more resistive components of the memory device configured to heat the memory device, and activating the circuit configured to heat the memory device may be based on the coupling.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for applying a signal to one or more driver components of the memory device, and activating the circuit configured to heat the memory device may be based on the application.

FIG. 10 shows a flowchart of a method 1000 to support controlled and mode-dependent heating of a memory device according to aspects disclosed herein. The operations of the method 1000 may be implemented by the system 100 or the external memory controller 105, or by various components of the system 100 or the external memory controller 105 as described with reference to FIGS. 1 to 8. For example, the operations of the method 1000 may be performed by the device 805 as described with reference to FIG. 8.
In some examples, the system 100 or the external memory controller 105 may execute a set of instructions to control the functional elements of the host device to perform the described functions. Additionally or alternatively, the system 100 or the external memory controller 105 may use dedicated hardware or circuitry to perform aspects of the described functions.

At 1005, the method 1000 may include receiving (e.g., at the host device) an indication of the temperature of a memory device coupled with the host device. The operations of 1005 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1005 may be performed by the temperature component 810 as described with reference to FIG. 8.

At 1010, the method 1000 may include evaluating the temperature of the memory device relative to a threshold. The operations of 1010 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1010 may be performed by the temperature evaluation component 815 as described with reference to FIG. 8.

At 1015, the method 1000 may include inhibiting (e.g., by the host device) a command to access the memory device based on evaluating the temperature of the memory device relative to the threshold. The operations of 1015 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1015 may be performed by the access component 820 as described with reference to FIG. 8.

An apparatus for performing the method is described. The apparatus may include: means for receiving, at the host device, an indication of the temperature of a memory device coupled with the host device; means for evaluating the temperature of the memory device relative to a threshold; and means for inhibiting, by the host device, a command to access the memory device based on evaluating the temperature of the memory device relative to the threshold.

Another apparatus for performing controlled and mode-dependent heating of a memory device is described.
The apparatus may include a controller or circuit configured to: receive, at the host device, an indication of the temperature of a memory device coupled with the host device; evaluate the temperature of the memory device relative to a threshold; and inhibit, by the host device, a command to access the memory device based on evaluating the temperature of the memory device relative to the threshold.

In some examples of the methods or apparatuses, the memory device may include cells with capacitive or ferroelectric storage elements.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: receiving, at the host device after inhibiting the command to access the memory device, an indication of a second temperature of the memory device, and issuing, to the memory device, a command to access the memory device based on the indication of the second temperature of the memory device.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: receiving (e.g., at the host device after inhibiting the command to access the memory device) an indication that the memory device is available, and issuing, to the memory device, a command to access the memory device based on the indication that the memory device is available.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for initializing the memory device, and inhibiting the command to access the memory device may be based on the initialization. Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for issuing, to the memory device, a command to provide an indication of the temperature of the memory device based on the initialization.

FIG. 11 shows a flowchart of a method 1100 to support controlled and mode-dependent heating of a memory device according to aspects disclosed herein. The operations of the method 1100 may be implemented by the memory device 110 or various components of the memory device 110 as described with reference to FIGS. 1 to 8. For example, the operations of the method 1100 may be performed by the device 705 as described with reference to FIG. 7. In some examples, the memory device 110 may execute a set of instructions to control the functional elements of the memory device 110 to perform the described functions. Additionally or alternatively, the memory device 110 may use dedicated hardware or circuitry to perform aspects of the described functions.

At 1105, the method 1100 may include operating the memory device in a first mode of the memory device. The operations of 1105 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1105 may be performed by the operation control component 735 as described with reference to FIG. 7.

At 1110, the method 1100 may include receiving, from the host device, signaling associated with a second mode of the memory device. The operations of 1110 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1110 may be performed by the signaling component 725 as described with reference to FIG. 7.

At 1115, the method 1100 may include activating a circuit configured to heat the memory device based on receiving the signaling associated with the second mode. The operations of 1115 may be performed according to the techniques described herein.
In some examples, aspects of the operations of 1115 may be performed by the heating control component 720 as described with reference to FIG. 7.

At 1120, the method 1100 may include operating the memory device in the second mode based on activating the circuit configured to heat the memory device. The operations of 1120 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1120 may be performed by the operation control component 735 as described with reference to FIG. 7.

An apparatus for performing controlled and mode-dependent heating of a memory device is described. The apparatus may include: means for operating the memory device in a first mode of the memory device; means for receiving, from the host device, signaling associated with a second mode of the memory device; means for activating a circuit configured to heat the memory device based on receiving the signaling associated with the second mode; and means for operating the memory device in the second mode based on activating the circuit configured to heat the memory device.

Another apparatus for performing controlled and mode-dependent heating of a memory device is described. The apparatus may include a controller or circuit configured to operate the memory device in a first mode of the memory device, receive, from the host device, signaling associated with a second mode of the memory device, activate a circuit configured to heat the memory device based on receiving the signaling associated with the second mode, and operate the memory device in the second mode based on activating the circuit configured to heat the memory device.

In some examples of the methods or apparatuses, the memory device may include cells with capacitive or ferroelectric storage elements.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for determining that the temperature of the memory device meets a threshold after activating the circuit, and operating the memory device in the second mode may be based on the determination that the temperature of the memory device meets the threshold. Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for comparing the temperature of the memory device with a second threshold that is less than the threshold, and activating the circuit configured to heat the memory device may be based on the comparison with the second threshold.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for refreshing memory cells of the memory device based on operating the memory device in the first mode.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for accessing memory cells of the memory device based on operating the memory device in the second mode.
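A compact sketch of the memory-device-side sequence of method 1100 (operations 1105 through 1120) may help tie the steps together. It is illustrative only: the class, the signaling strings, and the optional temperature check are hypothetical stand-ins for the signaling and comparisons described above.

```python
# Illustrative sketch only: the memory-device-side sequence of method 1100.
# All names and the signaling encoding are hypothetical placeholders.

class MemoryDevice:
    def __init__(self):
        self.mode = "first"     # 1105: operate in the first mode (e.g., refresh)
        self.heater_on = False

    def on_host_signaling(self, signaling: str, indicated_temp: float,
                          threshold: float) -> None:
        # 1110: receive signaling associated with the second mode
        if signaling not in ("switch_to_second_mode", "access_command"):
            return
        # 1115: activate the circuit configured to heat the memory device
        self.heater_on = True
        # 1120: operate in the second mode once the temperature meets the
        # threshold (one of the optional checks described above)
        if indicated_temp >= threshold:
            self.mode = "second"

dev = MemoryDevice()
dev.on_host_signaling("switch_to_second_mode", indicated_temp=1.0, threshold=0.0)
print(dev.heater_on, dev.mode)   # True second
```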
Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for exchanging information with the host device, and accessing the memory cells of the memory device may be based on the information exchanged with the host device.

In some examples of the methods or apparatuses, operating the memory device in the first mode may be associated with a first power consumption, and operating the memory device in the second mode may be associated with a second power consumption that is greater than the first power consumption.

In some examples of the methods or apparatuses, the signaling associated with the second mode may include an indication to switch to the second mode. In some examples of the methods or apparatuses, the signaling associated with the second mode may include a command to access the memory device.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for transmitting, to the host device, an indication that the memory device is operating in the second mode. In some examples of the methods or apparatuses, the indication that the memory device is operating in the second mode may include an indication that the memory device is available for read operations, write operations, or a combination thereof.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for comparing the temperature of the memory device with a third threshold, and operating the memory device in the first mode may be based on the comparison with the third threshold.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: determining that the temperature of the memory device meets a fourth threshold, and deactivating the circuit configured to heat the memory device based on the determination that the temperature of the memory device meets the fourth threshold.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for determining the power source of the memory device, and operating the memory device in the first mode may be based on the power source of the memory device.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for: receiving, from the host device, an indication to switch to the first mode, and switching from operating the memory device in the second mode to operating the memory device in the first mode based on the indication to switch to the first mode.

Some examples of the methods or apparatuses may further include operations, features, devices, or instructions for determining the elapsed time since an access operation, and switching from operating the memory device in the second mode to operating the memory device in the first mode based on the elapsed time.

FIG. 12 shows a flowchart of a method 1200 to support controlled and mode-dependent heating of a memory device according to aspects disclosed herein. The operations of the method 1200 may be implemented by the system 100 or the external memory controller 105, or by various components of the system 100 or the external memory controller 105 as described with reference to FIGS. 1 to 8. For example, the operations of the method 1200 may be performed by the device 805 as described with reference to FIG. 8.
In some examples, the system 100 or the external memory controller 105 may execute a set of instructions to control the functional elements of the host device to perform the described functions. Additionally or alternatively, the system 100 or the external memory controller 105 may use dedicated hardware or circuitry to perform aspects of the described functions.

At 1205, the method 1200 may include receiving (e.g., at the host device) an indication of the temperature of the memory device, where the temperature is associated with a first mode of the memory device. The operations of 1205 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1205 may be performed by the temperature component 810 as described with reference to FIG. 8.

At 1210, the method 1200 may include transmitting (e.g., from the host device) signaling associated with a second mode of the memory device. The operations of 1210 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1210 may be performed by the signaling component 825 as described with reference to FIG. 8.

At 1215, the method 1200 may include receiving (e.g., at the host device) signaling indicating that the memory device is in the second mode. The operations of 1215 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1215 may be performed by the signaling component 825 as described with reference to FIG. 8.

At 1220, the method 1200 may include performing an access operation associated with the memory device being in the second mode based on receiving the signaling indicating that the memory device is in the second mode. The operations of 1220 may be performed according to the techniques described herein. In some examples, aspects of the operations of 1220 may be performed by the access component 820 as described with reference to FIG. 8.

An apparatus for performing the method is described. The apparatus may include: means for receiving, at the host device, an indication of the temperature of the memory device, wherein the temperature is associated with a first mode of the memory device; means for transmitting, from the host device, signaling associated with a second mode of the memory device; means for receiving, at the host device, signaling indicating that the memory device is in the second mode; and means for performing an access operation associated with the memory device being in the second mode based on receiving the signaling indicating that the memory device is in the second mode.

Another apparatus for performing controlled and mode-dependent heating of a memory device is described.
The apparatus may include a controller or circuit configured to: receive, at the host device, an indication of the temperature of the memory device, wherein the temperature is associated with a first mode of the memory device; transmit, from the host device, signaling associated with a second mode of the memory device; receive, at the host device, signaling indicating that the memory device is in the second mode; and perform an access operation associated with the memory device being in the second mode based on receiving the signaling indicating that the memory device is in the second mode.

In some examples of the methods or apparatuses, the memory device may include cells with capacitive or ferroelectric storage elements.

In some examples of the methods or apparatuses, the signaling indicating that the memory device is in the second mode may include an indication that the memory device is available for access operations. In some examples of the methods or apparatuses, the signaling indicating that the memory device is in the second mode may include an indication of a second temperature of the memory device.

In some examples of the methods or apparatuses, the signaling associated with the second mode may include an indication to switch to the second mode. In some examples of the methods or apparatuses, the signaling associated with the second mode may include a command to access the memory device. In some examples of the methods or apparatuses, the first mode may include a refresh mode.

It should be noted that the methods, systems, and devices described herein describe possible implementations, that the operations and steps may be rearranged or otherwise modified, and that other implementations are possible. In addition, aspects from two or more of the methods or other techniques may be combined.

An apparatus is described. The apparatus may include: a memory device that includes cells having capacitive storage elements; a temperature sensor coupled with the memory device and configured to generate an indication of the temperature of the memory device; and a circuit coupled with the memory device and configured to heat the memory device based at least in part on the indication generated by the temperature sensor.

In some examples, the apparatus may further include a controller of the memory device configured to identify the temperature of the memory device based at least in part on the indication generated by the temperature sensor, and enable the circuit configured to heat the memory device based at least in part on a comparison of the temperature of the memory device with a threshold.

In some examples, the controller of the memory device may be further configured to compare the temperature of the memory device with a second threshold, and adjust the circuit configured to heat the memory device based at least in part on the comparison of the temperature of the memory device with the second threshold.
In some examples, the controller of the memory device may be further configured to identify the threshold based at least in part on accessing a non-volatile storage component of the memory device, the non-volatile storage component being configured to store an indication of the threshold.

In some examples, the threshold may be associated with a configuration of the memory device, and the controller of the memory device may be further configured to identify the configuration of the memory device based at least in part on accessing the non-volatile storage component, and to determine the threshold based at least in part on the configuration.

In some examples, the controller of the memory device may be further configured to identify a mode of operation of the memory device and determine the threshold based at least in part on the mode of operation.

In some examples, the controller of the memory device may be further configured to detect an initialization of the memory device, and to transmit, to the host device and based at least in part on the initialization and the comparison of the temperature of the memory device with the threshold, signaling indicating that access to the memory device is restricted.

In some examples, the signaling may indicate that the memory device is not available for read or write operations.

In some examples, the signaling may indicate the temperature of the memory device.

In some examples, the circuit configured to heat the memory device may include a voltage source, one or more resistive elements, and one or more switch components configured to selectively couple the voltage source with the one or more resistive elements.

In some examples, the circuit configured to heat the memory device may include one or more drivers coupled with a load, and a switching component configured to selectively apply signals to the one or more drivers coupled with the load.

Another apparatus is described. The apparatus may include: a memory device that includes cells having capacitive storage elements; a temperature sensor configured to indicate the temperature of the memory device; and a controller of the memory device. The controller of the memory device may be configured to cause the apparatus to: receive, from the host device and when the memory device is in the first mode, signaling associated with the second mode of the memory device; activate, based at least in part on receiving the signaling associated with the second mode, a circuit configured to heat the memory device; compare an indication of the temperature of the memory device generated by the temperature sensor with a threshold; and operate the memory device in the second mode based at least in part on the comparison of the indication with the threshold.

In some examples, the controller may be further configured to cause the apparatus to compare a second indication of the temperature of the memory device generated by the temperature sensor with a second threshold, wherein activating the circuit configured to heat the memory device is based at least in part on the comparison of the second indication with the second threshold.

In some examples, the controller may be further configured to cause the apparatus to perform a refresh operation of the memory device when operating the memory device in the first mode.
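For the switched resistive heating circuit described above (a voltage source selectively coupled across one or more resistive elements), the dissipated heat follows directly from Joule's law, P = V²/R, scaled by the switch duty cycle. The numbers in the sketch below are illustrative assumptions only, not values from the disclosure.

```python
# Back-of-envelope Joule heating for the switched resistive circuit.
# Voltage, resistance, and duty cycle are assumed example values.

V = 1.2     # volts across the resistive elements when switched in (assumed)
R = 5.0     # combined resistance of the heating elements, ohms (assumed)
duty = 0.5  # fraction of time the switch components are closed (assumed)

instantaneous_power_w = V ** 2 / R               # 0.288 W while switched in
average_power_w = duty * instantaneous_power_w   # 0.144 W averaged over time
print(f"{average_power_w:.3f} W")
```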
In some examples, the controller may be further configured to cause the apparatus to perform a read operation or a write operation when operating the memory device in the second mode.

In some examples, the controller may be further configured to cause the apparatus to transmit, to the host device and based at least in part on the comparison of the indication with the threshold, an indication that the memory device is operating in the second mode. In some examples, the controller may be further configured to cause the apparatus to compare a third indication of the temperature of the memory device generated by the temperature sensor with a third threshold, wherein operating the memory device in the first mode is based at least in part on the comparison of the third indication with the third threshold.

In some examples, the controller may be further configured to cause the apparatus to compare a fourth indication of the temperature of the memory device generated by the temperature sensor with a fourth threshold, and to adjust, based at least in part on the comparison of the fourth indication with the fourth threshold, the circuit configured to heat the memory device.

The description herein provides examples and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of the elements discussed without departing from the scope of the present disclosure. Some examples may omit, substitute, or add various procedures or components as appropriate. In addition, features described with respect to some examples may be combined in other examples.

Although certain features or techniques may be described herein with respect to, or in the context of, capacitive memory technology (e.g., DRAM technology), such descriptions are for illustrative purposes, and those skilled in the art will understand that the teachings herein may be applied to any type of memory device. For example, the teachings herein may be applied to volatile or non-volatile memory devices such as magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others.

The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signaling as a single signal; however, those of ordinary skill in the art should understand that a signal may represent a bus of signals, where the bus may have a variety of bit widths.

As used herein, the term "virtual ground" refers to a node of an electrical circuit that is held at a voltage of approximately zero volts (0V) without being directly coupled with ground. Accordingly, the voltage of a virtual ground may temporarily fluctuate and return to approximately 0V at steady state. A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider composed of operational amplifiers and resistors. Other implementations are also possible.
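The controller behavior described above amounts to comparing successive temperature indications against thresholds and gating the heater and the operating mode on the results. A minimal sketch follows, assuming hypothetical sensor, heater, device, and host interfaces; none of these names or threshold values come from the disclosure.

```python
# Illustrative memory-device controller step; thresholds, interfaces,
# and values are hypothetical assumptions for this sketch.

ENABLE_HEAT_BELOW_C = -10.0   # assumed first threshold (enable heating)
READY_ABOVE_C = 0.0           # assumed threshold for second-mode operation

def control_step(sensor, heater, device, host):
    indication = sensor.read()  # indication generated by the temperature sensor
    if indication < ENABLE_HEAT_BELOW_C:
        heater.enable()                 # activate the heating circuit
        device.operate_in("first")      # e.g., refresh mode only
    elif indication < READY_ABOVE_C:
        heater.adjust(level="low")      # taper heating near the target
    else:
        heater.disable()
        device.operate_in("second")     # read/write access permitted
        host.notify("second_mode")      # signaling to the host device
```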
"Virtual ground" or "virtual ground to ground" means to be connected to approximately 0V.The terms "electronic communication", "conductive contact", "connection" and "coupling" may refer to the relationship between components that support the flow of signals between components. If there are any conductive paths between the components that can support the flow of signals between the components at any time, the components are considered to be in electronic communication with each other (or in conductive contact with each other, or connected to each other, or coupled to each other). At any given time, based on the operation of the device containing the connected components, the conductive paths between components that are in electronic communication with each other (or in conductive contact with each other, or connected to each other, or coupled to each other) may be open or closed. The conductive paths between connected components may be direct conductive paths between components, or the conductive paths between connected components may be indirect conductive paths that may include intermediate components such as switches, transistors, or other components. In some cases, the signal flow between connected components may be interrupted for a period of time, for example, using one or more intermediate components such as switches or transistors.The phrase "coupled between" can refer to the order of components relative to each other and can refer to electrical coupling. In one example, the component “B” electrically coupled between the component “A” and the component “C” may refer to the component order “A-B-C” or “C-B-A” in an electrical sense. In other words, electrical signals (for example, voltage, charge, current) can be transmitted from component A to component C by means of component B.The description that component B is "coupled between component A and component C" should not necessarily be interpreted as excluding other intermediate components in the described order. For example, the component "D" can be coupled between the described component A and the component B (for example, as an example, referring to the component order "ADBC" or "CBDA"), while still supporting the electrical coupling of the component B to the component A and Between component C. In other words, the use of the phrase "coupled between" should not be understood as necessarily involving exclusive sequential order.In addition, the description that the component B is “coupled between the component A and the component C” does not exclude the second different coupling between the component A and the component C. For example, component A and component C may be coupled to each other in a separate coupling with a coupling level row via component B. In another example, component A and component C may be coupled via another component "E" (for example, component B is coupled between component A and component C and component E is coupled between component A and component C). In other words, the use of the phrase "coupled between" should not be understood as an exclusive coupling between components.The term "coupled" may refer to the condition of moving from an open-circuit relationship between components to a closed-circuit relationship between components in which a signal cannot currently be communicated between the components through a conductive path. , The signal can be communicated between the components through the conductive path. 
When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.

The term "isolated" may refer to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.

The term "layer" used herein refers to a stratum or sheet of a geometrical structure. Each layer may have three dimensions (e.g., height, width, and depth) and may cover at least a portion of a surface. For example, a layer may be a three-dimensional structure in which two dimensions are greater than the third, such as a thin film. Layers may include different elements, components, and/or materials. In some cases, one layer may be composed of two or more sublayers. In some of the appended figures, two dimensions of a three-dimensional layer are depicted for purposes of illustration. Those skilled in the art will, however, recognize that the layers are three-dimensional in nature.

As used herein, the term "electrode" may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell or other component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or other feature that provides a conductive path between elements or components of the memory array.

The term "lithography," as used herein, may refer to a process that uses photoresist materials for patterning and uses electromagnetic radiation to expose such materials. For example, a photoresist material may be formed on a base material by, for example, spin-coating the photoresist on the base material. A pattern may be created in the photoresist by exposing the photoresist to radiation. The pattern may be defined by, for example, a photomask that spatially delineates where the radiation exposes the photoresist. The exposed photoresist areas may then be removed, for example, by chemical treatment, leaving behind the desired pattern. In some cases, the exposed regions may remain, and the unexposed regions may be removed.

The devices discussed herein that contain a memory array may be formed on a semiconductor substrate, such as silicon, germanium, a silicon-germanium alloy, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or an epitaxial layer of semiconductor material on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping method.

The switching components or transistors discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, a drain, and a gate.
The terminals may be connected to other electronic elements through conductive materials, such as metals. The source and drain may be conductive and may comprise heavily-doped (e.g., degenerate) semiconductor regions. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or a negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be "on" or "activated" when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be "off" or "deactivated" when a voltage less than the transistor's threshold voltage is applied to the transistor gate.

The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that fall within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over" other examples. The detailed description includes specific details that provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label, irrespective of the second reference label.

The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
The processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, the functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list, such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).

As used herein, the term "substantially" means that the modified characteristic (e.g., a verb or adjective modified by the term "substantially") need not be absolute, but is close enough to achieve the advantages of the characteristic, or close enough that the characteristic referred to is true in the context of the relevant aspects of this disclosure.

As used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

Computer-readable media includes both non-transitory computer storage media and communication media, including any medium that facilitates the transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable a person skilled in the art to make or use the present disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
A standard cell for use in semiconductor device design and manufacture, and a method for manufacturing a standard cell, are disclosed. Aspects disclosed include a standard cell having a plurality of wide metal lines. The wide metal lines are formed of copper. The standard cell also includes a plurality of narrow metal lines. The narrow metal lines are formed of a material having a lower resistance than copper for line widths on the order of 12 nm or less. |
1. A standard cell, comprising:
a plurality of wide metal lines, wherein the wide metal lines are formed of copper; and
a plurality of narrow metal lines, wherein, for line widths on the order of 12 nm or less, the narrow metal lines are formed of a material having a lower resistance than copper.
2. The standard cell of claim 1, wherein the plurality of narrow metal lines are formed of at least one of rhodium (Rh), platinum (Pt), iridium (Ir), niobium (Nb), nickel (Ni), aluminum (Al), ruthenium (Ru), molybdenum (Mo), and osmium (Os).
3. The standard cell of claim 1, wherein a ratio of a width of a wide metal line of the plurality of wide metal lines to a width of a narrow metal line of the plurality of narrow metal lines is on the order of three to one.
4. The standard cell of claim 1, wherein the plurality of wide metal lines each have a width on the order of 35 nanometers.
5. The standard cell of claim 4, wherein the plurality of narrow metal lines each have a width on the order of 12 nanometers.
6. The standard cell of claim 1, wherein the plurality of narrow metal lines each have a width on the order of 12 nanometers.
7. The standard cell of claim 1, wherein the plurality of wide metal lines have a height that is at least 5 nanometers greater than a height of the plurality of narrow metal lines.
8. The standard cell of claim 1, further comprising:
a plurality of barriers, wherein each barrier is formed around each of the plurality of wide metal lines.
9. The standard cell of claim 8, wherein each barrier is formed of at least one of a combination of tantalum nitride (TaN), including at least one of TaN/Ta, TaN/Co (cobalt), or TaN/Ru (ruthenium).
10. The standard cell of claim 8, further comprising:
a dielectric disposed between each of the plurality of narrow metal lines and the plurality of wide metal lines.
11. The standard cell of claim 10, wherein the dielectric is formed of a low dielectric constant material.
12. The standard cell of claim 11, wherein the low dielectric constant material is a carbon-doped oxide dielectric.
13. The standard cell of claim 10, further comprising:
an adhesion layer at one end of each of the plurality of narrow metal lines, the adhesion layer being disposed between the plurality of wide metal lines.
14. The standard cell of claim 13, wherein the adhesion layer is titanium nitride (TiN).
15. The standard cell of claim 1, wherein the plurality of narrow metal lines are formed by a subtractive etch process.
16. The standard cell of claim 1, wherein the plurality of wide metal lines are formed by a damascene process.
17. The standard cell of claim 1, wherein the plurality of narrow metal lines are formed between at least two of the plurality of wide metal lines.
18. The standard cell of claim 17, wherein the at least two of the plurality of wide metal lines are configured to provide a supply voltage to a given circuit.
19. The standard cell of claim 18, wherein the plurality of narrow metal lines are configured to conduct signals of the given circuit.
20. The standard cell of claim 1, wherein the standard cell is part of a semiconductor device incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of Things (IoT) device, a laptop computer, a server, or a device in a motor vehicle.
21. A method for manufacturing a standard cell, comprising:
fabricating a plurality of wide metal lines, wherein the wide metal lines are formed of copper; and
fabricating a plurality of narrow metal lines, wherein, for line widths on the order of 12 nm or less, the narrow metal lines are formed of a material having a lower resistance than copper.
22. The method of claim 21, wherein the plurality of narrow metal lines are formed of at least one of rhodium (Rh), platinum (Pt), iridium (Ir), niobium (Nb), nickel (Ni), aluminum (Al), ruthenium (Ru), molybdenum (Mo), and osmium (Os).
23. The method of claim 21, wherein a ratio of a width of a wide metal line of the plurality of wide metal lines to a width of a narrow metal line of the plurality of narrow metal lines is on the order of three to one.
24. The method of claim 21, wherein the plurality of wide metal lines have a height that is at least 5 nanometers greater than a height of the plurality of narrow metal lines.
25. The method of claim 21, wherein the plurality of narrow metal lines are formed by subtractive etching and the plurality of wide metal lines are formed by a damascene process.
26. The method of claim 25, further comprising:
depositing an adhesion layer; and
depositing narrow metal line material on the adhesion layer,
wherein fabricating the plurality of narrow metal lines includes patterning and etching the adhesion layer and the narrow metal line material.
27. The method of claim 26, further comprising:
forming a dielectric that substantially encapsulates each of the plurality of narrow metal lines; and
patterning and etching portions of the dielectric to form cavities for the plurality of wide metal lines.
28. The method of claim 27, further comprising:
forming a barrier over the dielectric, including forming the barrier in the cavities for the plurality of wide metal lines.
29. The method of claim 28, further comprising:
depositing a copper layer over the barrier, including filling the cavities,
wherein fabricating the plurality of wide metal lines includes using chemical mechanical polishing/planarization to remove excess copper and barrier portions over the dielectric to form the plurality of wide metal lines from the filled cavities.
30. The method of claim 27, wherein the dielectric is formed of a low dielectric constant material. |
Hybrid Low Resistance Metal Lines

CLAIM OF PRIORITY

This patent application claims priority to Application No. 16/702,335, filed on December 3, 2019, entitled "HYBRID LOW RESISTANCE METAL LINES," which is assigned to the assignee of the present application and is hereby expressly incorporated by reference.

TECHNICAL FIELD

The present disclosure relates to standard cells in semiconductor designs and, in other aspects, to hybrid low resistance metal lines implemented in standard cells.

BACKGROUND

Integrated circuit technology has made great strides in increasing computing power through the miniaturization of active components. In semiconductor design, standard cells are used in about 70% of digital designs. Standard cells have allowed designers to produce complex multi-million gate system-on-chip (SOC) devices. A standard cell is a set of transistors and interconnect structures that can be used for various logic and memory functions. Standard cells include narrow metal lines and wide metal lines. Narrow lines are used for internal cell routing, while wide lines are used for power rails to supply system voltages, provide high current loads, etc.

It is desirable to have low resistance (R) in the power rails to reduce IR drop, and low R in the narrow lines to reduce circuit delay. Copper (Cu) dual damascene has been used in traditional designs. However, as the cell size shrinks (e.g., by 30% at each node), the Cu resistivity increases sharply due to surface scattering.

As semiconductor designs scale down to critical dimensions of 12 nanometers (nm) or less, it is desirable that wide metal lines (e.g., power rails) have low resistance and that narrow metal lines (e.g., signal lines) also have low resistance.

SUMMARY

The following summary identifies certain features and is not intended to be an exclusive or exhaustive description of the disclosed subject matter. Additional features and further details can be found in the detailed description and the appended claims. Inclusion of content in this summary does not indicate that it is essential. Other aspects will become apparent to those skilled in the art upon reading the following detailed description and reviewing the accompanying drawings that form a part hereof.

The disclosed aspects include a standard cell with a plurality of wide metal lines. The wide metal lines are formed of copper. The standard cell also includes a plurality of narrow metal lines. For line widths on the order of 12 nm or less, the narrow metal lines are formed of a material with a lower resistance than copper.

Other disclosed aspects include methods for making a standard cell. A plurality of wide metal lines are fabricated. The wide metal lines are formed of copper. A plurality of narrow metal lines are also fabricated. For line widths on the order of 12 nm or less, the narrow metal lines are formed of a material with a lower resistance than copper.

Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings are presented to help describe embodiments of the present disclosure and are provided solely to illustrate, and not to limit, the various aspects of the disclosure.

FIG. 1A is a diagram depicting aspects of a standard cell in accordance with aspects of the present disclosure.

FIG. 1B is a diagram depicting a cross-sectional portion of the standard cell of FIG. 1A in accordance with aspects of the present disclosure.
FIG. 2 is a graph depicting resistance versus line width in accordance with aspects of the present disclosure.

FIGS. 3 through 14 are each an illustration of a portion of a fabrication process in accordance with aspects of the present disclosure.

FIG. 15 is an illustration of a mobile device in accordance with aspects of the present disclosure.

FIG. 16 is a diagram depicting an exemplary communication system in accordance with aspects of the present disclosure.

FIG. 17 is a flowchart illustrating aspects of a method in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

Aspects of the present disclosure are illustrated in the following description and related drawings directed to specific embodiments. Alternative aspects or embodiments may be devised without departing from the scope of the teachings herein. Additionally, well-known elements of the illustrative embodiments herein may not be described in detail, or may be omitted, so as not to obscure the relevant details taught in the present disclosure.

In some of the described embodiments, instances are identified where various component structures and portions of operations can be taken from known, conventional techniques and then arranged in accordance with one or more exemplary embodiments. In such instances, internal details of the well-known, conventional component structures and/or portions of operations may be omitted to help avoid potential obfuscation of the concepts illustrated in the illustrative embodiments disclosed herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As discussed above, there is a need to reduce resistance in both narrow and wide metal lines in semiconductor standard cells.
In one embodiment, copper is used for the wide metal lines and other materials are used for the narrow metal lines.

FIG. 1A is a diagram depicting aspects of a standard cell 100 in accordance with aspects of the present disclosure. As shown, the standard cell 100 may include a plurality of wide metal lines 104 (e.g., power rails). The wide metal lines 104 may be formed of copper (Cu). Additionally, the standard cell 100 may include a plurality of narrow metal lines 102 (e.g., signal lines). As shown, the plurality of narrow metal lines 102 may be formed of ruthenium (Ru). However, as discussed below, it should be understood that other materials may be used. Additionally, as shown, the standard cell 100 may have two or more wide metal lines 104 disposed on opposite sides of the plurality of narrow metal lines 102. In some aspects, the ratio of the width (W) of the wide metal lines 104 to the narrow metal lines 102 is on the order of three to one.

It should be understood that the arrangement of the plurality of narrow metal lines 102 and the plurality of wide metal lines 104 in FIG. 1A is for illustration purposes only. Any number of arrangements of the narrow metal lines 102 and wide metal lines 104 can be used to form a standard cell, including variations in the number and location of the plurality of narrow metal lines 102 and/or the plurality of wide metal lines 104. In addition, it should be understood that these figures are provided only to assist in explaining and illustrating the various aspects disclosed, and are not intended to be limiting. Additionally, the cross-sectional portion 110 is indicated by reference lines as shown, and will be discussed in greater detail below.

FIG. 1B is a diagram depicting a cross-sectional portion 110 of the standard cell of FIG. 1A in accordance with aspects of the present disclosure. As shown in the cross-sectional view, the narrow metal lines 102 have a lower profile than the wide metal lines 104. For example, the wide metal lines 104 may have a height (H) that extends beyond the narrow metal lines 102 by 5 nm or more. Additionally, the narrow metal lines 102 have an adhesion layer 112 that may be formed of titanium nitride (TiN). The wide metal lines 104 may have a copper barrier 114 surrounding each wide metal line. The barrier 114 may be formed of a combination of tantalum nitride (TaN), such as TaN/Ta, TaN/Co (cobalt), TaN/Ru, and the like. The barrier 114 separates the wide metal lines 104 from the dielectric 120 to prevent copper from migrating into the dielectric 120. The narrow metal lines 102 have no barrier. The dielectric 120 may be formed of a low dielectric constant material (low-K material), such as a carbon-doped oxide dielectric including Si, C, O, and H (SICOH) films or carbon-doped oxide (CDO).

FIG. 2 is a diagram illustrating a graph 200 of resistance versus critical dimension for copper (Cu) and ruthenium (Ru). In this exemplary illustration, it can be seen from the Ru line 202 that Ru has a lower resistance than Cu, shown by the Cu line 204, below a certain critical dimension (CD) (e.g., a narrow line width). In this example, the CD is a line width of 12 nm. Conversely, as can also be seen from graph 200, Cu has a lower resistance than Ru above the narrow line widths (e.g., above the 12 nm range), and the difference becomes larger as the Ru line 202 and Cu line 204 approach another CD associated with the resistance of the wide metal lines (e.g., in the 35 nm range).
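The crossover behavior in graph 200 suggests a simple selection rule: at or below the critical dimension, use the alternative metal; above it, use copper. The sketch below illustrates that rule; the crossover value is taken from this example, but the function name and the usage are assumptions for illustration only, not part of the disclosure.

```python
# Toy illustration of the material-selection rule implied by graph 200.
# The rule and names are illustrative assumptions, not measured data.

CROSSOVER_CD_NM = 12.0  # critical dimension where the Ru and Cu curves cross

def pick_line_metal(line_width_nm: float) -> str:
    """Return the lower-resistance metal for a given line width."""
    # At and below ~12 nm, surface scattering inflates Cu resistance faster
    # than Ru's, so Ru (or Rh, Ir, Mo, ...) wins; above it, Cu wins.
    return "Ru" if line_width_nm <= CROSSOVER_CD_NM else "Cu"

print(pick_line_metal(12.0))  # -> "Ru"  (narrow signal line)
print(pick_line_metal(35.0))  # -> "Cu"  (wide power rail)
```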
It should be understood that the materials, arrangements, and critical dimensions described above for the narrow metal lines 102 and the wide metal lines 104 are for illustrative purposes only. For example, the narrow line material is described as Ru; however, it should be understood that other materials may be used. For example, the plurality of narrow metal lines 102 may be formed of at least one of the group consisting of rhodium (Rh), platinum (Pt), iridium (Ir), niobium (Nb), nickel (Ni), aluminum (Al), ruthenium (Ru), molybdenum (Mo), and osmium (Os). For convenience in providing fabrication examples and further explanation of the various aspects disclosed herein, Ru will be used for the narrow lines. It should be understood, however, that the illustrations and materials provided herein are provided only to aid in the explanation and description of the various aspects disclosed, and not to limit them.

To illustrate the various aspects disclosed herein, the following descriptions of an exemplary fabrication process are provided. They are intended to provide examples for illustrative purposes and are not intended to serve as a detailed description of every aspect of manufacturing and/or alternative manufacturing processes. Conventional and well-known processes may be omitted and/or not described, since they will be understood by those skilled in the art. Likewise, alternative fabrication techniques will be understood by those skilled in the art, and various aspects need not be described in detail.

FIG. 3 is an illustration of a portion of a fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 3, an adhesion layer 302 is deposited on a substrate 304. The adhesion layer 302 has a thickness in the range of 0.3 nm to 1 nm. In addition, the adhesion layer 302 may be formed of titanium nitride (TiN). For brevity and convenience of illustration, the substrate 304 will not be described in the following discussion.

FIG. 4 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 4, a narrow metal line material 402 for forming the narrow metal lines (e.g., 102) is deposited on the adhesion layer 302. The narrow line material 402 may be deposited using chemical vapor deposition (CVD), which is a vacuum deposition method used to create thin films. The narrow metal line material 402 may be formed of Ru; however, it should be understood that other materials as discussed above may be used.

FIG. 5 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 5, a photoresist (PR) (not shown) is deposited and patterned using a mask 504 and ultraviolet (UV) radiation 506. The photoresist (not shown) is then etched, resulting in a patterned photoresist (PR) 502. For simplicity, various process operations have been combined in this illustration, and it should be understood that other techniques and/or additional processes may be used to form the patterned PR 502.

FIG. 6 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure.
As shown in FIG. 6, the patterned PR 502 is used to protect portions of the adhesion layer 302 and narrow metal line material 402 during a plasma etch process. In this embodiment, a chlorine plasma etch process may be used to form the resulting structure of FIG. 6, in which the portions of the adhesion layer 302 and the narrow metal line material 402 not protected by the patterned PR 502 are removed. The patterned and etched portions of the narrow metal line material 402 may also be referred to herein as narrow metal lines 402, which are similar to the narrow metal lines 102 shown in FIGS. 1A and 1B.

FIG. 7 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 7, the photoresist is removed, leaving the patterned portions of the adhesion layer 302 and narrow metal line material 402, forming the narrow metal lines (e.g., 102 as discussed above). The photoresist can be removed by a process such as plasma oxygen ashing.

FIG. 8 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 8, a low dielectric constant material (low-K material) 802, such as SICOH, CDO, etc., is formed using flowable chemical vapor deposition (CVD).

FIG. 9 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 9, a photoresist (PR) (not shown) is deposited and patterned using a mask 904 and ultraviolet (UV) radiation 906. The photoresist (not shown) is then etched, resulting in a patterned photoresist (PR) 902. For simplicity, various process operations have been combined in this illustration, and it should be understood that other techniques and/or additional processes may be used to form the patterned PR 902. The patterned PR 902 is used to protect portions of the dielectric 802, as well as the previously patterned portions of the adhesion layer 302 and narrow metal line material 402.

FIG. 10 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 10, the patterned PR 902 is used to protect portions of the dielectric 802, as well as the previously patterned portions of the adhesion layer 302 and narrow metal line material 402, during a plasma etch process. In this example, a fluorine plasma etch process may be used to form the structure of FIG. 10, in which portions of the dielectric 802 are removed.

FIG. 11 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 11, the photoresist is removed, leaving the dielectric 802, the adhesion layer 302, and the patterned portions of the narrow metal line material 402 (e.g., narrow metal lines 102 as discussed above), as well as trenches 1102 for the wide metal lines. The photoresist can be removed by a process such as plasma oxygen ashing.

FIG. 12 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 12, a barrier 1202 is formed on the dielectric 802, the adhesion layer 302, and the patterned portions of the narrow metal line material 402, and in the trenches 1102 for the wide metal lines. The barrier may be formed of at least one of a combination of tantalum nitride (TaN), including at least one of TaN/Ta, TaN/Co (cobalt), or TaN/Ru.
Each layer of the combination can be deposited using physical vapor deposition (PVD), chemical vapor deposition (CVD), or atomic layer deposition (ALD), which include various vacuum deposition methods that can be used to create thin films and coatings. Common PVD methods include sputtering and evaporation. Thus, in one part of the process, TaN can be deposited, and then in another part of the process, Ta, Co, or Ru can be deposited, forming the barrier 1202 from at least one of the combinations of TaN.

FIG. 13 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 13, copper 1302 is deposited over the barrier 1202, and thus over the dielectric 802, the adhesion layer 302, and the patterned portions of the narrow metal line material 402, and fills the trenches used to form the wide metal lines. The copper may be deposited using an electrochemical deposition process, such as depositing a seed layer and then electroplating the copper fill, as part of a damascene process.

FIG. 14 is an illustration of a further portion of the fabrication process in accordance with aspects of the present disclosure. As shown in FIG. 14, as part of the damascene process, the excess copper is removed along with portions of the barrier, and the surface is planarized using chemical mechanical polishing/planarization (CMP). CMP is a process of removing material through a combination of chemical and mechanical (or abrasive) actions to obtain a highly smooth and planar surface. With the excess copper and barrier portions removed, wide metal lines 1402 are formed and are isolated from the dielectric 802 by the barriers 1202. Likewise, the excess copper and barrier portions are removed from above the patterned portions of the adhesion layer 302 and narrow metal line material 402. The narrow metal lines 402 are covered by the dielectric 802 and, because the narrow metal lines 402 have a smaller height than the wide metal lines 1402, they are not exposed by the CMP process.

The resulting standard cell of FIG. 14 is similar to the standard cell shown in FIGS. 1A and 1B. Accordingly, it should be understood that aspects of the present disclosure include standard cells having a plurality of wide metal lines (e.g., 104, 1402). The wide metal lines are formed of copper. Additionally, a standard cell has a plurality of narrow metal lines (e.g., 102, 402) formed of a material with a lower resistance than copper for line widths on the order of 12 nm or less. Additionally, it will be appreciated that, due to the difference in height between the wide metal lines (e.g., 104, 1402) and the narrow metal lines (e.g., 102, 402), the narrow metal lines are not exposed to the CMP process. It will be appreciated that it is very difficult to perform a CMP process on Ru; thus, the height difference provides an improvement in the processing of standard cells containing Cu wide metal lines and Ru narrow metal lines. It will also be understood from the above description that the narrow metal lines are formed by subtractive etching, while the wide metal lines are formed by a damascene process. Standard cells according to aspects disclosed herein may be used in the back end of line (BEOL) portion of an integrated circuit for interconnecting individual devices (e.g., transistors, capacitors, resistors, inductors, etc.) and to form contacts and bond sites for die-to-package or package-to-package connections.
Accordingly, any of the various circuits and/or components described below in connection with an exemplary mobile device or other apparatus may include devices utilizing the various aspects disclosed herein.

FIG. 15 illustrates an exemplary mobile device in accordance with some aspects of the present disclosure. Referring now to FIG. 15, a block diagram of a mobile device configured in accordance with exemplary aspects, and generally designated mobile device 1500, is depicted. In some aspects, the mobile device 1500 may be configured as a wireless communication device that may be designed and manufactured, in part, using the standard cells described in some aspects herein. As shown, the mobile device 1500 includes a processor 1501. The processor 1501 is shown to include an instruction pipeline 1512, a buffer processing unit (BPU) 1508, a branch instruction queue (BIQ) 1511, and a throttle 1510, as is known in the art. Other well-known details of these blocks (e.g., counters, entries, confidence fields, weighted sums, comparators, etc.) have been omitted from this view of the processor 1501 for clarity, but they may be designed and manufactured at least in part using the standard cells disclosed herein.

The processor 1501 may be communicatively coupled to a memory 1532 over a link, which may be a die-to-die or chip-to-chip link. The mobile device 1500 also includes a display 1528 and a display controller 1526, with the display controller 1526 coupled to the processor 1501 and to the display 1528.

In some aspects, FIG. 15 may include a coder/decoder (CODEC) 1534 (e.g., an audio and/or voice CODEC) coupled to the processor 1501; a speaker 1536 and a microphone 1538 coupled to the CODEC 1534; and wireless circuitry 1540 (which may include a modem, designed and manufactured at least in part using the standard cells disclosed herein) coupled to a wireless antenna 1542 and to the processor 1501.

In certain aspects, where one or more of the above-noted blocks are present, the processor 1501, the display controller 1526, the memory 1532, the CODEC 1534, and the wireless circuitry 1540 may be included in a system-in-package or system-on-chip device 1522. An input device 1530 (e.g., a physical or virtual keyboard), a power supply 1544 (e.g., a battery), the display 1528, the speaker 1536, the microphone 1538, and the wireless antenna 1542 may be external to the system-on-chip device 1522 and may be coupled to a component of the system-on-chip device 1522, such as an interface or a controller.

It should be noted that although FIG. 15 depicts a mobile device, the processor 1501 and the memory 1532, as well as various supporting circuits including aspects disclosed herein, may also be integrated into a set-top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communication device, a mobile phone, a wearable device, an Internet of Things (IoT) device, a server, a device in a motor vehicle, or other similar devices.

FIG. 16 illustrates various electronic devices that may be designed and manufactured, at least in part, using standard cells in accordance with various aspects disclosed herein. The various devices may include semiconductor devices, integrated circuits, dies, packages, or package-on-package (PoP) devices, and may be designed and fabricated, at least in part, using standard cells according to some examples of the present disclosure.
For example, a mobile phone 1602, a laptop computer device 1604, and a fixed location terminal device 1606 may include a semiconductor device 1600 formed at least in part using the standard cells described herein. The semiconductor device 1600 may be, for example, any of the integrated circuits, dies, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 1602, 1604, 1606 illustrated in FIG. 16 are merely exemplary. Other electronic devices may also feature the semiconductor device 1600, including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication system (PCS) units, portable data units such as personal digital assistants, Global Positioning System (GPS) enabled devices, navigation devices, set-top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communication devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in motor vehicles (e.g., autonomous vehicles), Internet of Things (IoT) devices, or any other device that uses digital logic or that stores or retrieves data or computer instructions, or any combination thereof.

In order to fully illustrate aspects of the design of the present disclosure, methods of fabrication are presented. Other methods of fabrication are possible, and the methods presented are provided only to aid in understanding the concepts disclosed herein.

In accordance with the various aspects disclosed herein, it should be understood that there are various methods for fabricating standard cells. FIG. 17 is a flowchart of a method for manufacturing a standard cell in accordance with at least one disclosed aspect. For example, block 1702 includes fabricating a plurality of wide metal lines, wherein the wide metal lines are formed of copper. Block 1704 includes fabricating a plurality of narrow metal lines, wherein the narrow metal lines are formed of a material having a lower resistance than copper for line widths on the order of 12 nm or less. Various processes for fabricating the wide and narrow metal lines, including narrow metal lines formed by subtractive etching and wide metal lines formed by a damascene process, are discussed in detail in the disclosure above. It should be understood from the above disclosure that additional processes for making the various aspects disclosed herein will be apparent to those skilled in the art, and literal descriptions of every process variation will therefore not be provided or illustrated herein.

The devices, processes, and functions disclosed above may be designed and configured as computer files (e.g., RTL, GDSII, GERBER, etc.) stored on a computer-readable medium. Some or all of these files may be provided to manufacturers who fabricate devices based on such files. The resulting products can include semiconductor wafers that are diced into semiconductor dies and packaged into semiconductor chips, which may then be employed in the devices described above.

The methods, sequences, and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

Accordingly, embodiments disclosed herein may include a non-transitory computer-readable medium embodying the methods disclosed herein for making standard cells. Accordingly, the invention is not limited to the illustrated embodiments, as this disclosure contemplates any means for performing the functionality described herein.

One or more of the components, processes, features, and/or functions illustrated in FIGS. 1-17 may be rearranged and/or combined into a single component, process, feature, or function, or embodied in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the present disclosure. It should also be understood that FIGS. 1-17 and their corresponding descriptions in the present disclosure are not limited to dies and/or ICs. In some embodiments, FIGS. 1-17 and their corresponding descriptions may be used to manufacture, create, provide, and/or produce integrated devices. In some embodiments, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package-on-package (PoP) device, and/or an interposer.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any details described herein as "exemplary" are not to be construed as advantageous over other examples. Likewise, the term "examples" does not mean that all examples include the described feature, advantage, or mode of operation. Furthermore, certain features and/or structures may be combined with one or more other features and/or structures. Moreover, at least a portion of the apparatus described herein may be configured to perform at least a portion of a method described herein.

Any reference herein to an element using a designation such as "first," "second," and so forth does not limit the quantity and/or order of those elements. Rather, these designations are used as a convenient method of distinguishing between two or more elements and/or instances of an element. Additionally, unless stated otherwise, a set of elements may comprise one or more elements.

Those skilled in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Nothing stated or described in this application is intended to dedicate to the public any component, act, feature, benefit, advantage, or equivalent, regardless of whether such component, act, feature, benefit, advantage, or equivalent is recited in the claims.

As can be seen in the detailed description above, different features are grouped together in the examples. This manner of disclosure should not be understood as an intention that the claimed examples require more features than are expressly recited in the respective claim. Rather, the present disclosure may include fewer than all features of an individual disclosed example.
Thus, the following claims should be regarded as incorporated into the description, with each claim standing on its own as a separate example. Although each claim can stand on its own as a separate example, it should be noted that, although a dependent claim may refer in the claims to a specific combination with one or more other claims, other examples may also encompass a combination of that dependent claim with the subject matter of any other dependent claim, or a combination of any feature with other dependent and independent claims. Such combinations are proposed herein unless it is expressly stated that a particular combination is not intended. Furthermore, features of a claim may also be included in any other independent claim, even if that claim is not directly dependent on the independent claim.

Furthermore, in some examples, an individual action may be subdivided into multiple sub-actions or may contain multiple sub-actions. Such sub-actions may be included in, and be part of, the disclosure of the individual action.

While the foregoing disclosure shows various embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the teachings of the present disclosure as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the embodiments of the disclosure described herein need not be performed in any particular order. Additionally, although elements of the present disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
An embodiment is directed to displaying information to a user of a communications device. The communications device receives a query including a social parameter, a temporal parameter, and a spatial parameter relative to the user that are indicative of a desired visual representation of a set of data objects. The communications device determines degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in social, temporal, and spatial dimensions, respectively. The communications device displays a first visual representation of at least a portion of the set of data objects to the user based on whether the determined degrees of relation in the social dimension, temporal dimension, and spatial dimension satisfy the respective parameters of the query. |
1. A method for displaying information to a user of a communication device, comprising: receiving a query containing a social parameter, a temporal parameter, and a spatial parameter relative to the user, the parameters being indicative of a desired visual representation of a set of data objects; determining degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in a social dimension, a temporal dimension, and a spatial dimension, respectively; and displaying to the user a first visual representation of at least a portion of the set of data objects based on whether the determined degrees of relation in the social dimension, temporal dimension, and spatial dimension satisfy the respective parameters of the query.
2. The method of claim 1, further comprising: modifying one or more of the parameters of the query; and transitioning from displaying the first visual representation to displaying a second visual representation configured to display another portion of the set of data objects based on whether the determined degrees of relation in the social, temporal, and/or spatial dimensions satisfy the one or more modified parameters of the query.
3. The method of claim 2, wherein modifying the one or more parameters of the query corresponds to a first expansion of a range of degrees that will satisfy one of the temporal, spatial, or social parameters of the query.
4. The method of claim 3, further comprising: expanding a range of degrees that will satisfy one or more of the other parameters of the query while maintaining the first expansion.
5. The method of claim 2, wherein modifying the one or more parameters of the query corresponds to a narrowing of a range of degrees that will satisfy one or more of the temporal, spatial, and/or social parameters of the query.
6. The method of claim 2, wherein the modifying step modifies fewer than all of the parameters of the query, such that at least one parameter of the query is modified and at least one parameter of the query remains unmodified.
7. The method of claim 2, wherein modifying the one or more parameters of the query corresponds to a shift of a range of degrees that will satisfy one of the temporal, spatial, or social parameters of the query.
8. The method of claim 1, wherein the set of data objects includes at least one of activities, multimedia files, or social network contacts of the user.
9. The method of claim 8, wherein the determined degree to which the user is related in the social dimension to a given data object that is an activity corresponds to a level of attention the user is expected to give to the activity.
10. The method of claim 8, wherein the determined degree to which the user is related in the social dimension to a given data object that is a social network contact corresponds to how important the social network contact is expected to be to the user.
11. The method of claim 1, wherein temporal and spatial information associated with a given data object is compared with the temporal and spatial parameters of the query, respectively, to determine the degrees to which the query is related to the given data object.
12. The method of claim 11, wherein the given data object is a social network contact of the user, and the temporal and spatial information is a location of the social network contact during one or more periods.
13. The method of claim 11, wherein the query includes temporal, spatial, and social parameters indicating a time, a location, and a social relationship of interest, and the displayed portion of the set of data objects is based on a correspondence between the determined degrees and the parameters of the query.
14. The method of claim 13, wherein the temporal, spatial, and social parameters of the query respectively include a specified date, a city, and friends of the user, and wherein the portion of the set of data objects includes data objects having a degree of being a friend in the social dimension and a degree of being near the city on the specified date in the spatial and temporal dimensions.
15. The method of claim 1, wherein at least one artifact is displayed in the first visual representation, wherein the at least one artifact conveys visually recognizable information and is related to at least one data object or to the user.
16. The method of claim 15, wherein the artifact has characteristics similar to those of other artifacts associated with a selected degree.
17. The method of claim 15, wherein the at least one data object corresponds to a social network contact of the user, and the at least one artifact corresponds to a picture related to the social network contact.
18. The method of claim 15, wherein the at least one data object corresponds to an activity, and the at least one artifact corresponds to a picture related to the activity.
19. The method of claim 1, further comprising: designating at least one of the set of data objects as a target data object for a message to be created; selecting at least one data object as an attachment to the message; adding the at least one selected data object to a staging area; attaching each added data object in the staging area to the message; and sending the message containing each attached data object to the at least one target data object.
20. The method of claim 1, further comprising: receiving a message from a sending data object, the received message containing at least one attached data object; displaying a notification of the received message based at least in part on attributes of the sending data object; extracting the at least one attached data object from the received message; and displaying each extracted data object based at least in part on attributes of the extracted data object.
21. The method of claim 1, wherein attributes of at least one of the set of data objects that are used in the determining step to determine the degrees to which the social, temporal, and spatial parameters of the query are related to the at least one data object can be modified by the user.
22. A communication device configured to display information to a user, comprising: means for receiving a query containing a social parameter, a temporal parameter, and a spatial parameter relative to the user, the parameters being indicative of a desired visual representation of a set of data objects; means for determining degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in a social dimension, a temporal dimension, and a spatial dimension, respectively; and means for displaying to the user a first visual representation of at least a portion of the set of data objects based on whether the determined degrees of relation in the social dimension, temporal dimension, and spatial dimension satisfy the respective parameters of the query.
23. The communication device of claim 22, further comprising: means for designating at least one of the set of data objects as a target data object for a message to be created; means for selecting at least one data object as an attachment to the message; means for adding the at least one selected data object to a staging area; means for attaching each added data object in the staging area to the message; and means for sending the message containing each attached data object to the at least one target data object.
24. The communication device of claim 22, further comprising: means for receiving a message from a sending data object, the received message containing at least one attached data object; means for displaying a notification of the received message based at least in part on attributes of the sending data object; means for extracting the at least one attached data object from the received message; and means for displaying each extracted data object based at least in part on attributes of the extracted data object.
25. A communication device configured to display information to a user, comprising: logic configured to receive a query containing a social parameter, a temporal parameter, and a spatial parameter relative to the user, the parameters being indicative of a desired visual representation of a set of data objects; logic configured to determine degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in a social dimension, a temporal dimension, and a spatial dimension, respectively; and logic configured to display to the user a first visual representation of at least a portion of the set of data objects based on whether the determined degrees of relation in the social dimension, temporal dimension, and spatial dimension satisfy the respective parameters of the query.
26. The communication device of claim 25, further comprising: logic configured to modify one or more of the parameters of the query; and logic configured to transition from displaying the first visual representation to displaying a second visual representation configured to display another portion of the set of data objects based on whether the determined degrees of relation in the social, temporal, and/or spatial dimensions satisfy the one or more modified parameters of the query.
27. The communication device of claim 26, wherein modifying the one or more parameters of the query corresponds to a first expansion of a range of degrees that will satisfy one of the temporal, spatial, or social parameters of the query.
28. The communication device of claim 27, further comprising: logic configured to expand a range of degrees that will satisfy one or more of the other parameters of the query while maintaining the first expansion.
29. The communication device of claim 26, wherein the logic configured to modify the one or more parameters of the query corresponds to a narrowing of a range of degrees that will satisfy one or more of the temporal, spatial, and/or social parameters of the query.
30. The communication device of claim 26, wherein the logic configured to modify modifies fewer than all of the parameters of the query, such that at least one parameter of the query is modified and at least one parameter of the query remains unmodified.
31. The communication device of claim 26, wherein the logic configured to modify the one or more parameters of the query corresponds to logic configured to shift a range of degrees that will satisfy one of the temporal, spatial, or social parameters of the query.
32. The communication device of claim 25, wherein the set of data objects includes at least one of activities, multimedia files, or social network contacts of the user.
33. The communication device of claim 32, wherein the determined degree to which the user is related in the social dimension to a given data object that is an activity corresponds to a level of attention the user is expected to give to the activity.
34. The communication device of claim 32, wherein the determined degree to which the user is related in the social dimension to a given data object that is a social network contact corresponds to how important the social network contact is expected to be to the user.
35. The communication device of claim 25, wherein temporal and spatial information associated with a given data object is compared with the temporal and spatial parameters of the query, respectively, to determine the degrees to which the query is related to the given data object.
36. The communication device of claim 35, wherein the given data object is a social network contact of the user, and the temporal and spatial information is a location of the social network contact during one or more periods.
37. The communication device of claim 35, wherein the query includes temporal, spatial, and social parameters indicating a time, a location, and a social relationship of interest, and the displayed portion of the set of data objects is based on a correspondence between the determined degrees and the parameters of the query.
38. The communication device of claim 37, wherein the temporal, spatial, and social parameters of the query respectively include a specified date, a city, and friends of the user, and wherein the portion of the set of data objects includes data objects having a degree of being a friend in the social dimension and a degree of being near the city on the specified date in the spatial and temporal dimensions.
39. The communication device of claim 25, wherein at least one artifact is displayed in the first visual representation, wherein the at least one artifact conveys visually recognizable information and is related to at least one data object or to the user.
40. The communication device of claim 39, wherein the artifact has characteristics similar to those of other artifacts associated with a selected degree.
41. The communication device of claim 39, wherein the at least one data object corresponds to a social network contact of the user, and the at least one artifact corresponds to a picture related to the social network contact.
42. The communication device of claim 39, wherein the at least one data object corresponds to an activity, and the at least one artifact corresponds to a picture related to the activity.
43. The communication device of claim 25, further comprising: logic configured to designate at least one of the set of data objects as a target data object for a message to be created; logic configured to select at least one data object as an attachment to the message; logic configured to add the at least one selected data object to a staging area; logic configured to attach each added data object in the staging area to the message; and logic configured to send the message containing each attached data object to the at least one target data object.
44. The communication device of claim 25, further comprising: logic configured to receive a message from a sending data object, the received message containing at least one attached data object; logic configured to display a notification of the received message based at least in part on attributes of the sending data object; logic configured to extract the at least one attached data object from the received message; and logic configured to display each extracted data object based at least in part on attributes of the extracted data object.
45. The communication device of claim 25, wherein attributes of at least one of the set of data objects that are used to determine the degrees to which the social, temporal, and spatial parameters of the query are related to the at least one data object can be modified by the user.
46. A computer-readable storage medium containing instructions that, when executed by a communication device configured to display information to a user, cause the communication device to perform operations, the instructions comprising: program code to receive a query containing a social parameter, a temporal parameter, and a spatial parameter relative to the user, the parameters being indicative of a desired visual representation of a set of data objects; program code to determine degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in a social dimension, a temporal dimension, and a spatial dimension, respectively; and program code to display to the user a first visual representation of at least a portion of the set of data objects based on whether the determined degrees of relation in the social, temporal, and spatial dimensions satisfy the respective parameters of the query. |
Integrated display and management of data objects based on social parameters, temporal parameters and spatial parameters

Claim of priority under 35 U.S.C. §119

This patent application claims priority to Provisional Application No. 61/094,376, entitled "INTEGRATED DISPLAY, USER-CENTERED ACTIVITY, AND SOCIAL PROXIMITY," filed on September 4, 2008, which is assigned to the assignee of the present invention and is hereby expressly incorporated by reference in its entirety.

Technical field

The embodiments are directed to providing integrated display and management of data objects based on social parameters, temporal parameters, and spatial parameters.

Background

In mobile telecommunication devices such as cellular phones, PDAs, mini-laptops, and advanced pagers, the device typically contains various types of information related to contacts, calendars, emails, and so on. Each type of information is usually organized and presented to the user in a manner customized for that specific type of information. For example, calendar information is usually presented in chronological order, while contact information is usually sorted and presented in alphabetical order. In some cases, two types of information can be integrated. For example, a calendar event may include contact information for the people invited to the event. The calendar event may further contain location information, if entered by the user.

Mobile devices often have limited space in which to provide user interfaces. In particular, mobile phone devices with numeric keypads may have limited screen space and key functions for providing detailed information and user interface options. Users often have to scroll through many screens and many menu choices to find the information they are looking for. For example, if a user wishes to find a contact who has been invited to a calendar event, the user must usually have a priori knowledge of the event name and type. The user must then open the event and search through the entire contact list. If the user wishes to find more information about the relationship or location of each contact, the user must expand each contact to find that information.

These traditional user interfaces have a limited ability to present information in the intuitive and representative way that users actually think about and perceive events, places, and people. People are often perceived in terms of relationship and location. Events, in addition to location, may be perceived more in terms of their social importance and the people invited to them. However, the traditional hierarchical, segmented, menu-driven information structures provided on mobile devices do not offer this kind of intuitive and user-friendly interface.

Summary of the invention

The embodiments are directed to displaying information to a user of a communication device. The communication device receives a query containing a social parameter, a temporal parameter, and a spatial parameter relative to the user, the parameters being indicative of a desired visual representation of a set of data objects (e.g., people, places, events, multimedia such as pictures, etc.). The communication device determines the degrees to which the social, temporal, and spatial parameters of the query are related to each of the set of data objects in social, temporal, and spatial dimensions, respectively.
The communication device displays to the user a first visual representation of at least a portion of the set of data objects based on whether the determined degrees of relation in the social, temporal, and spatial dimensions satisfy the respective parameters of the query.

Brief description of the drawings

A more complete appreciation of embodiments of the invention and many of their attendant advantages will be readily obtained as the same become better understood by reference to the following detailed description when considered in connection with the accompanying drawings, which are presented for illustration only and not limitation of the invention, and in which:

FIG. 1 is a representative diagram of a wireless network in which a designated PTT group of wireless telecommunication devices communicates with a group communication server and other computer devices via the wireless network.

FIG. 2 is a representative diagram of one embodiment of a wireless network in a shared cellular telecommunications configuration, in which a group communication server controls communication between the wireless telecommunication devices of PTT group members.

FIG. 3 is a block diagram illustrating the computer platform of a PTT-capable wireless telecommunications device.

FIG. 4 is a diagram of one embodiment of a software layer for a communication group application, the software layer having a PTT client and a group-oriented media client.

FIG. 5 illustrates an exemplary mobile communication device.

FIG. 6 is an exemplary three-dimensional representation based on axes of time, relationship, and space.

FIG. 7 is an exemplary diagram depicting an activity-centric design.

FIG. 8 is an exemplary space-time diagram.

FIGS. 9 through 14 each depict an exemplary user interface on a mobile communication device that incorporates some of the embodiments disclosed herein.

FIGS. 15A to 15C illustrate examples of visual representations for particular sets of data objects according to embodiments of the invention.

FIGS. 16, 17, and 18 each depict an exemplary process incorporating some of the embodiments disclosed herein.

FIGS. 19 and 20 each depict an exemplary user interface on a mobile communication device that incorporates some of the embodiments disclosed herein.

FIG. 21 illustrates a process that can be used to display to a user one or more data objects belonging to a set of data objects according to an embodiment of the present invention.

FIGS. 22A to 22J illustrate a data object exchange from the perspective of the sender of a data object according to an embodiment of the present invention.

FIG. 23 illustrates a data object exchange from the perspective of the recipient or target of a data object according to an embodiment of the present invention.

Detailed description

Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternative embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail, or will be omitted, so as not to obscure the relevant details of the invention.

The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" and/or "example" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage, or mode of operation.

In this description, the terms "mobile communication device", "communication device", "wireless device", "wireless communication device", "PTT communication device", "handheld device", "mobile device", and "handset" may be used interchangeably. The terms "call" and "communication" are also used interchangeably. The term "application" as used herein covers executable and non-executable software files, raw data, aggregated data, patches, and other code segments. The term "group communication" means point-to-point or point-to-multipoint communication sent between wireless communication devices across real or virtual half-duplex channels. The term "exemplary" means that the disclosed element or embodiment is only an example and does not indicate any preference of the user. Further, like numerals refer to like elements throughout the several views, and the articles "a" and "the" include plural references unless otherwise specified in the description.

Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that the various actions described herein can be performed by specific circuits (e.g., application-specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause an associated processor to perform the functionality described herein.
Thus, the various aspects of the invention may be embodied in a number of different forms, all of which are within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiment may be described herein as, for example, "logic configured to" perform the described action.

A high data rate (HDR) subscriber station, referred to herein as user equipment (UE), may be mobile or stationary, and may communicate with one or more access points (APs), which may be referred to as Node Bs. A UE transmits and receives data packets through one or more of the Node Bs to a radio network controller (RNC). The Node Bs and the RNC are parts of a network called a radio access network (RAN). A radio access network can transport voice and data packets between multiple access terminals.

The radio access network can be further connected to additional networks outside the radio access network. This core network includes specific carrier-related servers and devices, and provides connectivity to services such as a corporate intranet, the Internet, the public switched telephone network (PSTN), a serving general packet radio services (GPRS) support node (SGSN), and a gateway GPRS support node (GGSN), and can transport voice and data packets between each UE and such networks. A UE that has established an active traffic channel connection with one or more Node Bs may be called an active UE, and may be said to be in a traffic state. A UE that is in the process of establishing an active traffic channel (TCH) connection with one or more Node Bs may be said to be in a connection setup state. A UE may be any data device that communicates through a wireless channel or through a wired channel. A UE may further be any of a number of types of devices, including (but not limited to) a PC card, a compact flash device, an external or internal modem, or a wireless or wireline phone. The communication link through which the UE sends signals to a Node B is called an uplink channel (e.g., a reverse traffic channel, a control channel, an access channel, etc.). The communication link through which a Node B sends signals to a UE is called a downlink channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein, the term traffic channel (TCH) may refer to either an uplink/reverse or a downlink/forward traffic channel.

FIG. 1 illustrates a block diagram of one exemplary embodiment of a wireless system 100 in accordance with at least one embodiment of the invention. The system 100 may contain access terminals, such as a cellular telephone 102, in communication across an air interface 104 with an access network or radio access network (RAN) 120 that can connect the access terminal 102 to network equipment providing data connectivity between a packet-switched data network (e.g., an intranet, the Internet, and/or a carrier network 126) and the access terminals 102, 108, 110, 112. As shown here, the access terminal can be a cellular telephone 102, a personal digital assistant 108, a pager 110 (which is shown here as a two-way text pager), or even a separate computer platform 112 that has a wireless communication portal.
Embodiments of the invention can thus be realized on any form of access terminal that includes a wireless communication portal or has wireless communication capabilities, including (but not limited to) wireless modems, PCMCIA cards, personal computers, telephones, or any combination or sub-combination thereof. Further, as used herein, the terms "access terminal", "wireless device", "client device", "mobile terminal", and variations thereof may be used interchangeably.

Referring back to FIG. 1, the components of the wireless network 100 and the interrelation of the elements of the exemplary embodiments of the invention are not limited to the configuration illustrated. System 100 is merely exemplary and can include any system that allows remote access terminals (e.g., wireless client computing devices 102, 108, 110, 112) to communicate wirelessly between and among each other, and/or between and among components connected via the air interface 104 and the RAN 120, including (but not limited to) the carrier network 126, the Internet, and/or other remote servers.

The RAN 120 controls messages (typically sent as data packets) sent to a base station controller/packet control function (BSC/PCF) 122. The BSC/PCF 122 is responsible for signaling, establishing, and tearing down bearer channels (i.e., data channels) between a packet data serving node 100 ("PDSN") and the access terminals 102/108/110/112. If link-layer encryption is enabled, the BSC/PCF 122 also encrypts the content before forwarding it over the air interface 104. The function of the BSC/PCF 122 is well known in the art and will not be discussed further for the sake of brevity. The carrier network 126 may communicate with the BSC/PCF 122 by a network, the Internet, and/or a public switched telephone network (PSTN). Alternatively, the BSC/PCF 122 may connect directly to the Internet or an external network. Typically, the network or Internet connection between the carrier network 126 and the BSC/PCF 122 transfers data, and the PSTN transfers voice information. The BSC/PCF 122 can be connected to multiple base stations (BS) or modem pool transceivers (MPT) 124. In a manner similar to the carrier network, the BSC/PCF 122 is typically connected to the MPT/BS 124 by a network, the Internet, and/or the PSTN for data transfer and/or voice information. The MPT/BS 124 can broadcast data messages wirelessly to the access terminals, such as the cellular telephone 102. The MPT/BS 124, the BSC/PCF 122, and other components may form the RAN 120, as is known in the art. However, alternative configurations may also be used, and the invention is not limited to the configuration illustrated. For example, in another embodiment, the functionality of the BSC/PCF 122 and of one or more of the MPT/BS 124 may be collapsed into a single "hybrid" module having the functionality of both the BSC/PCF 122 and the MPT/BS 124.

FIG. 2 illustrates the carrier network 126 according to an embodiment of the present invention. In the embodiment of FIG. 2, the carrier network 126 includes a packet data serving node (PDSN) 160, a broadcast serving node (BSN) 165, and the Internet 175. An exchange server 172 and a social network server 174 are also shown in FIG. 2. However, in alternative embodiments, the exchange server 172, the social network server 174, and/or other components may be located outside the carrier network. The PDSN 160 provides access to the Internet 175, intranets, and/or remote servers (e.g., servers 172, 174) for mobile stations (e.g., access terminals, such as 102, 108, 110, and 112 from FIG. 1) utilizing, for example, a cdma2000 radio access network (RAN) (e.g., RAN 120 of FIG. 1). Acting as an access gateway, the PDSN 160 may provide simple IP and mobile IP access, foreign agent support, and packet transport. The PDSN 160 can act as a client for authentication, authorization, and accounting (AAA) servers and other supporting infrastructure, and provides mobile stations with a gateway to the IP network, as is known in the art. As shown in FIG. 2, the PDSN 160 may communicate with the RAN 120 (e.g., the BSC/PCF 122) via a conventional A10 connection. The A10 connection is well known in the art and will not be described further for the sake of brevity. The broadcast serving node (BSN) 165 may be configured to support multicast and broadcast services. The BSN 165 communicates with the RAN 120 (e.g., the BSC/PCF 122) via a broadcast (BC) A10 connection, which is used to transfer multicast and/or broadcast messaging.

Referring to FIG. 2, the exchange server 172 corresponds to one or more distributed servers that support messaging and collaboration software accessible through the Internet 175. For example, Microsoft Exchange Server is a widely used type of exchange server 172. As will be appreciated, the exchange server 172 may store contact information (e.g., email contacts, addresses, etc.) and associated messaging (e.g., emails, etc.) and/or scheduling information (e.g., meetings, appointments, etc.). Twitter is another example of a service that the exchange server 172 can support.

The social network server 174 corresponds to one or more distributed servers that support social networking services for subscribers (e.g., Facebook, MySpace, Orkut, etc.). The social network server 174 stores information related to subscriber profiles as well as inter-subscriber information (e.g., a subscriber's friend list, family list, business contact list, etc.). The social network server 174 may also evaluate and/or generate a social graph of users, for example, mapping indirectly related subscribers that do not have a direct social connection (e.g., friends of friends, etc.), and so on. The social network server 174 may also store multimedia related to its subscribers, such as uploaded multimedia files (e.g., images, audio files, video files, text files, etc.) and their associated information (e.g., when a multimedia file was uploaded or generated; a location associated with a multimedia file, such as where a photo was taken or a sound was recorded; subscribers or non-subscribers associated with a multimedia file, such as a person speaking in an audio file; etc.).

As will be appreciated, the functionality of the exchange server 172 and the social network server 174 may overlap, such that the functionality of each server may be combined into a single server, or these different servers may remain independent but query each other for information when needed.

FIG. 3 is a block diagram illustrating one embodiment of a wireless telecommunication device in the form of a mobile telephone 14 having a PTT button 78 that opens direct communication to the devices of a target group (i.e., the other members of the communication group 12). The wireless device 14 is also shown as having a graphics display 80 for the user of the wireless device 14.
The wireless device 14 includes a computer platform 82 that can handle voice and data packets, and can receive and execute software applications transmitted over the wireless network 20, including media directed to the group. Among other components, the computer platform 82 includes an application-specific integrated circuit ("ASIC") 84, or another processor, microprocessor, logic circuit, programmable gate array, or other data processing device. The ASIC 84 is installed at the time the wireless device is manufactured and is not normally upgradeable. The ASIC 84 or other processor executes an application programming interface ("API") layer 86 that includes the resident application environment, and may include the operating system loaded on the ASIC 84. The resident application environment interfaces with any resident programs in the memory 88 of the wireless device. An example of a resident application environment is the "Binary Runtime Environment for Wireless" (BREW) software developed by QUALCOMM for wireless device platforms.

As shown here, the wireless device can be a mobile telephone 14 with a graphics display 80, but can also be any wireless device known in the art that has a computer platform 82, such as a personal digital assistant (PDA), a pager with a graphics display 80, or even a separate computer platform 82 that has a wireless communication portal and may otherwise have a wired connection to a network or the Internet. Further, the memory 88 can include read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM, flash memory cards, or any memory common to computer platforms. The computer platform 82 can also include a local database 90 for storing software applications not currently active in the memory 88. The local database 90 typically includes one or more flash memory cells, but can be any secondary or tertiary storage device known in the art, such as magnetic media, EPROM, EEPROM, optical media, tape, or floppy or hard disk. The graphics display 80 can present not only information about the ongoing group call, but also information about media directed to the group, including a file preview, as described more fully herein.

In this embodiment of the wireless device, the computer platform 82 also includes a direct communication interface 92 that can open a communication channel from the wireless device. The direct communication interface 92 can also be part of the standard communication interface of the wireless device, which ordinarily carries the voice and data transmitted to and from the wireless device. The direct communication interface 92 typically comprises hardware known in the art.

FIG. 4 is a diagram of one embodiment of the software layers configured to execute on the wireless device 14. In this embodiment, the computer platform 82 of the mobile device environment consists of a series of software "layers" developed on top of the mobile station modem (MSM) 100 and the advanced mobile subscriber software (AMSS) 102, developed by Qualcomm, which drives the underlying MSM chipset and implements the software protocol stack for the entire suite of CDMA communication technologies, including CDMA2000 1X and CDMA2000 1xEV-DO. There is a mobile operating system layer 104, which in this embodiment is BREW, also developed by Qualcomm.
The mobile operating system layer 104 application programming interfaces are used for chip-specific or device-specific operations, while providing an isolation layer that eliminates direct contact with the AMSS 100 and any OEM software on the computer platform. The mobile operating system layer 104 enables application development that uses mobile device features without having to rewrite the application each time a new release of the device-specific software is issued.

The social network client 108 is an application that provides access to social networking services (e.g., connections to Facebook, MySpace, etc.) through an external interface, shown here as the social-network-aware UI 106. The social network client 108 contains all the functions required to enable mobile operating system 104 applications, such as the social network media client 110. In addition to providing access to social networking services, the social network client 108 can also act as an isolation layer between all social-network-aware applications and the interface to the social network server 174 (in one example). In this embodiment, the social network client 108 maintains access to social networking services, responds to social network communication requests, processes all requests for social networking services from social-network-aware mobile operating system applications, and processes all outgoing social network requests.

The social network media client 110 is a mobile-operating-system-based application that extends social networking services to access media associated with social networks (e.g., group conversations between social network contacts, the exchange of image data, video data, and so forth between social network contacts and/or the social network server 174, etc.).

Referring to FIG. 5, an exemplary mobile communication device 500 is illustrated, and in particular its user interface. The mobile communication device 500 includes a display 505 (for example, an LCD or OLED display). In some embodiments, the display 505 may include touch-screen capabilities. The mobile communication device 500 may include a keypad 515 (e.g., a standard telephone keypad, a QWERTY keypad, a physical tactile-response keypad, a soft keyboard via a touch screen, etc.). The mobile communication device 500 may also include navigation buttons 510, which may further include up, down, left, and right keys for navigating the display 505. In an example, the navigation buttons 510 may correspond to a directional pad that can obtain additional desired directions from the user (e.g., upper-left, lower-right, etc.). The navigation buttons 510 may further include a select or OK key 550 by which the user indicates a selection or confirms a particular function. The device may also include custom function keys 507 that are programmable and used to select a function as indicated in the area of the display 505 near the custom function key 507.

In various embodiments, systems, methods, and communication devices are disclosed for providing integrated data management, visualization, and user interface capabilities on communication devices (e.g., mobile communication devices). The user interface may provide a visual representation of data objects (e.g., activities, events, social network contacts, multimedia files, etc.) along a temporal dimension (e.g., a timeline), a spatial dimension (e.g., based on distance from the user or from a specified location), or a social dimension based on each data object's social proximity to the user (e.g., based on the user's expected attention to the activity being evaluated and/or to social network contacts), for example as a social map, a location map, an activity timeline, and so forth.

In one embodiment, the temporal dimension may be configured to convey past or historical information (e.g., past activities or past locations of social network contacts), current information (e.g., based on real-time information or information expected to be indicative of the current time), or future information (e.g., based on planned event schedules, social network contacts' schedules, etc.). The data objects can further be presented in user-selectable chronological time bands. For example, a band may represent past events, current events, or planned (future) events. The time bands may include successive future time bands. In various embodiments, this may be referred to variously as an "integrated display" or "integrated zoom."

In other embodiments, within a user-selectable time band, the data objects (e.g., events, activities, multimedia files, and/or contacts) that have been completed or planned by the user may be represented in several groups along the social proximity dimension (i.e., the "social dimension"), depicted by artifacts or icons of different sizes and shapes (e.g., whereby each artifact is configured to have visual salience indicative of its data object, so that similar data objects may be represented by similar artifacts to assist the user in visually recognizing events and/or contacts). By providing the ability to scan data objects across the different spatial, social, and/or temporal dimensions (e.g., events and activities that have occurred, or locations that social network contacts have visited within the current time band), users can extract and maintain context based on the attributes of the data objects. In instances where the data object corresponds to an activity, these attributes may include: (a) the people with whom activities have been completed or planned; (b) the content or characteristics of completed or planned activities; (c) the time of past, current, or planned activities; and (d) the location of past, current, or planned activities.

The context information obtained from the attributes of an activity can also be used to plan future activities. Users can repeat particular activities immediately or at a later time. Activities can cover various actions performed by users on mobile devices, such as push-to-talk one-to-one and conference voice calls, as well as push-to-share objects such as pictures, videos, notes, chats, emoticons, planned schedule events, and other information.

In yet another embodiment, a visual representation of a set of data objects (e.g., activities, events, social network contacts, etc.) can be implemented on axes within a three-dimensional coordinate system, where the axes correspond to a spatial dimension (e.g., distance), a temporal dimension (e.g., time), and a social dimension (e.g., the user's expected attention to, or "social proximity" with, events or social network contacts). By navigating along the spatial and social proximity axes or dimensions, the user can further be provided with the ability to select contacts within the frame of reference and to start activities with the selected contacts.
Thus, a particular data object represented on the chart may correspond to an event or activity with its associated spatial and temporal information, or may correspond to a social network contact of the user with his or her associated spatial and temporal information.

Alternatively, the visual representation itself need not display each of the dimensions, but the manner in which the data objects are displayed is still based on each of the three dimensions. For example, if the visual representation corresponds to a location map of a geographic area, the actual visual representation shows the spatial dimension. However, the data objects displayed at their corresponding locations on the location map must also first satisfy the temporal and social parameters of the query in order to be displayed on the location map at all. Therefore, although geographic maps are usually used to indicate location, in this case the artifacts representing the data objects that appear on the map also indicate that the corresponding data objects satisfy the other parameters of the query, not just the location parameter.

FIG. 6 depicts an exemplary coordinate system 600 for implementing some of the embodiments disclosed herein. In the figure, three orthogonal axes are considered for presenting information and zooming relative to the user 605, who is regarded as the origin of the orthogonal axes. It will be appreciated that the origin of the orthogonal axes can represent any point that can be identified in terms of the social, spatial, and temporal dimensions (e.g., the profile of a given subscriber or user at the user's current location at the current time), such that the relative positions of data objects with respect to the origin can be determined. The horizontal axis 610 (or x-axis) may represent space-based scaling along the spatial dimension. In an embodiment, the physical distance of a location associated with a data object from the origin (e.g., the user's location) may be represented along this axis. The vertical axis 620 (or y-axis) may be used to represent time-based scaling along the temporal dimension. In an embodiment, both past and current amounts of time can be represented along this axis. The third axis 630 (or z-axis) may be used to represent relationship-based scaling along the social dimension. In various embodiments, this axis may be used to represent social distance or proximity to the origin (e.g., to the user's social profile). For example, a data object corresponding to a close friend or family member may be represented closer to the origin than a distant relative, which may be represented farther from the origin. In addition to the "type" of relationship (e.g., giving family contacts a higher priority than friends, etc.), social proximity may also take indirect relationships into account (e.g., friends of friends are closer than friends of friends of friends, etc.).

In yet another example, the origin of the visual representation in which the data objects of FIG. 6 are represented may correspond to the origin of a query (e.g., for a given user) that specifies the display (e.g., the subscriber's social profile, a time of interest, and a location of interest) and the desired range along each dimensional axis.
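The query evaluation described here can be made concrete with a short sketch. The following Python is a minimal, illustrative implementation of a query carrying social, spatial, and temporal parameters being matched against a set of data objects; the class names, fields, the haversine distance as the spatial degree, and the use of friendship hop count as the social degree are assumptions made for illustration and are not part of the disclosure.

    from dataclasses import dataclass
    from datetime import datetime
    import math

    @dataclass
    class DataObject:
        """A person, place, event, or multimedia file with spatial,
        temporal, and social attributes (field names are hypothetical)."""
        name: str
        lat: float
        lon: float
        when: datetime           # associated or expected time
        social_distance: int     # hops from the user: 1 = direct friend,
                                 # 2 = friend of friend, ...

    @dataclass
    class Query:
        """Social, temporal, and spatial parameters relative to the user."""
        lat: float               # spatial parameter: location of interest
        lon: float
        radius_miles: float      # spatial range that satisfies the query
        start: datetime          # temporal parameter: window of interest
        end: datetime
        max_social_distance: int # social parameter: 1 = direct friends only

    def miles_between(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance, used as the spatial degree."""
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def matches(obj: DataObject, q: Query) -> bool:
        """A data object is displayed only if its determined degree of
        relation satisfies the query in all three dimensions."""
        in_space = miles_between(obj.lat, obj.lon, q.lat, q.lon) <= q.radius_miles
        in_time = q.start <= obj.when <= q.end
        in_social = obj.social_distance <= q.max_social_distance
        return in_space and in_time and in_social

    def visual_representation(objects, q: Query):
        """Return the portion of the set of data objects to display."""
        return [o for o in objects if matches(o, q)]

Under this sketch, the three-mile example discussed next would correspond to a Query with radius_miles=3.0 and max_social_distance=1.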
Therefore, in an example, if a given user wishes to view his social network contacts within three (3) miles of his current location, the query is defined by a social parameter (e.g., which may default to any direct friend of the user if the user has not narrowed or expanded which contacts are requested for viewing), a spatial parameter (e.g., the user's current location combined with the three-mile radius), and a temporal parameter (e.g., the current time, which may include a default time range such that any contact location known within a threshold amount of time, for example within the last 10 minutes, is considered "current"). In this example, the visual representation would then contain a display of each direct contact of the user currently within the specified three-mile radius. The visual representation of a contact may be via an associated artifact (e.g., a photo of the contact, a photo of the contact's location, a video indicative of the social relationship between the user and the contact, etc.).

Although this example has been given in terms of the user's social network contacts, it will be appreciated that activities can also be specified in a similar manner, such that the user queries for a display of his/her preferred activities as indicated in his/her social profile, or queries for specific activities (e.g., golf, bowling), where the data objects correspond to social network contacts who are also interested in the activity and/or to locations where the activity can be performed (e.g., a golf course, a bowling alley, etc.).

For example, if the visual representation corresponds to a location map, the visual representation may display a given range of locations with a zoomed field of view based on the spatial parameter, and each data object that satisfies the spatial, social, and temporal parameters may be displayed at its corresponding position within the given range of locations. The user can thus view the location map, understand that each artifact is associated with a matching result of the query, and infer its corresponding location. Alternatively, if the visual representation corresponds to a social graph, the visual representation may display the matching results in a manner whereby the social relationship between the user and each matching data object can be inferred from the display. Alternatively, if the visual representation corresponds to a timeline, the visual representation may display the matching results in a manner whereby the timing of the data objects (e.g., the occurrence of an event, when a contact will be at a position corresponding to the spatial parameter, etc.) can be inferred from visual inspection of the display.

Other representations can be implemented in various embodiments using two of the three types of information (physical distance, time, and social proximity). For example, space-based scaling can be provided on two axes, where the horizontal axis represents longitude or east-west distance, and the vertical axis represents latitude or north-south distance. The present invention contemplates various embodiments in which a more integrated user interface may be provided to enable users to access, view, and manage information on mobile devices in a more intuitive manner.

According to the principles of activity theory, activity is the primary context from a human-centered perspective. Activity theory argues that as people participate in and interact with their environment, various tools and processes are produced.
These tools and processes can be external manifestations of mental and social processes, and thus can be more easily accessed by, and communicated to, others. Such tools and processes can be particularly useful for social interaction. In the framework derived from activity theory, tasks and activities can be further divided into actions, and actions further divided into operations. In the context of system design, using these categories can provide an understanding of the steps necessary for a user to perform a task. The present invention contemplates implementations of activity theory principles to provide an intuitive user interface for managing and accessing information on mobile devices.

User activities can be considered within the framework depicted in FIG. 7. As will be appreciated, the description of FIG. 7 provides an example of how the "social proximity," or degree of social relation, between a user (or origin) and a given data object (e.g., in this case an activity) can be determined. When considering human activity, three basic triads can be considered: 710 depicts the subject-rules-group triad; 720 depicts the group-roles-object triad; and 730 depicts the subject-tools-object triad. In the subject-rules-group triad, people follow implicit and explicit rules within a group. Social networks are "scale-free" networks that contain several hubs of highly connected people and many companions with radiating connections. Social networks keep adding new people and therefore keep growing. Newcomers are more likely to join groups that already have many "connections."

In the group-roles-object triad, people observe hierarchies and take on roles in a division of labor in order to perform activities. In the subject-tools-object triad, people require mediation through appropriate tools to perform activities. The selected tool should provide sufficient mediation for the object (goal) at hand. Using snapshots of events, analogous to a Minkowski space-time diagram, provides a mechanism for forming and maintaining spatio-temporal context by treating activities as "events." FIG. 8 depicts an exemplary space-time diagram 810. Future and past events can be represented by an absolute-future light cone 820 and an absolute-past light cone 830, where the intersection of the two cones indicates the current time 840. An event can then be represented as an event snapshot 850 that is a slice of the past or future light cone. Also, although FIG. 8 is illustrated and described with respect to "events," the same teachings can be applied to other types of data objects, such as the user's social network contacts, where the "events" correspond to the inferred or expected locations of a contact at particular times, for example based on the contact's schedule or meeting timetable.

In view of the above discussion, various views, filters, and containers are disclosed that enable users to perform activities in context. In one embodiment, three main views can be provided. In an embodiment of an activity diagram (e.g., which is an example of a visual representation of data objects corresponding to activities that match the parameters of a user's query), the activity diagram view may present snapshots of activities as events on a timeline, similar to events in space-time. Completed events may be indicated as historical events, and events that occurred not long ago may be represented as "near" in past time.
Events older in time can be expressed as "far" in the past. Planned events may be indicated as future events, with events in the near future expressed as "near" in future time and long-term events expressed as "far" in the future. Therefore, if the temporal position of the origin, or the temporal parameter of the query, corresponds to the present, then the position of a data object along the time dimension corresponds to how far back in time the data object (e.g., an event or activity in this case) occurred, or how far ahead it is expected.

In an embodiment, navigation over the various data objects described above (e.g., events, contacts, multimedia files, etc.) can be performed on the mobile communication device using the available keys. On devices with numeric keypads, custom function keys can be used. A custom function key is a button that performs an assignable function depending on text or other instructions displayed near the button at the moment the button is pressed. In one embodiment, these custom function keys can be used to zoom the display in and out (e.g., where "zoom" means modifying the spatial, temporal, and/or social dimensions of the visual representation of the data objects). In another embodiment, a five-button navigation key with left, right, up, and down arrows and an OK button may be used. In some embodiments, the OK button may double as a push-to-share button.

In one embodiment, one custom function key may be assigned the "zoom out" function, and another custom function key may be assigned the "zoom in" function. Zooming out can traverse the time direction toward the past, or toward an increasing event history. Zooming in can traverse the time direction toward the future, or toward an increasing number of planned events. In an embodiment of a three-dimensional framework for representing activities, the time direction may be assigned to the Y axis, as shown in FIG. 6. It will be appreciated, however, that users are permitted to zoom in any dimension, not just in time. For example, the user may zoom out in the spatial dimension, which may correspond to increasing the range of the spatial parameters so that a greater distance satisfies the user's query. In another example, the user may zoom in on the social dimension, which may correspond to tightening the social parameters so that a closer social relationship is needed to satisfy the user's query (e.g., "show only family members" is a narrower requirement than "show friends and family members"). As will be understood, "zooming" may correspond to expanding or reducing the field of view, or to "shifting" the field of view so that the same amount of data is seen from a different vantage (for example, shifting from a view of data objects occurring this week to a view of data objects occurring the following week, so that both views show a week's worth of data objects, but for different periods).
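The zoom semantics just described, in which zooming out widens and zooming in narrows the range that satisfies a dimension's parameter, might be sketched as follows; the parameter units, the defaults, and the fixed zoom factor are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class QueryParams:
    # Hypothetical ranges; units are miles, hours, and "hops" of social distance.
    radius_miles: float = 3.0
    time_window_hours: float = 24.0
    max_social_hops: int = 1   # 1 = direct friends, 2 = friends of friends, ...

ZOOM_FACTOR = 2.0  # illustrative step size per key press

def zoom(params: QueryParams, dimension: str, direction: str) -> QueryParams:
    """Zooming out widens the range that satisfies the query; zooming in
    narrows it (e.g., 'friends and family' -> 'family only')."""
    widen = (direction == "out")
    if dimension == "spatial":
        params.radius_miles *= ZOOM_FACTOR if widen else 1 / ZOOM_FACTOR
    elif dimension == "temporal":
        params.time_window_hours *= ZOOM_FACTOR if widen else 1 / ZOOM_FACTOR
    elif dimension == "social":
        # Widening admits contacts one more hop away; narrowing tightens.
        params.max_social_hops = max(1, params.max_social_hops + (1 if widen else -1))
    return params
```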
When the user navigates over the data objects in the visual representation, the user can select a given data object by highlighting it (e.g., right-clicking it). By then clicking the OK button, the interface can provide details of the data object, such as an event. For example, the display may indicate the location, date/time, and invitees of the event. Alternatively, if the data object is a social network contact, highlighting or selecting the data object may cause the contact's profile or other information to be displayed. The display may also indicate actions that the user may select for the selected data object. Where the data object is an event, such actions may include, for example, updating the event details, inviting other contacts, joining the event, or canceling the event. Where the data object is a contact, such actions may include messaging the contact, adding a comment to the contact's social profile wall, adding the contact as a friend, etc.

Referring to FIG. 9, an exemplary user interface 900 that can be implemented on a mobile communication device such as a cell phone is illustrated. The display 915 may be an LCD or OLED display that provides a visual representation of data objects to the user. The display 915 may contain a title area 910 indicating the type of user interface currently presented. In the example of FIG. 9, multiple data objects 950 may be presented radially from a center point on the display. As illustrated, the data objects 950 may appear as avatars or icons. Other embodiments may use thumbnails or other graphical indicators when desired. Some data objects 950 may be represented by user IDs or other means of identification suitable for avatars or icons in a limited display area. The particular way of depicting or representing a given data object 950 within the display 915 may be referred to as "aliasing." In an example, similar avatars may be associated with similar data objects, or with data objects that share a given proximity to the origin of the query in one or more of the spatial, social, and/or temporal dimensions.

For example, if two particular data objects correspond to events that occurred at the same time and/or location, then the two data objects may share common visual characteristics within their avatars to convey this similarity. In another example, if two particular data objects correspond to events related to the same type of activity (e.g., two different bowling tournaments), then the two data objects may share common visual features within their avatars to convey this similarity (for example, images of bowlers, bowling pins, etc.). In another example, if two particular data objects correspond to family contacts, then the two data objects may share common visual characteristics within their avatars to convey this similarity (e.g., a specific background color shared by all family members). In another example, an icon or avatar 920 can be identified with a three-letter abbreviation or the initials of the contact. It will be appreciated that many different types of avatars can be used to visually represent different data objects, and the examples given above are non-limiting.

The custom function keys 930 and 935 may be assigned to functions as indicated on the display. Referring to the figure, the custom function key 935 may be assigned to the "+" indication 925, and the custom function key 930 may be assigned to the "-" indication 922 on the display. The user interface 900 may also include a navigation button 940 for navigation in the up, down, left, and right directions. The interface may also include a selection button 945, typically embodied as an OK button.

As explained above, the user can navigate the display by selecting the navigation button 940. In one example, the initial display of the data objects 950 in the display 915 can be regarded as the user's initial query, where the displayed data objects 950 satisfy the user's initial settings of the social, temporal, and spatial parameters.
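The radial presentation described above, in which avatars are placed around a center point according to proximity, might be computed as in the following sketch; the display geometry and ring radii are hypothetical:

```python
from math import cos, sin, tau

def radial_layout(objects, center=(240, 160), ring_radii=(40, 80, 120)):
    """Place each (name, proximity) pair on a ring around the center point.

    proximity: 0 = closest (inner ring), 1 = middle, 2 = farthest (outer).
    Returns {name: (x, y)} screen coordinates; the display size is assumed.
    """
    rings = {}
    for name, proximity in objects:
        rings.setdefault(proximity, []).append(name)
    positions = {}
    for proximity, names in rings.items():
        r = ring_radii[proximity]
        for i, name in enumerate(names):
            angle = tau * i / len(names)  # spread avatars evenly on the ring
            positions[name] = (center[0] + r * cos(angle),
                               center[1] + r * sin(angle))
    return positions
```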
However, the user is not limited to the data objects 950 initially displayed in the display 915. In fact, for example, selection of the left and right navigation buttons can scroll through different categories of data objects, such as activities, locations, events, etc. Navigation with the up and down buttons can move the active display area to higher and lower levels of the hierarchy, thereby modifying one or more of the social, temporal, and/or spatial parameters of the query. For example, the currently active area may be the active icon area 917. By pressing the up arrow button, the active area can be shifted to the main subject area 910, which indicates "My Activity Map" in the figure. The user can then select the left or right navigation button to change the subject area to, for example, "My Items," which means that the data objects displayed in the display 915 change from activities (e.g., events, etc.) to items (e.g., multimedia files, etc.). After the user navigates to the desired type of data object, the data objects that meet the current social, temporal, and spatial parameters are displayed, and so on. As will be described in more detail below, the user can then modify the query to determine the parameters (e.g., the spatial, temporal, and/or social parameters) of the data objects to be displayed.

When the active area is the icon area 917, the user can zoom in or out by pressing the "+" or "-" zoom custom function keys, which further modifies the social, temporal, and/or spatial parameters of the query and thereby affects which data objects are shown to the user. Referring to FIG. 10, when the user presses the "-" zoom-out button, one or more of the social, temporal, and/or spatial parameters are modified so that the field of view expands to include a new circle of data objects 950, with the previously active circle represented as a smaller circle 1010, indicating that the circle 1010 is receding from view. In particular, in this example, zooming out means that the origin of the query has changed, and a new collection of data objects within a given range of the new origin is displayed along the social, spatial, and/or temporal dimensions. As shown in FIG. 10, data objects that more closely match the updated query may be displayed more prominently in the display 915 (e.g., circle 950 is larger and more prominent than circle 1010, etc.). Likewise, if the zoom "+" button is selected, the field of view can be narrowed to return to an active circle as previously indicated in FIG. 9. Therefore, the zoom buttons allow the user to modify the origin of the query, so that the user can focus on different times, locations, social relationships, etc. in the display 915 (for example, "show my friends who are now watching a baseball game" can be changed to "show me bowling tournaments near my house tonight," etc.). The user can also modify the range of each dimension that will satisfy the query for each circle or level, so that the user can expand or reduce the data objects displayed relative to the same origin (for example, "show me next week's activities" can be changed to "show me next month's activities," etc.).

The user can further use the up, right, left, and down navigation keys to select a field of view, and further select items within the field of view. For example, referring to FIG. 11, the user can use the up and down navigation buttons to move the selected field of view from the title area 910 to the icon area 950. The user can then use the right and left navigation keys to select a specific icon as the currently active icon or avatar.
In the example shown in FIG. 11, circle 1110 indicates the currently active icon or avatar. Other methods of indicating the active icon or avatar can be used, such as highlighting, shadow effects, and so on.

In an embodiment, the visual representation of the data objects (e.g., an activity map, etc.) may be created and modified via a website (e.g., maintained by the social network server 174, etc.), and may be uploaded via the Internet 175. For example, a network-based service may provide access to a user account associated with the mobile communication device. The service can authenticate users and provide them with various account management functions. Users may further be able to use the network service to create and modify social network information. Once the user has created or modified the social network information on the website, an over-the-air download can provide the information to the handset, thus updating the handset with the new information.

In an embodiment, new data objects may be generated by providing a new data object wizard to facilitate intuitive and efficient creation of new data objects (e.g., new contacts, activities, and/or events). The wizard may provide a mechanism to select, for example, people/contacts, places/locations, data or media items/events, and time values. The time value need not be a specific time or date, but can be selected relative to the current time. For example, the user may select a person in the contact list, select a location, and select a time value that may indicate a choice of "now" versus "later."

In another embodiment, the visual representation may correspond to a social graph representing contacts and groups along the social proximity dimension. Contacts can include individuals, organizations, and other entities. Groups may include sets of contacts defined by the user or by the device. Social proximity can be generally defined as an indication of the extent of the relationship between two people along the social dimension. In one embodiment, icons or avatars may be displayed on the mobile communication device indicating "close" and "distant" people/contacts (e.g., close contacts may be displayed more prominently than distant contacts, etc.). This representation can provide users with a way to quickly and intuitively assess social relationships. The social proximity may be indicated as "close," "distant," "friends," "friends of friends," etc., for example. Alternatively, the social proximity can be determined by the number and/or type of social interactions (e.g., based on the number of instant messages or emails exchanged, the number of photos in which the user and the contact appear together, etc.).

As indicated above, in one embodiment, navigation over various data objects (e.g., contacts) may be performed on the mobile communication device using the available keys. The examples given above with respect to FIGS. 9 to 11 have generally described the data objects and associated avatars as activities or events, whereby the visual representation of the data objects becomes an activity map. If the data objects are limited to social network contacts, the resulting visual representation may be referred to as a social graph, as will now be described with respect to FIG. 12.
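The interaction-based measure of social proximity mentioned above might be computed as in the following sketch; the interaction types, weights, and thresholds are illustrative assumptions rather than values prescribed by this disclosure:

```python
# Hypothetical weights; the disclosure only says proximity "can be determined
# by the number and/or type of social interactions."
WEIGHTS = {"instant_message": 1.0, "email": 0.5, "shared_photo": 2.0}

def proximity_score(interactions):
    """interactions: e.g., {"instant_message": 120, "email": 40, "shared_photo": 6}."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in interactions.items())

def proximity_label(score, close_threshold=100.0, closer_threshold=30.0):
    # Thresholds are illustrative placeholders.
    if score >= close_threshold:
        return "close"
    if score >= closer_threshold:
        return "closer"
    return "distant"
```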
On devices with numeric keypads, custom function keys can be used. In one embodiment, navigation over the data objects can be accomplished by zooming in and out within the visual representation (e.g., the social graph) using the custom function keys on the mobile communication device. In another embodiment, a five-button navigation key with left, right, up, and down arrows and an OK button may be used. In some embodiments, the OK button may double as a push-to-share button.

As discussed above, one custom function key can be assigned the "zoom out" function, and one custom function key can be assigned the "zoom in" function. Zooming in and out can result in updating the corresponding number of contacts or people in the frame of reference. For example, zooming out may increase the number of contacts or persons within the frame of reference based on social proximity, as indicated by the number and placement of icons or avatars on the display 915. In one example, in terms of social proximity, "zooming in" may be a transition from displaying family and friends to displaying only friends or only family. In another example, "zooming out" may be a transition from displaying only friends or only family to displaying family and friends. Alternatively, a "shift" of the displayed social proximity may also occur, whereby the display changes from displaying only friends to displaying only family members.

Referring to FIG. 12, an exemplary user interface 1200 that can be implemented on a mobile communication device is illustrated, which provides a social graph including personal contacts. The display 915 may be an LCD or OLED display that provides a visual representation of data to the user. The display 915 may include a title area 910 indicating that the type of user interface currently presented is "My Contacts." Therefore, in FIG. 12, the data objects under consideration correspond to the user's social network contacts. Various contact icons or avatars 920 may be presented on the display radially from the center point. Other embodiments may use thumbnails or other graphical indicators when desired. Some contacts 920 may be represented by a user ID or other means of identification within a limited display area. For example, an icon or avatar 920 can be identified by a three-letter abbreviation or the initials of the contact.

The custom function keys 930 and 935 may be assigned to functions as indicated on the display. The user interface 900 may also include a navigation button 940 for navigation in the up, down, left, and right directions. The interface may also include a selection button 945, typically embodied as an OK button.

As explained above, the user can navigate the display by selecting the navigation button 940. For example, selection of the left and right navigation buttons can scroll through different categories of contacts, such as those limited to a specific alphabetic range or a group that has been defined by the user. Navigation with the up and down buttons moves the active display area to higher and lower levels of the hierarchy. For example, the currently active area may be the active icon area 917. By pressing the up arrow button, the active area can be shifted to the main subject area 910, which indicates "My Contacts" in the figure. The user can then select the left or right navigation button to change the subject area to, for example, "My Things."
Additional active areas may be provided to indicate whether the current display reflects a specific alphabetic range, a user-defined group, or another category.

When the active area is the icon or avatar area, the user can zoom in or out by pressing the "+" or "-" zoom custom function keys, which adjusts the displayed contacts. In this case, the user can (i) change the origin of the query to change the display of data objects, or (ii) modify the range from the origin along the spatial, temporal, and/or social dimensions to adjust which data objects qualify for display. As indicated in FIG. 12, an outer circle of icons or avatars may be provided, indicating the distant social proximity of the contacts represented in that circle. The inner contact circle 1210 may indicate closer social proximity by being shown nearer the center point of the display. Alternatively, larger avatars in the outer circle may indicate a more intimate social relationship than the inner circle. When the user presses the "-" zoom-out button, the field of view can be expanded to include a new circle of contacts, with the previously active circle represented as a smaller circle. Likewise, if the zoom "+" button is selected, the field of view can be narrowed so that the inner circle expands to become the outer circle, and a new inner circle containing a still closer circle of contacts is provided.

In another embodiment, the social graph may be created and modified via a website (e.g., maintained by the social network server 174), and may be uploaded via the Internet 175. For example, a network-based service may provide access to a user account associated with the mobile communication device. The service can authenticate users and provide them with various account management functions. Users may further be able to use the network service to create and modify data objects, such as contacts and groups. Contacts and groups can further be created as aliases and placed along the social graph provided by the website. Once the user has created or modified the social graph information on the website, an over-the-air download can provide the information to the handset, thus updating the handset with the new information.

When the user navigates the social graph on the mobile device by selecting a zoom level, the user can select a contact or group by highlighting it. In one embodiment, by then clicking the OK button, a person within the social frame of reference being viewed can be selected. The interface can further provide details of the selected contact or group. For example, the display may indicate name and contact information. The display can also indicate actions that the user can select for the selected item, such as, for example, updating contact details, inviting contacts, or deleting events. In one embodiment, a social graph wizard may be provided to facilitate the intuitive and efficient creation of new contacts and groups and to initiate actions on them. As the wizard progresses, various activities such as making phone calls, sharing information, and starting games can be initiated or arranged.
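The inner/middle/outer circle arrangement and zoom shifting described above might be bucketed as in the following sketch, where social distance is approximated by hypothetical "hops" and the visible-ring policy is an assumption:

```python
def assign_rings(contacts, zoom_level=1):
    """Bucket contacts into the inner/middle/outer display circles.

    contacts: list of (name, hops), where hops is the social distance
    (1 = closest). zoom_level shifts which distances are visible: zooming
    in (a lower level) reserves the rings for closer contacts, while
    zooming out (a higher level) brings more distant contacts into view.
    """
    rings = {"inner": [], "middle": [], "outer": []}
    for name, hops in contacts:
        offset = hops - zoom_level  # distance relative to the current view
        if offset == 0:
            rings["inner"].append(name)
        elif offset == 1:
            rings["middle"].append(name)
        elif offset == 2:
            rings["outer"].append(name)
        # Contacts outside the three visible rings are not drawn.
    return rings
```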
Those skilled in the art will understand that the categories and types of social proximity indicators discussed above are exemplary, and that many implementations of social proximity indicators can be used to reflect particular social or cultural situations. For example, the above examples include social proximity indicators such as "friends" and "relatives." Another example is a social proximity indicator based on the duration of the relationship, such as "just met" or "old friend." In various embodiments, the concept of social proximity may include any number of related indicators that may be useful for a particular situation.

In one embodiment, the social proximity indicator "trust" may be used to illustrate a social proximity dimension that may be useful for several activities and transactions related to the user's mobile community. Trust can generally be viewed as an indicator of the trustworthiness of a particular contact with respect to transactions that would normally require authentication and security in other settings. For example, in an online system, a trusted client may be an individual or entity with good credit and a means of payment for online purchases. In a social network situation, a trusted contact may be someone who can be considered a trusted friend or family member in the online group, and a contact who may also be trusted by others in the online group.

In at least one embodiment, the social graph view may present a social graph representing contacts and groups along the trust dimension. In other words, the social dimension may indicate to what extent contacts are trusted, rather than how "intimate" the contacts are to the user (although these two criteria may of course be related). Therefore, the "distance" along the social dimension need not be based only on emotional intimacy due to, for example, family relationships, but may instead be based on a measure of trust. For example, a user may have a sibling who is emotionally close but whom the user does not trust (for example, if the user's sister has earned a bad reputation, the user may love his/her sister but not trust her). If the social dimension is configured to indicate the user's trust levels, then from the perspective of the trust relationship, even close relatives may not be "close." As will be understood, the social parameters of the query may be used to rank and display data objects according to emotional social intimacy, or according to a different type of social closeness, such as trust.

Contacts may include individuals, organizations, and other entities, each of which may be associated with a trust level assigned by the user. In one embodiment, icons or avatars representing contacts may be displayed radially on the mobile communication device, with icons closer to the center indicating a higher degree of trust. This representation can provide an indication from which the user can quickly and intuitively determine the trust level of a contact. The trust level may, for example, be indicated as "untrusted," "social trust only," "trusted for financial transactions," etc. In one embodiment, the social graph may provide only two levels: trusted and untrusted. In other embodiments, more levels and types of trust may be included.

In one embodiment, a trust level entered by one user can be pushed to other users in the mobile community. For example, when a user enters a new contact, the new contact, along with the contact's trust level, can be pushed to other members of the user's social network group. Alternatively, when the user modifies the trust level of a contact, the change can be pushed to other members of the user's mobile community.
Therefore, once a user has established a trust level for a contact, the same trust level can be extended across the entire group or community by virtue of each user's membership in the group or community. Those skilled in the art will recognize that this process provides a means of verifying contacts for various transactions and activities in the context of a mobile community. By tagging a contact with a trust level, the initial user typically vouches for the contact based on personal knowledge of and experience with the contact. Since other members of the user's mobile community will usually trust the original user by virtue of that user's membership in the mobile community, additional verification will usually not be needed in order to push the new contact to other members as a trusted contact. As will be understood, the "push" of a social network contact's trust level to the user's other social network contacts corresponds to the "sharing" of data object attributes between users, who are themselves data objects. A more detailed explanation of how data objects and/or data object attributes can be shared between users is provided below with respect to FIGS. 22A through 22J.

The mobile device may provide various options for transactions and activities that may be allowed or enabled depending on the trust level. In one embodiment, a mobile bidding mechanism can be implemented, in which bids and acceptances for financial transactions can be exchanged between trusted contacts within the social network. Because a threshold level of verification can be assumed for the trusted contact network, this bidding mechanism, implemented in the mobile user community and its associated trusted contacts, can provide a secure closed network for conducting transactions without the overhead of continuous user identification/authorization and security protocols. Those skilled in the art will recognize that this mechanism may provide a more efficient and/or secure infrastructure than online systems in which inherent security may be difficult to implement.

By using this social network of trusted contacts, various e-commerce methods can be implemented. As described above, this framework can be used to implement electronic bidding, including time-based or price-based bidding. In other embodiments, electronic objects may be exchanged that represent financial or other indicators of value, such as electronic coupons and vouchers. As discussed above, because the trusted contacts within the mobile community have been pre-validated by the trust status granted by at least one user, such electronic value objects can be exchanged with those contacts without continuous identification and verification.
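The trust-level push and trust-gated transactions described above might be modeled as in the following sketch; the three-level scale, the transport callback, and the per-action policy table are all hypothetical:

```python
from enum import IntEnum

class Trust(IntEnum):
    UNTRUSTED = 0
    TRUSTED = 1      # e.g., "social trust only"
    HIGH_TRUST = 2   # e.g., "trusted for financial transactions"

def push_contact(new_contact, trust, group_members, send):
    """Push a newly tagged contact and its trust level to the group.

    `send` is a hypothetical transport callback (SMS, data channel, etc.);
    members inherit the level without re-verifying the contact themselves.
    """
    for member in group_members:
        send(member, {"contact": new_contact, "trust": int(trust)})

def transaction_allowed(action, trust):
    """Gate actions on the contact's trust level (illustrative policy)."""
    required = {"view_financial_activity": Trust.TRUSTED,
                "receive_bid": Trust.HIGH_TRUST,
                "financial_transaction": Trust.HIGH_TRUST}
    return trust >= required.get(action, Trust.HIGH_TRUST)
```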
In one embodiment, the mobile communication device may provide additional options and settings to allow the user to configure the specific transactions and activities that are allowed depending on the trust level. General settings can be configured to enable transactions depending on the trust level of the contact. Specific settings can also be provided to allow, for example, particular transactions to occur only for specifically tagged contacts. For example, the user may configure the mobile device to allow all transactions associated with any contact marked as "trusted." Or, in an embodiment where three levels of trust have been defined as "high trust," "trusted," and "untrusted," the user may configure the mobile device to allow financial transactions only with contacts marked as "high trust." The user may additionally configure the mobile device to allow contacts marked as "trusted" to receive data about financial activities, but not to receive bids for financial transactions. Those skilled in the art will recognize that the infrastructure disclosed herein can be used to implement many trust levels, and actions associated with those trust levels, in a given mobile community situation.

Referring now to FIG. 18, an exemplary process for displaying a user interface on a mobile communication device according to some of the methods disclosed herein is illustrated. The device may be a mobile communication device belonging to a user of a social network, through which the user may connect with other members of the social network via a wireless communication system. At 1800, the device may receive and store input from the user, the input including contact information for a plurality of contacts and an associated trust level for each of the contacts. The input may be provided by another system for receiving input, where the system is associated with the mobile user's account (e.g., exchange server 172, social network server 174, etc.). The input can also be provided directly by the user on the mobile device. Alternatively, the input may be retrieved from the device's memory if previously received.

At 1810, the device may transmit the contact information to at least one member of its social network group. For example, when a contact and an associated trust level are entered into the mobile device, the device can automatically transmit the contact information to other members of the communication group. Alternatively, the device may prompt the user as to whether to transmit the contact information to other users. The trust level may generally be a trusted level or an untrusted level; in some embodiments, additional trust levels may be used.

The user may, for example, desire to launch a financially sensitive data object, such as an object representing a sales offer. The user can navigate to the social proximity screen on the mobile device and further navigate to a display of one or more trusted contacts available on the device, whereby the social proximity screen corresponds to a visual representation of data objects, the data objects in this case being the user's social network contacts. At 1820, the device may determine, when presenting the display, that the trust level associated with each contact is a trusted level. In other words, assuming that the social parameters of the user query underlying the visual representation permit trusted contacts to be displayed, the avatar associated with each trusted contact may be displayed (e.g., so long as the trusted contact also satisfies the spatial and/or temporal parameters of the query). The user may further select the option of transmitting a sales offer to each of the trusted contacts. At 1830, the device may transmit a message representing the economic value of the product or service for sale to each of the trusted contacts. Trusted contacts who have received the sales offer can then review it.
A trusted contact may recognize that the sales offer was received from a trusted contact, and may thereby store the originating sender's contact information on his or her own device as a trusted contact. Finally, at 1840, the initiating sender may receive a message indicating acceptance of the sales offer. Thus, FIG. 18 demonstrates a process in which the visual representation of data objects (e.g., contacts) can be used in connection with e-commerce transactions.

In another embodiment, the location map view may present a location map representing a user-centric spatial map, where contacts and groups are located outward from the user. Contacts and groups can be represented by icons or avatars on the location map, positioned by their approximate spatial or geographic location. In other words, the avatar associated with each data object (e.g., contact, activity, etc.) is displayed on the location map at a position corresponding to its associated location, where its distance from the center (e.g., the user's location) corresponds to its degree of spatial relevance to the user's location, the user's location in this instance being the spatial origin of the query. In one example, the viewable area of the location map may correspond to the boundaries of the spatial parameters along the spatial dimension, so that data objects outside the viewable range are not displayed on the location map.

In one embodiment, navigation over contacts and groups can be accomplished by zooming in and out on the location map using the custom function keys on the mobile communication device. For example, zooming in on the location map narrows the range of viewable locations, which limits the number of data objects displayed, while zooming out on the location map expands the range of viewable locations, which increases the number of data objects displayed. In yet another example, the user can shift in either direction along a position axis, whereby the viewing range does not change but the displayed portion of the axis is modified (e.g., the location map shows a two-mile radius from a different spatial origin, while the actual radius of the location map is not modified). In another embodiment, a five-button navigation key with left, right, up, and down arrows and an OK button may be used. In some embodiments, the OK button may double as a push-to-share button.

One custom function key can be assigned the "zoom out" function, and another can be assigned the "zoom in" function. Zooming in can update the display to represent contacts that are spatially closer to the user. Zooming out increases the spatial frame of reference and uses the increased spatial display area to display contacts. In another example, the custom function keys can be used to shift in any direction along an axis without zooming in or out.
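The viewable-range boundary described above, under which only data objects whose locations fall within the location map's range are displayed, might be applied as in the following sketch; a standard haversine distance is assumed and all names are hypothetical:

```python
from math import asin, cos, radians, sin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

def visible_on_location_map(objects, origin, radius_miles):
    """Keep only data objects inside the map's viewable range.

    objects: list of (name, lat, lon); origin: (lat, lon) of the query.
    Shifting the map corresponds to changing `origin`; zooming corresponds
    to changing `radius_miles`.
    """
    lat0, lon0 = origin
    return [(name, lat, lon) for name, lat, lon in objects
            if miles_between(lat, lon, lat0, lon0) <= radius_miles]
```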
Referring to FIG. 13, an exemplary user interface 1300 that can be implemented on a mobile communication device is illustrated, which provides a location map with personal points of interest and contacts. The display may include a title area 910 indicating that the type of user interface currently presented is "My Locations." Various location icons or avatars 950 can be presented on the display in an approximate spatial or geographic arrangement. Other embodiments may use thumbnails or other graphical indicators when desired. Some contact points 920 may be presented as avatars labeled with a user ID or other means of identifying the data object within a limited display area. For example, a three-letter abbreviation or the initials of the contact can be used to identify the avatar 920.

As explained above, the user can navigate the display by selecting the navigation button 940. For example, selection of the left and right navigation buttons can scroll through different categories of locations, such as those limited to a specific geographic range. Navigation with the up and down buttons moves the active display area to higher and lower levels of the hierarchy. For example, the currently active area may be the active icon area 917. By pressing the up arrow button, the active area can be shifted to the main subject area 910, which indicates "My Locations" in the figure. The user can then select the left or right navigation button to change the subject area to, for example, "My Things." Additional active areas may be provided to indicate whether the current display reflects a specific geographic range, locations related only to contacts, entertainment locations, or other categories.

When the active area is the icon or avatar area, the user can zoom in or out by pressing the "+" or "-" zoom custom function keys, at which point a closer or farther zoom level of the geographic features on the map can be provided in the display area 915. When the user presses the "-" zoom-out button, the field of view can be expanded to include a larger map area. Similarly, if the zoom "+" button is selected, the field of view can be magnified to expand the map view. The contact points desired by the user can be represented on the display according to their relative positions on the corresponding map view. As described above, the boundary of the location map may limit which data objects are displayed on it, so that only data objects associated with locations falling within the viewing range of the location map are displayed. Therefore, in the example of FIG. 13, the displayable portion of the spatial dimension is limited by the spatial origin on which the location map is centered and by the zoom level, or range, of the locations being displayed.

In another embodiment, the location map can be created and modified via a website, and can be uploaded via the Internet. For example, a network-based service may provide access to a user account associated with the mobile communication device. The service can authenticate users and provide them with various account management functions. Users may further be able to use the network service to create and modify location information. Addresses, points of interest, and other location information can further be added to contacts and activities and placed along the location map provided by the website. Once the user has created or modified the location map on the website, an over-the-air download can provide the information to the handset, thus updating the handset with the new information.

When the user navigates the location map on the mobile device by selecting a zoom level, the user can select a contact or group by highlighting it. In one embodiment, by then clicking the OK button, a person within the viewed spatial frame of reference can be selected. The interface can further provide details of the selected data object.
For example, the display may indicate the name and contact information associated with the selected data object. The display can also indicate actions that the user can select for the selected data object. Such actions may include, for example, updating contact details, inviting contacts, or deleting events. In one embodiment, a location map wizard may be provided to facilitate the intuitive and efficient creation of new data objects (e.g., events, activities, contacts, and groups) and to initiate actions on them. As the wizard progresses, various activities such as making phone calls, sharing information, and starting games can be initiated or arranged.

In another embodiment, the location map view may also provide "traveled routes," "planned routes," and destinations. This view can be provided from the user's perspective, as either a top-down view or a user-centric view. It is a representation of the latest location combined with the location update history (past locations or routes) and planned future locations or routes. This view of traversed or planned routes may be useful in dispatch situations (e.g., trucks, taxis, etc.) when a route needs to be reselected.

In other embodiments, filters may be provided to supply context that assists users in finding and managing information on the mobile communication device. For example, in one embodiment, a voice call filter may be provided for voice call events. The voice call filter provides a temporal call history containing a list of incoming and outgoing calls, sorted chronologically according to the time and date of receipt. Each telephone call may further include a representation of the social proximity of the users and/or groups involved in the call. Therefore, the degree of relation between the temporal origin of the query (for example, the current time) and the time of the data object (in this case, a phone call) can be compared. In this way, the user can view on the display avatars (e.g., photos of the callers, etc.) representing phone calls received within a given time range, or within a given range of degrees along the time dimension. The displayed avatars can be further restricted so that only callers who satisfy the social parameters of the query are shown (e.g., only calls from friends and family are displayed, etc.), and further restricted so that only callers who satisfy the spatial parameters of the query are shown (e.g., callers from a specific location area, etc.).

Similarly, filters can be provided for other types of data objects, such as sticky note filters, picture and video filters, game filters, music filters, and so on. Such filters may provide further contextual filtering based on the specific type of activity. For example, a picture filter may provide a list of photo files sorted by category (e.g., location, contacts, or associated activities). The photo filter may further include an indication of the social proximity of the contacts associated with each photo.
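The voice call filter described above, a chronological call history restricted by the query's temporal and social parameters, might be sketched as follows; the record fields and defaults are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:            # hypothetical call-history entry
    caller: str
    timestamp: datetime
    relationship: str        # e.g., "friend", "family", "other"

def voice_call_filter(history, since, allowed=("friend", "family")):
    """Return a chronological call history restricted by the query's
    temporal parameter (calls at or after `since`) and social parameter
    (`allowed` relationships), most recent calls first."""
    matching = [c for c in history
                if c.timestamp >= since and c.relationship in allowed]
    return sorted(matching, key=lambda c: c.timestamp, reverse=True)
```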
In other embodiments, containers for collecting and organizing information on the mobile communication device may be provided. In one embodiment, four main containers can be provided: people, places, things, and settings. The people container may contain contacts and groups arranged alphabetically and, according to social proximity, radially. This arrangement may therefore include "closer circles" and "farther circles." The user can use the navigation methods described above to control and navigate the interface. Several examples of how the different "containers" can be used to exchange information between different social network contacts are described below with respect to FIGS. 22A-22J.

In an embodiment, when a user is adding new data objects (e.g., new contacts and/or groups), the user interface may provide a mechanism to sort them by social proximity. This mechanism may include providing a wizard as described above. In addition, various types of social proximity can be used. In one non-limiting example, the social proximity types may include most intimate, more intimate, less intimate, and distant. The list of social proximity types can be expanded to provide more options. For example, the list can be expanded to include family members, friends, friends of friends, etc. Alternatively, each category in the list may contain subcategories. For example, the family category can be further subdivided into parents, siblings, and in-laws. Any number and combination of categories may be used to represent the social categories important to the user of the mobile communication device.

In one embodiment, to facilitate more efficient selection, input, and management of information, custom function keys or other indicators may provide a simple way to cycle through the various social proximity options for a selected contact or group. For example, selection of a "distance" indicator on a contact or group may cause the contact or group to move outward from the center, indicating a more "distant" social proximity.

In another embodiment, a places container may be provided for places or locations shared between users and groups. A place may be represented as an alias that may have latitude and longitude and/or point-of-interest properties. As mentioned above, a place can be created or modified via the wizard, or via the corresponding website through the user account associated with the mobile communication device. Once a place has been added or updated, the alias can be downloaded to the mobile communication device wirelessly. The website also provides users with the option to share places with other users. Once a place is downloaded to the user's mobile communication device, the device can draw the place spatially in context as described above. In this way, a given data object (e.g., an event, contact, multimedia file, etc.) can become associated with a specific location, which can then be used to determine its degree of relation to the spatial parameters of a query, so that the data objects satisfying the query's spatial parameters are displayed to the user. In one embodiment, the places can be arranged spirally on the display, indicating the date/time each place was added, and can be controlled via the navigation described above.

The "things" container may contain data objects such as pictures, music, videos, and notes that can be shared between users and groups. The things container may also contain games played between users and groups. As mentioned above, items can be created or modified via a wizard, or via the corresponding website through the user account associated with the mobile communication device. Once an item has been added or updated, its alias can be downloaded to the mobile communication device wirelessly.
The website also provides users with the option to share items with other users. Once an item is downloaded to the user's mobile communication device, the device can draw it spatially in context as described above. In one embodiment, the items can be arranged spirally on the display, indicating the date/time each item was added, and can be controlled via the navigation described above.

Referring to FIG. 14, an example photo container provided on the display 915 is illustrated. A thumbnail or other representation of each photo object is indicated by an avatar 1410. In the example shown, the photo objects are arranged spirally on the display, indicating the date/time each photo was added.

The settings container may contain settings for volume, mode, preferences, etc. The settings can be arranged spirally on the display of the device to show changes in amplitude, frequency, and time. Navigation and control can be provided using the navigation described above.

FIGS. 15A to 15C illustrate examples of visual representations of a specific set of data objects according to embodiments of the invention. Each of FIGS. 15A to 15C illustrates a visual representation with a different "dominant" dimension among the spatial, social, and temporal dimensions. Specifically, FIG. 15A illustrates a visual representation of the data object set in which the social dimension is dominant, FIG. 15B illustrates a visual representation in which the spatial dimension is dominant, and FIG. 15C illustrates a visual representation in which the temporal dimension is dominant.

Referring to FIG. 15A, zoom indicators 1500A, 1505A, and 1510A indicate the current "zoom" of the display 1545A of the set of data objects. In particular, the fill levels of the pyramids of the zoom indicators 1500A, 1505A, and 1510A indicate how the set of data objects is filtered in the temporal, spatial, and social dimensions, respectively. For example, a completely full pyramid indicates full zoom, so that a relatively narrow temporal, spatial, or social relationship is required for display in the display 1545A, while an empty pyramid means that the particular dimension is fully "zoomed out," so that any value along that dimension will satisfy the display requirements. Likewise, a medium fill level indicates a medium zoom level. Therefore, although the display 1545A primarily shows the social relationships of the displayed data objects, it will be understood that any data object displayed in the display 1545A also satisfies the temporal and/or spatial requirements corresponding to the zoom levels indicated by the fill of the respective pyramids. Thus, if the temporal zoom requires display of a time period corresponding to "last week" and the spatial zoom requires display of the user's current city, social contacts outside the city during the previous week will not be displayed, no matter how close the contact is to the user.

Still referring to FIG. 15A, degree indicators 1515A, 1520A, and 1525A indicate how different degrees of relationship are displayed in the display 1545A for the social dimension, so that socially intimate data objects are displayed in the inner circle of the display 1545A, socially closer data objects are displayed in the middle circle of the display 1545A, and socially more distant data objects are displayed in the outer circle of the display 1545A.
The data object type indicators 1530A, 1535A, and 1540A specify which types of data objects the user can select for display, where FIG. 15A shows that people, places, and/or things are the available data objects that can be shown in the display 1545A. In FIG. 15A, it can be assumed that the user has selected people (i.e., the user's social network contacts) as the set of data objects to be displayed, that the user has selected the social dimension as dominant, and that the user has selected, via the degree indicator 1515A, to display only intimate relationships.

Therefore, in this example, the display 1545A shows four contact quadrants, corresponding to "work," "friends," "family," and "other." Because the user has indicated that only intimate relationships should be displayed, and intimate data objects are displayed in the inner circle, only contacts in the user's inner circle are shown in the display 1545A. Specifically, an intimate work contact and an intimate family contact are displayed in the display 1545A, where each displayed contact is represented by a different photo or avatar (e.g., with different visual attributes, such as size, that affect its prominence). It will be appreciated from the display 1545A shown in FIG. 15A that the user can navigate to different display criteria within the dominant social dimension (or "social graph"), or can switch the dominant dimension to the spatial or temporal dimension.

Referring to FIG. 15B, zoom indicators 1500B, 1505B, and 1510B indicate the current "zoom" of the display 1545B, in which the spatial dimension is set as dominant for the set of data objects. In particular, as in FIG. 15A, the fill levels of the pyramids of the zoom indicators 1500B, 1505B, and 1510B indicate how the set of data objects is filtered in the temporal, spatial, and social dimensions, respectively.

Still referring to FIG. 15B, degree indicators 1515B, 1520B, and 1525B indicate how different degrees of relationship are displayed in the display 1545B for the spatial dimension, so that spatially near data objects are displayed in the inner circle of the display 1545B, spatially intermediate data objects are displayed in the middle circle of the display 1545B, and spatially far data objects are displayed in the outer circle of the display 1545B. The data object type indicators 1530B, 1535B, and 1540B specify which types of data objects the user can select for display, where FIG. 15B shows that people, places, and/or things are the available data objects that can be shown in the display 1545B. In FIG. 15B, it can be assumed that the user has selected each of people, places, and things as the set of data objects to be displayed, that the user has selected the spatial dimension as dominant, and that the user has selected, via the degree indicators, to display only near relationships for "thing" data objects, only intermediate relationships for "person" data objects, and only far relationships for "place" data objects.

Therefore, in this example, the display 1545B shows four position quadrants relative to an origin, corresponding to northeast (NE), northwest (NW), southeast (SE), and southwest (SW), where the origin corresponds to the location input with the user's query (e.g., the user's current location, the location to which the user is traveling, etc.).
Overlaid on the display 1545B is a street map, so that the user can better interpret the locations of the data objects displayed therein. Since the user has indicated that only near relationships should be shown for "thing" data objects, and near data objects are displayed in the inner circle, any spatially near "thing" data objects are displayed in the inner circle of the display 1545B. Likewise, because the user has indicated that only intermediate relationships should be displayed for "person" data objects, and intermediate data objects are displayed in the middle circle, any spatially intermediate "person" data objects are displayed in the middle circle of the display 1545B. Further, because the user has indicated that only far relationships should be displayed for "place" data objects, and far data objects are displayed in the outer circle, any spatially far "place" data objects are displayed in the outer circle of the display 1545B.

Specifically, in FIG. 15B, a near "thing" data object is displayed in the inner circle of the display 1545B, an intermediate "person" data object is displayed in the middle circle, and a far "place" data object is displayed in the outer circle. In addition, each data object shown in the display 1545B is represented by a different picture or avatar (e.g., with different visual attributes, such as size, that affect its prominence in the display 1545B). It will be appreciated from the display 1545B shown in FIG. 15B that the user can navigate to different display criteria within the dominant spatial dimension (or "location map"), or can switch the dominant dimension to the social or temporal dimension.

Referring to FIG. 15C, zoom indicators 1500C, 1505C, and 1510C indicate the current "zoom" of the display 1545C, in which the temporal dimension is set as dominant for the set of data objects. In particular, as in FIGS. 15A and 15B, the fill levels of the pyramids of the zoom indicators 1500C, 1505C, and 1510C indicate how the set of data objects is filtered in the temporal, spatial, and social dimensions, respectively.

Still referring to FIG. 15C, degree indicators 1515C, 1520C, and 1525C indicate how different degrees of relationship are displayed in the display 1545C for the temporal dimension, so that temporally near data objects (e.g., just now, current, or imminent) are displayed in the inner circle of the display 1545C, temporally intermediate data objects (e.g., not long ago, not long from now) are displayed in the middle circle of the display 1545C, and temporally far data objects (e.g., long ago, long from now) are displayed in the outer circle of the display 1545C. The data object type indicators 1530C, 1535C, and 1540C specify which types of data objects the user can select for display, where FIG. 15C shows that people, places, and/or things are the available data objects that can be shown in the display 1545C. In FIG. 15C, it can be assumed that the user has selected people as the set of data objects to be displayed, that the user has selected the temporal dimension as dominant, and that the user has selected to display only far relationships for "person" data objects.

Therefore, in this example, the display 1545C shows four time quadrants relative to an origin, where the origin corresponds to the time input with the user's query (e.g., the current time). In one example, the four quadrants may represent different portions of time for a given radial distance from the origin.
For example, the quadrants may represent seasons (e.g., spring, summer, autumn, and winter), and the distance from the origin or center of the display 1545C may correspond to the year. Alternatively, the quadrants may indicate days of the week (e.g., Monday/Tuesday, Wednesday/Thursday, etc.), and the distance from the origin may correspond to the number of weeks (e.g., 1, 2, 3, etc.). Because the user has indicated that only far relationships of "person" data objects should be displayed, and far data objects are displayed in the outer circle, any temporally far "person" data objects are shown in the outer circle of the display 1545C. In one example, a "person" data object, or social network contact, may have a temporally far relationship with the user where the contact has not communicated with the user for a long time (e.g., the contact died many years ago, etc.) or is expected not to communicate for a long time (e.g., the contact is away on a five-year Mars mission, etc.).

Specifically, in FIG. 15C, a far "person" data object is shown in the outer circle of the display 1545C. It will be understood from the display 1545C shown in FIG. 15C that the user can navigate to different display criteria within the dominant temporal dimension (or "time graph" or "timeline"), or can switch the dominant dimension to the social or spatial dimension.

Referring now to FIG. 16, an exemplary process for displaying a visual representation of data objects on a mobile communication device according to an embodiment of the present invention is illustrated. At 1600, the device may receive input from the user, such as a query specifying social, temporal, and spatial parameters, to control the manner in which a given set of data objects (e.g., which may also be indicated in the query) is displayed to the user. As discussed above, the input may be provided by another system for receiving input, where the system is associated with the mobile user's account. The input can also be provided directly on the mobile device by the user. Alternatively, the input may be retrieved from the device's memory if previously received. In an example, the input from the user may correspond to a query related to a visual representation of one or more data objects, where the query includes spatial, social, and temporal parameters that affect which of the data objects are displayed in the visual representation. The query may further include an origin against which the attributes of each data object are compared to determine whether to display it, and may further include an indication of which dimension is to be dominant (e.g., yielding the visual representation shown in FIG. 15A, 15B, or 15C).

At 1610, a first dimension may be displayed, which provides a temporal representation of the received input (e.g., as in FIG. 15C, where the temporal dimension is dominant). For example, at 1610, the device may determine which data objects satisfy the temporal parameters of the query (for example, if the temporal parameter indicates that the user is interested in data objects within the next week from the current time, then data objects outside this time range are not considered, etc.).
For example, at 1620, the device may determine which data objects satisfy the spatial parameter of the query (e.g., if the spatial parameter indicates that the user is focused on data objects within two miles of his/her current location, then data objects outside that range are not displayed, etc.). In another example, the device need only consider the positions of the data objects during the time range associated with the temporal parameter (e.g., if the spatial parameter corresponds to San Francisco and the temporal parameter corresponds to the current time, data objects for events occurring in San Francisco a year from now will not be displayed).
At 1630, a third dimension may be displayed, which provides a social proximity representation of the received input (e.g., as in FIG. 15A, where the social dimension is dominant). For example, at 1630, the device may determine which data objects satisfy the social parameter of the query (e.g., if the social parameter indicates that the user is focused on data objects for events associated with social network contacts in the user's contact list and/or the user's profile, then data objects that do not meet these conditions are not displayed, etc.). As will be appreciated, once each dimension is displayed at 1610 through 1630, the resulting visual representation of one or more of the displayed data objects can be presented to the user (e.g., as a location map, activity map, etc., where the display shows icons to represent the data objects). Although 1610, 1620, and 1630 show the selection of each type of dominant dimension in a sequential manner, it will be appreciated that the first dimension set as dominant may be sufficient to satisfy the user, such that the user need not navigate to other dominant dimensions as in 1620 and/or 1630.
At 1640, the mobile communication device may receive an indication that the user wishes to modify a selected dimension of the visual representation. For example, if the user wishes to modify the range or degree of a displayable dimension from the initial query, the user selects the dimension to be modified at 1640. For example, assume that a first custom function key is assigned the "zoom in" function for navigating the selected or active axis, and a second custom function key is assigned the "zoom out" function. With the custom function keys so configured, the user can zoom in and out, which can result in updating the corresponding number of data objects in the displayed frame of reference. In this case, the indication received at 1640 may correspond to the user's selection of one of zooming in or zooming out on the spatial, temporal, or social dimension. For example, if the visual representation is in the form of a location map, then when the user presses the zoom-out custom function key, the field of view can be expanded to include a wider view, such that more locations, and thereby potentially more data objects, are displayed. Similarly, if the zoom-in ("+") custom function key is selected, the view can be magnified to expand the current view, which can exclude data objects that are no longer within the range of the visual representation. After receiving the indication to modify one of the dimensions of the visual representation, the visual representation is modified at 1650 (e.g., by transitioning to another dominant dimension, by modifying which data objects satisfy the parameters of the query, etc.).
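As a minimal illustrative sketch only (not part of the original disclosure), the three-parameter filtering of 1610 through 1630 and the zoom adjustment of 1640/1650 could fit together as follows; the class names, attribute names, and thresholds are hypothetical stand-ins for whatever representation an implementation actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataObject:
    name: str
    when: datetime        # temporal attribute
    miles_away: float     # spatial attribute (distance from the query origin)
    social_degree: int    # 0 = close, 1 = closer, 2 = farther

@dataclass
class Query:
    time_window: timedelta  # e.g., "within the next week" (temporal parameter)
    max_miles: float        # e.g., "within two miles" (spatial parameter)
    max_social: int         # e.g., 0 to show only "close" contacts (social parameter)

def visible(obj: DataObject, query: Query, now: datetime) -> bool:
    """A data object is displayed only if it satisfies all three parameters."""
    return (now <= obj.when <= now + query.time_window   # temporal (1610)
            and obj.miles_away <= query.max_miles        # spatial (1620)
            and obj.social_degree <= query.max_social)   # social (1630)

def zoom(query: Query, dimension: str, direction: int) -> Query:
    """Zooming out (direction=+1) widens a dimension; zooming in narrows it (1640/1650)."""
    factor = 2.0 if direction > 0 else 0.5
    if dimension == "spatial":
        query.max_miles *= factor
    elif dimension == "temporal":
        query.time_window *= factor
    elif dimension == "social":
        query.max_social = min(2, max(0, query.max_social + direction))
    return query

# Example: start with a one-week, two-mile, close-contacts-only query, then zoom out spatially.
q = Query(time_window=timedelta(weeks=1), max_miles=2.0, max_social=0)
zoom(q, "spatial", +1)  # q.max_miles is now 4.0
```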
Referring now to FIG. 17, an exemplary process for displaying a user interface on a mobile communication device according to some of the methods disclosed herein is illustrated. At 1700, the device may receive input from a user regarding a data object, which in this embodiment corresponds to a planned event. As discussed above, the input may be provided through another system for receiving input, where the system is associated with the mobile user's account. The input can also be provided by the user directly on the mobile device. Alternatively, the input may be retrieved from the device's memory if previously received.
At 1710, the planned event is displayed according to the location of the event. As discussed above, other data objects and representations can be indicated by location. Alternatively, a data object may be indicated according to other disclosed qualities, such as social proximity and temporal distance. For example, the location representation may include an indication on a location map. In the various embodiments disclosed above, the location representation may include events located on a two-dimensional map.
At 1720, navigation keys for two-dimensional navigation can be configured. As disclosed above, on mobile communication devices the up, down, left, and right keys can be used for this navigation. At 1730, a first button can be configured for zooming in within the display (e.g., this corresponds to narrowing the spatial dimension if the visual representation corresponds to a location map, narrowing the social dimension if the visual representation corresponds to a social graph, etc.). At 1740, a second button is configured for zooming out within the display. With the zoom buttons so configured, the user can zoom in and out, which can result in updating the corresponding number of data objects (e.g., events) in the displayed frame of reference. When the user presses the zoom-out button, the field of view can be expanded to include a wider view (e.g., this corresponds to expanding the spatial dimension if the visual representation corresponds to a location map, expanding the social dimension if the visual representation corresponds to a social graph, etc.). Similarly, if the zoom-in button is selected, the view can be magnified to expand the current view. A hypothetical key-binding sketch is given after this passage.
At 1750, a new event can be generated by displaying a new event wizard to facilitate the intuitive creation of new activities and events. The wizard can provide a mechanism for selecting people/contacts, locations/places, data or media entities/events, and time values. At 1760, the wizard may prompt the user for input. Through various menus and prompts, the user can, for example, select a person from the contact list, select a location, and select a time value that can indicate a choice of "now" versus "later".
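As a minimal sketch of the button configuration of 1720 through 1740 — with hypothetical key names, a stand-in MapView class, and a dispatch-table design that is an assumption rather than the disclosed implementation:

```python
class MapView:
    """Minimal stand-in for the display of 1710; a real view would re-render."""
    def __init__(self):
        self.x = self.y = 0   # pan position within the two-dimensional map
        self.zoom_level = 0   # positive = zoomed in, negative = zoomed out

    def pan(self, dx, dy):
        self.x += dx
        self.y += dy

    def zoom(self, direction):
        self.zoom_level += direction  # updates which data objects fall in frame

def make_keymap(view: MapView):
    """Bind device keys to actions per 1720 (navigation) and 1730/1740 (zoom)."""
    return {
        "UP":    lambda: view.pan(0, -1),
        "DOWN":  lambda: view.pan(0, +1),
        "LEFT":  lambda: view.pan(-1, 0),
        "RIGHT": lambda: view.pan(+1, 0),
        "FN1":   lambda: view.zoom(+1),   # 1730: first button zooms in
        "FN2":   lambda: view.zoom(-1),   # 1740: second button zooms out
    }

def on_key(keymap, key):
    action = keymap.get(key)
    if action is not None:
        action()
```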
Referring to FIG. 19, an exemplary display depicting an embodiment of the present invention is illustrated. In particular, FIG. 19 illustrates variations of the user interface and display described above with respect to FIG. 15A, where the visual representation corresponds to a social graph. The display 1900 depicts a settings function in which the user can select various filters for viewing, input, and/or editing. In this example, the user has selected "Family" 1910, as depicted by the highlighted region. Upon pressing the "Enter" or "OK" button, display 1920 can be instantiated, which further depicts the various contacts included in the "Family" category. In addition, radio buttons 1925 may indicate the current social proximity settings of the contacts listed under the "Family" filter. The user may highlight a specific contact, for example "Aier", as depicted by highlighted region 1930. The user can modify the social proximity setting, which is shown as "close", "closer", and "farther" in this example.
In other words, in the embodiment of FIG. 19, display 1900 shows the social parameter for the user's query, whereby data objects having a family relationship with the user satisfy the social parameter of the query. The data objects, corresponding in this example to social network contacts of the user, are illustrated at display 1920. In particular, display 1920 shows the data objects that satisfy the social parameter of being the user's family members, and further shows the degree of the social relationships within the user's family (e.g., close, closer, and farther in this example).
The user can select "Done" when editing is complete, at which point an example display 1940 can be presented, which depicts the contacts in the "Family" filter. Display 1940 depicts the contacts in the "Family" filter positioned along concentric circles 1950, indicating their corresponding social proximity settings. The smallest circle contains the contact "Greg", associated with the "farther" social proximity setting. The medium-sized circle contains the contacts "Karl" and "Heather", associated with the "closer" social proximity setting. The large circle contains the contacts "Aier" and "Ivan", associated with the "close" social proximity setting. It can be seen that gradually larger circles indicate gradually closer social proximity settings, while gradually smaller circles indicate gradually more distant social proximity settings. In one embodiment, the user can configure the integrated zoom display to draw gradually larger circles to indicate gradually more distant social proximity settings, and gradually smaller circles to indicate gradually closer social proximity settings.
In other words, display 1940 corresponds to a social graph, whereby the social proximity of each of the user's family members is displayed based on distance from the center of the display at 1950, where the center of the display corresponds to the social origin of the query (e.g., the social origin in this case corresponds to the user himself/herself).
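That radius assignment, including the configurable inversion just noted, can be sketched minimally as follows; the three-level proximity scale comes from the example above, while the pixel radii and function names are hypothetical.

```python
# Hypothetical radii (in pixels) for the concentric circles of display 1940.
PROXIMITY_LEVELS = ["close", "closer", "farther"]

def ring_radius(proximity: str, inverted: bool = False, base: int = 40) -> int:
    """Map a social proximity setting to a circle radius.

    Default convention (as in display 1940): larger circles hold closer
    contacts. With inverted=True, larger circles hold more distant contacts.
    """
    rank = PROXIMITY_LEVELS.index(proximity)   # 0 = close ... 2 = farther
    rings = len(PROXIMITY_LEVELS)
    slot = rank if inverted else (rings - 1 - rank)
    return base * (slot + 1)

# With the default convention, "close" contacts such as "Aier" and "Ivan"
# land on the largest circle; "farther" contacts land on the smallest.
assert ring_radius("close") == 120
assert ring_radius("farther") == 40
assert ring_radius("close", inverted=True) == 40
```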
Referring now to FIG. 20, an exemplary display depicting another embodiment of the present invention is illustrated. In particular, FIG. 20 illustrates variations of the user interface and display described above with respect to FIG. 15A, where the visual representation corresponds to a social graph based on the user's level of trust in his/her social contacts. The display 2000 depicts a settings function in which the user can select various filters for viewing, input, and/or editing. In this example, the user has selected "Trusted" 2010, as depicted by the highlighted region. Upon pressing the "Enter" or "OK" button, display 2020 can be instantiated, which further depicts the various contacts included in the "Trusted" category. In addition, radio buttons 2025 may indicate the current social proximity settings of the contacts listed under the "Trusted" filter. The user may highlight a specific contact, for example "Aier", as depicted by highlighted region 2030. The user can modify the social proximity setting, which is shown as "close", "closer", and "farther" in this example.
In other words, in the embodiment of FIG. 20, display 2000 shows the social parameter for the user's query, whereby data objects (e.g., contacts) having a trusted relationship with the user satisfy the social parameter of the query. The data objects, corresponding in this example to social network contacts of the user, are illustrated at display 2020. In particular, display 2020 shows the data objects that satisfy the social parameter of being the user's trusted contacts, and further shows the degree of trust among the user's trusted contacts (e.g., close, closer, and farther in this example).
The user can select "Done" when editing is complete, at which point an example display 2040 can be presented, which depicts the contacts in the "Trusted" filter. Display 2040 depicts the contacts in the "Trusted" filter positioned along concentric circles 2050, indicating their corresponding social proximity settings. The smallest circle contains the contact "Ivan", associated with the "farther" social proximity setting. The medium-sized circle contains the contacts "Aier" and "Greg", associated with the "closer" social proximity setting. The large circle contains the contact "Karl", associated with the "close" social proximity setting. Because the contact "Heather" has not been assigned a trust setting, her associated icon is not depicted in display 2040. It can be seen that gradually larger circles indicate gradually closer social proximity settings, while gradually smaller circles indicate gradually more distant social proximity settings. In one embodiment, the user can configure the integrated zoom display to draw gradually larger circles to indicate gradually more distant social proximity settings, and gradually smaller circles to indicate gradually closer social proximity settings.
In other words, display 2040 corresponds to a social graph, whereby the social proximity of each of the user's trusted contacts is displayed based on distance from the center of the display at 2050, where the center of the display corresponds to the social origin of the query (e.g., the social origin in this case corresponds to the user himself/herself).
It can also be seen that data objects such as contacts can have multiple social proximity settings, associated with various "filters" or categories. For example, in FIGS. 19 and 20, the contact "Aier" is associated with the "closer" setting relative to the "Trusted" filter, and with the "close" setting relative to the "Family" filter. The ability to differentiate social proximity settings across contexts such as "Family" and "Trusted" allows users to more accurately approximate their actual relationships and related activities in the real world. For example, a user may not trust a family member with respect to financial transactions, but may nonetheless be intimately associated with that family member with respect to social activities. The methods of the present invention may enable users to portray such nuances in relationships and activities on mobile devices, in order to provide a richer user experience that more closely resembles relationships and activities in the real world.
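One natural representation of such per-filter settings keys each contact's proximity on the filter name. The sketch below is a hypothetical data layout (not the disclosed implementation), populated with the example settings of FIGS. 19 and 20.

```python
# Hypothetical per-filter social proximity store: keys are filter/category
# names and values are proximity settings, so the same contact can be
# "close" in one context and "closer" in another.
contacts = {
    "Aier":    {"Family": "close",   "Trusted": "closer"},
    "Greg":    {"Family": "farther", "Trusted": "closer"},
    "Karl":    {"Family": "closer",  "Trusted": "close"},
    "Heather": {"Family": "closer"},            # no "Trusted" setting assigned
    "Ivan":    {"Family": "close",   "Trusted": "farther"},
}

def contacts_for_filter(filter_name: str):
    """Yield (contact, proximity) pairs for one filter. Contacts without a
    setting for that filter (e.g., Heather under "Trusted") are omitted,
    matching display 2040."""
    for name, settings in contacts.items():
        if filter_name in settings:
            yield name, settings[filter_name]

print(dict(contacts_for_filter("Trusted")))
# {'Aier': 'closer', 'Greg': 'closer', 'Karl': 'close', 'Ivan': 'farther'}
```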
FIG. 21 illustrates a process by which one or more data objects belonging to a collection of data objects can be displayed to a user, in accordance with an embodiment of the present invention. Referring to FIG. 21, a query containing social parameters, temporal parameters, and spatial parameters relative to a user is received, the parameters indicating a desired visual representation of a collection of data objects (2100). In an example, the query may be received at the mobile communication device belonging to the user from whom the query originates. In one example, as discussed with respect to the above embodiments, the data objects may correspond to events, activities, social network contacts, multimedia files, and/or any other types of information that can be classified in spatial, social, and temporal terms.
For example, the query can specify an origin in each of the spatial, temporal, and social dimensions. In a specific example, the social origin can be the user's identity or subscriber profile, the spatial origin can be the user's location, and the temporal origin can be a time or time range specified by the user. The spatial and temporal origins will typically be considered together, such that the user's spatial origin is evaluated at the time, or within the time range, of the temporal origin. For example, assuming the user wants to know which of his/her colleagues will be at work on Tuesday, the social parameter of the query can be "my colleagues", the temporal parameter of the query can be "Tuesday", and the spatial parameter of the query can be "my work address".
Next, the degree to which each data object in the collection relates to the social, temporal, and spatial parameters of the query in the social, temporal, and spatial dimensions, respectively, is determined (2105). For example, because the social parameter of the query, "my colleagues", pertains to social network contacts, the user's communication device may contact the social network server 174, obtain the user's colleague list, and request the colleagues' Tuesday schedules. In this case, it is assumed that the query is binary, such that only contacts that exactly match the query are displayed to the user. Therefore, if a colleague does not have an appointment on Tuesday, the device may assume that the colleague will be in the office, for example, and may display those colleagues to the user without displaying colleagues who are scheduled to be out of the office.
Next, the communication device displays to the user a visual representation of at least a portion of the collection of data objects based on whether the determined degrees of relation in the social, temporal, and spatial dimensions satisfy the corresponding parameters of the query (2110). In the above example, this means showing the user the colleagues who are determined to be in the office on Tuesday. The visual representation may be in the form of a social graph, an activity map, and/or a location map (e.g., centered on the office), in each case showing only the colleagues who are expected to be in the office on Tuesday.
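An illustrative reconstruction of this binary matching (not the disclosed implementation) follows; the schedule lookup stands in for whatever the device obtains from social network server 174, and the names and date are hypothetical.

```python
from datetime import date

# Hypothetical inputs: the social parameter "my colleagues" resolves to a
# contact list, and schedules are fetched (e.g., from social network server 174).
colleagues = ["Alice", "Bob", "Carol"]
out_of_office = {("Bob", date(2011, 10, 11))}  # Bob has an appointment that Tuesday

def in_office(person: str, day: date) -> bool:
    # Per the binary assumption of 2105: absent an out-of-office appointment
    # for the queried day, the colleague is assumed to be at the work address.
    return (person, day) not in out_of_office

def matching_colleagues(day: date) -> list:
    # 2110: only contacts that exactly match the query are displayed.
    return [person for person in colleagues if in_office(person, day)]

print(matching_colleagues(date(2011, 10, 11)))  # ['Alice', 'Carol']
```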
Although not shown in FIG. 21, the user can shift the visual representation from 2110 to another day of the week (e.g., panning along the time axis or dimension), the user can narrow the visual representation from 2110 to show only certain colleagues (e.g., zooming in along the social axis or dimension), the user can expand the visual representation from 2110 to show any colleagues who will be in the office at some point during the week (e.g., zooming out along the time axis or dimension), the user can expand the visual representation from 2110 to show any colleagues who will be at any of multiple office locations on Tuesday (e.g., zooming out along the spatial axis or dimension), and so on.
Although in the example described with respect to FIG. 21 the social parameter of the query specifies the user's colleagues and the spatial parameter specifies the office, in another example the social parameter may specify the user's friends and the spatial parameter may specify a particular city. In addition, although the data objects described with respect to FIG. 21 correspond to social network contacts, it will be readily appreciated that other embodiments can direct the general teachings of FIG. 21 to any type of data object, such as events, activities, multimedia files, etc.
Although the attributes (e.g., location, time, and/or social relationship) of data objects have been described above as "static", such that a given data object maintains the same social relationship to other data objects and a given data object's location at a particular time stays the same, it will be appreciated that other embodiments may be directed to dynamically updating one or more of these data object attributes. For example, if the user becomes a close friend of an acquaintance, the user may transition his/her social relationship with the data object representing that acquaintance to a closer social setting (e.g., by dragging the icon representing the acquaintance from the "closer" or "farther" position on the social graph of FIG. 15A to the "close" position, for example). Likewise, the user may have been informed in person that a contact will not be attending an event to which the contact was invited to RSVP, and the contact may be removed from the temporal or spatial map indicating the contact's attendance at that event. When a data object's attributes change, the data object may take on the privileges and/or permissions of its new attributes (e.g., notifications related to the data object may be given higher or lower saliency to the user, etc.).
Although the above-described embodiments of the present invention have generally been directed to visually representing a collection of data objects at a device operated by a particular user, other embodiments are directed to the exchange of data objects between users. Accordingly, FIGS. 22A through 22J are directed to this exchange from the perspective of the sender of a data object, and FIG. 23 is directed to the exchange from the perspective of the recipient or target of a data object.
Referring to FIG. 22A, assume that a given user determines to send one or more data objects to at least one other user (2200A). Accordingly, the given user specifies at least one target data object to which the data objects will be sent as attachments to a message (2205A). In an example, each target data object will typically correspond to one of the user's social network contacts.
Next, a potentially iterative process begins, whereby the given user browses the data objects available to him/her and selects a data object to send to the at least one target data object (2210A).
After selecting the data object, the given user requests that the selected data object be added to a staging area corresponding to the message being composed for the at least one target data object. At 2220A, the given user determines whether to add another data object to the staging area. If the given user determines at 2220A to add another data object to the staging area, the process returns to 2210A and the given user browses to find another data object. Otherwise, if the given user determines at 2220A not to add another data object to the staging area, then each data object added to the staging area is appended to the message (2225A), and the device operated by the given user then sends the message, with any attached data objects, to the at least one target data object (2230A).
FIGS. 22B through 22J visually illustrate an example of the process of FIG. 22A. Referring to FIG. 22B, assume that a given user has determined to send one or more data objects to one of his/her social network contacts (2200A), and accordingly browses his/her list of "person" data objects to specify at least one target contact to receive the message (2205A). FIG. 22B illustrates a depiction of the given user's "person" data objects in a social graph visual representation, similar to FIG. 15A. In FIG. 22C, assume that the given user selects the data object corresponding to the given user's friend "Rick", where the selection of Rick is indicated in FIG. 22C by the highlighting of Rick's icon in the social graph. After the selection, through some further user input (e.g., a double-click, pressing another button, etc.), the given user indicates that Rick should be added to the staging area as the target of the message that will contain at least one data object attachment. As will be appreciated, in this case the "person" data object corresponding to "Rick" is the target of the message, but need not itself be attached to the message other than as the designated target. The staging area illustrated in FIG. 22D generally depicts all of the information to be sent to the target in the message, although the sender's identification is appended to each message and for that reason is not depicted in the staging area. Thus, as of FIG. 22D, an empty message from the given user to Rick has been generated (i.e., no data objects have yet been attached), which can now be populated with data object attachments. Accordingly, in this example, FIGS. 22B through 22D correspond to 2205A of FIG. 22A.
Referring to FIG. 22E, the given user next browses the "picture" data objects, and in FIG. 22F the given user selects the "picture" data object corresponding to the artwork.jpg image file, which is then added to the staging area in FIG. 22G. Selecting the data object in FIG. 22F and adding the selected data object to the staging area in FIG. 22G can be performed in a manner similar to that described above with respect to FIGS. 22C and 22D, respectively, except that the data object added to the staging area in FIGS. 22F and 22G is added to the message not as a target but as an attachment. Accordingly, in this example, FIGS. 22E through 22G correspond to 2210A through 2220A of FIG. 22A.
Next, referring to FIG. 22H, the given user then browses the "location" data objects, and in FIG. 22I the given user selects the "location" data object corresponding to the forest location, which is then added to the staging area in FIG. 22J.
Selecting the data object in FIG. 22I and adding the selected data object to the staging area in FIG. 22J can be performed in a manner similar to that described above with respect to FIGS. 22F and 22G, respectively. Accordingly, in this example, FIGS. 22H through 22J correspond to another iteration of 2210A through 2220A of FIG. 22A.
At this point, in FIG. 22J, the staging area contains the target data object "Rick" and the data object attachments artwork.jpg and the forest location. The given user can then attach the data object attachments to the message and send the message to the data object "Rick" (2225A and 2230A) by selecting the send button depicted in the staging area. Alternatively, the given user may first add a text description of the attached data objects to promote a contextual understanding of why the data objects are being sent to Rick (e.g., "this artwork picture was taken at the forest location", etc.).
Next, FIG. 23 illustrates an example of how a message containing one or more data object attachments is received and viewed at the target data object. Referring to FIG. 23, a device operated by a given user (e.g., social network server 174, a mobile device, etc.) receives a message containing a data object attachment (2300).
Upon receiving the message, at 2305 the device operated by the user displays a notification of the received message based at least in part on attributes of the transmitted data objects. For example, if the user receiving the message is Rick, the sender of the message is Jane, and Rick and Jane are husband and wife, then the message of 2300 can be displayed as important even before Rick has paid any attention to the actual data objects contained therein. Thus, the attributes of the sender can affect the saliency of the displayed message notification, even if the message itself turns out, upon further review, not to be important to the recipient.
Based on the message notification, the target user of the message determines whether to view the message (2310). If the target user determines not to view the message, the process of FIG. 23 terminates. Otherwise, if the target user determines to view the message, the data object attachments of the message are extracted (2315), and each extracted data object is displayed to the target user based at least in part on the attributes of the extracted data object.
For example, if the sender of the message is not important, the message notification of 2305 may not initially be displayed prominently to the target user. However, if the message itself is very important (e.g., a faraway friend has sent an invitation to a party the target user very much wants to attend, etc.), then at 2320 the message may be displayed more prominently once extracted. In yet another example, an important message can also affect how the message notification is displayed at 2305, such that an important message can produce a prominent notification even when the sender of the message is not important to the target user. Similarly, an extracted data object that does not itself appear to be important may be displayed prominently because it was sent by a contact who is very important to the user (e.g., the CEO of the target user's company, a person the target user wishes to pursue, etc.). Thus, in at least one example, the message notification is displayed with a saliency indicating at least the importance level of the sender, whereby if the message is particularly important, the message notification may be displayed more prominently.
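As an illustrative aside only (not the disclosed algorithm), one plausible reading of this saliency logic treats prominence as a maximum over sender importance and message or attachment importance, on a hypothetical 0-10 scale:

```python
def notification_saliency(sender_importance: int, message_importance: int) -> int:
    """2305: the notification reflects at least the sender's importance, and
    a particularly important message can raise it further."""
    return max(sender_importance, message_importance)

def attachment_saliency(object_importance: int, sender_importance: int) -> int:
    """2320: an extracted data object reflects at least its own importance,
    and a particularly important sender can raise it further."""
    return max(object_importance, sender_importance)

# Example: a mundane attachment from the company CEO is still shown prominently.
assert attachment_saliency(object_importance=2, sender_importance=9) == 9
```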
Similarly, the data objects extracted at 2315 are displayed with a saliency indicating at least the importance level of the extracted data objects, whereby if the sender of the message is particularly important, the data objects may be displayed more prominently.
Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Further, those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., an access terminal).
In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.
While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
Use of a compliant, elastomeric layer between upper and lower substrates of the combination sensor can increase the sensitivity to applied pressure or force from a stylus, while increasing the lateral resolution for a given sensel pitch. The elastomeric material may have an index of refraction that is substantially similar to that of the upper and lower substrates. The elastomeric material may include open regions for the inclusion of force-sensitive resistors. With careful selection of the elastomeric and FSR materials, the loss of transmissivity that can accompany air gaps can be minimized. |
CLAIMS What is claimed is: 1. An apparatus, comprising: a first substantially transparent substrate; a first plurality of substantially transparent electrodes formed in a first region of the first substantially transparent substrate; a second plurality of substantially transparent electrodes formed in a second region of the first substantially transparent substrate; a first plurality of resistors formed on some, but not all, of the first plurality of electrodes; a second plurality of resistors formed on the second plurality of electrodes; a second substantially transparent and flexible substrate; a third plurality of substantially transparent electrodes formed in a first region of the second substantially transparent substrate; a fourth plurality of substantially transparent electrodes formed in a second region of the second substantially transparent substrate, the fourth plurality of electrodes having a spacing which is substantially the same as that of the second plurality of electrodes, the fourth plurality of electrodes being located in a zone in which fourth electrode positions correspond to second electrode positions of the second plurality of electrodes; and a substantially transparent elastomeric material extending from the first region of the first substrate to the first region of the second substrate, wherein an index of refraction of the first substrate substantially matches an index of refraction of the elastomeric material. 2. The apparatus of claim 1, wherein the index of refraction of the elastomeric material substantially matches an index of refraction of the second substrate. 3. The apparatus of claim 1 or claim 2, wherein a modulus of elasticity of the elastomeric material is substantially lower than a modulus of elasticity of the second substrate. 4. The apparatus of any of claims 1 through 3, wherein a touch and handwriting sensor zone of the apparatus corresponds to a zone in which the elastomeric material substantially fills a space between a portion of the first plurality of electrodes and the third plurality of electrodes. 5. The apparatus of any of claims 1 through 4, further including force-sensitive resistor material disposed between the second plurality of electrodes and the fourth plurality of electrodes. 6. The apparatus of claim 5, wherein the elastomeric material and the force-sensitive resistor material substantially fill an area between the first substantially transparent substrate and the second substantially transparent substrate. 7. The apparatus of claim 3, wherein the modulus of elasticity of the elastomeric layer is between about 0.5 and 50 megapascals. 8. The apparatus of claim 3, wherein the modulus of elasticity of the second substrate is between about 0.5 and 5.0 gigapascals. 9. The apparatus of any of claims 1 through 8, wherein: the first plurality of electrodes having the first plurality of resistors formed thereon are handwriting sensor electrodes; and a handwriting resolution detectable by the handwriting sensor electrodes is less than a pitch between adjacent handwriting sensor electrodes. 10. The apparatus of any of claims 1 through 9, further including substantially transparent and force-sensitive resistor material extending from the first plurality of electrodes to the third plurality of electrodes. 11.
The apparatus of any of claims 1 through 10, further comprising: a display; a processor that is configured to communicate with the display, the processor being configured to process image data; and a memory device that is configured to communicate with the processor. 12. The apparatus of claim 11, further comprising: a driver circuit configured to send at least one signal to the display; and a controller configured to send at least a portion of the image data to the driver circuit. 13. The apparatus of claim 11, further comprising: an image source module configured to send the image data to the processor, wherein the image source module includes at least one of a receiver, transceiver, and transmitter. 14. The apparatus of claim 11, further comprising: an input device configured to receive input data and to communicate the input data to the processor. 15. An apparatus, comprising: a first substantially transparent substrate; a first plurality of substantially transparent electrodes formed in a first region of the first substantially transparent substrate; a second plurality of substantially transparent electrodes formed in a second region of the first substantially transparent substrate, the second plurality of electrodes being spaced more closely than the first plurality of electrodes; a first plurality of resistors formed on some, but not all, of the first plurality of electrodes; a second plurality of resistors formed on the second plurality of electrodes; a second substantially transparent and flexible substrate; a third plurality of substantially transparent electrodes formed in a first region of the second substantially transparent substrate; a fourth plurality of substantially transparent electrodes formed in a second region of the second substantially transparent substrate, the fourth plurality of electrodes having a spacing which is substantially the same as that of the second plurality of electrodes, the fourth plurality of electrodes being located in a zone in which fourth electrode positions correspond to second electrode positions of the second plurality of electrodes; and a substantially transparent elastomeric material extending from the first region of the first substrate to the first region of the second substrate, wherein an index of refraction of the first substrate substantially matches an index of refraction of the elastomeric material. 16. The apparatus of claim 15, wherein the first plurality of resistors is formed on first instances of the first plurality of electrodes, wherein the first plurality of resistors is not formed on second instances of the first plurality of electrodes, and wherein the second instances of the first plurality of electrodes are configured as touch sensor electrodes. 17. The apparatus of claim 16, wherein the substantially transparent elastomeric material extends from the touch sensor electrodes to the second substrate. 18. The apparatus of claim 16, wherein the substantially transparent elastomeric material does not extend from the first instances of the first plurality of electrodes to the third plurality of substantially transparent electrodes. 19. The apparatus of any of claims 16 through 18, wherein the touch sensor electrodes are configured to detect changes in capacitance between the third plurality of electrodes and the second instances of the first plurality of electrodes. 20. The apparatus of any of claims 16 through 19, wherein the touch sensor electrodes are configured to function as projected capacitive touch sensor electrodes. 
21. The apparatus of any of claims 16 through 20, further comprising a substantially transparent elastomeric material extending from the second instances of the first plurality of electrodes to the second substrate. 22. The apparatus of any of claims 16 through 21, wherein the first instances of the first plurality of electrodes are configured as handwriting sensor electrodes. 23. The apparatus of any of claims 16 through 22, wherein the first instances of the first plurality of electrodes are configured to detect changes in capacitance caused by changes in a distance between the third plurality of electrodes and the first instances of the first plurality of electrodes. 24. The apparatus of any of claims 16 through 22, wherein the first instances of the first plurality of electrodes are configured to detect changes in resistance caused by changes in a distance between the third plurality of electrodes and the first instances of the first plurality of electrodes. 25. An apparatus, comprising: first substrate means, the first substrate means being substantially transparent; first electrode means formed in a first region of the first substantially transparent substrate means; second electrode means formed in a second region of the first substantially transparent substrate means; first resistor means formed on some, but not all, of the first electrode means; second resistor means formed on the second electrode means; second substrate means, the second substrate means being substantially transparent and flexible; third electrode means formed in a first region of the second substrate means; fourth electrode means formed in a second region of the second substrate means, the fourth electrode means having a spacing which is substantially the same as that of the second electrode means, first positions of the fourth electrode means corresponding to second positions of the second electrode means; and elastomeric means extending from the first region of the first substrate means to the first region of the second substrate means, the elastomeric means being substantially transparent, wherein an index of refraction of the first substrate means substantially matches an index of refraction of the elastomeric means. 26. The apparatus of claim 25, wherein the index of refraction of the elastomeric means substantially matches an index of refraction of the second substrate means. 27. The apparatus of claim 25 or claim 26, wherein a modulus of elasticity of the elastomeric material is substantially lower than a modulus of elasticity of the second substrate. 28. The apparatus of claim 27, wherein the modulus of elasticity of the elastomeric material is between about 0.5 and 50 megapascals. 29. The apparatus of claim 27, wherein the modulus of elasticity of the second substrate is between about 0.5 and 5.0 gigapascals. 30. The apparatus of any of claims 25 through 29, wherein: the first electrode means having the first resistor means formed thereon are handwriting sensor electrodes; and a handwriting resolution detectable by the handwriting sensor electrodes is less than a pitch between adjacent handwriting sensor electrodes. |
TOUCH, HANDWRITING AND FINGERPRINT SENSOR WITH ELASTOMERIC SPACER LAYER PRIORITY CLAIMS [0001] This application claims priority to U.S. Provisional Patent Application No. 61/394,054, entitled "COMBINATION TOUCH, HANDWRITING AND FINGERPRINT SENSOR" (Attorney Docket No. QUALP045P/102908P1) and filed on October 18, 2010, which is hereby incorporated by reference and for all purposes. This application also claims priority to U.S. Patent Application No. 13/271,057, entitled "TOUCH, HANDWRITING AND FINGERPRINT SENSOR WITH ELASTOMERIC SPACER LAYER" (Attorney Docket No. QUALP045C/102908U3) and filed on October 11, 2011, which is hereby incorporated by reference and for all purposes. TECHNICAL FIELD [0002] This disclosure relates to display devices, including but not limited to display devices that incorporate multifunctional touch screens. DESCRIPTION OF THE RELATED TECHNOLOGY [0003] Electromechanical systems (EMS) include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components (including mirrors) and electronics. Electromechanical systems can be manufactured at a variety of scales including, but not limited to, microscales and nanoscales. For example, microelectromechanical systems (MEMS) devices can include structures having sizes ranging from about a micron to hundreds of microns or more. Nanoelectromechanical systems (NEMS) devices can include structures having sizes smaller than a micron including, for example, sizes smaller than several hundred nanometers. Electromechanical elements may be created using deposition, etching, lithography, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or that add layers to form electrical and electromechanical devices. [0004] One type of EMS device is called an interferometric modulator (IMOD). As used herein, the term interferometric modulator or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some implementations, an interferometric modulator may include a pair of conductive plates, one or both of which may be transparent and/or reflective, wholly or in part, and capable of relative motion upon application of an appropriate electrical signal. In an implementation, one plate may include a stationary layer deposited on a substrate and the other plate may include a reflective membrane separated from the stationary layer by an air gap. The position of one plate in relation to another can change the optical interference of light incident on the interferometric modulator. Interferometric modulator devices have a wide range of applications, and are anticipated to be used in improving existing products and creating new products, especially those with display capabilities. [0005] The increased use of touch screens in handheld devices causes increased complexity and cost for modules that now include the display, the touch panel and a cover glass. Each layer in the device adds thickness and requires costly glass-to-glass bonding solutions for attachment to the neighboring substrates. These problems can be further exacerbated for reflective displays when a frontlight also needs to be integrated, adding to the thickness and cost of the module. SUMMARY [0006] The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein. 
Some implementations described herein provide a combined sensor device that combines aspects of capacitive and resistive technologies for touch sensing, handwriting input and fingerprint imaging. Some such implementations provide a touch sensor that combines capacitive and resistive technologies to enable a multi-feature user input sensor overlaid on a display. [0007] In some such implementations, a cover glass apparatus of a consumer device such as a cell phone, an e-reader, or a tablet computer serves additionally as part of a combined sensor device having a single or multi-touch sensor, a handwriting or stylus input device, and/or a fingerprint sensor. The cover glass apparatus may include 2, 3 or more layers. The substrates used to form a cover glass apparatus may be formed of various suitable substantially transparent materials, such as actual glass, plastic, polymer, etc. Such a cover glass apparatus with touch, handwriting and/or fingerprint detection capability may, for example, be overlaid on a display. [0008] One innovative aspect of the subject matter described in this disclosure can be implemented in an apparatus that includes a first substantially transparent substrate. A first plurality of substantially transparent electrodes may be formed in a first region of the first substantially transparent substrate and a second plurality of substantially transparent electrodes may be formed in a second region of the first substantially transparent substrate. A first plurality of resistors may be formed on some, but not all, of the first plurality of electrodes and a second plurality of resistors may be formed on the second plurality of electrodes. [0009] The apparatus may include a second substantially transparent and flexible substrate. A third plurality of substantially transparent electrodes may be formed in a first region of the second substantially transparent substrate and a fourth plurality of substantially transparent electrodes may be formed in a second region of the second substantially transparent substrate. The fourth plurality of electrodes may have a spacing that is substantially the same as that of the second plurality of electrodes. The fourth plurality of electrodes may be located in a zone in which fourth electrode positions correspond to second electrode positions of the second plurality of electrodes. [0010] The apparatus may include a substantially transparent elastomeric material extending from the first region of the first substrate to the first region of the second substrate. In some implementations, an index of refraction of the first substrate and/or the second substrate substantially matches an index of refraction of the elastomeric material. [0011] A modulus of elasticity of the elastomeric material may be substantially lower than a modulus of elasticity of the second substrate. For example, the modulus of elasticity of the elastomeric layer may be between about 0.5 and 50 megapascals and the modulus of elasticity of the second substrate may be between about 0.5 and 5.0 gigapascals. [0012] A touch and handwriting sensor zone of the apparatus may correspond to a zone in which the elastomeric material substantially fills a space between a portion of the first plurality of electrodes and the third plurality of electrodes.
In some implementations, the apparatus includes force-sensitive resistor material disposed between the second plurality of electrodes and the fourth plurality of electrodes and/or disposed between the first plurality of electrodes and the third plurality of electrodes. The force-sensitive resistor material may or may not be substantially transparent, according to the implementation. In some implementations, the elastomeric material and the force-sensitive resistor material substantially fill an area between the first substantially transparent substrate and the second substantially transparent substrate. [0013] In some implementations, the first plurality of electrodes having the first plurality of resistors formed thereon may be handwriting sensor electrodes. A handwriting resolution detectable by the handwriting sensor electrodes may be less than a pitch between adjacent handwriting sensor electrodes. [0014] The apparatus may include a display and a processor that is configured to communicate with the display. The processor may be configured to process image data. The apparatus may include a memory device that is configured to communicate with the processor. The apparatus may include a driver circuit configured to send at least one signal to the display and a controller configured to send at least a portion of the image data to the driver circuit. The apparatus may include an image source module configured to send the image data to the processor. The image source module includes at least one of a receiver, transceiver, and transmitter. The apparatus may include an input device configured to receive input data and to communicate the input data to the processor. [0015] Another innovative aspect of the subject matter described in this disclosure can be implemented in an alternative apparatus including a first substantially transparent substrate. A first plurality of substantially transparent electrodes may be formed in a first region of the first substantially transparent substrate and a second plurality of substantially transparent electrodes may be formed in a second region of the first substantially transparent substrate. In some implementations, the second plurality of electrodes may be spaced more closely than the first plurality of electrodes. A first plurality of resistors may be formed on some, but not all, of the first plurality of electrodes and a second plurality of resistors may be formed on the second plurality of electrodes. [0016] The apparatus may include a second substantially transparent and flexible substrate. A third plurality of substantially transparent electrodes may be formed in a first region of the second substantially transparent substrate and a fourth plurality of substantially transparent electrodes may be formed in a second region of the second substantially transparent substrate. The fourth plurality of electrodes may have a spacing that is substantially the same as that of the second plurality of electrodes. The fourth plurality of electrodes may be located in a zone in which fourth electrode positions correspond to second electrode positions of the second plurality of electrodes. [0017] The apparatus may include a substantially transparent elastomeric material extending from the first region of the first substrate to the first region of the second substrate. In some implementations, an index of refraction of the first substrate and/or the second substrate may substantially match an index of refraction of the elastomeric material.
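As explanatory background (not part of the original disclosure), the benefit of this index matching can be quantified with the standard Fresnel reflectance for normal incidence at an interface between media of refractive indices $n_1$ and $n_2$:

$$R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2}$$

For a glass-air interface ($n_1 \approx 1.5$, $n_2 \approx 1.0$), $R \approx (0.5/2.5)^2 = 4\%$ per surface, so an air gap between the substrates reflects roughly 8% of the light across its two surfaces. If the elastomeric material's index substantially matches those of the substrates ($n_1 \approx n_2$), $R$ approaches zero at each interface, which is why the transmissivity loss that can accompany air gaps can be minimized.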
[0018] The first plurality of resistors may be formed on first instances of the first plurality of electrodes but not formed on second instances of the first plurality of electrodes. The second instances of the first plurality of electrodes may be configured as touch sensor electrodes. The touch sensor electrodes may be configured to detect changes in capacitance between the third plurality of electrodes and the second instances of the first plurality of electrodes. The touch sensor electrodes may be configured to function as projected capacitive touch sensor electrodes. [0019] In some implementations, the substantially transparent elastomeric material may extend from the touch sensor electrodes to the second substrate. In some implementations the substantially transparent elastomeric material may not extend from the first instances of the first plurality of electrodes to the third plurality of substantially transparent electrodes. However, in alternative implementations the substantially transparent elastomeric material may extend from the first instances of the first plurality of electrodes to the third plurality of substantially transparent electrodes. [0020] The first instances of the first plurality of electrodes may be configured as handwriting sensor electrodes. The first instances of the first plurality of electrodes may be configured to detect changes in capacitance caused by changes in a distance between the third plurality of electrodes and the first instances of the first plurality of electrodes. Alternatively, or additionally, the first instances of the first plurality of electrodes may be configured to detect changes in resistance caused by changes in a distance between the third plurality of electrodes and the first instances of the first plurality of electrodes.
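As explanatory background (not part of the original disclosure), the distance-based capacitance sensing just described follows from the standard parallel-plate relation

$$C = \frac{\varepsilon_0 \varepsilon_r A}{d},$$

where $\varepsilon_0 \approx 8.85 \times 10^{-12}$ F/m is the vacuum permittivity, $\varepsilon_r$ is the relative permittivity of the material between the electrodes, $A$ is the electrode overlap area, and $d$ is the electrode separation. Because $C \propto 1/d$, compressing the compliant layer by, say, 10% of its thickness increases the capacitance by roughly 11% ($1/0.9 \approx 1.11$), a change that sensing circuitry can resolve and attribute to applied stylus pressure or force.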
[0029] Figures 6B-6E show examples of cross-sections of varying implementations of interferometric modulators. [0030] Figure 7 shows an example of a flow diagram illustrating a manufacturing process for an interferometric modulator. [0031] Figures 8A-8E show examples of cross-sectional schematic illustrations of various stages in a method of making an interferometric modulator. [0032] Figure 9A shows an example of sensor electrodes formed on a cover glass. [0033] Figure 9B shows an alternative example of sensor electrodes formed on a cover glass. [0034] Figure 10A shows an example of a cross-sectional view of a combined sensor device. [0035] Figures 10B-10D show examples of cross-sectional views of alternative combined sensor devices. [0036] Figures 11A-11D show examples of cross-sectional views of combined sensor devices having high-modulus and low-modulus compressible layers. [0037] Figure 12 shows an example of a device that includes a cover glass with a combination touch, handwriting and fingerprint sensor. [0038] Figure 13 shows an example of a top view of a force-sensitive switch implementation. [0039] Figure 14 shows an example of a cross-section through a row of the force-sensitive switch implementation shown in Figure 13. [0040] Figure 15A shows an example of a circuit diagram that represents components of the implementation shown in Figures 13 and 14. [0041] Figure 15B shows an example of a circuit diagram that represents components of an alternative implementation related to Figures 13 and 14. [0042] Figure 16 shows an example of a flow diagram illustrating a manufacturing process for a combined sensor device. [0043] Figures 17A-17D show examples of partially formed combined sensor devices during various stages of the manufacturing process of Figure 16. [0044] Figure 18A shows an example of a block diagram that illustrates a high-level architecture of a combined sensor device. [0045] Figure 18B shows an example of a block diagram that illustrates a control system for a combined sensor device. [0046] Figure 18C shows an example representation of physical components and their electrical equivalents for a sensel in a combined sensor device. [0047] Figure 18D shows an example of an alternative sensel of a combined sensor device. [0048] Figure 18E shows an example of a schematic diagram representing equivalent circuit components of a sensel in a combined sensor device. [0049] Figure 18F shows an example of an operational amplifier circuit for a combined sensor device that may be configured for handwriting or stylus mode sensing. [0050] Figure 18G shows an example of the operational amplifier circuit of Figure 18F configured for touch mode sensing. [0051] Figure 18H shows an example of an operational amplifier circuit for a combined sensor device that includes a clamp circuit. [0052] Figure 18I shows examples of clamp circuit transfer functions. [0053] Figure 18J shows an example of a circuit diagram for a clamp circuit. [0054] Figure 19 shows an example of a cross-section of a portion of an alternative combined sensor device. [0055] Figure 20 shows an example of a top view of routing for a combined sensor device. [0056] Figure 21A shows an example of a cross-sectional view through the combined sensor device shown in Figure 20. [0057] Figure 21B shows an example of a cross-sectional view of a wraparound implementation. [0058] Figure 22 shows an example of a flow diagram illustrating a fingerprint-based user authentication process.
[0059] Figure 23A shows an example of a mobile device that may be configured for making secure commercial transactions. [0060] Figure 23B shows an example of using a fingerprint-secured mobile device for physical access applications. [0061] Figure 24A shows an example of a secure tablet device. [0062] Figure 24B shows an example of an alternative secure tablet device. [0063] Figures 25A and 25B show examples of system block diagrams illustrating a display device that includes a combined sensor device. [0064] Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0065] The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device or system that can be configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual, graphical or pictorial. More particularly, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (i.e., e-readers), computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems, microelectromechanical systems, and non-MEMS applications), aesthetic structures (e.g., display of images on a piece of jewelry) and a variety of EMS devices. The teachings herein also can be used in non-display applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art. [0066] Some implementations described herein combine novel aspects of capacitive and resistive technologies for touch sensing, stylus detection for handwriting input, and fingerprint imaging.
Some such implementations provide a combined sensor device, at least part of which is incorporated in a cover glass apparatus that may be overlaid on or otherwise combined with a display. The cover glass apparatus may have 2, 3 or more layers. In some implementations, the cover glass apparatus includes a substantially transparent and flexible upper substrate and a substantially transparent and relatively more rigid lower substrate. In some such implementations, the lower substrate of the cover glass apparatus may be overlaid on a display substrate. In alternative implementations, the lower substrate of the cover glass apparatus may be a display substrate. For example, the lower substrate of the cover glass apparatus may be the same transparent substrate on which IMOD devices are fabricated, as described below. [0067] Various implementations of such sensor devices are described herein. In some implementations, the cover glass of a display device serves as a single or multi-touch sensor, as a handwriting (or note capture) input device, and as a fingerprint sensor. Sensor functionality and resolution can be tailored to specific locations on the cover glass. In some such implementations, the area in which the fingerprint sensing elements are located may provide not only fingerprint detection, but also handwriting and touch functionality. In some other implementations, the fingerprint sensor may be segregated in a separate, high-resolution zone that only provides fingerprint functionality. In some implementations, the sensor device serves as a combination touch and stylus input device. Various methods of fabrication are described herein, as well as methods for using a device that includes a combined sensor device. [0068] Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. Some implementations described herein combine aspects of capacitive and resistive technologies for touch sensing, handwriting input and in some cases fingerprint imaging. Some such implementations provide a touch sensor that combines capacitive and resistive technologies to enable a multi-functional user input sensor that can be overlaid on a display. Some implementations of the combined sensor device eliminate a middle touch sensor layer that is disposed between the cover glass and the display glass in some conventional projected capacitive touch (PCT)-based devices. Accordingly, some such implementations can mitigate or eliminate at least some drawbacks of PCT and resistive technologies. [0069] A hybrid PCT and digital resistive touch (DRT) implementation allows, for example, detection of a narrow stylus tip pressing onto the display with the DRT aspect while also allowing the detection of very light brushing or close hovering over the display with a finger using the PCT aspect. The sensor device can accept any form of stylus or pen input, regardless of whether it is conducting or non-conducting. Transparent or effectively transparent force-sensitive resistors may be included within some or all of the sensels to improve optical and electrical performance. [0070] According to some implementations, the combination sensor may include two or more patterned layers, some of which may be on a different substrate. The upper (or outer) substrate may, for example, be formed of a plastic such as polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyimide, or a similar material.
The upper substrate also may be substantially transparent and have a substantially transparent conductor such as indium-tin-oxide (ITO) patterned on its underside. The lower substrate may be formed of a substantially transparent substrate material, such as glass, or another suitable material. The top surface of the substantially transparent substrate can carry a patterned layer of substantially transparent conductor material such as ITO. In some implementations, the conductors on the underside of the upper substrate and the upper side of the lower substrate may be patterned into diamond-shaped electrodes, connected as rows or columns on each of the two different layers. [0071] Some such implementations include a wrap-around configuration wherein a flexible upper substrate of the sensor device has patterned metallization on an extended portion to allow routing of signal lines, electrical ground, and power. This flexible upper substrate may be wrapped around an edge of a relatively more rigid lower substrate of the cover glass apparatus. One or more ICs or passive components, including connecting sockets, may be mounted onto the flexible layer to reduce cost and complexity. Signal lines that address sensor electrodes on the lower substrate may be routed and connected to corresponding patterns on the underside of the flexible upper substrate. Such implementations have the potential advantage of eliminating the need for a flex cable for electrically connecting signal lines of the upper layer to integrated circuits and/or other devices. The approach allows a bezel-less configuration for some versions of the final cover glass apparatus. [0072] The fabrication methods employ predominantly transparent substrates and materials to increase the optical performance of underlying displays. The fabrication processes may utilize flexible substrates for at least a portion of the sensor device, and lend themselves to roll-to-roll processing for low cost. [0073] Use of a compliant, elastomeric layer between upper and lower portions of the combination sensor can increase the sensitivity to applied pressure or force from a stylus, while increasing the lateral resolution for a given sensel pitch. The elastomeric material may include open regions for the inclusion of force-sensitive resistors. With careful selection of the elastomeric and force-sensitive resistor (FSR) materials, the loss of transmissivity that can accompany air gaps is minimized. [0074] An array of force-sensitive switches and local capacitors may be used to connect the local capacitor into associated PCT detection circuitry, where each capacitor is formed with a thin dielectric layer to achieve a high capacitance increase when the force-sensitive switch is closed by the pressing of a stylus or finger. The same PCT detection circuitry can therefore be used to detect changes in mutual capacitance when touched with a finger (touch mode) and changes in sensel capacitance when the force-sensitive switch is depressed (stylus or fingerprint mode). [0075] The combined, multi-functional sensor device enables a single touchscreen to perform additional functions such as handwriting input and fingerprint recognition. In some implementations, these multiple features allow increased security through user authentication, and allow better capture of handwriting and a more interactive approach to user interfaces.
A handheld mobile device such as a cell phone with the sensor device enables an array of applications, including using the mobile device as a gateway for user authentication to enable transactions and physical access; using the handwriting input function for signature recognition and transmittal for transaction applications; and using the handwriting input feature to automatically capture notes and other documents of students in an academic setting or employees in a corporate setting. [0076] In some such implementations, a separate controller may be configured for the sensor device, or the controller may be included as part of an applications processor. Software for handwriting, touch and fingerprint detection may be included on one or more controllers or the applications processor. Low, medium and high resolution can be obtained with a single sensor device by scanning a subset of the sensels, or by aggregating lines or columns. Power consumption may be reduced by aggregating sensor pixels (or rows or columns) electrically using the controller, so that they perform as a low-power small array until higher resolution with a larger array is needed. Power consumption may be reduced by turning off portions or all of the sensor device, turning off parts of the controller, or employing first-level screening at a reduced frame rate. In some such implementations, a combination PCT sensor and digital resistive touch (DRT) sensor has a passive array of capacitors (PCT) and a passive array of resistive switches (DRT). While the touch sensor and stylus sensor systems generally use different sensing techniques, a holistic approach with a common structure saves on PCB part count, reduces area in an ASIC implementation, reduces power, and eliminates the need for isolation between touch and stylus subsystems. [0077] An example of a suitable EMS or MEMS device, to which the described implementations may apply, is a reflective display device. Reflective display devices can incorporate interferometric modulators (IMODs) to selectively absorb and/or reflect light incident thereon using principles of optical interference. IMODs can include an absorber, a reflector that is movable with respect to the absorber, and an optical resonant cavity defined between the absorber and the reflector. The reflector can be moved to two or more different positions, which can change the size of the optical resonant cavity and thereby affect the reflectance of the interferometric modulator. The reflectance spectrums of IMODs can create fairly broad spectral bands which can be shifted across the visible wavelengths to generate different colors. The position of the spectral band can be adjusted by changing the thickness of the optical resonant cavity. One way of changing the optical resonant cavity is by changing the position of the reflector. [0078] Figure 1 shows an example of an isometric view depicting two adjacent pixels in a series of pixels of an interferometric modulator (IMOD) display device. The IMOD display device includes one or more interferometric MEMS display elements. In these devices, the pixels of the MEMS display elements can be in either a bright or dark state. In the bright ("relaxed," "open" or "on") state, the display element reflects a large portion of incident visible light, e.g., to a user. Conversely, in the dark ("actuated," "closed" or "off") state, the display element reflects little incident visible light.
In some implementations, the light reflectance properties of the on and off states may be reversed. MEMS pixels can be configured to reflect predominantly at particular wavelengths allowing for a color display in addition to black and white. [0079] The IMOD display device can include a row/column array of IMODs. Each IMOD can include a pair of reflective layers, i.e., a movable reflective layer and a fixed partially reflective layer, positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap or cavity). The movable reflective layer may be moved between at least two positions. In a first position, i.e., a relaxed position, the movable reflective layer can be positioned at a relatively large distance from the fixed partially reflective layer. In a second position, i.e., an actuated position, the movable reflective layer can be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers can interfere constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel. In some implementations, the IMOD may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, absorbing and/or destructively interfering light within the visible range. In some other implementations, however, an IMOD may be in a dark state when unactuated, and in a reflective state when actuated. In some implementations, the introduction of an applied voltage can drive the pixels to change states. In some other implementations, an applied charge can drive the pixels to change states. [0080] The depicted portion of the pixel array in Figure 1 includes two adjacent interferometric modulators 12 (i.e., IMOD pixels). In the IMOD 12 on the left (as illustrated), a movable reflective layer 14 is illustrated in a relaxed position at a distance (which may be predetermined based on design parameters) from an optical stack 16, which includes a partially reflective layer. The voltage V0 applied across the IMOD 12 on the left is insufficient to cause actuation of the movable reflective layer 14. In the IMOD 12 on the right, the movable reflective layer 14 is illustrated in an actuated position near, adjacent or touching the optical stack 16. The voltage Vbias applied across the IMOD 12 on the right is sufficient to move and can maintain the movable reflective layer 14 in the actuated position. [0081] In Figure 1, the reflective properties of pixels 12 are generally illustrated with arrows 13 indicating light incident upon the pixels 12, and light 15 reflecting from the pixel 12 on the left. A person having ordinary skill in the art will readily recognize that most of the light 13 incident upon the pixels 12 may be transmitted through the transparent substrate 20, toward the optical stack 16. A portion of the light incident upon the optical stack 16 may be transmitted through the partially reflective layer of the optical stack 16, and a portion will be reflected back through the transparent substrate 20. The portion of light 13 that is transmitted through the optical stack 16 may be reflected at the movable reflective layer 14, back toward (and through) the transparent substrate 20.
Interference (constructive or destructive) between the light reflected from the partially reflective layer of the optical stack 16 and the light reflected from the movable reflective layer 14 will determine the wavelength(s) of light 15 reflected from the pixel 12. [0082] The optical stack 16 can include a single layer or several layers. The layer(s) can include one or more of an electrode layer, a partially reflective and partially transmissive layer and a transparent dielectric layer. In some implementations, the optical stack 16 is electrically conductive, partially transparent and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The electrode layer can be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer can be formed from a variety of materials that are partially reflective, such as various metals (for example, chromium (Cr)), semiconductors, and dielectrics. The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials. In some implementations, the optical stack 16 can include a single semi-transparent thickness of metal or semiconductor which serves as both an optical absorber and electrical conductor, while different, more electrically conductive layers or portions (e.g., of the optical stack 16 or of other structures of the IMOD) can serve to bus signals between IMOD pixels. The optical stack 16 also can include one or more insulating or dielectric layers covering one or more conductive layers or an electrically conductive/optically absorptive layer. [0083] In some implementations, the layer(s) of the optical stack 16 can be patterned into parallel strips, and may form row electrodes in a display device as described further below. As will be understood by one having ordinary skill in the art, the term "patterned" is used herein to refer to masking as well as etching processes. In some implementations, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in a display device. The movable reflective layer 14 may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of posts 18 and an intervening sacrificial material deposited between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, can be formed between the movable reflective layer 14 and the optical stack 16. In some implementations, the spacing between posts 18 may be approximately 1-1000 µm, while the gap 19 may be less than approximately 10,000 Angstroms (Å). [0084] In some implementations, each pixel of the IMOD, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers. When no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state, as illustrated by the pixel 12 on the left in Figure 1, with the gap 19 between the movable reflective layer 14 and optical stack 16.
However, when a potential difference, e.g., voltage, is applied to at least one of a selected row and column, the capacitor formed at the intersection of the row and column electrodes at the corresponding pixel becomes charged, and electrostatic forces pull the electrodes together. If the applied voltage exceeds a threshold, the movable reflective layer 14 can deform and move near or against the optical stack 16. A dielectric layer (not shown) within the optical stack 16 may prevent shorting and control the separation distance between the layers 14 and 16, as illustrated by the actuated pixel 12 on the right in Figure 1. The behavior is the same regardless of the polarity of the applied potential difference. Though a series of pixels in an array may be referred to in some instances as "rows" or "columns," a person having ordinary skill in the art will readily understand that referring to one direction as a "row" and another as a "column" is arbitrary. Restated, in some orientations, the rows can be considered columns, and the columns considered to be rows. Furthermore, the display elements may be evenly arranged in orthogonal rows and columns (an "array"), or arranged in non-linear configurations, for example, having certain positional offsets with respect to one another (a "mosaic"). The terms "array" and "mosaic" may refer to either configuration. Thus, although the display is referred to as including an "array" or "mosaic," the elements themselves need not be arranged orthogonally to one another, or disposed in an even distribution, in any instance, but may include arrangements having asymmetric shapes and unevenly distributed elements. [0085] Figure 2 shows an example of a system block diagram illustrating an electronic device incorporating a 3x3 interferometric modulator display. The electronic device includes a processor 21 that may be configured to execute one or more software modules. In addition to executing an operating system, the processor 21 may be configured to execute one or more software applications, including a web browser, a telephone application, an email program, or any other software application. [0086] The processor 21 can be configured to communicate with an array driver 22. The array driver 22 can include a row driver circuit 24 and a column driver circuit 26 that provide signals to, e.g., a display array or panel 30. The cross section of the IMOD display device illustrated in Figure 1 is shown by the lines 1-1 in Figure 2. Although Figure 2 illustrates a 3x3 array of IMODs for the sake of clarity, the display array 30 may contain a very large number of IMODs, and may have a different number of IMODs in rows than in columns, and vice versa. [0087] Figure 3 shows an example of a diagram illustrating movable reflective layer position versus applied voltage for the interferometric modulator of Figure 1. For MEMS interferometric modulators, the row/column (i.e., common/segment) write procedure may take advantage of a hysteresis property of these devices as illustrated in Figure 3. An interferometric modulator may require, for example, about a 10-volt potential difference to cause the movable reflective layer, or mirror, to change from the relaxed state to the actuated state. When the voltage is reduced from that value, the movable reflective layer maintains its state as the voltage drops back below, e.g., 10-volts. However, the movable reflective layer does not relax completely until the voltage drops below 2-volts.
Thus, a range of applied voltage, approximately 3 to 7-volts, as shown in Figure 3, exists within which the device is stable in either the relaxed or actuated state. This is referred to herein as the "hysteresis window" or "stability window." For a display array 30 having the hysteresis characteristics of Figure 3, the row/column write procedure can be designed to address one or more rows at a time, such that during the addressing of a given row, pixels in the addressed row that are to be actuated are exposed to a voltage difference of about 10-volts, and pixels that are to be relaxed are exposed to a voltage difference of near zero volts. After addressing, the pixels are exposed to a steady state or bias voltage difference of approximately 5-volts such that they remain in the previous strobing state. In this example, after being addressed, each pixel sees a potential difference within the "stability window" of about 3-7-volts. This hysteresis property feature enables the pixel design, e.g., illustrated in Figure 1, to remain stable in either an actuated or relaxed pre-existing state under the same applied voltage conditions. Since each IMOD pixel, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers, this stable state can be held at a steady voltage within the hysteresis window without substantially consuming or losing power. Moreover, essentially little or no current flows into the IMOD pixel if the applied voltage potential remains substantially fixed. [0088] In some implementations, a frame of an image may be created by applying data signals in the form of "segment" voltages along the set of column electrodes, in accordance with the desired change (if any) to the state of the pixels in a given row. Each row of the array can be addressed in turn, such that the frame is written one row at a time. To write the desired data to the pixels in a first row, segment voltages corresponding to the desired state of the pixels in the first row can be applied on the column electrodes, and a first row pulse in the form of a specific "common" voltage or signal can be applied to the first row electrode. The set of segment voltages can then be changed to correspond to the desired change (if any) to the state of the pixels in the second row, and a second common voltage can be applied to the second row electrode. In some implementations, the pixels in the first row are unaffected by the change in the segment voltages applied along the column electrodes, and remain in the state they were set to during the first common voltage row pulse. This process may be repeated for the entire series of rows, or alternatively, columns, in a sequential fashion to produce the image frame. The frames can be refreshed and/or updated with new image data by continually repeating this process at some desired number of frames per second. [0089] The combination of segment and common signals applied across each pixel (that is, the potential difference across each pixel) determines the resulting state of each pixel. Figure 4 shows an example of a table illustrating various states of an interferometric modulator when various common and segment voltages are applied. As will be readily understood by one having ordinary skill in the art, the "segment" voltages can be applied to either the column electrodes or the row electrodes, and the "common" voltages can be applied to the other of the column electrodes or the row electrodes.
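For illustration, before walking through the voltage combinations of Figure 4 in detail, the hold behavior described above can be modeled as a simple hysteresis element. The sketch below uses only the example thresholds given above (roughly 10-volts to actuate and 2-volts to release); the function and constant names are hypothetical and not taken from this disclosure.

    # Illustrative sketch of the IMOD hysteresis ("stability") window
    # described above, using the example values from Figure 3. Names and
    # thresholds are hypothetical.
    ACTUATE_V = 10.0  # approximate potential difference that actuates a pixel
    RELEASE_V = 2.0   # below this, an actuated pixel relaxes

    def next_state(actuated, pixel_voltage):
        """Return the new pixel state given its current state and voltage."""
        v = abs(pixel_voltage)  # behavior is polarity-independent
        if v >= ACTUATE_V:
            return True          # actuate (dark state in this example)
        if v <= RELEASE_V:
            return False         # relax (bright state in this example)
        return actuated          # inside the hysteresis window: hold state

    # A pixel biased at ~5 volts holds whichever state it was last driven to.
    state = next_state(False, 10.0)   # actuated by an address pulse
    state = next_state(state, 5.0)    # held by the bias voltage
    print(state)  # True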
[0090] As illustrated in Figure 4 (as well as in the timing diagram shown in Figure 5B), when a release voltage VCREL is applied along a common line, all interferometric modulator elements along the common line will be placed in a relaxed state, alternatively referred to as a released or unactuated state, regardless of the voltage applied along the segment lines, i.e., high segment voltage VSH and low segment voltage VSL. In particular, when the release voltage VCREL is applied along a common line, the potential voltage across the modulator (alternatively referred to as a pixel voltage) is within the relaxation window (see Figure 3, also referred to as a release window) both when the high segment voltage VSH and the low segment voltage VSL are applied along the corresponding segment line for that pixel. [0091] When a hold voltage is applied on a common line, such as a high hold voltage VCHOLD_H or a low hold voltage VCHOLD_L, the state of the interferometric modulator will remain constant. For example, a relaxed IMOD will remain in a relaxed position, and an actuated IMOD will remain in an actuated position. The hold voltages can be selected such that the pixel voltage will remain within a stability window both when the high segment voltage VSH and the low segment voltage VSL are applied along the corresponding segment line. Thus, the segment voltage swing, i.e., the difference between the high VSH and low segment voltage VSL, is less than the width of either the positive or the negative stability window. [0092] When an addressing, or actuation, voltage is applied on a common line, such as a high addressing voltage VCADD_H or a low addressing voltage VCADD_L, data can be selectively written to the modulators along that line by application of segment voltages along the respective segment lines. The segment voltages may be selected such that actuation is dependent upon the segment voltage applied. When an addressing voltage is applied along a common line, application of one segment voltage will result in a pixel voltage within a stability window, causing the pixel to remain unactuated. In contrast, application of the other segment voltage will result in a pixel voltage beyond the stability window, resulting in actuation of the pixel. The particular segment voltage which causes actuation can vary depending upon which addressing voltage is used. In some implementations, when the high addressing voltage VCADD_H is applied along the common line, application of the high segment voltage VSH can cause a modulator to remain in its current position, while application of the low segment voltage VSL can cause actuation of the modulator. As a corollary, the effect of the segment voltages can be the opposite when a low addressing voltage VCADD_L is applied, with high segment voltage VSH causing actuation of the modulator, and low segment voltage VSL having no effect (i.e., remaining stable) on the state of the modulator. [0093] In some implementations, hold voltages, address voltages, and segment voltages may be used which always produce the same polarity potential difference across the modulators. In some other implementations, signals can be used which alternate the polarity of the potential difference of the modulators. Alternation of the polarity across the modulators (that is, alternation of the polarity of write procedures) may reduce or inhibit charge accumulation which could occur after repeated write operations of a single polarity.
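The common/segment combinations just described can be restated as a small decision procedure. The following sketch summarizes the behavior of Figure 4 for illustration only; the string encodings and function name are hypothetical.

    # Illustrative sketch of the common/segment addressing logic summarized
    # in Figure 4. Voltage names follow the text; the encoding is hypothetical.
    def apply_line(common, segment, actuated):
        """Return the new state of one modulator for one line time.

        common:  'RELEASE', 'HOLD_H', 'HOLD_L', 'ADD_H', or 'ADD_L'
        segment: 'VSH' (high) or 'VSL' (low)
        actuated: current state of the modulator
        """
        if common == 'RELEASE':
            return False                     # relaxed regardless of segment
        if common in ('HOLD_H', 'HOLD_L'):
            return actuated                  # held within the stability window
        if common == 'ADD_H':
            return True if segment == 'VSL' else actuated  # VSL actuates
        if common == 'ADD_L':
            return True if segment == 'VSH' else actuated  # roles reversed
        raise ValueError(common)

    # With a high addressing voltage, the low segment voltage actuates:
    print(apply_line('ADD_H', 'VSL', False))  # True (actuated)
    print(apply_line('ADD_H', 'VSH', False))  # False (remains relaxed)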
[0094] Figure 5A shows an example of a diagram illustrating a frame of display data in the 3x3 interferometric modulator display of Figure 2. Figure 5B shows an example of a timing diagram for common and segment signals that may be used to write the frame of display data illustrated in Figure 5A. The signals can be applied to, e.g., the 3x3 array of Figure 2, which will ultimately result in the line time 60e display arrangement illustrated in Figure 5A. The actuated modulators in Figure 5A are in a dark-state, i.e., where a substantial portion of the reflected light is outside of the visible spectrum so as to result in a dark appearance to, e.g., a viewer. Prior to writing the frame illustrated in Figure 5A, the pixels can be in any state, but the write procedure illustrated in the timing diagram of Figure 5B presumes that each modulator has been released and resides in an unactuated state before the first line time 60a. [0095] During the first line time 60a: a release voltage 70 is applied on common line 1; the voltage applied on common line 2 begins at a high hold voltage 72 and moves to a release voltage 70; and a low hold voltage 76 is applied along common line 3. Thus, the modulators (common 1, segment 1), (1,2) and (1,3) along common line 1 remain in a relaxed, or unactuated, state for the duration of the first line time 60a, the modulators (2,1), (2,2) and (2,3) along common line 2 will move to a relaxed state, and the modulators (3,1), (3,2) and (3,3) along common line 3 will remain in their previous state. With reference to Figure 4, the segment voltages applied along segment lines 1, 2 and 3 will have no effect on the state of the interferometric modulators, as none of common lines 1, 2 or 3 are being exposed to voltage levels causing actuation during line time 60a (i.e., VCREL - relax and VCHOLD_L - stable). [0096] During the second line time 60b, the voltage on common line 1 moves to a high hold voltage 72, and all modulators along common line 1 remain in a relaxed state regardless of the segment voltage applied because no addressing, or actuation, voltage was applied on the common line 1. The modulators along common line 2 remain in a relaxed state due to the application of the release voltage 70, and the modulators (3,1), (3,2) and (3,3) along common line 3 will relax when the voltage along common line 3 moves to a release voltage 70. [0097] During the third line time 60c, common line 1 is addressed by applying a high address voltage 74 on common line 1. Because a low segment voltage 64 is applied along segment lines 1 and 2 during the application of this address voltage, the pixel voltage across modulators (1,1) and (1,2) is greater than the high end of the positive stability window (i.e., the voltage differential exceeded a predefined threshold) of the modulators, and the modulators (1,1) and (1,2) are actuated. Conversely, because a high segment voltage 62 is applied along segment line 3, the pixel voltage across modulator (1,3) is less than that of modulators (1,1) and (1,2), and remains within the positive stability window of the modulator; modulator (1,3) thus remains relaxed. Also during line time 60c, the voltage along common line 2 decreases to a low hold voltage 76, and the voltage along common line 3 remains at a release voltage 70, leaving the modulators along common lines 2 and 3 in a relaxed position.
[0098] During the fourth line time 60d, the voltage on common line 1 returns to a high hold voltage 72, leaving the modulators along common line 1 in their respective addressed states. The voltage on common line 2 is decreased to a low address voltage 78. Because a high segment voltage 62 is applied along segment line 2, the pixel voltage across modulator (2,2) is below the lower end of the negative stability window of the modulator, causing the modulator (2,2) to actuate. Conversely, because a low segment voltage 64 is applied along segment lines 1 and 3, the modulators (2,1) and (2,3) remain in a relaxed position. The voltage on common line 3 increases to a high hold voltage 72, leaving the modulators along common line 3 in a relaxed state. Then the voltage on common line 2 transitions back to low hold voltage 76. [0099] Finally, during the fifth line time 60e, the voltage on common line 1 remains at high hold voltage 72, and the voltage on common line 2 remains at low hold voltage 76, leaving the modulators along common lines 1 and 2 in their respective addressed states. The voltage on common line 3 increases to a high address voltage 74 to address the modulators along common line 3. As a low segment voltage 64 is applied on segment lines 2 and 3, the modulators (3,2) and (3,3) actuate, while the high segment voltage 62 applied along segment line 1 causes modulator (3,1) to remain in a relaxed position. Thus, at the end of the fifth line time 60e, the 3x3 pixel array is in the state shown in Figure 5A, and will remain in that state as long as the hold voltages are applied along the common lines, regardless of variations in the segment voltage which may occur when modulators along other common lines (not shown) are being addressed. [0100] In the timing diagram of Figure 5B, a given write procedure (i.e., line times 60a-60e) can include the use of either high hold and address voltages, or low hold and address voltages. Once the write procedure has been completed for a given common line (and the common voltage is set to the hold voltage having the same polarity as the actuation voltage), the pixel voltage remains within a given stability window, and does not pass through the relaxation window until a release voltage is applied on that common line. Furthermore, as each modulator is released as part of the write procedure prior to addressing the modulator, the actuation time of a modulator, rather than the release time, may determine the necessary line time. Specifically, in implementations in which the release time of a modulator is greater than the actuation time, the release voltage may be applied for longer than a single line time, as depicted in Figure 5B. In some other implementations, voltages applied along common lines or segment lines may vary to account for variations in the actuation and release voltages of different modulators, such as modulators of different colors. [0101] The details of the structure of interferometric modulators that operate in accordance with the principles set forth above may vary widely. For example, Figures 6A-6E show examples of cross-sections of varying implementations of interferometric modulators, including the movable reflective layer 14 and its supporting structures. Figure 6A shows an example of a partial cross-section of the interferometric modulator display of Figure 1, where a strip of metal material, i.e., the movable reflective layer 14, is deposited on supports 18 extending orthogonally from the substrate 20.
In Figure 6B, the movable reflective layer 14 of each IMOD is generally square or rectangular in shape and attached to supports at or near the corners, on tethers 32. In Figure 6C, the movable reflective layer 14 is generally square or rectangular in shape and suspended from a deformable layer 34, which may include a flexible metal. The deformable layer 34 can connect, directly or indirectly, to the substrate 20 around the perimeter of the movable reflective layer 14. These connections are herein referred to as support posts. The implementation shown in Figure 6C has additional benefits deriving from the decoupling of the optical functions of the movable reflective layer 14 from its mechanical functions, which are carried out by the deformable layer 34. This decoupling allows the structural design and materials used for the reflective layer 14 and those used for the deformable layer 34 to be optimized independently of one another. [0102] Figure 6D shows another example of an IMOD, where the movable reflective layer 14 includes a reflective sub-layer 14a. The movable reflective layer 14 rests on a support structure, such as support posts 18. The support posts 18 provide separation of the movable reflective layer 14 from the lower stationary electrode (i.e., part of the optical stack 16 in the illustrated IMOD) so that a gap 19 is formed between the movable reflective layer 14 and the optical stack 16, for example when the movable reflective layer 14 is in a relaxed position. The movable reflective layer 14 also can include a conductive layer 14c, which may be configured to serve as an electrode, and a support layer 14b. In this example, the conductive layer 14c is disposed on one side of the support layer 14b, distal from the substrate 20, and the reflective sub-layer 14a is disposed on the other side of the support layer 14b, proximal to the substrate 20. In some implementations, the reflective sub-layer 14a can be conductive and can be disposed between the support layer 14b and the optical stack 16. The support layer 14b can include one or more layers of a dielectric material, for example, silicon oxynitride (SiON) or silicon dioxide (SiO2). In some implementations, the support layer 14b can be a stack of layers, such as, for example, a SiO2/SiON/SiO2 tri-layer stack. Either or both of the reflective sub-layer 14a and the conductive layer 14c can include, e.g., an aluminum (Al) alloy with about 0.5% copper (Cu), or another reflective metallic material. Employing conductive layers 14a, 14c above and below the dielectric support layer 14b can balance stresses and provide enhanced conduction. In some implementations, the reflective sub-layer 14a and the conductive layer 14c can be formed of different materials for a variety of design purposes, such as achieving specific stress profiles within the movable reflective layer 14. [0103] As illustrated in Figure 6D, some implementations also can include a black mask structure 23. The black mask structure 23 can be formed in optically inactive regions (e.g., between pixels or under posts 18) to absorb ambient or stray light. The black mask structure 23 also can improve the optical properties of a display device by inhibiting light from being reflected from or transmitted through inactive portions of the display, thereby increasing the contrast ratio. Additionally, the black mask structure 23 can be conductive and be configured to function as an electrical bussing layer.
In some implementations, the row electrodes can be connected to the black mask structure 23 to reduce the resistance of the connected row electrode. The black mask structure 23 can be formed using a variety of methods, including deposition and patterning techniques. The black mask structure 23 can include one or more layers. For example, in some implementations, the black mask structure 23 includes a molybdenum-chromium (MoCr) layer that serves as an optical absorber, a SiO2 layer, and an aluminum alloy that serves as a reflector and a bussing layer, with a thickness in the range of about 30-80 Å, 500-1000 Å, and 500-6000 Å, respectively. The one or more layers can be patterned using a variety of techniques, including photolithography and dry etching, including, for example, tetrafluoromethane (CF4) and/or oxygen (O2) for the MoCr and SiO2 layers and chlorine (Cl2) and/or boron trichloride (BCl3) for the aluminum alloy layer. In some implementations, the black mask 23 can be an etalon or interferometric stack structure. In such interferometric stack black mask structures 23, the conductive absorbers can be used to transmit or bus signals between lower, stationary electrodes in the optical stack 16 of each row or column. In some implementations, a spacer layer 35 can serve to generally electrically isolate the absorber layer 16a from the conductive layers in the black mask 23. [0104] Figure 6E shows another example of an IMOD, where the movable reflective layer 14 is self-supporting. In contrast with Figure 6D, the implementation of Figure 6E does not include support posts 18. Instead, the movable reflective layer 14 contacts the underlying optical stack 16 at multiple locations, and the curvature of the movable reflective layer 14 provides sufficient support that the movable reflective layer 14 returns to the unactuated position of Figure 6E when the voltage across the interferometric modulator is insufficient to cause actuation. The optical stack 16, which may contain several different layers, is shown here, for clarity, as including an optical absorber 16a and a dielectric 16b. In some implementations, the optical absorber 16a may serve both as a fixed electrode and as a partially reflective layer. [0105] In implementations such as those shown in Figures 6A-6E, the IMODs function as direct-view devices, in which images are viewed from the front side of the transparent substrate 20, i.e., the side opposite to that upon which the modulator is arranged. In these implementations, the back portions of the device (that is, any portion of the display device behind the movable reflective layer 14, including, for example, the deformable layer 34 illustrated in Figure 6C) can be configured and operated upon without impacting or negatively affecting the image quality of the display device, because the reflective layer 14 optically shields those portions of the device. For example, in some implementations a bus structure (not illustrated) can be included behind the movable reflective layer 14 which provides the ability to separate the optical properties of the modulator from the electromechanical properties of the modulator, such as voltage addressing and the movements that result from such addressing. Additionally, the implementations of Figures 6A-6E can simplify processing, such as patterning.
[0106] Figure 7 shows an example of a flow diagram illustrating a manufacturing process 80 for an interferometric modulator, and Figures 8A-8E show examples of cross-sectional schematic illustrations of corresponding stages of such a manufacturing process 80. In some implementations, the manufacturing process 80 can be implemented to manufacture, e.g., interferometric modulators of the general type illustrated in Figures 1 and 6; the process 80 also can include other blocks not shown in Figure 7. With reference to Figures 1, 6 and 7, the process 80 begins at block 82 with the formation of the optical stack 16 over the substrate 20. Figure 8A illustrates such an optical stack 16 formed over the substrate 20. The substrate 20 may be a transparent substrate such as glass or plastic; it may be flexible or relatively stiff and unbending, and may have been subjected to prior preparation processes, e.g., cleaning, to facilitate efficient formation of the optical stack 16. As discussed above, the optical stack 16 can be electrically conductive, partially transparent and partially reflective and may be fabricated, for example, by depositing one or more layers having the desired properties onto the transparent substrate 20. In Figure 8A, the optical stack 16 includes a multilayer structure having sub-layers 16a and 16b, although more or fewer sub-layers may be included in some other implementations. In some implementations, one of the sub-layers 16a, 16b can be configured with both optically absorptive and conductive properties, such as the combined conductor/absorber sub-layer 16a. Additionally, one or more of the sub-layers 16a, 16b can be patterned into parallel strips, and may form row electrodes in a display device. Such patterning can be performed by a masking and etching process or another suitable process known in the art. In some implementations, one of the sub-layers 16a, 16b can be an insulating or dielectric layer, such as sub-layer 16b that is deposited over one or more metal layers (e.g., one or more reflective and/or conductive layers). In addition, the optical stack 16 can be patterned into individual and parallel strips that form the rows of the display. [0107] The process 80 continues at block 84 with the formation of a sacrificial layer 25 over the optical stack 16. The sacrificial layer 25 is later removed (e.g., at block 90) to form the cavity 19 and thus the sacrificial layer 25 is not shown in the resulting interferometric modulators 12 illustrated in Figure 1. Figure 8B illustrates a partially fabricated device including a sacrificial layer 25 formed over the optical stack 16. The formation of the sacrificial layer 25 over the optical stack 16 may include deposition of a xenon difluoride (XeF2)-etchable material such as molybdenum (Mo) or amorphous silicon (Si), in a thickness selected to provide, after subsequent removal, a gap or cavity 19 (see also Figures 1 and 8E) having a desired design size. Deposition of the sacrificial material may be carried out using deposition techniques such as physical vapor deposition (PVD, e.g., sputtering), plasma-enhanced chemical vapor deposition (PECVD), thermal chemical vapor deposition (thermal CVD), or spin-coating. [0108] The process 80 continues at block 86 with the formation of a support structure, e.g., a post 18, as illustrated in Figures 1, 6 and 8C.
The formation of the post 18 may include patterning the sacrificial layer 25 to form a support structure aperture, then depositing a material (e.g., a polymer or an inorganic material, e.g., silicon oxide) into the aperture to form the post 18, using a deposition method such as PVD, PECVD, thermal CVD, or spin-coating. In some implementations, the support structure aperture formed in the sacrificial layer can extend through both the sacrificial layer 25 and the optical stack 16 to the underlying substrate 20, so that the lower end of the post 18 contacts the substrate 20 as illustrated in Figure 6A. Alternatively, as depicted in Figure 8C, the aperture formed in the sacrificial layer 25 can extend through the sacrificial layer 25, but not through the optical stack 16. For example, Figure 8E illustrates the lower ends of the support posts 18 in contact with an upper surface of the optical stack 16. The post 18, or other support structures, may be formed by depositing a layer of support structure material over the sacrificial layer 25 and patterning portions of the support structure material located away from apertures in the sacrificial layer 25. The support structures may be located within the apertures, as illustrated in Figure 8C, but also can, at least partially, extend over a portion of the sacrificial layer 25. As noted above, the patterning of the sacrificial layer 25 and/or the support posts 18 can be performed by a patterning and etching process, but also may be performed by alternative etching methods. [0109] The process 80 continues at block 88 with the formation of a movable reflective layer or membrane such as the movable reflective layer 14 illustrated in Figures 1, 6 and 8D. The movable reflective layer 14 may be formed by employing one or more deposition steps, e.g., reflective layer (e.g., aluminum, aluminum alloy) deposition, along with one or more patterning, masking, and/or etching steps. The movable reflective layer 14 can be electrically conductive, and referred to as an electrically conductive layer. In some implementations, the movable reflective layer 14 may include a plurality of sub-layers 14a, 14b, 14c as shown in Figure 8D. In some implementations, one or more of the sub-layers, such as sub-layers 14a, 14c, may include highly reflective sub-layers selected for their optical properties, and another sub-layer 14b may include a mechanical sub-layer selected for its mechanical properties. Since the sacrificial layer 25 is still present in the partially fabricated interferometric modulator formed at block 88, the movable reflective layer 14 is typically not movable at this stage. A partially fabricated IMOD that contains a sacrificial layer 25 also may be referred to herein as an "unreleased" IMOD. As described above in connection with Figure 1, the movable reflective layer 14 can be patterned into individual and parallel strips that form the columns of the display. [0110] The process 80 continues at block 90 with the formation of a cavity, e.g., cavity 19 as illustrated in Figures 1, 6 and 8E. The cavity 19 may be formed by exposing the sacrificial material 25 (deposited at block 84) to an etchant.
For example, an etchable sacrificial material such as Mo or amorphous Si may be removed by dry chemical etching, e.g., by exposing the sacrificial layer 25 to a gaseous or vaporous etchant, such as vapors derived from solid XeF2, for a period of time that is effective to remove the desired amount of material; the sacrificial material is typically removed selectively relative to the structures surrounding the cavity 19. Other etching methods, e.g., wet etching and/or plasma etching, also may be used. Since the sacrificial layer 25 is removed during block 90, the movable reflective layer 14 is typically movable after this stage. After removal of the sacrificial material 25, the resulting fully or partially fabricated IMOD may be referred to herein as a "released" IMOD. [0111] In some implementations described herein, at least part of a combined sensor device may be incorporated in a cover glass apparatus that can be overlaid on or otherwise combined with a display. The cover glass apparatus may have 2, 3 or more layers. In some implementations, the cover glass apparatus may include a substantially transparent and flexible upper substrate and a substantially transparent and relatively more rigid lower substrate. The cover glass may include intermediate layers disposed on and/or between the substrates, such as electrodes, a substantially transparent elastomeric layer and/or force-sensitive resistor material. In some such implementations, the lower substrate of the cover glass apparatus may be overlaid on a display substrate. [0112] Figure 9A shows an example of sensor electrodes formed on substrates of a cover glass apparatus. In the example shown in Figure 9A, three rows 915 of diamond-shaped substantially transparent electrodes are depicted on the substantially transparent upper substrate 905 and seven columns 920 of substantially transparent diamond-shaped electrodes are located on the substantially transparent lower substrate 910. Relatively few rows and columns are shown here for illustrative purposes, while in actual sensor devices the number of rows and columns may extend from tens to hundreds or even a thousand or more. One may note that the rows and columns are largely interchangeable, and no limitation is intended here. In some implementations, the upper substrate 905 of the combined sensor device 900 may be formed of a relatively flexible material, such as a flexible polymer. In some such examples, the upper substrate 905 may be a clear plastic film made of polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyimide, or a similar material. In some implementations, the upper substrate 905 may have a modulus of elasticity in the range of 0.5-5 GPa. The lower substrate 910 may be formed of glass, plastic, a polymer, etc. In some implementations, the lower substrate 910 may be a display substrate. For example, in some implementations the lower substrate 910 may be the same substrate as the transparent substrate 20 described above. [0113] In this example, every other column electrode 920 includes diamond electrodes that are located directly under corresponding diamonds of the row electrodes 915 in overlapping regions 925a. Some implementations have offsets of the diamonds of the row electrodes 915 and the column electrodes 920, whereby the diamonds in the row electrodes 915 and the column electrodes 920 partially overlie each other.
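In operation, row and column electrode arrays such as the rows 915 and columns 920 are typically scanned pairwise to build up a two-dimensional map of mutual capacitance. The outline below is illustrative only; the measurement helper is hypothetical and stands in for the analog front end, which this disclosure does not specify here.

    # Illustrative sketch of scanning a row/column electrode array such as
    # the rows 915 and columns 920 to build a mutual-capacitance image.
    # measure_mutual_capacitance() is a hypothetical stand-in: real
    # controllers drive one line while sensing charge on the other.
    def scan_touch_image(num_rows, num_cols, measure_mutual_capacitance):
        """Return a num_rows x num_cols matrix of capacitance readings."""
        image = []
        for r in range(num_rows):
            row_readings = []
            for c in range(num_cols):
                # A finger near the (r, c) crossing reduces the mutual
                # capacitance between the row and column electrodes.
                row_readings.append(measure_mutual_capacitance(r, c))
            image.append(row_readings)
        return image

    # Example with a stub measurement: a touch near row 1, column 3 of the
    # 3 x 7 example array of Figure 9A.
    baseline = 1.0
    stub = lambda r, c: baseline - (0.2 if (r, c) == (1, 3) else 0.0)
    for row in scan_touch_image(3, 7, stub):
        print(row)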
[0114] In some implementations, the row electrodes 915 and/or the column electrodes 920 may be formed into other shapes, such as squares, rectangles, triangles, circles, ovals, etc., and shapes that include predominantly open regions in the center of the shape such as a frame, a ring, or a series of connected line segments. A description of some such shapes is included in various parts of pending U.S. Patent Application No. 12/957,025 filed December 21, 2010 and entitled "Capacitive Touch Sensing Devices and Methods of Manufacturing Thereof," (see, e.g., Figures 11A-11J and the corresponding description) the contents of which are hereby incorporated by reference in their entirety. Moreover, in alternative implementations the row electrodes 915 may be formed on the lower substrate 910 and the column electrodes 920 may be formed on the upper substrate 905. In some implementations, such as that described below with reference to Figures 10C and 10D including a compressible material 1025 positioned between the row electrodes 915 and the column electrodes 920, a light touch may be detected by measuring the change in mutual capacitance between adjacent diamonds (also referred to as projective capacitive touch (PCT)). In such implementations, contact with a stylus may be detected when the upper substrate 905 is depressed by measuring the change in capacitance between the row electrodes 915 and the column electrodes 920. [0115] In implementations with a patterned dielectric material between the row electrodes 915 and the column electrodes 920, gaps may be formed between corresponding row electrodes 915 and column electrodes 920. In such implementations, light touches can be detected with PCT measurements between adjacent electrodes, and stylus depressions can be detected either by a change in the effective parallel plate capacitance between the row electrodes 915 and the column electrodes 920 (see Figure 10B) or by measuring changes in resistance that occur when the row electrodes 915 and the column electrodes 920 come in direct mechanical and electrical contact (see Figure 10A), or by measuring changes in a force-sensitive resistor positioned between row electrodes 915 and column electrodes 920 when pressed with a finger, a stylus tip, ridges of a finger, or the like (see Figure 10D). The force-sensitive resistors may be included between row electrodes 915 and column electrodes 920 in a handwriting and touch sensor zone 1005, in a fingerprint sensor zone 1010, or both. In some such implementations, a high resistivity layer may be formed on the row electrodes 915 or the column electrodes 920 to minimize the effect of parasitic signals during the sensing of the location of the stylus. [0116] Figure 9B shows an alternative example of sensor electrodes formed on a cover glass. In the example shown in Figure 9B, the column electrodes 920 whose diamonds lie beneath the diamonds of the row electrodes 915 have been removed from the design. Ohmic membrane switches, resistive membrane switches, resistive switches with force-sensitive resistive (FSR) material, FSR switches with a fixed series resistor, or capacitive membranes of the combined sensor device 900 may be formed at the intersections between the row electrodes 915 and the column electrodes 920 (in overlapping regions 925b) for detecting stylus contact and, in some cases, a fingertip or ridges of a finger.
Such implementations can reduce the number of column electrodes 920 (note that the number of column electrodes 920 and associated connection pads in Figure 9B is fewer than the column electrodes 920 and connection pads in Figure 9A) that need to be connected to the external processing circuitry, because the same columns can serve the purpose of detecting a light touch through the PCT method or detecting the stylus contact through either a capacitance change method or a resistive change method. [0117] For example, in the touch mode, only a very light force may be required to register a touch. However, in the handwriting mode, the sensor may be configured to accept many forms of stylus, pen, or other pointer input, regardless of whether the pointing device is conducting or non-conducting. Some implementations described herein provide sensors capable of distinguishing a large number of multi-touch events simultaneously, such as may occur when reading a fingerprint while operating in a fingerprint sensor mode, or detecting and rejecting an inadvertent palm touch when operating in a handwriting sensor mode. [0118] Figure 10A shows an example of a cross-sectional view of a combined sensor device. While the sensor array shown in Figure 10A is depicted as a combination touch, stylus, and fingerprint sensor, it should be noted that the configuration of Figure 10A and other configurations described below may serve as only a touch sensor, a stylus sensor, a fingerprint sensor, or a combination thereof. In the example shown in Figure 10A, two repeating cells are shown in a first region referred to as a handwriting and touch sensor zone 1005. Such sensing elements may be referred to herein as "sensels." An optional second region, referred to as a fingerprint sensor zone 1010, generally has a finer pitch between electrodes to allow for higher resolution often needed for fingerprint detection. As noted elsewhere herein, in some implementations the fingerprint sensor and the handwriting and touch sensor are not in different zones. Figures 10B-10D show examples of cross-sectional views of alternative combined sensor devices. Figures 10A-10D, like many other drawings provided herein, may not be drawn to scale. Touch, handwriting, and fingerprint zones are shown in Figures 10A-10D, although not all zones would normally be activated simultaneously. Nor may all zones and operating modes be available in a sensor device. Single or multi-touching using one or more fingers is depicted as being sensed using PCT in handwriting and touch sensor zone 1005, where particularly light touches as well as moderate and heavy touches may be detected. In the example shown in Figure 10A, proximity of a finger 1047 alters the electric field 1050 between the upper electrode 1015 and the lower electrode 1030b, producing a change in mutual capacitance. This effect is schematically depicted by the variable capacitor of the associated circuit diagram 1055a. In some implementations, the upper electrode 1015 may be a row electrode and, as mentioned above, in some other implementations the upper electrode 1015 may be a column electrode (see Figures 9A and 9B). [0119] High forces or high localized pressure (such as that incurred when a tip of a stylus such as a pen, pencil, or pointer is pressed against the surface of the combined sensor device 900) may be detected with ohmic or resistive membrane switches.
One example is shown in Figure 10A, in which high localized pressure produced by a pen or stylus 1042 can be detected by a mechanical switch that includes the upper electrode 1015 and the lower electrode 1030a. A resistor 1035, sometimes referred to as a fixed resistor, may be positioned between upper electrode 1015 and lower electrode 1030a to prevent direct shorting of the upper electrode 1015 and the lower electrode 1030a. The switch including a vertical or serpentine fixed resistor is represented schematically in the circuit diagram 1055a. The resistor 1035 may have an additional metal layer disposed thereon (not shown) to aid in electrical contact between it and the upper electrode 1015. While a resistive membrane switch as defined here includes at least a fixed resistor in each sensel (the resistive membrane switch also may include a force-sensitive resistor in series with the fixed resistor or in lieu of the fixed resistor), an ohmic membrane switch does not require an additional fixed resistor in series with the upper and lower electrodes. The fixed resistor may be formed of an ohmic material in some implementations. In some other implementations, the fixed resistor may be a non-linear device such as a leaky diode or other device that provides a relatively high resistance to current flow. The fixed resistor may include a thin-film conductive cap that serves as a conductive contact surface. Whereas a one-to-one correspondence between digital resistive touch (DRT) lower electrodes 1030a and PCT lower electrodes 1030b is shown in Figure 10A, in some configurations the PCT lower electrodes 1030b could span one or more adjacent sensels. In some configurations, the PCT lower electrode 1030b is wider and longer than the DRT lower electrode 1030a. [0120] In some implementations, the upper electrodes 1015 and the lower electrodes 1030a may be configured to form two plates of a deformable parallel plate capacitor, instead of the mechanical switch described above. In some implementations, the electrodes 1015 and 1030a may be separated by an air gap, as shown in areas 1065 of Figure 10B, and may have a spacing corresponding to a baseline capacitance in the normal unpressed state. Upon the application of force or pressure, upper electrode 1015 is displaced and the electrodes 1015 and 1030a come closer. When the inter-electrode distance between the electrodes 1015 and 1030a is reduced, the capacitance changes (e.g., increases), enabling the sensing of an analog change in the displacement and allowing inference of the presence of the applied force or pressure. Accordingly, high localized pressure or force from a pen, a stylus, etc., may be detected via parallel plate capacitance changes between upper electrodes 1015 and lower electrodes 1030a. The capacitance changes caused by such localized changes in pressure are represented schematically by the variable capacitor 1056 of the circuit diagram 1055b. In the configuration shown, the fixed resistor 1035 is in series with the variable capacitor 1056. In other configurations (not shown), the fixed resistor 1035 may be omitted. [0121] In some implementations, an interlayer separation 1032 may be formed between the upper substrate 905 and the lower substrate 910 by disposing a compressible layer 1025 between the upper and lower electrodes. In some implementations, the compressible layer 1025 may be a patternable, thin (e.g., 1 to 10 microns) polymer with a low elastic modulus, such as an elastomer.
In some such implementations, the compressible layer 1025 may allow direct measurement of capacitance changes when the upper substrate 905 is depressed by a touch of a pen, a stylus, a finger, etc. and the distance between an upper electrode 1015 and a lower electrode 1030a changes. The compressible layer 1025 may have a lower modulus of elasticity than the upper substrate 905. For example, the upper substrate 905 may be a clear plastic film made of PET, PEN, polyimide, or a similar material having a modulus of elasticity in the range of 0.5-5 GPa. The compressible layer 1025 may have a significantly lower modulus of elasticity, such as in the range of 0.5-50 MPa. [0122] In some implementations, the compressible layer 1025 may be patterned to include spaces or voids (which also may be referred to herein as "air gaps") between the upper substrate 905 and the lower substrate 910. Some implementations, such as those shown in Figures 10A and 10B, include voids in the areas 1065, wherein the compressible layer 1025 is not formed between the upper electrodes 1015 and the lower electrodes 1030a. However, in these examples the compressible layer 1025 extends without voids between the upper substrate 905 and the lower electrodes 1030b in the areas 1070. According to some such implementations, the compressible layer 1025 may be patterned such that there are air gaps in the areas 1065 and 1080. The thickness and spacing of the compressible layer 1025 regions are indicated merely by way of example. The locations and lateral dimensions of the air gaps in the areas 1065 and 1080 may be selected according to desired parameters of force sensitivity, reliability and/or optical performance, as a person having ordinary skill in the art will readily comprehend. For example, the interlayer separation 1032 may be a fraction of a micron to several microns. The thickness of the air gaps in the areas 1065 and 1080 also may be a fraction of a micron to several microns. The pitch or spacing between adjacent upper electrodes 1015 (adjacent sensels) may range from a few tenths of a millimeter to over five millimeters in the handwriting and touch sensor zone 1005 (with the pitch between lower electrodes 1030a and 1030b approximately half that), while the pitch or spacing between adjacent electrodes 1040 in the fingerprint sensor zone 1010 may be as small as 50 microns or so. [0123] The compressible layer 1025 may aid in enabling measurable deflections of the upper substrate 905. In some implementations, the compressible layer 1025 also may be formed in the areas 1065, as shown in Figure 10C and described below. In some such implementations, the compressible layer 1025 may include an elastomeric material (or a similar material) that allows direct measurement of capacitance changes when the upper substrate 905 is depressed by a touch of a pen, a stylus, a finger, etc. and the distance between an upper electrode 1015 and a lower electrode 1030a changes. Alternatively, the mutual capacitance between an upper electrode 1015 and a laterally displaced lower electrode 1030b also may change to allow the detection of a pen, stylus, finger, etc. [0124] The fingerprint sensor zone 1010 may be configured for fingerprint detection. In the implementation shown in Figure 10A, the upper fingerprint electrodes 1020 and the lower fingerprint electrodes 1040 form an array of resistive membrane switches, one of which is schematically represented in the circuit diagram 1060a.
In the examples shown in Figures 10A-10C, the compressible layer 1025 is not formed between the upper fingerprint electrodes 1020 and the lower fingerprint electrodes 1040 in the area 1080. However, in the implementation depicted in Figure 10D (which will be described in more detail below), the compressible layer 1025 is formed in the area 1080 except for regions where FSR material 1085 is located. [0125] In the examples shown in Figures 10A-10D, the upper fingerprint electrodes 1020 and the lower fingerprint electrodes 1040 have a smaller pitch than that of the upper electrodes 1015 and the lower electrodes 1030 in the handwriting and touch sensor zone 1005, in order to provide relatively higher resolution in the fingerprint sensor zone 1010. However, in some alternative implementations, the pitch of the upper fingerprint electrodes 1020 and the lower fingerprint electrodes 1040 may be substantially the same as that of the upper electrodes 1015 and the lower electrodes 1030 in the handwriting and touch sensor zone 1005. [0126] The compressible layer 1025 may be patterned using lithography and etch techniques (or other lithography-based techniques). In some implementations, the compressible layer 1025 can keep the ohmic or resistive switches of areas 1065 and 1080 open until a suitable force is applied to the outer surface of the sensor (which is the top surface of the upper substrate 905 in this example). Because the compressible layer 1025 is part of a sensor that would overlay a display, the compressible layer 1025 can be substantially transparent. [0127] In some implementations, the compressible layer 1025 may have an index of refraction closely matched to that of the lower substrate 910 and the upper substrate 905. In some implementations, the compressible layer 1025 may have an index of refraction that differs from that of the lower substrate 910 and the upper substrate 905 by less than 5%, by less than 10%, by less than 20%, etc. For example, a 6% or less difference in the index of refraction may result in less than 0.2% reduction in transmission through the material stack. Such implementations can provide good optical transmission in areas where the compressible layer 1025 extends from the upper substrate 905 to the lower substrate 910. However, the optical transmission may be reduced in the air gap regions, caused by reflections at each air-material interface. Such reflections may be greater than, e.g., 4%, as calculated using the index of refraction of the upper substrate 905 (which may be approximately n ≈ 1.5) and the index of refraction of air (n0 = 1), in Equation 1: [0128] R = (n - n0)²/(n + n0)², where R is reflectance. (Equation 1) [0129] Accordingly, implementations having air gaps with minimal lateral dimensions can provide better optical performance. However, some such implementations may result in less deflection for a given pressure and may therefore be less sensitive to pressure or applied forces. [0130] Therefore, some implementations provide an index-matched compressible layer 1025, which can improve the optical performance. Even in some implementations having air gaps in the areas 1065, the optical performance may already be quite good due to an architecture having the areas 1065 occupy a relatively small fraction of the handwriting and touch sensor zone 1005. For example, the areas 1065 with air gaps may occupy less than about 50% of the total area, whereas in other examples the areas 1065 may occupy less than about 10% of the total area.
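By way of illustration, the reflectance figures above can be checked with a short calculation based on Equation 1. The sketch below is illustrative only; the refractive indices (n ≈ 1.5 for the substrates, n0 = 1 for air, and a hypothetical 5% index mismatch for the index-matched case) are assumed values consistent with the examples above, not properties of any particular material stack.

```python
# Illustrative per-interface reflectance estimates using Equation 1.
# Assumed indices: substrate n ~ 1.5, air n0 = 1.0.

def reflectance(n, n0=1.0):
    """Normal-incidence reflectance at a single interface, per Equation 1."""
    return ((n - n0) / (n + n0)) ** 2

r_air = reflectance(1.5)             # one air-material interface: ~4%
r_gap = 1 - (1 - r_air) ** 2         # both interfaces of an air gap: ~7.8%
r_matched = reflectance(1.575, 1.5)  # assumed 5% index mismatch: well under 1%

print(f"per air interface:      {r_air:.1%}")
print(f"air gap (2 interfaces): {r_gap:.1%}")
print(f"index-matched layer:    {r_matched:.3%}")
```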
In such implementations, the majority of the sensor area will not have an air gap, and therefore will exhibit much reduced reflection at the layer 905/layer 1025 and the layer 1025/layer 910 interfaces, i.e., such that the total reflection for both interfaces may be much less than 1%, as estimated per Equation 1. [0131] The sensitivity to pressure or force from a pen, stylus, or finger of the individual sensing elements (regardless of whether they are used in a resistive switch mode or in a deformable parallel plate capacitor mode) may be increased by the use of a low-modulus compressible layer 1025, as shown in Figures 11A-11D. The low-modulus compressible layer 1025 may remove the clamped boundary condition that can be imposed by a higher-modulus material. Having a low-modulus compressible layer 1025 can effectively increase the diameter of an area 1110 of the compressible layer 1025 that is deflected by the stylus tip 1105, thereby increasing the deflection of the upper substrate 905 in the area 1110. [0132] Figures 11A-11D show examples of cross-sectional views of combined sensor devices having high-modulus and low-modulus compressible layers. Figure 11A shows a stylus tip 1105 in contact with a flexible upper substrate 905 of a portion of a simplified combination touch, handwriting, and fingerprint sensor, wherein the compressible layer 1025a is a patterned high-modulus material that is sandwiched between the upper substrate 905 and the lower substrate 910. Air gaps 1115 in the compressible layer 1025a allow the upper substrate 905 of the combined sensor device 900 to deform with applied forces, although the deflected area 1110 obtained is limited in part by the small air gaps 1115 in the relatively stiff compressible layer 1025a. [0133] Figure 11B shows a low-modulus compressible layer 1025b sandwiched between the relatively more flexible upper substrate 905 and the relatively less flexible lower substrate 910. In this example, the deflected area 1110 of the upper substrate 905 from stylus forces is larger due to the ability of the compressible layer 1025b to compress and deform as the stylus tip 1105 is pressed against the outer surface of the upper substrate 905. In the example shown in Figure 11C, the stylus tip 1105 has been pressed hard enough for the flexible upper substrate 905 to make (or nearly make) physical contact with the lower substrate 910. [0134] Use of a low-modulus elastomeric compressible layer 1025b also may effectively increase the lateral resolution from applied pressure or force without decreasing the pitch of the row or column electrodes, as illustrated in Figure 11D. Appreciable deflections of the upper substrate 905 can occur even when the stylus tip 1105 is not directly above an air gap 1115 in the compressible layer 1025, thus allowing detection of the stylus tip 1105 even if the combined sensor device 900 has relatively wide spacings between adjacent sensing elements. For example, handwriting might be resolved at a resolution of 0.2 mm even if the pitch between adjacent rows or columns were 0.5 mm by averaging the responses from adjacent sensels. By allowing a relatively larger pitch between adjacent rows or columns, such configurations may enable the reduction of the total number of row electrodes and column electrodes for a given resolution, thereby reducing the number of I/Os on the handwriting sensor controller. This reduction can reduce the number of leadouts and reduce the cost and complexity of the handwriting controller.
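The resolution gain from averaging adjacent sensel responses can be illustrated with a simple centroid calculation. The sketch below is a hypothetical example rather than controller firmware; the 0.5 mm pitch matches the example above, but the response amplitudes are invented for illustration.

```python
# Hypothetical centroid interpolation across adjacent sensels. A stylus
# pressed between two columns produces a graded response profile, and
# weighting each column position by its response resolves the stylus
# location more finely than the raw electrode pitch.

PITCH_MM = 0.5  # column-to-column spacing from the example above

def centroid_mm(responses):
    """Estimate stylus position from per-column response amplitudes."""
    total = sum(responses)
    if total == 0:
        return None  # nothing pressed
    return sum(i * PITCH_MM * r for i, r in enumerate(responses)) / total

# Stylus slightly right of the middle column (columns at 0.0, 0.5, 1.0 mm):
print(centroid_mm([0.2, 1.0, 0.6]))  # ~0.61 mm, finer than the 0.5 mm pitch
```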
[0135] An alternative implementation of a combination sensor is shown in Figure 10C. As compared to the implementations shown in Figures 10A and 10B, the air gaps have been removed from the areas 1065 of the handwriting and touch sensor zone 1005. Thus, the optical performance of the handwriting and touch sensor zone 1005 may be enhanced with respect to the implementation of the combined sensor device 900 shown in Figures 10A and 10B. The handwriting sensor in the implementation of the combined sensor device 900 shown in Figure 10C functions as a variable parallel plate capacitor, where heavy touches or deflections of the upper layer are detected from changes in the parallel plate capacitance. This functionality is represented by the variable capacitor 1056 of the circuit diagram 1055c. [0136] Figure 10D illustrates another example of an alternative implementation. In the example shown in Figure 10D, the air gaps have been removed in the area 1080 of the fingerprint sensor zone 1010 and replaced with a commercially available FSR material 1085. The FSR material 1085 provides a relatively high value of resistance when not compressed and a relatively low value of resistance when compressed, thereby functioning as a switch, though without a direct contact region. This functionality is represented by the variable resistor 1087 of the circuit diagram 1060b. A fixed resistor 1045 such as a vertical resistor or a serpentine resistor may be included in series with the FSR material 1085 in each sensel. Transparent FSR material 1085 that includes either transparent particles or low fill ratios of particles may be used in some implementations. Non-transparent FSR material 1085 may be used in some applications where, for example, the diameter or width of the resistors is sufficiently small (on the order of a few to tens of microns) to avoid excessive occlusion of an underlying display. [0137] Figure 12 shows an example of a device that includes a cover glass with a combination touch, handwriting and fingerprint sensor. In this example, the cover glass includes an implementation of the combined sensor device 900 and is overlaid on the display of a display device 40, such as a mobile phone. Some examples of the display device 40 are described below with reference to Figures 25A and 25B. The combined sensor device 900 can serve as a single or multi-touch sensor, a handwriting input sensor, and a fingerprint image sensor. In this example, the fingerprint sensor zone 1010 is in a dedicated portion above the display. The remaining portion of the combined sensor device 900 is configured as the handwriting and touch sensor zone 1005. In some other configurations, the fingerprint sensor zone 1010 may be positioned anywhere throughout the combined sensor device 900. In yet other configurations, the position of the fingerprint sensor zone 1010 is software programmable and software selectable. [0138] An example of touch mode operation will now be described with reference to Figure 10A. When a finger is used to touch anywhere in the handwriting and touch sensor zone 1005, either all or a selected subset of the upper electrodes 1015 on the upper substrate 905 and the lower electrodes 1030b on the lower substrate 910 may be addressed during a scanning sequence. In some implementations, the capacitance between the upper electrodes 1015 and the lower electrodes 1030b may be measured at each of the intersections between row and column electrodes (see Figures 9A and 9B).
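A scanning sequence of the kind just described can be outlined in a few lines. The sketch below is a hypothetical outline, not an actual controller interface; measure_mutual_capacitance stands in for whatever acquisition path a given implementation provides, and the baseline-and-threshold test is one common way to flag touched sensels.

```python
# Hypothetical touch-mode scan over the row/column electrode matrix.
# Each intersection's mutual capacitance is compared against a stored
# no-touch baseline; sensels whose capacitance drops by more than a
# threshold are reported as being near a finger.

def scan_touch(rows, cols, measure_mutual_capacitance, baseline, threshold):
    """Return (row, col) indices of sensels showing a touch signature."""
    touched = []
    for r in range(rows):
        for c in range(cols):
            cm = measure_mutual_capacitance(r, c)  # placeholder acquisition
            if baseline[r][c] - cm > threshold:    # a finger reduces Cm
                touched.append((r, c))
    return touched
```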
The conducting surface of the finger 1047 interferes with the electric field lines 1050, as shown in Figures 10A-10D, and modifies the capacitance between the upper electrodes 1015 and the lower electrodes 1030b. Detecting this change in capacitance allows a reading of which sensels of the handwriting and touch sensor zone 1005 are in the vicinity of the finger. In this example, the electrodes on the upper substrate 905 and the lower substrate 910 that are scanned during touch mode are not necessarily disposed directly above and below each other. In the examples shown in Figures 10A-10D, a change in capacitance can be detected between an upper electrode 1015 on the upper substrate 905 and an adjacent lower electrode 1030b of the lower substrate 910. Note that for this PCT measurement, a very light touch or even the proximity of a finger may be detectable, because the capacitance change does not depend on the pressure being applied to the upper substrate 905. [0139] When a pointing device, such as a stylus (either conducting or non-conducting), is placed on the sensor surface, the resultant pressure can be significantly higher than that associated with a finger touch, due to the smaller area of contact between the stylus and the surface. This pressure can be up to two orders of magnitude (or more) greater than the pressure exerted by a finger touch. In some implementations, during the readout process in handwriting mode, a different set of electrodes from those used for the touch mode (such as upper electrodes 1015 and lower electrodes 1030a depicted in Figure 10A) may be excited and a different circuit may be deployed for the measurement. The different circuit may sense either the closure of a switch for an implementation such as that shown in Figure 10A, or the change in parallel plate capacitance for an implementation such as that shown in Figures 10B-10D. [0140] In some implementations, the addressing and/or measurement circuitry for a touch mode, handwriting mode and/or fingerprint sensing mode may be contained within one or more controller or driver Application Specific Integrated Circuit (ASIC) chips. The ASIC chip or chips may be attached directly to the underside of the upper substrate 905 or connected electrically to the electrodes on the upper substrate 905 and the lower substrate 910 by means such as direct die attach using solder or anisotropic conductive film, or connection through a cable or traces on a flex tape that are coupled to ICs on the tape or on an external printed circuit board. [0141] In some implementations described above, the electrodes scanned during the handwriting mode on the upper substrate 905 and the lower substrate 910 are disposed directly above and below each other (for example, see Figure 10A). When the stylus tip 1105 (which may be a tip of pen 1042 as shown in Figures 10A or 10B) is applied with sufficient force, the pressure exerted by the stylus tip 1105 may cause the upper substrate 905 and the compressible layer 1025 to deflect (see Figure 11C) and may cause the upper electrodes 1015 and the resistor 1035 on the lower electrode 1030a to make physical contact, resulting in a closure of a membrane switch (see Figure 10A). A large resistance at each switch may be enabled by the inclusion of a fixed resistor 1035. This resistance may substantially lower the current and allow determination of the sensel locations that are being pressed in the handwriting, fingerprint or touch mode when one or more membrane switches are being pressed simultaneously.
This may occur, for example, when a palm is resting on the surface of the combined sensor device 900 and a stylus is also applied to the surface. The resistor 1035 may be formed from a resistive layer that is fabricated to be in series with the lower electrodes 1030a. Alternatively, the displacement of the upper substrate 905 with the force or pressure from a stylus or finger on the outer surface can be measured from a change in the parallel plate capacitance between an upper electrode 1015 and a corresponding lower electrode 1030a. [0142] Some implementations allow operation of the combined sensor device 900 in a fingerprint acquisition mode, such as in a specific region of the combined sensor device 900 that is configured to enable this mode. Examples of fingerprint sensor zones 1010 are shown in the far right portion of Figures 10A-10D and in the lower right portion of Figure 12. In some implementations, the fingerprint sensor zones 1010 may be fabricated using the same process flow and materials as those used for fabricating the rest of the combined sensor device 900. However, in some implementations, the fingerprint sensor zone 1010, the upper fingerprint electrodes 1020 and the lower fingerprint electrodes 1040, as well as the resistors 1045 of the lower fingerprint electrodes 1040, may be arranged with a significantly closer pitch or spacing than the upper electrodes 1015 or the lower electrodes 1030 of the handwriting and touch sensor zone 1005. For example, the pitch or spacing in the fingerprint sensor zone 1010 may be on the order of about 10 microns to 100 microns. Such configurations can provide a sensor with sufficiently high resolution to distinguish between the ridges and valleys of a fingerprint. [0143] When a finger is pressed down on the surface of the upper substrate 905 in the fingerprint sensor zone 1010, certain regions of the upper substrate 905 that are directly below the ridges of the fingerprint may deflect and cause the upper fingerprint electrodes 1020 to make contact with the fixed resistors 1045 on the lower fingerprint electrodes 1040. This switch closure may be through a resistor, such as a large-value resistor, which can provide for distinguishing which of the many sensor elements are being pressed and which are not. Scanning rows or columns of such a fingerprint sensor array can produce digital output that represents the fingerprint ridges or the absence thereof. Such fingerprint sensor implementations can enable scanning of the fingerprint array and acquisition of a fingerprint image. [0144] The use of the digital resistive technique for handwriting and fingerprint recognition can result in a fast scan rate. This is due in part to the "digital" nature of the output from each cell during the scanning process, which can enable high frame rates for fingerprint capture and handwriting recognition. [0145] In some implementations, a force-sensitive membrane switch may be used to locally connect an extra capacitor into a PCT measurement circuit, thus causing a large change in capacitance when the switch is closed with applied pressure from, for example, a finger or a stylus tip. The switches may be formed near the intersections of sensor rows and columns. The extra capacitor may be formed in series with the switch using conductive material to connect with row and column lines. In some implementations, this capacitor can produce a large change in capacitance relative to the change in mutual capacitance of a PCT-only configuration.
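The benefit of switching a local capacitor into the measurement path can be modeled with a few lines of arithmetic. The values below are assumptions chosen only to show the contrast between the small mutual-capacitance change of a light touch and the much larger step produced by switch closure; they are not taken from any particular design.

```python
# Illustrative model of the switched-capacitor sensing scheme.

CM_BASE = 20e-15      # assumed mutual capacitance, switch open (farads)
DELTA_TOUCH = -2e-15  # assumed light-touch change in Cm (PCT effect)
C1 = 60e-15           # assumed series capacitor added when the switch closes

def sensed_capacitance(switch_closed, finger_present):
    cm = CM_BASE + (DELTA_TOUCH if finger_present else 0.0)
    return cm + (C1 if switch_closed else 0.0)

# A closed switch produces a step far larger than the light-touch change,
# so a simple threshold can separate stylus presses from light touches.
print(sensed_capacitance(False, True) * 1e15)  # light touch: ~18 fF
print(sensed_capacitance(True, False) * 1e15)  # stylus press: ~80 fF
```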
[0146] One such implementation is depicted in Figures 13 and 14. Figure 13 shows an example of a top view of a force-sensitive switch implementation. Figure 13 indicates portions of two columns and two rows of such a combined sensor device 900, wherein the column electrodes 1305 have a width 1310 and a spacing or "pitch" 1315. The widths of the column and row electrodes are generally made small, on the order of a few microns, to improve overall transparency of the combined sensor device. The pitch can range from about 10-50 microns, suitable for fingerprint detection, to about 5 mm for lower resolution devices. Alternative implementations may have pitches of less than 50 microns or more than 5 mm. Figure 14 shows an example of a cross-section through a row of the force-sensitive switch implementation shown in Figure 13. [0147] In the implementation depicted in Figures 13 and 14, a capacitor 1317 is formed over the rows between the row electrodes 1335 and the capacitor top electrode 1320 in each sensel. A connection between the column electrodes 1305 and the capacitors 1317 may be made through a contact 1325 at the intersection of the rows and columns, which may include a fixed resistor in series with the contact. This contact 1325 may be electrically connected to the capacitor top electrode 1320, forming an electrode of a switch that may be open or closed. In some alternative configurations there may be no separate contact 1325; physical contact may be made directly between the column electrode 1305 and the capacitor top electrode 1320. The row electrodes 1335 may be disposed on a substantially transparent lower substrate 910, which may be made of a material such as glass, plastic, etc. In some implementations, the other components depicted in Figures 13 and 14 also may be substantially transparent. [0148] In this example, a compressible layer 1025 is disposed between the upper substrate 905 and the capacitor top electrode 1320. The compressible layer 1025 may be an insulator formed of a material having a sufficiently low elastic modulus that it may be easily compressed and does not interfere with the switch to the capacitor. Here, the upper substrate 905 is a flexible membrane disposed on top of the sensor to protect the surface and yet deflect locally when touched, in order to actuate the switches. [0149] Figure 15A shows an example of a circuit diagram that represents components of the implementation shown in Figures 13 and 14. In the circuit 1500a, a signal may be applied at the input 1505 and sensed by the analog-to-digital converter (ADC) 1540. The signal may be modulated by a change in mutual capacitance, Cm, when a finger is on or near the flexible membrane. Such changes in Cm are represented by a variable capacitor 1525. The self-capacitances of rows and columns are represented by capacitors 1530 and 1535, respectively. The contacts at the intersection of the rows and columns (see Figures 13 and 14) are represented as a switch 1510 having a resistance R1 represented by the resistor 1515 and a capacitance C1 represented by the series capacitor 1520. The resistance R1 also may include the line resistance of the corresponding row or column electrodes. When force (such as a touch) on the flexible upper substrate 905 closes the switch 1510, capacitance C1 is added to the mutual capacitance Cm.
In some implementations, C1 is substantially larger than Cm because a touch can generally reduce Cm whereas closing the switch 1510 adds capacitance: when the switch is closed, the mutual capacitive effect of the touch may be masked by the value of C1. [0150] In one example, a high-resolution sensor may be formed having row and column widths of 5 μm and a pitch of 50 μm between rows and columns (for example, see Figures 13 and 14). If, for example, the capacitor insulator 1330 is 1000 Å thick and formed of silicon nitride (SiN), and the capacitor top electrodes 1320 cover a 40 μm × 5 μm area (see Figure 14), a modulation of greater than 60 femtofarads (fF) may be obtained using the parallel-plate capacitor equation C = εrε0A/d, where εr is the relative permittivity of the insulator, ε0 is the permittivity of free space, A is the area of the top electrodes, and d is the thickness of the dielectric. In some implementations, this can be considered adequate for determination by PCT controller circuitry. Decreasing the length or the width of the capacitor electrodes will decrease the capacitance value, whereas decreasing the thickness of the dielectric insulator will increase the capacitance. In some implementations, the capacitance value can be made appreciably larger by spanning a portion of the sensel area between the row and column electrodes with the capacitor top electrode or by increasing the row and column widths. In some implementations, the value of the capacitance can be reduced by reducing the electrode width or the pitch of the sensel. By changing the dimensions of the capacitor electrodes and the thickness of the insulator, values of capacitance in the range from less than about 10 fF to more than about 0.1 pF may be obtained. [0151] Figure 15B shows an example of a circuit diagram that represents components of an alternative implementation related to Figures 13 and 14. The circuit 1500b can be used to consider the response times for a sensor such as that depicted in Figures 13 and 14. Here, a leakage resistor 1545 having a resistance R2 has been added to the circuit to allow for the discharge of series capacitor 1520 when switch 1510 is open. If, for example, R2 were 100 megaohms and R1 were 10 kilohms, then the frequency response (1/RC) for the C1 value for a 40 μm × 5 μm capacitor as described above would be a minimum of 150 kHz for a closed-to-open transition of the switch 1510 and a maximum value of 1.5 GHz to charge the capacitor through the series resistor 1515 when switch 1510 is closed. The frequency response may be helpful in determining a minimum obtainable frame rate for the combination sensor. The frequency response and frame rate may be increased, if needed, by decreasing the RC time constant with reductions to the resistor values R1 or R2 or with reductions in the capacitance. [0152] In some implementations, the resistor 1515 represents the contact resistance of contact 1325 (e.g., no fixed resistor and no FSR). In some other implementations, the resistor 1515 represents the contact resistance directly between the column electrode 1305 and the capacitor top electrode 1320 as shown in Figure 14 (e.g., no fixed resistor, no FSR, and no contact 1325). In some implementations, the resistor 1515 may include the resistance of an additional fixed resistor such as a vertical or serpentine fixed resistor (not shown) positioned between a contact 1325 and the capacitor top electrode 1320 in Figure 14.
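The parallel-plate and RC estimates in the two preceding paragraphs are straightforward to reproduce. In the sketch below, the relative permittivity of the SiN insulator is an assumption (a nominal value of 7 is used here; the figures quoted above imply a somewhat lower effective capacitance), so the printed numbers should be read as order-of-magnitude checks rather than exact reproductions of the example.

```python
EPS0 = 8.854e-12  # permittivity of free space (F/m)
EPS_R = 7.0       # assumed relative permittivity of the SiN insulator

# Geometry from the example: 40 um x 5 um top electrode, 1000 A dielectric.
area = 40e-6 * 5e-6    # electrode area (m^2)
d = 1000e-10           # dielectric thickness (m)

c1 = EPS_R * EPS0 * area / d
print(f"C1 ~ {c1 * 1e15:.0f} fF")  # on the order of 100 fF for eps_r = 7

# Frequency-response bounds from 1/(RC), per the example resistor values.
R1, R2 = 10e3, 100e6   # series and leakage resistances (ohms)
f_min = 1 / (R2 * c1)  # discharge through R2 (closed-to-open transition)
f_max = 1 / (R1 * c1)  # charging through R1 when the switch closes
print(f"~{f_min / 1e3:.0f} kHz to ~{f_max / 1e9:.1f} GHz")
```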
The fixed resistor may include a thin-film conductive cap disposed thereon serving as the contact 1325 to aid in electrical contact with a column electrode 1305. The resistor 1515 may include a force-sensitive resistor in series with a fixed resistor or in lieu of a fixed resistor. The resistor 1515 may include an ohmic material such as a resistive or metal thin film. Alternatively, the resistor 1515 may include a non-linear device such as a leaky diode or other device. According to some implementations, the resistor 1515 may have a resistance ranging from less than a few ohms to over 100 megaohms. In some implementations, the leakage resistor 1545 may have a value on the order of 100 kilohms or larger. [0153] The switched-capacitor configuration described with respect to Figures 13 through 15B encompasses what may be called digital capacitive touch (DCT), in that a local capacitor near the intersection of a row and a column of a DCT sensor array can be digitally switched in or out, depending on whether a force-actuated switch at the intersection is open or closed. The DCT array, in some configurations, may serve as a fingerprint sensor, a stylus or handwriting sensor, a touch sensor, or a combination thereof without a corresponding PCT array. The DCT array, in some other configurations, may be combined with a PCT array. In one such configuration, one or more capacitive electrodes electrically connected near each intersection between overlapping rows and columns in an array surround a force-actuated capacitive switch located at each intersection (for example, see Figure 9B). The combined sensor array may use the force-sensitive capacitive switch for stylus detection and the PCT array for light touch or proximity sensing. As noted above, the same PCT detection circuitry may be used for detecting the application of force or pressure from the pressing of a stylus, pen or finger in the DCT aspect, as well as the light touch from a finger or stylus in the PCT aspect. As noted earlier, the designations regarding rows and columns, the manner of overlapping, the various aspect ratios, and other features are intended to be illustrative and not limiting. For example, the rows and columns may be interchanged, the column electrodes may pass over or under the row electrodes, and the pitch or resolution may be changed without loss of generality. [0154] Figure 16 shows an example of a flow diagram illustrating a manufacturing process for a combined sensor device. Figures 17A-17D show examples of partially formed combined sensor devices during various stages of the manufacturing process of Figure 16. According to some implementations, block 1605 of the process 1600 involves depositing a substantially transparent conductor, such as ITO, on upper and lower substantially transparent substrates. In this example, the lower substrate 910 is a glass substrate. However, in alternative implementations, the lower substrate 910 may be formed of plastic or a similar material. Some such implementations can lend themselves to a roll-to-roll manufacturing process. [0155] Block 1605 also may involve patterning the substantially transparent conductive material into electrodes, using photolithography and etching processes or other "additive" processes such as plating, screen printing, etc. In some implementations, this patterning process results in diamond electrode shapes (or other shapes as appropriate), connected to one another within columns or rows patterned on the upper substrate 905 and the lower substrate 910.
[0156] A resistive material may subsequently be deposited (e.g., by sputter deposition) on at least some electrodes of the lower substrate 910 and on or connected to the patterned electrodes, as shown in block 1610. In alternative implementations, resistive material may be deposited on at least some electrodes of the upper substrate 905. The resistive material may be patterned to be in series with all or a subset of the sensing locations on the electrodes. According to some implementations, the resulting resistors may have a resistance on the order of 1 megaohm; other implementations may produce resistors having a smaller or greater resistance, such as between 100 kilohms and 10 megaohms. [0157] The electrodes and resistors may be patterned in at least two general ways, as shown in Figures 17A and 17B. A first option (top view illustrated in Figure 17A) is to form a serpentine resistor 1035 by patterning the lower electrode material or other resistive material deposited on lower substrate 910 into a thin, narrow sequence of one or more connected segments that conduct in the plane of the film to achieve a sufficiently high resistance. A conductive contact region 1036 formed from the lower electrode material or other suitable material may be included at the end of the resistor 1035. A second option (side view illustrated in Figure 17B) is to pattern a vertical resistor 1035 directly on top of the lower electrodes 1030, in which case the conduction path is through the resistor in a direction substantially normal to the plane of the film. In some implementations, a thin metal contact region 1037 may be included above the vertical resistor 1035. [0158] Block 1615 of the process 1600 may involve depositing or otherwise disposing the compressible layer 1025 on the lower substrate 910. In some implementations, the compressible layer 1025 may be a patternable, thin (e.g., 1 to 10 microns) polymer with a low elastic modulus, such as an elastomer. In some implementations that include gaps in the compressible layer 1025 (such as those discussed above with reference to Figures 10A-10C), the compressible layer 1025 may be patterned such that the regions above the resistors 1035 are opened up. Figure 17C provides a cross-sectional view of a portion of a combined sensor device 900 that has been partially fabricated according to one such example. In some other implementations, the regions above resistors 1035 that are opened up may be filled with a force-sensitive resistor material (not shown). In some other implementations with or without the FSR material, an upper surface of vertical or serpentine resistors 1035 may be covered with a thin metal layer. [0159] At this stage of the process 1600, the compressible layer 1025 has been patterned to expose the lower electrodes 1030 on which the resistors 1035 have been formed. In some implementations of the process 1600, FSR material may be formed on fingerprint sensor electrodes of the lower substrate 910 (see optional block 1620), the handwriting and touch sensor electrodes of the lower substrate 910, or both. Figure 10D provides an example of the force-sensitive resistor material 1085 formed on the lower fingerprint electrodes 1040. The force-sensitive material may be formed on the electrodes by methods such as dispensing, screening, depositing, or patterning. Force-sensitive resistor material also may be included on the handwriting and touch sensor electrodes of the lower substrate 910 (not shown).
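For the serpentine option, the resistor value follows from the sheet resistance of the deposited film and the number of squares in the trace. The sketch below is a hypothetical sizing calculation; the sheet resistance and trace dimensions are assumed for illustration rather than taken from the description above.

```python
# Hypothetical sizing of a serpentine in-plane resistor:
# R = Rs * (L / W), where Rs is the film's sheet resistance and
# L / W is the number of "squares" along the unfolded trace.

SHEET_RES = 10e3    # assumed sheet resistance (ohms per square)
WIDTH_UM = 2.0      # assumed trace width (microns)
LENGTH_UM = 200.0   # assumed unfolded trace length (microns)

squares = LENGTH_UM / WIDTH_UM
resistance = SHEET_RES * squares
print(f"{squares:.0f} squares -> {resistance / 1e6:.1f} megaohm")
# 100 squares at 10 kilohms per square gives about 1 megaohm, matching
# the order of magnitude mentioned above.
```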
[0160] Subsequent to the patterning and curing (if needed) of the compressible layer 1025, an additional thin layer of adhesive 1705 (such as ~1-5 microns) may be applied on the surface of the compressible layer 1025 (see optional block 1625) to improve adhesion, taking care not to apply the adhesive on the top surface of the resistors 1035. Methods to apply the adhesive include photolithography, screen printing, squeegeeing, and dispensing. An example of such an adhesive layer 1705 may be seen in Figure 17D. [0161] Figure 17D depicts the apparatus after the upper substrate 905 has been joined to the compressible layer 1025. The upper substrate 905 may be formed of a substantially transparent material and may have substantially transparent upper electrodes 1015 patterned on the underside. The upper substrate 905 may, for example, be formed of a plastic film such as polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyimide, or a similar material. In this example, the upper electrodes 1015 are made of ITO that has been formed into rows that are continuous in the plane of Figure 17D. In alternative implementations, the upper electrodes 1015 as well as the lower electrodes 1030 may be patterned into similarly-shaped pads, connected as rows or columns. In some implementations, the two substrates may be joined by bringing the upper substrate 905 into alignment with the lower substrate 910 and attaching the layers via the adhesive 1705 that has been applied over the compressible layer 1025. Other techniques may be used, such as hot pressing the two layers together, mechanically clamping the periphery of the substrates, or an adhesive-free method. [0162] Implementations such as those depicted in Figures 17C and 17D include air gaps in the compressible layer 1025 around the electrodes on which resistors have been formed. Such air gaps are depicted in areas 1065 of Figure 17D. Air gaps can result in higher levels of undesirable reflectance from the air-substrate interfaces. Details are described above with reference to Equation 1. Accordingly, in some implementations, the air gap regions may be spatially limited, so that the air gaps do not materially impact the overall optical transmission of the stack. For example, the air gap regions may be limited to an area in the range of 1-5% of the total area of the sensor. Alternatively, the air gaps may be limited to the region of the fingerprint imaging area only, which may be a limited region of lower optical transmission, and therefore may be on the cover glass but not directly above the display area. [0163] In alternative implementations, such as the examples described with reference to Figures 10C and 10D, the compressible layer 1025 also may be deposited on at least some of the lower electrodes 1030 on which the resistors 1035 have been formed. In some such implementations, there are no air gaps in the handwriting and touch sensor zone 1005. However, other electrodes on which resistors have been formed (such as in the fingerprint sensor zone 1010) may or may not have the compressible layer 1025 deposited on them. In still other implementations, however, the fingerprint sensor zone 1010 may include no air gaps. As shown in Figure 10D, such implementations may include FSR material 1085 in the fingerprint sensor zone 1010. In some other implementations, the FSR material 1085 also may be included above lower electrodes 1030 in the handwriting and touch sensor zone 1005, with or without fixed vertical or serpentine resistors 1035.
[0164] Some implementations of the process 1600 involve a process flow with relatively few masking steps. Some such implementations involve two masking steps for depositing material on the lower substrate 910 and a single masking step for depositing material on the upper substrate 905. Structures may be formed on at least the upper substrate 905 using roll-to-roll manufacturing processes. For implementations wherein the lower substrate 910 is plastic or a similar material, a roll-to-roll manufacturing process may be used for depositing material on the lower substrate 910. In such implementations, the lower substrate 910 may be thicker than the upper substrate 905. In some examples, the upper substrate 905 may be laminated onto the lower substrate 910 to form the sensor stacks described above. The resultant combined sensor device 900 may be inexpensive, light, thin and highly suitable for mobile and other handheld electronic devices. In some implementations, this laminate of an upper plastic layer and a lower plastic layer may be further laminated onto or otherwise attached to a substantially transparent and relatively more rigid substrate, such as a glass substrate. In some implementations, the substantially transparent substrate may be a display substrate such as the transparent substrate 20 described above. [0165] In this implementation, block 1635 involves processing and packaging. Block 1635 may involve the singulation of individual combined sensor devices 900 from large substrates such as large plates of glass or long rolls of plastic having multiple combined sensor devices 900 formed thereon by cutting, cleaving, sawing, or other suitable methods. Singulation of sensor devices from larger substrates may be performed prior to block 1635, such as prior to attaching the upper substrate (see block 1630) or prior to applying adhesive to the compressible material (see block 1625). Block 1635 may involve configuring combined sensor devices 900 for electrical communication with one or more sensor controllers, such as the combined sensor controller 77 described below with reference to Figure 25B. Block 1635 may involve attaching combined sensor devices 900 to a display device 40 such as described elsewhere herein. Block 1635 may involve packaging individual combined sensor devices 900 for shipment or storage. [0166] Figure 18A shows an example of a block diagram that illustrates a high-level architecture of a combined sensor device. In this example, a multi-touch sensor 1801, a high-resolution handwriting sensor 1803, and a fingerprint sensor 1805 are integrated into the combined sensor device 900. A cover glass included with the combined sensor device 900 can be overlaid onto many displays, including but not limited to LCD, OLED and reflective displays. Some such displays may be suitable for mobile devices and some may be suitable for other devices, such as consumer electronic devices, depending on the implementation. The multi-touch sensor 1801, such as a PCT sensor, and the high-resolution handwriting sensor 1803, such as a parallel-plate capacitive displacement sensor or a DRT sensor, may be interleaved at the intersection of rows and columns of an addressable sensor device as described above with respect to Figures 9A and 9B and Figures 10A-10D. The fingerprint sensor 1805 with even higher resolution may be included in a preselected region over a portion of the display area, such as in the example shown in Figure 12.
Alternatively, the multi-touch sensor 1801 and the high-resolution handwriting sensor 1803 may serve as a fingerprint sensor 1805 anywhere above the display area when the combined sensor device has sufficient resolution. [0167] In the example shown in Figure 18A, the control system 1807 includes at least one microcontroller 1809 and at least one application processor 1810. In some implementations, hardware, software and/or firmware for all sensors of the combined sensor device 900 may be integrated on a single microcontroller 1809, whereas in other implementations separate microcontrollers 1809 may be used for touch sensing, handwriting sensing and fingerprint sensing functionality. Applications for all sensors may be integrated on a single application processor 1810 or on multiple application processors 1810. These processors may reside, for example, within a display device or within a mobile device. [0168] Here, the sensors in the combined sensor device 900 communicate with the microcontroller 1809, which in turn communicates with the application processor 1810. The communication between these devices may go in both directions. In some implementations, the microcontroller 1809 drives the sensors of the combined sensor device 900 and receives sense data from the sensors. The application processor 1810 may be configured both to monitor the output of the microcontroller 1809 and to send commands to the microcontroller 1809. The microcontroller 1809 may, for example, be located on the lower substrate 910, on an attached flex cable, or on an electrically connected printed circuit board. In some implementations, the microcontroller 1809 also may be configured to control a display and/or to perform other functions. [0169] Some implementations may be provided via application software stored in one or more tangible, machine-readable media. Such media may be part of the application processor 1810 or may be separate media accessible by the application processor 1810. The application software may include instructions for controlling one or more devices to perform various functions. For example, the application software may include instructions to activate the fingerprint sensor zone 1010 for fingerprint sensing only when fingerprint sensing is needed. Otherwise, the fingerprint sensor zone 1010 may be de-activated or activated for multi-touch and/or handwriting functionality, depending on the implementation. [0170] Alternatively, or additionally, the application software may include instructions to reduce power consumption by turning off sensors, turning off parts of the microcontroller 1809 and/or employing first-level screening at a reduced frame rate on a low-resolution sensor before activating power-hungry higher-resolution sensors. For example, the application software may include instructions for reducing power consumption by aggregating sensels (or aggregating rows or columns of the combined sensor device 900) electronically using the microcontroller 1809, so that the combined sensor device 900 performs at a lower resolution and may consume less power and provide a higher signal until higher resolution is needed. [0171] In some implementations, the combined sensor device 900 can be configured to function in either a touch mode or a handwriting mode (which also may be referred to herein as a stylus mode), instead of being configured to function in both modes simultaneously. It may be advantageous not to have the combined sensor device 900 function in both modes simultaneously.
For example, when a user is writing on the combined sensor device 900 with a stylus, it may be preferable to avoid sensing the user's palm or fingers that also may be resting on the device. Operating the combined sensor device 900 to function as a handwriting sensor may influence and/or interfere with the functionality of the combined sensor device 900 as a touch sensor, and vice versa. Accordingly, some implementations provide separate drive and/or sense subsystems for touch and handwriting mode functionality. Some implementations provide drive and/or sense subsystems that may be switched quickly between touch mode functionality and handwriting mode functionality. [0172] Figure 18B shows an example of a block diagram that illustrates a control system for a combined sensor device. In this example, the control system 1807 includes a stylus drive circuit 1811 and a touch drive circuit 1813. When the combined sensor device 900 is being operated in a handwriting mode, the stylus drive circuit 1811 sends one or more drive signals 1814 to the handwriting and touch sensor zone 1005. When the combined sensor device 900 is being operated in a touch mode, the touch drive circuit 1813 sends the drive signals 1814 to the handwriting and touch sensor zone 1005. However, in some alternative implementations, the drive signals 1814 are substantially the same whether the combined sensor device 900 is being operated in a handwriting mode or in a touch mode. [0173] In this example, the control system 1807 includes a stylus sense circuit 1815 and a touch sense circuit 1817. When the combined sensor device 900 is being operated in a handwriting mode, the stylus sense circuit 1815 processes one or more sense signals 1818 from the handwriting and touch sensor zone 1005. When the combined sensor device 900 is being operated in a touch mode, the touch sense circuit 1817 processes the sense signals 1818 from the handwriting and touch sensor zone 1005. In some implementations, the control system 1807 may include a single circuit that can be switched from a touch configuration to a handwriting configuration. Some examples are described below. [0174] Figure 18B also shows an example of a circuit diagram representing components of a sensel 1819 in the handwriting and touch sensor zone 1005. In this enlarged view of the sensel 1819, the resistance of a switch 1823 is schematically depicted, as well as the mutual capacitance 1824 between associated electrodes of the sensel 1819. [0175] Figure 18C shows an example representation of physical components and their electrical equivalents for a sensel in a combined sensor device. In this example, the sensel 1819 includes a switch 1823 formed in an overlapping region between a drive electrode 1820 and a sense electrode 1821. The switch 1823 is represented by a switch capacitance 1826 and a leakage resistance 1828 positioned between the drive electrode 1820 and the sense electrode 1821 that accounts for small amounts of leakage current that can flow through switch 1823 when the switch is open. The leakage resistor 1828 may have a value on the order of 1 megaohm or larger. A fixed resistor 1822 may be positioned between the drive electrode 1820 and the sense electrode 1821, electrically connected in series with the contacts of the sensel switch 1823. The fixed resistor 1822 may be a serpentine resistor, a vertical resistor, a high-resistivity film, a leaky diode, or other linear or non-linear resistive element.
The fixed resistor 1822 may be in the range of a hundred kilohms to 10 megaohms or larger. In this example, the switch 1823 includes a serpentine fixed resistor 1822, which may be similar to the configuration depicted in Figure 17A. [0176] When the finger 1047, a stylus, etc., presses on the switch 1823, portions of the drive electrode 1820 are brought closer to the sense electrode 1821, increasing a parallel capacitance 1832 between the drive electrode 1820 and the sense electrode 1821. A sufficiently high applied pressure or force will close the switch 1823. The proximity of the finger 1047, a conductive stylus, etc., also may result in a change in inter-electrode mutual capacitances 1824 between adjacent drive electrodes 1820 and sense electrodes 1821. [0177] Figure 18D shows an example of an alternative sensel of a combined sensor device. The configuration shown in Figure 18D is similar to that of Figure 9B, which is described above. In this example, the drive electrodes 1820 and the sense electrodes 1821 include diamond-shaped sections 1825 and narrow portions 1827. In this example, the switches 1823 are formed in the overlapping regions 925b (see also Figure 9B). [0178] The parallel capacitance 1832 is formed between the drive electrode 1820 and the sense electrode 1821 in the overlapping regions 925b. The total mutual capacitance of the sensel 1819 is equal to the sum of the individual inter-electrode mutual capacitances 1824 between adjacent drive electrodes 1820 and sense electrodes 1821. In this example, the total mutual capacitance is about four times the inter-electrode mutual capacitance. Each of the diamond-shaped sections 1825 of the drive electrodes 1820 has a sensel drive resistance 1853 and each of the diamond-shaped sections 1825 of the sense electrodes 1821 has a sensel sense resistance 1854. [0179] Figure 18E shows an example of a schematic diagram representing equivalent circuit components of a sensel in a combined sensor device. The axis 1829 represents various levels of applied drive signals, such as the drive signals 1814 from the stylus drive circuit 1811 or the touch drive circuit 1813 (for example, see Figure 18B). The axis 1831 represents various levels of responsive sense signals, e.g., the sense signals 1818 to the stylus sense circuit 1815 or the touch sense circuit 1817 of Figure 18B. [0180] Mutual capacitance component 1833 may represent the mutual capacitance between the drive electrode 1820 and the sense electrode 1821 and the changes caused by the proximity of the finger 1047, as shown in Figure 18C. Parasitic capacitance component 1835 represents the self-capacitance of an electrode, such as the sense electrode 1821 of Figure 18C, and the changes caused by the proximity of the finger 1047 or of another conductive body. Parallel capacitance component 1836 represents the parallel-plate capacitance, and changes such as those caused by the pressure of the finger 1047, a stylus, etc., causing the drive electrode 1820 to be moved closer to the sense electrode 1821 of Figure 18C. The position of the switch 1823 represents the closure or non-closure of the switch 1823. In one example, the mutual capacitance component 1833 has a value of about 0.5 pF; the parasitic capacitance component 1835 has a value between about 0.5 pF and 20 pF; the parallel capacitance component 1836 has a value of about 0.5 pF; and the switch 1823 has a value of about 10 gigaohms when open and about 1 kilohm when closed. A person having ordinary skill in the art will readily understand that other capacitance and resistance values are also possible, depending on the desired implementation. In some alternative implementations, the switch 1823 will have a value of less than 100 ohms (such as when the fixed resistor is omitted) when closed. In some other implementations, the switch 1823 will have a value effectively equal to the fixed resistor when closed.
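For concreteness, the following Python fragment works through the example component values quoted in paragraphs [0178] and [0180]. It is a numerical illustration only; the constant names are ours, not the patent's.

```python
# Worked numbers for the sensel equivalent circuit of [0178]/[0180]; the
# component values are the example values quoted in the text, not limits.

PF = 1e-12
inter_electrode_cm = 0.5 * PF     # mutual capacitance component 1833
parallel_cap = 0.5 * PF           # parallel capacitance component 1836
switch_r_open = 10e9              # ~10 gigaohms when switch 1823 is open
switch_r_closed = 1e3             # ~1 kilohm when closed (fixed resistor)

# [0178]: the total mutual capacitance of a sensel is the sum of the four
# individual inter-electrode mutual capacitances around the overlap.
total_cm = 4 * inter_electrode_cm
print(f"total mutual capacitance: {total_cm / PF:.1f} pF")  # 2.0 pF
```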
[0181] Some implementations described herein provide a single circuit that can be switched between a touch mode configuration and a handwriting mode configuration. For example, a single circuit may be configured to perform the functions of the stylus sense circuit 1815 and the touch sense circuit 1817 of Figure 18B. [0182] Figure 18F shows an example of an operational amplifier circuit for a combined sensor device that may be configured for handwriting or stylus mode sensing. When operating in handwriting mode, the circuit 1837 is configured to function as an integrator with reset capability. The circuit 1837 may be configured to generate relatively large output voltages from the relatively small input currents that result from handwriting sensing of the combined sensor device 900 when one or more switches 1823 are closed. [0183] In this example, the circuit 1837 includes an operational amplifier 1839, a feedback capacitor 1841 and a feedback resistor 1843, as well as switches 1842 and 1844. In one example, the feedback capacitor 1841 has a value between about 6 pF and 20 pF, and the feedback resistor 1843 has a value of about 5 megaohms or higher. However, the circuit 1837 may be implemented with other capacitance and resistance values and have other configurations that provide similar functionality. For example, alternative implementations may include a transistor (such as a metal oxide semiconductor field effect transistor (MOSFET)) operating in the off state instead of the feedback resistor 1843. Instead of the switch 1842, some implementations may include a lossy device such as a high-value resistor or an NMOS or PMOS transistor with a known resistance. Moreover, some implementations may include an additional resistor in series with the switch 1842. [0184] When operating in the stylus mode, the switch 1844 can be left open and the switch 1842 can be opened and closed. The graphs 1845, 1847 and 1849 show examples of steady-state input current operation. The graph 1845 indicates input current over time. In this example, the current is held constant at a steady-state value Iss. At time t1, the switch 1842 is opened. Referring to the graph 1847, it may be seen that to open the switch 1842, the voltage applied to the switch 1842 is changed to the switch open voltage 1848. The switch open voltage 1848 may vary according to the particular implementation. In some implementations, the switch open voltage 1848 may be 1.8V, whereas in other implementations the switch open voltage 1848 may be 3.3V, 5V, 10V, 20V or some other voltage. [0185] The graph 1849 indicates the output voltage that results from opening the switch 1842. In this example, because the input current is constant, the output voltage 1850a increases linearly between time t1, when the switch 1842 is opened, and time t2, when the switch 1842 is closed again. The time interval (t2 - t1) during which the switch 1842 is open may be, for example, on the order of 0.1 to 10 μsec, or even less. In this example, the output voltage 1850a reaches a maximum output voltage 1851. Here, the maximum output voltage 1851 is opposite in sign from the switch open voltage 1848 and has a lower absolute value than the switch open voltage 1848. When the switch 1842 is closed (at time t2), the capacitor 1841 may be discharged and the output voltage 1850a is reset.
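The linear ramp described in paragraph [0185] can be sanity-checked numerically. The sketch below assumes an ideal inverting integrator, Vout(t) = -Iss(t - t1)/Cf, with values chosen from the ranges given in the text; the steady-state current Iss is an assumption made for illustration.

```python
# Back-of-envelope for the stylus-mode integrator of Figure 18F: with a
# constant input current Iss and feedback capacitor Cf, the output ramps
# linearly, Vout(t) = -Iss * (t - t1) / Cf, until the reset switch closes.

i_ss = 100e-9          # 100 nA steady-state input current (assumed)
c_f = 10e-12           # feedback capacitor 1841, within the 6-20 pF range
t_open = 1e-6          # switch 1842 open for 1 microsecond (0.1-10 us range)

v_max = -i_ss * t_open / c_f   # inverting integrator output at time t2
print(f"maximum output voltage: {v_max:.3f} V")  # -0.010 V
```

Consistent with the text, the computed output is opposite in sign from the (positive) switch open voltage and much smaller in absolute value.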
[0186] Figure 18G shows an example of the operational amplifier circuit of Figure 18F configured for touch mode sensing. In this configuration, the switch 1844 is closed, which allows the circuit 1837 to function as a charge amplifier for detecting changes in mutual capacitance Cm between adjacent drive electrodes 1820 and sense electrodes 1821 (for example, see Figures 18C and 18D). In this example, the drive signal 1852 is a square wave having a voltage Vdrive. [0187] An example of the resulting output voltage 1850b is shown in Figure 18G. The output voltage 1850b is not a linear response like that of the output voltage 1850a, but instead is an inverted and non-linear response to the leading and trailing edges of the drive signal 1852. This response follows from the basic relationship for the current into a capacitor, I = C dV/dt, where I is the current, C is the capacitance of the capacitor and dV/dt is the derivative of voltage with respect to time.
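A rough numerical reading of paragraph [0187]: applying I = C dV/dt to the charge amplifier of Figure 18G, a drive edge of height Vdrive across the mutual capacitance Cm transfers a charge Q = Cm * Vdrive onto the feedback capacitor, producing an inverted output step of roughly -(Cm/Cf) * Vdrive. The Python sketch below uses the example capacitance values from the text; the drive amplitude is an assumption.

```python
# Charge transferred per drive edge and the resulting charge-amplifier
# output step, using example values from the text (Cm, Cf) and an assumed
# drive amplitude. A simplified model, not the patent's exact circuit.

c_m = 0.5e-12    # mutual capacitance, ~0.5 pF ([0180])
c_f = 10e-12     # feedback capacitor 1841 (within the 6-20 pF range)
v_drive = 3.3    # square-wave drive amplitude in volts (assumed)

q = c_m * v_drive                 # charge transferred per edge
v_step = -q / c_f                 # inverted charge-amplifier output step
print(f"charge per edge: {q:.3e} C, output step: {v_step:.4f} V")
```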
[0188] A PCT sensor can exhibit shorted sensels when, for example, a sensel is pressed with a finger or a stylus and the sensel switch is closed. This condition has the potential to create larger-than-normal signals that can saturate the operational amplifier 1839 of the circuit 1837. While a saturated state can be sensed and identified, saturation recovery time can be problematic for array sensing systems. Amplifier recovery time is usually not known with a high degree of confidence, typically being characterized in a testing facility. If the operational amplifier 1839 remains saturated, subsequent sensel measurements may be corrupted. Thus, recovery time can have a significant impact on the achievable scan rate of a sensor array. [0189] In addition, the circuit 1837 may have feedback components with large time constants that also can contribute to a long recovery period. In some implementations, the circuit 1837 may include a large feedback resistor (such as the resistor 1843) to provide DC feedback to stabilize the circuit 1837. A large feedback resistor in parallel with the capacitor 1841 can create a larger time constant that can inhibit sensor scan rates. [0190] Accordingly, some implementations of the circuit 1837 are configured to inhibit or prevent saturation of the operational amplifier 1839. Some such implementations provide a low-impedance path to bleed off the charge of the capacitor 1841, allowing for fast reset of the circuit 1837 and/or fast recovery from a saturated state of the operational amplifier 1839. [0191] Figure 18H shows an example of an operational amplifier circuit for a combined sensor device that includes a clamp circuit. The clamp circuit 1855 may be configured to inhibit or prevent saturation of the operational amplifier 1839 by limiting the output voltage of the circuit 1837. In this example, the clamp circuit 1855 is disposed in parallel with other components of the circuit 1837. [0192] Figure 18I shows examples of clamp circuit transfer functions. The function 1857 is an ideal clamp circuit transfer function, whereas the function 1859 is an example of an actual clamp circuit transfer function. Both of the functions 1857 and 1859 indicate a very high impedance while the clamp circuit 1855 is operating within the clamp voltage range (Vc- < Vout < Vc+). The clamp circuit 1855 may be configured with clamp voltages Vc- and Vc+ having absolute values that are less than those of the corresponding saturation voltages Vsat- and Vsat+. [0193] Within the clamp voltage range, the circuit 1837 can operate in a touch mode with little or no influence from the clamp circuit 1855. When the operational amplifier is "clamped" (when Vout reaches or exceeds Vc+ or Vc-), the impedance of the clamp circuit 1855 is very low, as shown by the significant increase in the absolute value of Iout. If the impedance of the clamp circuit 1855 is made very low, this essentially shorts the feedback components of the circuit 1837, thereby allowing the feedback capacitor 1841 to discharge (see Figure 18H). [0194] Figure 18J shows an example of a circuit diagram for a clamp circuit. In the configuration depicted in Figure 18J, the clamp circuit 1855 includes n diodes 1861 arranged in series and having a first forward direction. The diodes 1861 are disposed in parallel with the diodes 1863. In this example, there are n diodes 1863 arranged in series and having a second forward direction that is opposite to that of the diodes 1861. In some implementations, the forward voltage of each of the diodes 1861 and 1863 may be on the order of 1V or less, e.g., 0.2V, 0.3V or 0.6V. The value of n, as well as the forward voltage of the diodes 1861 and 1863, may vary according to the implementation. The clamp circuit transfer function of a clamp circuit 1855 having a relatively larger number of diodes, each with a relatively lower forward voltage, will approximate an ideal clamp circuit transfer function more closely than that of a clamp circuit 1855 having a relatively smaller number of diodes, each with a relatively higher forward voltage. [0195] However, the clamp circuit 1855 may be configured in various other ways. In some alternative implementations, at least one of the diodes 1861 and 1863 may be a Zener diode. In some such implementations, one of the diodes 1861 is a Zener diode having a first forward direction and one of the diodes 1863 is a Zener diode having a second and opposing forward direction. In some such implementations, each of the Zener diodes may be paired, in series, with a Schottky diode having an opposing forward direction. In some implementations, the Schottky diodes may have forward voltage drops of about 0.2V or 0.3V. The Zener breakdown voltage of the corresponding Zener diodes may be substantially higher. For example, in a ±5V analog system, the Zener breakdown voltage may be 4.2V in one implementation.
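A minimal behavioral model of the clamp of Figures 18I and 18J can make the transfer function concrete. The sketch assumes each branch acts as n ideal series diodes with forward voltage Vf and a small on-resistance once conducting; the piecewise-linear model and its parameter values are illustrative assumptions, not the patent's circuit.

```python
# Sketch of the clamp transfer function of Figures 18I/18J: each branch is
# modeled as n ideal series diodes, so the clamp conducts (low impedance)
# only once |Vout| exceeds roughly n * Vf. Parameters are assumptions.

def clamp_current(v_out, n=3, v_f=0.6, r_on=10.0):
    """Approximate clamp current vs. output voltage for the diode clamp."""
    v_clamp = n * v_f              # clamp voltage Vc+ (and -Vc-)
    if v_out > v_clamp:
        return (v_out - v_clamp) / r_on
    if v_out < -v_clamp:
        return (v_out + v_clamp) / r_on
    return 0.0                     # very high impedance inside the range

for v in (-2.5, -1.0, 0.0, 1.0, 2.5):
    print(f"Vout={v:+.1f} V -> Iclamp={clamp_current(v):+.3f} A")
```

As in the text, using more diodes with a lower Vf each sharpens the knee of this curve toward the ideal transfer function 1857.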
[0196] In some implementations described herein, the lower substrate may form at least a portion of the cover glass apparatus of a display device. In some such implementations, the signal lines may be formed on the upper surface of the cover glass, rather than underneath the cover glass. Such a configuration has implications for the design of the sensing elements in the array, because these elements may be routed outside the array and attached to integrated circuits (ICs) that are configured to address and sense the signals from the various sensing elements in the array. [0197] Previous approaches (such as covering these routing wires or attaching ICs on the top side of the cover glass and covering them with black border epoxy) may not be optimal. One reason is that the epoxy may result in topography on the touch surface that may be felt by the user. [0198] Accordingly, some implementations described herein provide novel routing configurations. Some implementations involve the use of a flexible upper substrate 905 of a combined sensor device 900 as a platform for direct attachment of one or more ICs, including but not limited to ASICs. The flexible upper substrate 905 may be wrapped around the edge of the lower substrate 910 (the edge of a glass substrate or another such substantially transparent substrate). Some such implementations involve wrapping the sensing wires and routing leads, and attaching ICs to these leads in a manner that enables the cover glass to extend all the way to the edge of a mobile display device, such as a smart phone device. The IC(s) may be directly attached to the wrap-around portion of the upper substrate 905, thus enabling a minimal edge border on the device, eliminating or minimizing the need for a bezel, and reducing cost by integrating the cover layer and flexible printed circuit. Some such implementations may not result in a topography that can be felt by a user. [0199] Some examples will now be described with reference to Figures 19 through 21B. Figure 19 shows an example of a cross-section of a portion of an alternative combined sensor device. In this implementation, the lower substrate 910 is formed of glass and the upper substrate 905 is formed of a flexible and substantially transparent material, such as a clear polyimide. Here, conductive material (metallization in this example) has been patterned into the upper electrodes 1015 on the upper substrate 905. The upper electrodes 1015 on the underside of the upper substrate 905 may be used to route the sensor's signal lines. The portion 1910 of the upper substrate 905 (which is not drawn to scale) may be configured to wrap around the edge of the lower substrate 910 in some implementations, such as the implementation shown in Figure 21B. In the example shown in Figure 19, the lower electrodes 1030 on the lower substrate 910 may be bonded electrically to the upper electrodes 1015 or to other electrical traces or circuitry on the upper substrate 905 using an anisotropic conductive film (ACF) 1905 or a similar connection scheme. [0200] Figure 20 shows an example of a top view of routing for a combined sensor device. The combined sensor device 900 illustrated in Figure 20 includes both flex-on-glass (FOG) 2005 and chip-on-flex (COF) 2010a configurations. Figure 20 also indicates the handwriting and touch sensor zone 1005 and the fingerprint sensor zone 1010 of the combined sensor device 900. A ground ring 2015 may be included around portions of the handwriting, touch and fingerprint sensor zones 1005 and 1010 to isolate noise coupling from the system and to minimize false touches. While the fingerprint sensor zone 1010 is shown as physically distinct from the handwriting and touch sensor zone 1005, in some implementations with sufficiently high resolution in the handwriting and touch zone, the two zones merge and are indistinguishable. Software may be used to allocate a portion of the combined sensor device 900 for fingerprint detection. When combined with an underlying display device, the software may be used to display a box or other suitable designator for prompting a user where (and when) to place a finger on the sensor device. [0201] Figure 21A shows an example of a cross-sectional view through the combined sensor device shown in Figure 20.
In this example, the upper substrate 905 is bonded to the lower substrate 910 with the adhesive layer 1705. An additional COF 2010b may be seen in this view of the combined sensor device 900. Additional components such as passive devices (not shown) and connective traces for signals, power, ground, and external connectors may be included on an extended portion of the upper substrate 905, along with a controller or other integrated circuits such as the COFs 2010a and 2010b. Electrical or connective vias (not shown) may be included in the flexible upper substrate 905 to aid in connectivity of any electrical and electronic components. A stiffener 2120 such as a Kapton® tape may be attached to an extended portion of the upper substrate 905. [0202] Figure 21B shows an example of a cross-sectional view of a wraparound implementation. In the combined sensor device 900 illustrated in Figure 21B, the flexible upper substrate 905 is wrapped around the edge of the lower substrate 910. Figure 21B depicts the connection of the IC 2105, which is an ASIC in this example, to the upper electrodes 1015 on the inside (lower side) of the upper substrate 905. The IC 2105 may, for example, be configured for controlling the combined sensor device 900 to provide touch sensor, handwriting sensor and/or fingerprint sensor functionality. An electrical connector 2110 is attached to the upper electrodes 1015 or to other traces on one or both sides of the upper substrate 905 in this example. A bezel 2115 is shown in Figure 21B. However, other implementations may not include the bezel 2115. [0203] Here, the signal lines that address the electrodes on the lower substrate 910 are routed and connected to corresponding upper electrodes 1015 on the underside of the flexible upper substrate 905. According to some such implementations, both the cost and the complexity of the combined sensor device 900 may be reduced by integrating the functionality of the flexible upper substrate 905 with that of a flexible printed circuit. [0204] Using devices such as those described above, an array of applications can be enabled. Some such implementations involve using a mobile handheld device as a user authentication-based secure gateway to enable transactions and/or physical access. Some implementations involve using a fingerprint sensor as part of a user authentication system, such as for commercial or banking transactions. In some implementations, a handwriting input function may be used for signature recognition and related applications. Alternatively, or additionally, some implementations involve using the handwriting input feature to automatically capture notes and stylus input from people in an enterprise, such as students in an educational setting, employees in a corporate setting, etc. [0205] For example, there is a growing trend to enable use of a mobile device for commercial transactions, in a manner similar to that in which a credit card is used. In this usage model, a user may simply input a PIN into a cellular telephone that is equipped with a communication interface, such as Near Field Communication (NFC), configured to communicate with payment terminals. [0206] One challenge with this model is that of user authentication. PINs and passwords may be ineffective for preventing unauthorized access. A stolen mobile device or cellular telephone could result in improper usage of the device or phone for credit or debit transactions.
[0207] Some implementations provided herein relate to the use of a built-in fingerprint sensor, such as the fingerprint sensor of the combined sensor device 900, to enable local user authentication. Figure 22 shows an example of a flow diagram illustrating a fingerprint-based user authentication process. The process 2200 may involve using a cellular telephone as a fingerprint-based user authentication system to enable transactions and/or physical access. [0208] According to some such implementations, the user may be enrolled on a mobile device, such as a cellular telephone, by providing one or more fingerprints. In some such implementations, the mobile device includes a combined sensor device 900. Alternatively, or additionally, the user may provide handwriting data. The fingerprint and/or handwriting data may be encrypted and stored securely within the mobile device. However, some alternative implementations provide for authentication by a remote device, such as a server. Such implementations may involve storing the fingerprint and/or handwriting data in a remote device. Moreover, some implementations involve acquiring fingerprint and/or handwriting data from more than one person, so that more than one person may be authenticated using the same mobile device. [0209] During an authentication process, the user provides fingerprint and/or handwriting data to the mobile device, such as through one or more sensors integrated in a cover glass apparatus of the mobile device (block 2205). The user may do so, for example, when the user wishes to make a commercial transaction using the mobile device. The obtained fingerprint and/or handwriting data may be processed securely, either within the mobile device or via a remote device such as an authentication server, and compared to the previously enrolled and stored fingerprint and/or handwriting data (block 2210). In block 2210, the mobile device or the authentication server determines whether there is a match between the obtained fingerprint and/or handwriting data and the stored fingerprint and/or handwriting data. [0210] If and only if there is a match will the transaction be permitted. If no match is found in block 2215, the process 2200 may allow the user to try again, e.g., for a limited number of times (block 2220). If the user cannot provide matching fingerprint and/or handwriting data within this number of attempts, the process may end (block 2230). In some implementations, the mobile device or the authentication server may send a notification to, e.g., a financial institution and/or to local governmental authorities if improper data is received. In this example, either the mobile device or the authentication server is configured to send an authorization signal to another device if the transaction is permitted (block 2225). Examples of such devices include the mobile device 40 and the payment terminal 2310 shown in Figure 23A.
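A hedged Python sketch of the process 2200 of Figure 22 follows, mapping blocks 2205 through 2230 onto a simple retry loop. The matching function is a stub, and the names, attempt limit, and callback interface are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of process 2200: acquire data (block 2205), compare with enrolled
# templates (blocks 2210/2215), retry a limited number of times (block
# 2220), send an authorization signal (block 2225) or end (block 2230).

MAX_ATTEMPTS = 3          # assumed retry limit

def matches(sample, enrolled_templates):
    # Placeholder for secure fingerprint/handwriting matching, performed
    # either on the mobile device or by a remote authentication server.
    return sample in enrolled_templates

def authenticate(acquire_sample, enrolled_templates, send_authorization):
    for _ in range(MAX_ATTEMPTS):
        sample = acquire_sample()                 # block 2205
        if matches(sample, enrolled_templates):   # blocks 2210/2215
            send_authorization()                  # block 2225
            return True
        # No match: allow the user to try again (block 2220).
    return False                                  # block 2230: process ends

# Usage example: the second attempt matches the enrolled template.
enrolled = {"fp_alice"}
samples = iter(["fp_bob", "fp_alice"])
ok = authenticate(lambda: next(samples), enrolled,
                  lambda: print("authorization signal sent"))
print("authenticated:", ok)
```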
[0211] Figure 23A shows an example of a mobile device that may be configured for making commercial transactions. In this example, the mobile device is a fingerprint-secured cellular telephone that is configured for wireless communication with the payment terminal 2310, such as via NFC. The cellular telephone is an instance of the display device 40, described elsewhere herein, and may include a combined sensor device 900 such as that described above. Alternatively, the cellular telephone may include a fingerprint sensor zone 1010 that is not part of a combined sensor device 900. [0212] According to some implementations, a user may provide fingerprint data to the mobile device according to a process such as that described above with reference to Figure 22. If there is a match between the stored and recently-provided fingerprint data, the transaction can be permitted. For example, the payment terminal 2310 of Figure 23A may send a signal to a corresponding device of a financial institution indicating that a payment should be authorized. The financial institution may or may not approve the payment, depending on factors such as the availability of funds or credit. Figure 23A shows a mobile device used to authorize a payment at a payment terminal in physical proximity to the phone. In some other implementations, the mobile device can be used to authorize payments made remotely, such as an e-commerce transaction made via a web browser or other application running on the mobile device, or to authorize a payment made through a separate system, such as an e-commerce transaction made via a web browser or other application running on a personal computer under the control of a user. Referring to Figures 22 and 23A, the authorization signal of block 2225 can be used to control the release of data on the mobile device itself, such as a control bit to authorize transmission of payment or credit card information to the payment terminal 2310. In another implementation, the authorization signal of block 2225 may be sent to another device or process server, such as a device or server of a financial institution, indicating that a payment should be authorized. [0213] Many physical facilities in corporate and government locations are secured electronically, and are accessed using wireless radio frequency identification (RFID) cards, key fobs, etc., that operate on specific wireless frequencies, such as 128 kHz. These are short-range devices that draw energy by inductively coupling power from a card reader or a similar device located near a door. If an RFID card or key fob falls into the wrong hands, security could be compromised at these access points. [0214] Instead of using a separate RFID card or key fob, some implementations involve the use of a fingerprint-secured mobile device, such as a fingerprint-secured cellular telephone, to gain access to such physical facilities. Figure 23B shows an example of the use of a fingerprint-secured mobile device for physical access applications. The mobile device is an instance of the display device 40, described elsewhere herein, and may include a combined sensor device 900. [0215] In some such implementations, a fingerprint-secured mobile device may be used for opening an NFC-enabled access point 2320, such as a door 2315 of a building, an automobile, a locker, a safe, etc., that may be electronically locked. In some implementations, the access point may be configured for communication with other devices, such as an authentication server, via a network. The fingerprint sensor zone 1010 of the mobile device 40 may be used to implement (at least in part) an authentication process for the user before the mobile device 40 initiates its communications with the access point 2320. The authentication procedure may be similar to that described above for the secure payment gateway; however, the application enabled is that of physical access, rather than a transaction. [0216] Mobile devices are becoming a ubiquitous means for storage, transmission, and playback of documents, music, videos, and other digital assets.
In order to preserve digital and other rights, and to prevent unauthorized access, distribution and copying of such digital assets, some implementations involve "marrying" a fingerprint sensor and/or a handwriting sensor to the asset in question. In this manner, only the person (or persons) authorized to access the digital asset can access the asset through the use of the fingerprint sensor and/or the handwriting sensor, which may be sensors of a combined sensor device 900 described herein. [0217] In many enterprises, including corporate, government, educational and other settings, it may be beneficial to have an individual write notes on the screen of a mobile device. A device such as a tablet with a large screen can substitute as a notepad, allowing meeting notes, interactive discussions between colleagues and other important discoveries to be automatically captured. One such device is depicted in Figure 24A. [0218] Figure 24A shows an example of a secure tablet device. The tablet device 2400a of Figure 24A may be configured for wireless communication with a network, such as a network maintained by an enterprise. The tablet device 2400a may include a combined sensor device 900 such as described elsewhere herein. Such network communications can facilitate storage of information captured by the tablet device 2400a on an enterprise's database of documents. Due to the often confidential and private nature of the information contained within these devices, access to such tablets and phones should be restricted only to the authorized user(s). Otherwise, loss of such devices can result in unauthorized usage and compromise of the data contained within. [0219] Some such implementations provide access control according to a handwriting recognition process and/or a fingerprint recognition process. Access to the tablet device 2400a may be controlled according to an analysis of a user's handwriting on the tablet device 2400a and/or according to fingerprint data received from a fingerprint sensor provided on the cover glass apparatus, as described above. In the example depicted in Figure 24A, the stylus tip 1105 can be used to provide the handwriting data 2410 via the tablet device 2400a. Such data may be used for an authentication process similar to that described above with reference to Figure 22. [0220] Figure 24B shows an example of an alternative secure tablet device. The screen of the tablet device 2400b illustrated in Figure 24B may act as the handwriting input device or notepad. The tablet device 2400b may include a combined sensor device 900 such as described elsewhere herein. As shown in Figure 24B, access to the tablet device 2400b may be controlled according to a handwriting authentication procedure: here, the stylus tip 1105 can be used to provide the handwriting data 2410. Alternatively, or additionally, access to the tablet device 2400b may be controlled according to a fingerprint authentication procedure using fingerprint data acquired via the fingerprint sensor zone 1010. The tablet device 2400b may or may not be configured for finger touch sensing, depending on the particular implementation. Information may be automatically captured on the screen and, in some implementations, may be wirelessly synchronized with an enterprise's database. Alternatively, or additionally, such data can be stored locally. Some such data may subsequently be synchronized with the enterprise's database, such as through a wired or wireless interface.
[0221] Figures 25A and 25B show examples of system block diagrams illustrating a display device that includes a combined sensor device. The display device 40 can be, for example, a smart phone, a cellular phone, or a mobile telephone. However, the same components of the display device 40 or slight variations thereof are also illustrative of various types of display devices such as televisions, tablets, e-readers, hand-held devices and portable media players. [0222] The display device 40 includes a housing 41, a display 30, a combined sensor device 900, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 can be formed using any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to: plastic, metal, glass, rubber, and ceramic, or a combination thereof. The housing 41 can include removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols. [0223] The display 30 may be any of a variety of displays, including a bistable or analog display, as described herein. The display 30 also can be configured to include a flat-panel display, such as plasma, EL, OLED, STN LCD, or TFT LCD, or a non-flat-panel display, such as a CRT or other tube device. In addition, the display 30 can include an interferometric modulator display, as described herein. The combined sensor device 900 may be a device substantially as described herein. [0224] The components of the display device 40 are schematically illustrated in Figure 25B. The display device 40 includes a housing 41 and can include additional components at least partially enclosed therein. For example, the display device 40 includes a network interface 27 that includes an antenna 43 which is coupled to a transceiver 47. The network interface 27 may be a source for image data that could be displayed on the display device 40. Accordingly, the network interface 27 is one example of an image source module. The transceiver 47 is connected to a processor 21, which is connected to conditioning hardware 52. The conditioning hardware 52 may be configured to condition a signal (e.g., filter a signal). The conditioning hardware 52 is connected to a speaker 45 and a microphone 46. The processor 21 is also connected to an input device 48 and a driver controller 29. The driver controller 29 is coupled to a frame buffer 28, and to an array driver 22, which in turn is coupled to a display array 30. In some implementations, a power supply 50 can provide power to substantially all components in the particular display device 40 design. [0225] In this example, the display device 40 also includes a combined sensor controller 77. The combined sensor controller 77 may be configured for communication with the combined sensor device 900 and/or configured for controlling the combined sensor device 900. The combined sensor controller 77 may be configured to determine a touch location of a finger, a conductive or non-conductive stylus, etc., proximate the combined sensor device 900. The combined sensor controller 77 may be configured to make such determinations based, at least in part, on detected changes in capacitance in the vicinity of the touch location. The combined sensor controller 77 also may be configured to function as a handwriting sensor controller and/or as a fingerprint sensor controller.
The combined sensor controller 77 may be configured to supply touch sensor, handwriting sensor, fingerprint sensor and/or user input signals to the processor 21. [0226] Although the combined sensor controller 77 is depicted in Figure 25B as being a single device, the combined sensor controller 77 may be implemented in one or more devices. In some implementations, separate sensor controllers may be configured to provide touch, handwriting and fingerprint sensing functionality. Such sensor controllers may, for example, be implemented in separate integrated circuits. In some such implementations, the addressing and/or measurement circuitry for touch mode, handwriting mode and/or fingerprint sensing mode may be contained within one or more controller or driver ASIC chips. In some alternative implementations, however, the processor 21 (or another such device) may be configured to provide some or all such sensor controller functionality. [0227] The network interface 27 includes the antenna 43 and the transceiver 47 so that the display device 40 can communicate with one or more devices over a network. The network interface 27 also may have some processing capabilities to relieve, for example, data processing requirements of the processor 21. The antenna 43 can transmit and receive signals. In some implementations, the antenna 43 transmits and receives RF signals according to the IEEE 16.11 standard, including IEEE 16.11(a), (b), or (g), or the IEEE 802.11 standard, including IEEE 802.11a, b, g, n, and further implementations thereof. In some other implementations, the antenna 43 transmits and receives RF signals according to the BLUETOOTH standard. In the case of a cellular telephone, the antenna 43 is designed to receive code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G or 4G technology. The transceiver 47 can pre-process the signals received from the antenna 43 so that they may be received by and further manipulated by the processor 21. The transceiver 47 also can process signals received from the processor 21 so that they may be transmitted from the display device 40 via the antenna 43. [0228] In some implementations, the transceiver 47 can be replaced by a receiver. In addition, in some implementations, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. The processor 21 can control the overall operation of the display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The processor 21 can send the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to the information that identifies the image characteristics at each location within an image.
For example, such image characteristics can include color, saturation, and gray-scale level. [0229] The processor 21 can include a microcontroller, CPU, or logic unit to control operation of the display device 40. The conditioning hardware 52 may include amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the display device 40, or may be incorporated within the processor 21 or other components. [0230] The driver controller 29 can take the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and can re-format the raw image data appropriately for high speed transmission to the array driver 22. In some implementations, the driver controller 29 can re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a standalone Integrated Circuit (IC), such controllers may be implemented in many ways. For example, controllers may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22. [0231] The array driver 22 can receive the formatted information from the driver controller 29 and can re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of pixels. [0232] In some implementations, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, the driver controller 29 can be a conventional display controller or a bi-stable display controller (such as an IMOD controller). Additionally, the array driver 22 can be a conventional driver or a bi-stable display driver (such as an IMOD display driver). Moreover, the display array 30 can be a conventional display array or a bi-stable display array (such as a display including an array of IMODs). In some implementations, the driver controller 29 can be integrated with the array driver 22. Such an implementation can be useful in highly integrated systems, for example, mobile phones, portable-electronic devices, watches or other small-area displays. [0233] In some implementations, the input device 48 can be configured to allow, for example, a user to control the operation of the display device 40. The input device 48 can include a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a rocker, a touch-sensitive screen, a touch-sensitive screen integrated with the display array 30, or a pressure- or heat-sensitive membrane. The microphone 46 can be configured as an input device for the display device 40. In some implementations, voice commands through the microphone 46 can be used for controlling operations of the display device 40. [0234] The power supply 50 can include a variety of energy storage devices. For example, the power supply 50 may include a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. 
In implementations using a rechargeable battery, the rechargeable battery may be chargeable using power coming from, for example, a wall socket or a photovoltaic device or array. Alternatively, the rechargeable battery can be wirelessly chargeable. The power supply 50 also can include a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. The power supply 50 also can be configured to receive power from a wall outlet. [0235] In some implementations, control programmability resides in the driver controller 29, which can be located in several places in the electronic display system. In some other implementations, control programmability resides in the array driver 22. The above-described optimization may be implemented in any number of hardware and/or software components and in various configurations. The various illustrative logics, logical blocks, modules, circuits and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system. [0236] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular steps and methods may be performed by circuitry that is specific to a given function. [0237] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus. [0238] If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium.
Computer-readable media includes both computer storage media and communication media, including any medium that can be enabled to transfer a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above also may be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product. [0239] Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein. The word "exemplary" is used exclusively herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. Additionally, a person having ordinary skill in the art will readily appreciate that the terms "upper" and "lower" are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of an IMOD as implemented. [0240] Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. [0241] Similarly, while operations are depicted in the drawings in a particular order, a person having ordinary skill in the art will readily recognize that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram.
However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. |
An amount of valid data for each data block of multiple data blocks stored at a first memory is determined. An operation to write valid data of a particular data block from the first memory to a second memory is performed based on the amount of valid data for each data block. A determination is made that a threshold condition associated with when valid data of the data blocks was written to the first memory has been satisfied. In response to determining that the threshold condition has been satisfied, the operation to write valid data of the data blocks from the first memory to the second memory is performed based on when the valid data was written to the first memory. |
1.A method including:Determining the effective data amount of each data block among the plurality of data blocks stored at the first memory;Based on the effective data amount of each of the plurality of data blocks, perform an operation of writing effective data of a specific data block of the plurality of data blocks from the first memory to the second memory ;Determining that the threshold condition associated with the time when valid data of the plurality of data blocks is written to the first memory has been satisfied; andIn response to determining that the threshold condition has been satisfied, the processing device executes the writing of the valid data of the plurality of data blocks from the first memory to the first memory based on the time when the valid data is written to the first memory. The operation of the second memory.2.The method of claim 1, wherein determining that the threshold condition associated with the time when valid data of the plurality of data blocks is written to the first memory has been satisfied further comprises:Identifying the indicator associated with the time when the last data block was written to the first memory; andIdentifies a second indicator associated with the time when the first data block is written to the first memory, wherein the threshold condition corresponds to the time when the last data block is written to the first memory relative to the time when the last data block is written to the first memory. The difference between the time the first data block is written to the first memory, and wherein the threshold condition is satisfied when the difference exceeds the threshold difference.3.The method of claim 1, wherein determining that the threshold condition associated with the time when valid data of the plurality of data blocks is written to the first memory has been satisfied further comprises:Identifying the indicator associated with the time when the first data block was written to the first memory; andIt is determined whether the amount of data written to other data blocks of the first memory since the data was written to the first data block of the first memory exceeds the threshold condition, wherein the threshold condition is since It is satisfied when the amount of data written to the other data block exceeds a threshold amount since the data was written to the first data block.4.The method of claim 1, wherein determining that the threshold condition associated with the time when valid data of the plurality of data blocks is written to the first memory has been satisfied further comprises:Identifying an indicator associated with the time when a first data block of the plurality of data blocks is written to the first memory;It is determined whether the amount of time that has passed since the data was written to the first data block exceeds the threshold condition, wherein the threshold condition is in the amount of time that has elapsed since the data was written to the first data block. 
It is satisfied when the amount of elapsed time exceeds the threshold amount of elapsed time.

5. The method of claim 1, wherein the valid data is data that has not been erased by the host system, marked to be erased, updated, or reprogrammed.

6. The method of claim 1, wherein performing the operation to write the valid data from the first memory to the second memory further comprises: determining that the valid data has been successfully written to the second memory; and in response to determining that the valid data has been successfully written to the second memory, erasing the valid data from the first memory.

7. The method of claim 1, wherein the operation corresponds to garbage collection that writes the valid data of the particular data block from the first memory to the second memory based on the particular data block storing a smallest amount of valid data relative to other data blocks of the plurality of data blocks, and wherein a priority of which data blocks undergo the operation switches from being based on determining the smallest amount of valid data to being based on times at which valid data was written to the plurality of data blocks, responsive to an indication that the threshold condition has been satisfied.

8. The method of claim 1, wherein the first memory comprises a first memory type of single-level cell (SLC) memory, and the second memory comprises a second memory type of multi-level cell, triple-level cell, or quad-level cell memory.

9. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising: determining an amount of valid data for each data block of a plurality of data blocks stored at a first memory; performing, based on the amount of valid data of each of the plurality of data blocks, an operation to write valid data of a particular data block of the plurality of data blocks from the first memory to a second memory; determining that a threshold condition associated with times at which the valid data of the plurality of data blocks was written to the first memory has been satisfied; and in response to determining that the threshold condition has been satisfied, performing the operation to write the valid data of the plurality of data blocks from the first memory to the second memory based on the times at which the valid data was written to the first memory.

10. The non-transitory computer-readable storage medium of claim 9, wherein, to determine that the threshold condition associated with the times at which the valid data of the plurality of data blocks was written to the first memory has been satisfied, the processing device is further to: identify an indicator associated with a time at which a last data block was written to the first memory; and identify a second indicator associated with a time at which a first data block was written to the first memory, wherein the threshold condition corresponds to a difference between the time at which the last data block was written to the first memory and the time at which the first data block was written to the first memory, and wherein the threshold condition is satisfied when the difference exceeds a threshold difference.

11. The non-transitory computer-readable storage medium of claim 9, wherein, to determine that the threshold condition associated with the times at which the valid data of the plurality of data blocks was written to the first memory has been satisfied, the processing device is further to: identify an indicator associated with a time at which a first data block was written to the first memory; and determine whether an amount of data written to other data blocks of the first memory since data was written to the first data block of the first memory exceeds the threshold condition, wherein the threshold condition is satisfied when the amount of data written to the other data blocks since the data was written to the first data block exceeds a threshold amount.

12. The non-transitory computer-readable storage medium of claim 9, wherein, to determine that the threshold condition associated with the times at which the valid data of the plurality of data blocks was written to the first memory has been satisfied, the processing device is further to: identify an indicator associated with a time at which a first data block of the plurality of data blocks was written to the first memory; and determine whether an amount of time that has elapsed since data was written to the first data block exceeds the threshold condition, wherein the threshold condition is satisfied when the amount of elapsed time exceeds a threshold amount of elapsed time.

13. The non-transitory computer-readable storage medium of claim 9, wherein the valid data is data that has not been erased by the host system, marked to be erased, updated, or reprogrammed.

14. The non-transitory computer-readable storage medium of claim 9, wherein, to perform the operation to write the valid data from the first memory to the second memory, the processing device is further to: determine that the valid data has been successfully written to the second memory; and in response to determining that the valid data has been successfully written to the second memory, erase the valid data from the first memory.

15. The non-transitory computer-readable storage medium of claim 9, wherein the operation corresponds to garbage collection that writes the valid data of the particular data block from the first memory to the second memory based on the particular data block storing a smallest amount of valid data relative to other data blocks of the plurality of data blocks, and wherein a priority of which data blocks undergo the operation switches from being based on determining the smallest amount of valid data to being based on times at which valid data was written to the plurality of data blocks, responsive to an indication that the threshold condition has been satisfied.

16. The non-transitory computer-readable storage medium of claim 9, wherein the first memory comprises a first memory type of single-level cell (SLC) memory, and the second memory comprises a second memory type of multi-level cell, triple-level cell, or quad-level cell memory.

17. A system comprising: a memory component; and a processing device, operatively coupled with the memory component, to: perform a garbage collection operation on data of a data block of a plurality of data blocks of the memory component; in response to performing the garbage collection operation, determine whether second data that remains written at another data block of the plurality of data blocks of the memory component satisfies a threshold condition; and in response to determining that the second data satisfies the threshold condition, perform a memory flush operation on data of the plurality of data blocks of the memory component.

18. The system of claim 17, wherein, to perform the memory flush operation on the data of the plurality of data blocks of the memory component, the processing device is further to: write each of the data of the plurality of data blocks to another memory component; and erase each of the data from the plurality of data blocks at the memory component.

19. The system of claim 17, wherein, to determine whether the second data that remains written at the another data block of the plurality of data blocks of the memory component after the garbage collection operation is performed satisfies the threshold condition, the processing device is further to: identify whether a difference between an indicator associated with the another data block and a second indicator associated with any of the remaining data blocks exceeds a threshold; identify whether an amount of data written to the plurality of data blocks since the second data was written to the another data block exceeds a threshold; or identify whether an amount of time that has elapsed since the second data was written to the another data block exceeds a threshold.

20. The system of claim 17, wherein the memory component comprises a single-level cell memory type. |
Periodic flush of a memory component that uses greedy garbage collection

Technical field

Embodiments of the present disclosure relate generally to memory subsystems and, more specifically, to periodically flushing memory components that use a greedy garbage collection technique.

Background

A memory subsystem can be a storage system, such as a solid-state drive (SSD), and can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize the memory subsystem to store data at the memory components and to retrieve data from the memory components.

Description of the drawings

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

FIG. 1 illustrates an example computing environment that includes a memory subsystem, in accordance with some embodiments of the present disclosure.

FIG. 2 is a flow diagram of an example method to perform an operation that moves valid data from a first memory to a second memory based on an amount of valid data, and to perform an operation that moves valid data from the first memory to the second memory based on times at which the valid data was written to the first memory, in accordance with some embodiments of the present disclosure.

FIG. 3A illustrates an example of performing an operation that moves data from a first memory to a second memory based on an amount of valid data, in accordance with some embodiments of the present disclosure.

FIG. 3B illustrates another example of performing an operation that moves data from the first memory to the second memory based on an amount of valid data, in accordance with some embodiments of the present disclosure.

FIG. 3C illustrates an example of selecting a block and performing a garbage collection operation on the block based on times at which valid data was written to the first memory, in accordance with some embodiments of the present disclosure.

FIG. 4 is a flow diagram of an example method to perform different garbage collection operations based on different priorities, in accordance with some embodiments of the present disclosure.

FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

Detailed description

Aspects of the present disclosure are directed to periodically flushing memory components that use greedy garbage collection. A memory subsystem is also hereinafter referred to as a "memory device." An example of a memory subsystem is a storage system, such as a solid-state drive (SSD). In some embodiments, the memory subsystem is a hybrid memory/storage subsystem. In general, a host system can utilize a memory subsystem that includes one or more memory components. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

The memory subsystem can include multiple memory components that can store data from the host system. Each memory component can include different types of media. Examples of media include, but are not limited to, a cross-point array of non-volatile memory and flash-based memory such as single-level cell (SLC) memory, triple-level cell (TLC) memory, and quad-level cell (QLC) memory.

Conventional memory subsystems can use a greedy garbage collection operation to relocate data from an SLC cache (also referred to herein as a static cache) to data blocks having a different memory type.
In a flash-based memory subsystem that uses two-pass programming, a power loss during the second programming pass can corrupt data on pages that were programmed during the first pass. To protect against this, the system can first write data in SLC mode and later relocate the data to MLC, TLC, or QLC blocks. Relocating data from a first data block to a second data block and erasing the data from the first data block is referred to herein as garbage collection. A greedy garbage collection operation refers to an operation that relocates valid data from a first data block to a second data block based on the amount of valid data stored at the first data block. Valid data refers to data that was successfully written to a data block and that has not been erased by the host system, marked to be erased, updated, or reprogrammed. Thus, valid data can be data that is currently in use by the host system, whereas invalid data can be data that is no longer in use by the host system.

A greedy garbage collection operation can select, for garbage collection, the valid data at the data block that stores the smallest amount of valid data relative to the other data blocks of the memory component. For example, if one data block contains 5% valid data and another data block of the memory component contains 10% valid data, the data block with 5% valid data is selected for the greedy garbage collection operation, and the valid data and invalid data at that data block can then be erased. The data block can subsequently be used to store additional host data. One rationale for selecting the data block with the smallest amount of valid data is that relocating a smaller amount of valid data takes less time and consumes fewer computing resources than relocating a larger amount of valid data. The garbage collection process can be performed during idle time or when the data blocks of the SLC cache each store at least some valid data.

In some cases, a longer data residency length yields the smallest amount of valid data. The data residency length refers to the amount of data written to the SLC cache between the time a data block is written with data and the time the data block is garbage collected. A larger residency length tends to produce a smaller amount of valid data, because host writes can include hot data that is soon overwritten by subsequently written data. A larger residency length means that a large amount of host data is written between the time a block is initially written and the time the block is garbage collected, which increases the probability that the host data is overwritten before garbage collection. The maximum residency length converges to the size of the SLC cache. For example, for a memory component with an SLC cache size of 6 gigabytes (GB), the maximum amount of data that can be written between the moment the first data block of the SLC cache is written and the moment garbage collection is triggered is equal to the SLC cache size. This is easiest to see when garbage collection follows a first-in, first-out strategy. Compared with a memory component operating with a smaller data residency length, a memory component operating with the longest data residency length has the highest data overwrite rate and is more desirable in terms of resource usage.
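To make the selection policy concrete, the following is a minimal sketch in Python of greedy victim-block selection as described above. The Block structure, the valid_bytes field, and the function name are hypothetical illustrations, not part of the disclosed subsystem.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Block:
    index: int          # order in which the block was opened for host data
    valid_bytes: int    # amount of valid (not yet invalidated) data in the block

def select_greedy_victim(slc_cache: List[Block]) -> Optional[Block]:
    """Greedy policy: pick the block holding the least valid data, so that
    relocation copies as little data as possible."""
    if not slc_cache:
        return None
    return min(slc_cache, key=lambda b: b.valid_bytes)
```

With two blocks holding 5% and 10% valid data, as in the example above, the block at 5% is returned.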
However, when the data block with the smallest amount of valid data is always selected for garbage collection, some data blocks in the SLC cache can be passed over because they hold cold host data, and they inadvertently remain in the SLC cache without being garbage collected. Cold host data refers to data that is infrequently used or modified by the host system (for example, media files).

As more SLC cache data blocks are programmed with cold host data, those data blocks are not selected for garbage collection, because their cold host data constitutes a higher amount of valid data than the data of other data blocks. Under a greedy garbage collection operation, a data block holding cold host data is not garbage collected until the valid data of the other data blocks in the SLC cache equals or exceeds the amount of valid data stored at the data block holding the cold host data. A portion of the SLC cache is therefore "blocked," and the number of SLC cache data blocks available to be written with new host data decreases. Correspondingly, the smaller data residency length of the data consistently selected by the greedy garbage collection operation results in a lower overwrite rate, producing larger amounts of valid data than is theoretically achievable. The reduced host data overwrite rate caused by the smaller data residency length leads to inefficient use of the data blocks of the memory device. In some cases, the throughput performance of the memory component is reduced as a result. In addition, the longer data resides (for example, cold host data), the more likely the data is to become corrupted. The technology disclosed herein therefore moves the cold host data after determining that a threshold condition has been satisfied.

Aspects of the present disclosure address the above and other deficiencies with a memory subsystem that changes, based on a threshold condition related to the times at which valid data was written to the data blocks, the type of garbage collection operation performed for the SLC cache. Data in the SLC cache can be garbage collected by relocating valid data to a data block having another memory type (e.g., MLC, TLC, QLC). The data block having the other memory type can be part of the same memory component as the SLC cache or part of a different memory component.

In some embodiments, host data is initially programmed into the SLC cache. A garbage collection operation (for example, a greedy garbage collection operation) can be performed to relocate valid data from the data block of the SLC cache that holds the smallest amount of valid data. The valid data can be copied to another data block having a different memory type. In some cases, the valid data and invalid data in a data block of the SLC cache are not erased until the data is successfully written to the other data block. As noted above, under the first garbage collection operation (for example, the greedy garbage collection operation), some data blocks can be passed over. Accordingly, in response to determining that a threshold condition associated with the times at which valid data was written to the SLC cache has been satisfied, a second garbage collection operation (for example, a memory flush operation) can be performed that writes valid data from the SLC cache to another data block based on the times at which the valid data was written to the SLC cache.
It should be understood that the greedy garbage collection operation and the memory flush operation are each a type of garbage collection operation.

In some embodiments, an indicator can be used to determine when the threshold condition is satisfied. The number of other SLC cache blocks programmed between the time a data block is programmed with host data and the time the data block is garbage collected can serve as such an indicator. Each SLC cache data block newly opened to program host data is assigned an index value, and the index value is incremented each time a new SLC cache block is opened. The difference between the index value of an SLC cache block and the index value of the most recently opened SLC cache block can then serve as a residency length indicator. For example, suppose an SLC data block is the first data block programmed in the lifetime of the memory device. That SLC data block can be assigned an index of 1. If the data written is an operating system (OS) image, this SLC data block can hold 100% valid data, because the host typically does not overwrite the OS image. As the host writes additional data, more SLC data blocks are newly opened to accept host data. For each newly opened SLC data block, the index value is incremented and assigned to that block, so each SLC data block has a unique index number. If 10 additional SLC data blocks are opened and programmed, the 10th of those blocks has an index value of 11. At any time, the residency length of an SLC data block can be computed as the difference between the index number of that SLC data block and the highest index number among the SLC data blocks. The difference between the maximum index value (newest data) and the minimum index value (oldest data) of the two corresponding data blocks can be determined, and the threshold condition is satisfied if that difference exceeds a threshold difference. Alternatively, the index values can be treated as sequence numbers, with SLC blocks opened for programming in sequence-number order.

In some embodiments, the minimum index value can be determined, along with the amount of data that has been written since the data stored at the data block associated with the minimum index was written. If the amount of data written since then exceeds a threshold amount, the threshold condition is satisfied. In some embodiments, the indicator can be a timestamp, and the threshold condition is satisfied if more than a threshold amount of time has elapsed since the data of the data block associated with the timestamp was written.

When the threshold condition is satisfied, a memory flush operation can be performed. The memory flush operation can include copying the valid data of a data block that satisfies the threshold condition to another data block and erasing the valid data and invalid data from that data block. In some embodiments, when the threshold condition is satisfied, the memory flush operation can include copying the valid data of every data block in the static cache to other data blocks and erasing all of the valid and invalid data in the static cache to completely empty the static cache. In some cases, the oldest data is selected first for copying to another data block and is then released (e.g., erased). The next-oldest data block is then selected for copying to another data block and released, and so on, until all valid data from all data blocks has been moved from the static cache to other data blocks.
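The following Python sketch illustrates the three indicator-based threshold checks described above (index difference, amount of data written, and elapsed time). The structure and names are hypothetical illustrations, and the thresholds are parameters the controller firmware would tune.

```python
import time
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    index: int           # sequence number assigned when the block was opened
    written_at: float    # timestamp of when the block was programmed
    valid_bytes: int     # amount of valid data remaining in the block

def exceeds_index_threshold(blocks: List[Block], threshold_diff: int) -> bool:
    """Condition 1: the index difference between the newest and the oldest
    blocks in the cache (the residency length) exceeds a threshold difference."""
    if not blocks:
        return False
    newest = max(b.index for b in blocks)
    oldest = min(b.index for b in blocks)
    return (newest - oldest) > threshold_diff

def exceeds_data_threshold(bytes_written_since_oldest: int,
                           threshold_bytes: int) -> bool:
    """Condition 2: the amount of data written to other blocks since the
    oldest block was programmed exceeds a threshold amount."""
    return bytes_written_since_oldest > threshold_bytes

def exceeds_time_threshold(blocks: List[Block],
                           threshold_seconds: float) -> bool:
    """Condition 3: the time elapsed since the oldest block was programmed
    exceeds a threshold amount of elapsed time."""
    if not blocks:
        return False
    oldest = min(blocks, key=lambda b: b.index)
    return (time.time() - oldest.written_at) > threshold_seconds
```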
Advantages of the present disclosure include, but are not limited to, improved performance of the memory subsystem, because data blocks of the static cache are released or made available, allowing host data to be stored at a higher overwrite rate. That is, the present disclosure can eliminate cold-host-data blockage and increase the data residency length. Eliminating cold-host-data blockage and increasing the data residency length improves the performance of the memory subsystem because the full size of the static cache is utilized more efficiently. Improving the performance of the memory subsystem also yields energy savings. For example, performing the flush operation can provide an optimal operating residency length for the data in the data blocks of the SLC cache. The optimal operating residency length produces a smaller amount of valid data in the data blocks of the SLC cache. The workload of moving data from the SLC cache to another memory component is therefore reduced by performing the flush operation, because less valid data at garbage collection time consumes fewer MLC/TLC/QLC program-erase (P/E) cycles on garbage collection. Consuming fewer MLC/TLC/QLC P/E cycles on garbage collection reduces energy consumption. Thus, the smaller MLC/TLC/QLC P/E cycle requirement for garbage collection operations can improve performance and/or reduce energy consumption.

FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as memory components 112A to 112N. The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such components.

The characteristics of different types of media can differ from one media type to another. One example of a characteristic associated with a memory component is data density. Data density corresponds to the amount of data (for example, bits of data) that can be stored per memory cell of a memory component. Using the example of flash-based memory, a quad-level cell (QLC) can store four bits of data, while a single-level cell (SLC) can store one bit of data. Accordingly, a memory component that includes QLC memory cells has a higher data density than a memory component that includes SLC memory cells.

Some types of memory components lack storage capacitors and can suffer data corruption if power is lost while data is being programmed to the memory component. For example, a TLC memory component can use a two-pass programming operation to program data received from the host system. During the first pass, data can be programmed to a first portion of the memory cells of the TLC memory component (e.g., a lower page). During the second pass, data can be programmed to a second portion of the memory cells of the TLC memory component (e.g., an upper page and an extra page). If a power loss event occurs during the second pass, the data stored at the first portion of the memory cells can become corrupted.

In conventional systems, to address this problem, statically allocated, reserved SLC data blocks of a memory component can be used in addition to the data blocks of the MLC, TLC, or QLC memory type.
A data block having the SLC memory type is not affected by a power loss event because the data block is programmed using a single-pass programming operation. In such conventional systems, host data can be mirrored between the SLC data blocks and the data blocks having the MLC, TLC, or QLC memory type. The data in a reserved SLC data block can be released after all pages have been successfully written to the other data blocks. In another example, only the data to be written in the first pass of the two-pass programming operation is written to the reserved SLC data blocks. In yet another example, a static SLC cache can be used so that host data is routed to SLC data blocks first. After the static SLC cache is fully consumed, the data from the SLC data blocks can be relocated to data blocks having the MLC, TLC, or QLC memory type. After the data of an SLC data block has been fully relocated to other data blocks, the data in the SLC data block can be erased.

In some embodiments, the memory subsystem is a storage system. An example of a storage system is an SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem. In general, the computing environment 100 can include a host system 120 that uses the memory subsystem 110. For example, the host system 120 can write data to the memory subsystem 110 and read data from the memory subsystem 110.

The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or such a computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory subsystem 110 so that the host system 120 can read data from or write data to the memory subsystem 110. The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (for example, without intervening components), whether wired or wireless, including connections such as electrical, optical, and magnetic connections. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), and so forth. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 by a PCIe interface, the host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.

The memory components 112A to 112N can include any combination of different types of non-volatile memory components and/or volatile memory components. An example of a non-volatile memory component is NAND-type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (e.g., triple-level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells.
For example, the SLC portion can be the static cache used to initially write host data to the memory component, as discussed above. The garbage collection operation can initially be performed on the data of the data blocks of the SLC portion based on the amount of valid data of each data block. After the threshold condition is satisfied, the garbage collection operation can be performed on the data of the data blocks of the SLC portion based on the times at which valid data was written to the data blocks, so that the data blocks of the SLC portion are used efficiently, enhancing the performance of the memory subsystem 110. In some embodiments, the first memory component 112A can include data blocks having the SLC memory type, and the second memory component 112N can include data blocks having another memory type (e.g., MLC, TLC, QLC, etc.). The first memory component 112A can be used as a cache to initially store host data, and a garbage collection operation can be performed on the data of the data blocks of the first memory component 112A to relocate the data to the data blocks of the second memory component 112N.

Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND-type flash memory are described, the memory components 112A to 112N can be based on any other type of memory, such as volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magnetoresistive random access memory (MRAM), NOR flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped into memory pages or data blocks, which can refer to units of the memory component used to store data.

The memory system controller 115 (hereinafter referred to as the "controller") can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N, and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special-purpose logic circuitry (for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in a local memory 119.
In the illustrated example, the local memory 119 of the controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, and the like. The local memory 119 can also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 is illustrated as including the controller 115, in another embodiment of the present disclosure a memory subsystem 110 may not include a controller 115 and can instead rely on external control (for example, provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translation between logical block addresses and physical block addresses that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert commands received from the host system into command instructions to access the memory components 112A to 112N, as well as convert responses associated with the memory components 112A to 112N into information for the host system 120.

The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N.

The memory subsystem 110 includes a garbage collection component 113 that can be used to perform garbage collection on the data stored at the memory components 112A to 112N. In some embodiments, the controller 115 includes at least a portion of the garbage collection component 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations described herein. In some embodiments, the garbage collection component 113 is part of the host system 120, an application, or an operating system.

The garbage collection component 113 can perform a garbage collection operation based on the amount of valid data of each data block of the memory component 112A having a particular memory type (for example, SLC), and can switch the priority of which data blocks undergo the garbage collection operation from being based on the amount of valid data stored at the data blocks to being based on a threshold condition associated with the times at which valid data was written to the data blocks of the memory component 112A.
The garbage collection component 113 can remove any blockage in the data blocks of the memory component 112A by using the garbage collection operation to relocate any cold host data that was not relocated based on the amount of valid data stored at the data blocks. Further details regarding the operations of the garbage collection component 113 are described below.

FIG. 2 is a flow diagram of an example method 200 to perform an operation that moves valid data from a first memory to a second memory based on an amount of valid data, and to perform an operation that moves valid data from the first memory to the second memory based on times at which the valid data was written to the first memory, in accordance with some embodiments of the present disclosure. The method 200 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 200 is performed by the garbage collection component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments; not all processes are required in every embodiment. Other process flows are possible.

At block 210, the processing device determines an amount of valid data for each data block of a set of data blocks stored at a first memory (e.g., memory component 112A). Valid data refers to data that was successfully written to a data block and that has not been erased by the host system, marked to be erased, updated, or reprogrammed. The first memory can have a memory type, such as SLC, that is programmed using a one-pass programming operation (for example, data is programmed after one voltage pulse is applied).

At block 220, the processing device performs, based on the amount of valid data of each data block of the set of data blocks, an operation (for example, a garbage collection operation) that writes valid data of a particular data block of the set of data blocks from the first memory to a second memory (for example, memory component 112N, or a different data block of the memory component 112A). The second memory can be of a memory type that is programmed using a two-pass programming operation (for example, a first portion of the data is programmed in the first pass and the remaining data is programmed in the second pass), such as MLC, TLC, or QLC. The data block storing the smallest amount of valid data can be selected, and the valid data from the selected data block can be written to another data block at the second memory. The processing device can determine that the valid data was successfully written to the second memory (for example, upon completion of the two programming passes, or of an appropriate percentage of the two programming passes).
In response to determining that the valid data was successfully written to the second memory, the processing device can erase the valid data and the invalid data from the corresponding data block at the first memory.

As discussed above, performing the operation with a priority related to the amount of valid data stored at the data blocks can result in some valid data (for example, at the data block containing the largest amount of valid data) not being copied to the second memory, and can reduce the number of data blocks that are rewritten at the first memory. Accordingly, in some embodiments, the priority of which data blocks are to undergo the operation is switched based on a threshold condition associated with the times at which valid data was written to the set of data blocks of the first memory.

For example, at block 230, the processing device determines that a threshold condition associated with the times at which valid data of the set of data blocks was written to the first memory has been satisfied. In some embodiments, determining that the threshold condition has been satisfied includes identifying an indicator associated with the time at which the last data block (for example, holding the newest data) was written to the first memory. The indicator can be an index value (e.g., a version number) that starts at 0 or 1 and is incremented each time a data block is programmed. The processing device can also identify a second indicator associated with the time at which the first data block (e.g., holding the oldest data) was written to the first memory. The threshold condition can correspond to the difference between the time at which the last data block, in view of the indicator, was written to the first memory and the time at which the first data block (or the oldest data block currently stored at the first memory), in view of the second indicator, was written to the first memory (for example, by comparing the index values), and the threshold condition is satisfied when the difference exceeds a threshold difference.

In some embodiments, determining that the threshold condition has been satisfied includes the processing device identifying an indicator associated with the time at which the first data block (for example, holding the oldest data) was written to the first memory. In this embodiment, the indicator can be an index value, a timestamp, a flag, or the like. The processing device can determine whether the amount of data written to other data blocks of the first memory since data was written to the first data block of the first memory exceeds the threshold condition. The threshold condition can be satisfied when the amount of data written to the other data blocks since the data was written to the first data block exceeds a threshold amount.

In some embodiments, determining that the threshold condition has been satisfied includes identifying an indicator associated with the time at which the first data block (for example, holding the oldest data) currently stored at the first memory was written. In this embodiment, the indicator can be a timestamp. In view of the timestamp, the processing device can determine whether the amount of time that has elapsed since the data was written to the first data block exceeds the threshold condition.
The threshold condition can be satisfied when the amount of time that has elapsed since the data was written to the first data block exceeds a threshold amount of elapsed time.

At block 240, in response to determining that the threshold condition has been satisfied, the processing device performs an operation (for example, a garbage collection operation) that writes valid data of the set of data blocks from the first memory to the second memory based on the times at which the valid data was written to the first memory. Note that when the threshold condition is satisfied, the garbage collection strategy switches from selecting the data block with the least valid data to selecting the data block with the oldest valid data. In some embodiments, the processing device can begin by writing the oldest valid data to the second memory and, after it is successfully written to the second memory, erasing the oldest valid data at the first memory. The processing device can then write the next-oldest valid data to the second memory and, after it is successfully written, erase the next-oldest valid data at the first memory. This process can continue until all valid data is flushed from the first memory. In some embodiments, only a portion of the valid data is flushed. For example, the flush can be limited to valid data in data blocks whose index values differ from the index value of the data block associated with the newest valid data by more than the threshold difference, or to valid data in data blocks written more than the threshold amount of elapsed time ago, and so forth. In this way, only "blocked" data blocks are flushed. A sketch of this policy switch and oldest-first flush follows.
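The following is a minimal Python sketch of blocks 230 and 240 under stated assumptions: the Block structure is the hypothetical one sketched earlier, the index-difference variant of the threshold condition is used, and relocate() is an assumed helper standing in for the controller's copy-then-erase relocation, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Block:
    index: int
    valid_bytes: int

def flush_oldest_first(blocks: List[Block],
                       relocate: Callable[[Block], None]) -> None:
    """Memory flush (block 240): relocate valid data oldest-first (lowest
    index first); relocate() is assumed to copy a block's valid data to an
    MLC/TLC/QLC block and erase the source only after the copy succeeds."""
    for block in sorted(blocks, key=lambda b: b.index):
        if block.valid_bytes > 0:
            relocate(block)

def collect_garbage(blocks: List[Block],
                    relocate: Callable[[Block], None],
                    threshold_diff: int) -> None:
    """Greedy selection by default; switch to the oldest-first flush once
    the index-difference threshold condition (block 230) is satisfied."""
    if not blocks:
        return
    newest = max(b.index for b in blocks)
    oldest = min(b.index for b in blocks)
    if (newest - oldest) > threshold_diff:
        flush_oldest_first(blocks, relocate)
    else:
        relocate(min(blocks, key=lambda b: b.valid_bytes))
```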
FIG. 3A illustrates an example of performing an operation that moves data from a first memory 300 to a second memory 302 based on an amount of valid data, in accordance with some embodiments of the present disclosure. The host system 120 can send data to the memory subsystem 110 for storage. The first memory 300 can be a static cache with reserved data blocks 304, 306, and 308, and can have a memory type (for example, SLC) that is programmed using a single-pass programming operation. The second memory 302 can have a memory type (for example, MLC, TLC, QLC) that is programmed using a two-pass programming operation. Data from the host can first be written to the first memory 300 using the single-pass programming operation to ensure that the data is not corrupted by a power loss during the two-pass programming operation when the data is written to the second memory 302. The first memory 300 and the second memory 302 can be different portions of the same memory component 112A, or portions of different memory components 112A and 112N.

As depicted, the first memory 300 includes two data blocks 304 and 306 with different amounts of valid data stored thereon. The data block 304 stores 100% valid data, and the data block 306 stores 5% valid data. The processing device can perform an operation (for example, garbage collection) that writes valid data of a particular data block from the first memory 300 to the second memory 302 based on the amount of valid data of each of the data blocks 304 and 306. In some cases, the processing device selects the smallest amount of valid data among the data blocks 304 and 306 of the first memory 300. Accordingly, the processing device selects the 5% of valid data from the data block 306 to write to the data block 310 of the second memory 302, as depicted by the arrow 312.

Although, for simplicity, the 5% of valid data is shown as written to the data block 310 of the second memory 302 unchanged, note that if the second memory 302 is of the MLC type, the written data occupies 2.5% of the destination block, because an MLC data block is twice the size of an SLC data block. Similarly, if the second memory 302 is of the TLC type, the written data occupies 1.67%, because a TLC data block is three times the size of an SLC data block, and if the second memory 302 is of the QLC type, the written data occupies 1.25%, because a QLC data block is four times the size of an SLC data block (a worked computation of these fractions follows the FIG. 3B example below).

Additionally, as depicted, an index value is associated with each data block of the first memory 300 when the data block is programmed. It should be understood that the index value is used as one example of an indicator; other indicators can be used (e.g., a timestamp, an amount of data written, etc.). The processing device can associate an index value having an initial value with the data block that is programmed first, and increment the index value each time a data block is programmed. Thus, the index value of the initially programmed data block 304 is 0, and the index value of the data block 306, having been incremented once, is 1. After the processing device determines that the 5% of valid data was successfully written to the data block 310, the 5% of valid data can be erased from the data block 306 of the first memory 300.

FIG. 3B illustrates another example of performing an operation that moves data from the first memory 300 to the second memory 302 based on an amount of valid data, in accordance with some embodiments of the present disclosure. As depicted, the first memory 300 includes the two data blocks 304 and 306 with different amounts of valid data stored thereon. The data block 304 stores 100% valid data, and the data block 306 stores 10% valid data. The processing device can perform an operation (for example, garbage collection) that writes valid data of a particular data block from the first memory 300 to the second memory 302 based on the amount of valid data of each of the data blocks 304 and 306. In some cases, the processing device selects the smallest amount of valid data among the data blocks 304 and 306 of the first memory 300. Accordingly, the processing device selects the 10% of valid data from the data block 306 to write to the data block 310, combining the 10% with the previously stored 5% to produce 15% valid data at the data block 310 of the second memory 302, as depicted by the arrow 316.

Additionally, as depicted, the index value of the data block 306 is incremented again, to 2, because the data block has been reprogrammed, this time with the 10% of valid data. The index value of the data block 304 remains 0 because it has been programmed only once. After the processing device determines that the 10% of valid data was successfully written to the data block 310, the 10% of valid data can be erased from the data block 306 of the first memory 300. It should be understood that after this second operation is performed, the 100% of valid data in the data block 304 remains stored at the data block 304, because it is more valid data than either the 5% or the 10% of valid data held by the data block 306.
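As a worked check of the fractions quoted in the FIG. 3A example, a given amount of valid data occupies a smaller fraction of a destination block that stores more bits per cell. A short illustrative computation (the 2x/3x/4x block-size ratios are the assumption):

```python
# Valid data filling 5% of an SLC source block, re-expressed as a fraction
# of a destination block that stores n bits per cell (n x the SLC capacity).
slc_fraction = 0.05
for name, bits_per_cell in [("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {slc_fraction / bits_per_cell:.2%}")
# Prints: MLC: 2.50%  TLC: 1.67%  QLC: 1.25%
```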
In some cases, the data block 306 can continue to be programmed with less valid data than the data block 304, and the data block 304 can remain blocked by cold host data.

Accordingly, embodiments of the present disclosure provide a switch that changes the priority of which data blocks undergo the operation from being based on the amount of valid data of the data blocks of the first memory 300 to being based on a threshold condition associated with the times at which valid data was written to the data blocks of the first memory 300. For example, FIG. 3C illustrates an example of selecting a block and performing a garbage collection operation on the block based on the times at which valid data was written to the first memory 300, in accordance with some embodiments of the present disclosure. After the 10% of valid data was written to the second memory 302 and erased from the first memory 300 in FIG. 3B, the data block 306 is written with 7% valid data. In response to being programmed with the 7% of data, the index value of the data block 306 is incremented a third time, to 3.

The processing device can determine that the threshold condition associated with the times at which valid data of the set of data blocks was written to the first memory 300 has been satisfied by identifying an indicator associated with the time at which the last data block was written to the first memory 300. In the depicted example, the indicator is the index value associated with the data block 306, and the last data block written is the data block 306 because it has the highest index value, 3. The processing device can identify a second indicator associated with the time at which the first data block was written to the first memory 300. In the depicted example, the second indicator is the index value associated with the data block 304, and the first data block written to the first memory 300 is the data block 304 because it has the lowest index value. The threshold condition corresponds to the difference between the time at which the last data block 306 was written to the first memory 300 and the time at which the first data block 304 was written to the first memory 300; the threshold condition is satisfied when the difference exceeds a threshold difference. In the depicted example, the threshold difference is 2, and the difference between the index value (3) of the last data block 306 and the index value (0) of the first data block 304 is 3. The threshold difference is therefore exceeded.

In response to determining that the threshold condition has been satisfied, the processing device can perform an operation that writes valid data of the set of data blocks 304 and 306 to the second memory 302 based on the times at which the valid data was written to the first memory. For example, the processing device can write the oldest valid data to the second memory 302 first. The oldest valid data can be identified by the data block having the lowest index value (for example, the data block 304, with an index value of 0). As depicted by the arrow 318, the oldest valid data (e.g., the 100% of valid data) of the data block 304 is written to the data block 314 of the second memory 302. In some embodiments, at least a portion of the oldest valid data of the data block 304 can instead be appended to the data block 310. For example, the least valid data from the data block 306 at index 1 in FIG. 3A, at index 2 in FIG. 3B, and at index 3 in FIG. 3C can be mixed together in the data block 310. Other placement strategies are possible.
For example, the data block 310 can be designated for data that meets one criterion, such as being the least valid data, while another open data block (for example, the data block 314) is used for data that meets another criterion, such as exceeding an age threshold.

In some embodiments, after the oldest valid data of the data block 304 is written to the data block 314 of the second memory 302, the next-oldest valid data, that of the data block 306 (for example, the 7% of valid data), is written to the data block 310 and combined with the 15% of valid data previously stored at the second memory 302 (thus yielding 22%), as depicted by the arrow 320. After the valid data is determined to have been successfully written to the second memory 302, any data at the data blocks 304 and 306 can be erased.

FIG. 4 is a flow diagram of an example method 400 to perform garbage collection operations based on different priorities, in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the garbage collection component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments; not all processes are required in every embodiment. Other process flows are possible.

At block 410, the processing device performs a garbage collection operation on data of a data block of a set of data blocks of the memory component 112A. The memory component 112A can be a cache having a memory type (e.g., SLC) that is programmed using a single-pass programming operation. Host data can be written initially to the memory component 112A and then relocated to another portion of the memory component 112A or to a different memory component 112N. The garbage collection operation selects data blocks based on a priority associated with the amount of valid data stored at each data block of the set of data blocks. The garbage collection operation can select, from the set of data blocks, the data block with the smallest amount of valid data and write the valid data from the selected data block to another data block of the memory component 112A or to another data block of a different memory component 112N. Further, after the valid data is successfully written, the garbage collection operation can erase the valid data from the data block of the memory component 112A.

At block 420, in response to performing the garbage collection operation, the processing device determines whether second data that remains written at another data block of the set of data blocks of the memory component 112A satisfies a threshold condition. The processing device switches the priority of which data blocks are to undergo the operation from being based on the amount of valid data stored at the set of data blocks to being based on the times at which valid data was written to the set of data blocks.
In some embodiments, to determine whether the second data that remains written at the other data block of the set of data blocks of the memory component 112A after the garbage collection operation is performed satisfies the threshold condition, the processing device can identify whether a difference between an indicator associated with the other data block and a second indicator associated with any of the remaining data blocks exceeds a threshold. For example, the processing device can compare the index value of the other data block with the index value of the most recently written data block. If the difference is greater than a threshold difference between the index values, the processing device can determine that the threshold condition is satisfied.

In some embodiments, to determine whether the second data that remains written at the other data block satisfies the threshold condition, the processing device can identify whether the amount of data written to the set of data blocks since the second data was written to the other data block exceeds a threshold. For example, the processing device can identify the indicator associated with the other data block and can determine the amount of data written to the other data blocks of the memory component 112A after the second data was written. If more than a threshold amount of data has been written, the threshold condition is satisfied.

In some embodiments, to determine whether the second data that remains written at the other data block satisfies the threshold condition, the processing device can identify whether the amount of time that has elapsed since the second data was written to the other data block exceeds a threshold. If the elapsed time exceeds the threshold, the threshold condition can be satisfied.

At block 430, in response to determining that the second data satisfies the threshold condition, the processing device performs a memory flush operation on the data of the set of data blocks of the memory component. The memory flush operation refers to a type of garbage collection in which the priority of the data blocks is based on the times at which valid data was written to the data blocks. To perform the memory flush operation on the data of the set of data blocks of the memory component 112A, the processing device can write each of the data of the set of data blocks to another memory component 112N, or to another data block of the memory component 112A, and erase each of the data from the set of data blocks of the memory component 112A. In some embodiments, the processing device can select the oldest valid data of the set of data blocks of the memory component 112A first and repeat the operation, copying valid data from oldest to newest to the other memory component 112N. After each of the valid data is copied to its new location, the valid data can be erased from the set of data blocks. The overall control flow of method 400 is sketched below.
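The following hedged sketch ties blocks 410 through 430 together: a greedy pass runs first, and the flush check is made in response to that pass. The callables are hypothetical stand-ins for controller firmware routines and are not named by the disclosure.

```python
from typing import Callable, List, TypeVar

B = TypeVar("B")  # stands in for whatever per-block record the firmware keeps

def method_400(blocks: List[B],
               greedy_collect: Callable[[List[B]], None],
               retained_past_threshold: Callable[[List[B]], bool],
               flush_all: Callable[[List[B]], None]) -> None:
    # Block 410: perform a garbage collection operation on a data block of
    # the set of data blocks of the memory component.
    greedy_collect(blocks)
    # Block 420: in response, determine whether second data that remains
    # written at another data block satisfies the threshold condition.
    if retained_past_threshold(blocks):
        # Block 430: perform the memory flush operation on the data of the
        # set of data blocks of the memory component.
        flush_all(blocks)
```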
FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, the computer system 500 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1), or it can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the garbage collection component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530.

The processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over a network 520.

The data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media.
The machine-readable storage medium 524, the data storage system 518, and/or the main memory 504 may correspond to the memory subsystem 110 of FIG. 1. In one embodiment, the instructions 526 include instructions to implement functionality corresponding to a garbage collection component (e.g., the garbage collection component 113 of FIG. 1). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods.
The required structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having instructions stored thereon, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so forth. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made to the present disclosure without departing from the broader spirit and scope of embodiments of the present disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Embodiments of the disclosure describe data transmission via a touchscreen display of a mobile computing device. The mobile computing device includes a peripheral component, integrated into a touchscreen display surface housing, and a plurality of photonic pulse transmitters and receivers disposed on edges of the touchscreen display surface. One or more receivers receive pulses from the photonic pulse transmitters for detecting user touch inputs on the touchscreen display surface. A photonic pulse modulator modulates a pulse to be transmitted from one of the photonic pulse transmitters based, at least in part, on peripheral component data. A photonic pulse demodulator demodulates the modulated pulse received by the pulse detector(s) to retrieve the peripheral component data. By utilizing these pulse transmitters/receivers, used for user touch input detection, to also exchange data via modulated light, the bezel area around the touchscreen display surface may be reduced. |
1. A mobile computing device comprising a peripheral component data transmission/reception subsystem, the mobile computing device comprising: a touch screen display surface included in a housing; a peripheral component integrated into the housing of the touch screen display surface; a plurality of photon pulse transmitters disposed on at least one edge of the touch screen display surface; one or more pulse detectors disposed on at least one edge of the touch screen display surface and arranged to receive pulses from the plurality of photon pulse transmitters to detect user touch input on the touch screen display surface; a photonic pulse modulator for modulating a pulse to be transmitted from one of the plurality of photon pulse transmitters based, at least in part, on peripheral component data; and a photonic pulse demodulator for demodulating a modulated pulse received by the one or more pulse detectors to retrieve the peripheral component data from the modulated pulse. 2. The mobile computing device of claim 1, wherein said peripheral component data comprises data to be transferred from said peripheral component. 3. The mobile computing device of claim 2, wherein the peripheral component comprises at least one of an image sensor, an audio sensor, an ambient light sensor, or an antenna circuit. 4. The mobile computing device of claim 1, wherein said peripheral component data comprises data to be transferred to said peripheral component. 5. The mobile computing device of claim 4, wherein the peripheral component comprises at least one of an antenna circuit, an audio output component, or a haptic feedback component. 6. The mobile computing device of claim 1, wherein said pulses transmitted from one of said plurality of photon pulse transmitters are modulated according to at least one of a Miller coding scheme or a Manchester coding scheme. 7. The mobile computing device of claim 1, wherein said plurality of photon pulse transmitters are to transmit pulses to a single pulse detector in accordance with a Time Division Multiple Access (TDMA) protocol. 8. The mobile computing device of claim 1, further comprising: a second peripheral component; wherein the photonic pulse modulator is further to modulate a pulse transmitted from one of the plurality of photon pulse transmitters based, at least in part, on data associated with the second peripheral component in accordance with a wavelength division multiplexing (WDM) protocol. 9. The mobile computing device of claim 1, wherein said mobile computing device comprises a handheld mobile computing device. 10. The mobile computing device of claim 1, wherein said mobile computing device comprises a laptop computer, and further comprising: a flip-type chassis comprising a second housing coupled to the housing of the touch screen display surface; wherein the peripheral component data includes data exchanged with a processor included in the second housing. 11. A method for transmitting/receiving peripheral component data, the method comprising: receiving data related to a peripheral component of a mobile computing device; transmitting a photon pulse from a photon pulse transmitter disposed on one side of a touch screen display surface of the mobile computing device to a photon pulse detector disposed on another side of the touch screen display surface, wherein the photon pulse is modulated based, at least in part, on the peripheral component data; determining whether a user touch input has occurred on the touch screen display surface based, at least in part, on an amplitude value of the modulated photon pulse; and
demodulating the modulated photon pulse to retrieve the peripheral component data. 12. The method of claim 11, wherein said peripheral component data comprises data to be transferred from said peripheral component. 13. The method of claim 12, wherein said peripheral component comprises at least one of an image sensor, an audio sensor, an ambient light sensor, or an antenna circuit. 14. The method of claim 11, wherein said peripheral component data comprises data to be transferred to said peripheral component. 15. The method of claim 14, wherein said peripheral component comprises at least one of an antenna circuit, an audio output component, or a haptic feedback component. 16. The method of claim 11, wherein said photon pulses are modulated according to at least one of a Miller coding scheme or a Manchester coding scheme. 17. The method of claim 11, wherein said mobile computing device comprises a plurality of photon pulse transmitters for transmitting pulses to a single pulse detector in accordance with a Time Division Multiple Access (TDMA) protocol. 18. The method of claim 11, wherein said mobile computing device further comprises a second peripheral component; and wherein the photon pulse is further modulated based, at least in part, on data from the second peripheral component in accordance with a wavelength division multiplexing (WDM) protocol. 19. A non-transitory computer readable storage medium comprising instructions which, when executed, cause a computer to perform the method for transmitting/receiving peripheral component data as claimed in any one of claims 11-18. 20. A device for transmitting/receiving peripheral component data, the device comprising: means for receiving data related to a peripheral component of a mobile computing device; means for transmitting a photon pulse from a photon pulse transmitter disposed on one side of a touch screen display surface of the mobile computing device to a photon pulse detector disposed on another side of the touch screen display surface, wherein the photon pulse is modulated based, at least in part, on the peripheral component data; means for determining whether a user touch input has occurred on the touch screen display surface based, at least in part, on an amplitude value of the modulated photon pulse; and means for demodulating the modulated photon pulse to retrieve the peripheral component data. 21. The device of claim 20, wherein said peripheral component data comprises data to be transferred from said peripheral component. 22. The device of claim 21, wherein the peripheral component comprises at least one of an image sensor, an audio sensor, an ambient light sensor, or an antenna circuit. 23. The device of claim 20, wherein the peripheral component data comprises data to be transferred to the peripheral component. 24. The device of claim 20, wherein said mobile computing device comprises a plurality of photon pulse transmitters for transmitting pulses to a single pulse detector in accordance with a Time Division Multiple Access (TDMA) protocol. 25. The device of claim 20, wherein said mobile computing device further comprises a second peripheral component; and wherein the photon pulse is further modulated based, at least in part, on data from the second peripheral component in accordance with a wavelength division multiplexing (WDM) protocol. |
Data transfer on touch screen display. Technical field: Embodiments of the present invention generally relate to computing devices and, more particularly, to mobile computing devices. Background: Mobile computing devices, such as laptops, tablets, and smart phones, utilize touch screen displays, where the display surface also acts as a user input device. These touch screen displays may be included on a device as a complement to other user input devices, such as a keyboard, or as an alternative to them. Users ideally want to maximize the display surface of a mobile computing device while minimizing the form factor of the device; this can be challenging when designing devices that utilize touch screens, as some touch sensing solutions require circuitry to be included in a thick border surrounding the touch screen. Brief description of the drawings: The following description includes a discussion of figures having illustrations given by way of example of implementations of embodiments of the invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more "embodiments" are to be understood as describing a particular feature, structure, or characteristic included in at least one implementation of the invention. Thus, phrases such as "in one embodiment" or "in an alternative embodiment" appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment. However, they are also not necessarily mutually exclusive. FIGS. 1A and 1B are illustrations of data transfer components of a touch screen display system in accordance with an embodiment of the present disclosure. FIG. 2 is a flow diagram of a process for transmitting data via a touch screen display of a computing device in accordance with an embodiment of the present disclosure. FIG. 3 is an illustration of a mobile computing device utilizing data transfer components of a touch screen display in accordance with an embodiment of the present disclosure. FIG. 4 is an illustration of a mobile computing device utilizing data transfer components of a touch screen display in accordance with an embodiment of the present disclosure. FIG. 5A is an illustration of a pulse modulation circuit in accordance with an embodiment of the present disclosure. FIG. 5B is an illustration of modulated pulses in accordance with an embodiment of the present disclosure. FIG. 6 is an illustration of a pulse demodulation circuit in accordance with an embodiment of the present disclosure. FIG. 7 is a block diagram of computing components of a computing device in accordance with an embodiment of the present disclosure. Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the embodiments described below, as well as a discussion of other potential embodiments or implementations of the inventive concepts presented herein. An overview of embodiments of the invention is provided below, followed by a more detailed description with reference to the drawings. Detailed description: Embodiments of the present invention describe apparatuses, systems, and methods for data transfer via a touch screen display of a mobile computing device. Throughout this specification, several terms of art are used.
These terms have their ordinary meaning in the art from which they come, unless specifically defined herein or unless the context of their use clearly suggests otherwise. In the following description, numerous specific details are set forth. However, one skilled in the relevant art will recognize that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects. FIGS. 1A and 1B are illustrations of data transfer components of a touch screen display system in accordance with an embodiment of the present disclosure. In this embodiment, system 100 may be included in a desktop or mobile computing device and is shown to include a display layer 102, a plurality of pulse transmitters (including pulse transmitter 104), and a plurality of pulse receivers (including pulse receiver 106). Touch screens are increasingly becoming part of the user interface of computing devices. Touch screens are used in desktop computing devices and in mobile computing devices such as laptops, tablets, smart phones, and wearable devices. Solutions utilized by these devices to detect user touch input on a display surface include resistive touch detection, surface acoustic wave touch detection, capacitive touch detection, infrared grid touch detection, and infrared waveguide touch detection. The illustrated system 100 includes an infrared waveguide touch detection solution in which infrared light is transmitted into a light-guiding screen layer 102 (e.g., comprising glass or a polymer) via the plurality of pulse transmitters. While infrared waveguide solutions can utilize several different approaches, including camera-based and projector-based light guides, system 100 utilizes frustrated total internal reflection (FTIR). FTIR exploits light propagation within a medium (i.e., reflection inside the medium) governed by the critical angle of reflection and the refractive indices of the materials. Applications such as fiber optics use the concept of total internal reflection (TIR) to deliver light substantially without loss. If additional material is introduced at the surface, it can suppress the internal reflection, allowing light to escape at that point of contact. Using FTIR, a multi-touch detection surface can be constructed in which an object touching a display screen carrying internally reflected light, such as a user's finger, generates a touch event. These touch events can be interpreted as user input. In this embodiment, the FTIR implementation of system 100 is shown to include a plurality of infrared (IR) transmitters and receivers surrounding screen layer 102. The pulse transmitters are shown arranged to emit beams of light into at least one edge of the display layer 102. Thus, the display screen layer 102 functions as a light-transmitting element such that a user touching the screen produces attenuation of the propagating light.
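Because FTIR hinges on the critical angle set by the refractive indices, a short worked example may help; the indices below are typical textbook values for an acrylic light guide and are illustrative assumptions only, not values from the embodiment:

import math

n_guide = 1.49  # acrylic waveguide (illustrative value)
n_air = 1.00

# TIR holds for incidence angles above theta_c = arcsin(n2 / n1).
theta_c = math.degrees(math.asin(n_air / n_guide))
print(f"critical angle against air: {theta_c:.1f} degrees")  # ~42.2

# A fingertip (refractive index ~1.4) pressed against the surface raises the
# required angle, frustrating TIR and letting light escape at the touch point.
theta_touch = math.degrees(math.asin(1.4 / n_guide))
print(f"critical angle against skin: {theta_touch:.1f} degrees")  # ~70.0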
A pulse receiver, such as pulse receiver 106, receives light from a pulse transmitter, such as pulse transmitter 104, and the location of the user's touch input on screen 102 is determined based on properties of the received signal (e.g., the amplitude of the signal). It is noted that the number of pulse transmitters/receivers utilized by other embodiments may differ from the embodiment illustrated herein; for example, in some embodiments, the pulse transmitters/receivers may completely surround the display layer. For example, a large number of IR transmitters and receivers can be placed around the periphery of the display such that each IR transmitter sequentially transmits pulses received by all of the receivers. This process can continue in sequence to determine the location of a touch on the screen. FIG. 1B further illustrates system 100 as including a modulated pulse generator 114 and a pulse demodulator 116. In embodiments of the present disclosure, a pulse generator, such as pulse generator 114, drives a transmitter, such as pulse transmitter 104, to emit a train of data-modulated IR light into display layer 102. As described in further detail below, this light is modulated according to bit data associated with one or more peripheral components of the mobile computing device and is transmitted according to a pulse transfer trigger. The modulated IR light then propagates through the screen and is collected by one or more pulse receivers, such as pulse receiver 106. In some embodiments, in addition to the addressable receivers that collect data to support touch functionality, an IR receiver can be designated as a master receiver having a connection to the controller/processor involved in the overall operation of the system. In this example, pulse demodulator 116 demodulates the received light pulses to recover the bit data modulated onto the pulses. Thus, the amplitude of the pulses is used by the user touch input detection process, and the recovered bit data is used by other components of the mobile computing device (e.g., for storage or processing). Embodiments may achieve a reduction in the area of the bezel around the touch screen display surface by utilizing these pulse transmitters/receivers, normally used for user touch input detection, to also exchange data via modulated light. Instead of routing signals around the display on copper traces of printed circuit boards or flex circuits, embodiments of the present disclosure route signals through the touch screen display layer over the modulated IR light already utilized for user touch input detection. Thus, embodiments allow for thinner screen bezels and lower cost by removing the flexible circuitry otherwise necessary to carry the data signals. FIG. 2 is a flow diagram of a process for transmitting data via a touch screen display of a computing device in accordance with an embodiment of the present disclosure. The flowcharts described herein provide examples of sequences of various process actions. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated implementations should be understood only as examples, and the illustrated processes can be performed in a different order, and some actions may be performed in parallel. Additionally, one or more actions can be omitted in various embodiments of the present disclosure; thus, not all actions are required in every implementation.
Other process flows are possible. Process 200 includes operation 202 for receiving data from, or data destined for, a peripheral component of a mobile computing device (other embodiments may be utilized in a desktop computing device). Some peripheral components, such as sensors, capture data to be processed and/or stored; other peripheral components, such as audio speakers and haptic feedback components, receive data for output. Further, some components, such as antenna circuits, both capture and output data. A trigger is received to transmit a photon pulse associated with the touch screen user input detection process, 204. The processes described above can be configured to transmit photon pulses periodically in accordance with a predetermined frequency. This pulse is modulated, 206, based on the data received from, or destined for, the peripheral component. For example, the pulse can be modulated according to an encoded version of the peripheral component data. Still further, some embodiments may utilize wavelength division multiplexing (WDM) to transfer data from/for multiple peripheral components (i.e., different wavelength ranges are used for different peripheral components). The modulated pulse is received at a pulse detector, 208. Two separate operations are performed on the received modulated pulse: the amplitude of the pulse is determined, 210, for the touch screen user input detection process, and the pulse is demodulated to obtain the peripheral component data, 212. The process described above allows for the removal of copper traces for peripheral components that transmit/receive data to/from other components disposed on opposite ends of the touch screen display, thereby potentially reducing the size of the housing components of the mobile computing device. FIG. 3 is an illustration of a mobile computing device utilizing data transfer components of a touch screen display in accordance with an embodiment of the present disclosure. Mobile computing device 300 is a laptop computer shown to include a clamshell chassis having an upper portion 320 and a lower portion 310 coupled together via hinge 302. The upper portion 320 of the mobile computing device 300 is shown to include a touch screen display 322 placed in a housing that includes an upper bezel 330 and a lower bezel 332. Camera 324, microphone 326, and ambient light sensor 328 are shown integrated into upper bezel 330. Touch screen 322 is shown displaying touch icon 340. The lower portion 310 is shown to include a keyboard 312 and a mouse/tracking pad 314. Other components of mobile computing device 300 that are not shown include a processor, memory components, a data bus, audio output components (i.e., audio speakers), and so forth. Still further, antenna circuitry for providing wireless connectivity for the device 300 is not shown. Antenna circuitry as referred to herein may describe circuitry for communication protocols such as GSM, CDMA, UMTS, EV-DO, WiMAX, or LTE, for Bluetooth® radio technology, for protocols described in IEEE 802.11 (including any IEEE 802.11 revision), or other possibilities.
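Operation 206 above leaves the coding scheme open; as one possibility, a minimal Manchester-based sketch in Python follows (hypothetical function names and illustrative amplitude levels, not the embodiment's actual circuit behavior), showing how peripheral bits might ride on a pulse without changing its mean amplitude:

def manchester_encode(bits):
    # Each bit becomes a mid-bit transition: 0 maps to low-then-high and
    # 1 maps to high-then-low (the opposite convention is equally valid).
    half_symbols = []
    for bit in bits:
        half_symbols.extend((1, 0) if bit else (0, 1))
    return half_symbols

def modulate_pulse(carrier_amplitude, half_symbols, high=1.0, low=0.6):
    # Because Manchester symbols spend equal time high and low, the average
    # level of the modulated pulse stays constant, preserving the amplitude
    # that the touch detection path relies on.
    return [carrier_amplitude * (high if s else low) for s in half_symbols]

samples = modulate_pulse(1.0, manchester_encode([1, 0, 1, 1]))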
In the examples described below, at least one of these components is included in the lower portion 310; however, in other embodiments, the upper portion 320 can include these components and can further detach from the lower portion 310 to function as a tablet computing device (similar to the form factor of device 400 of FIG. 4, discussed below). Data from the components of the upper portion 320 is transferred to computing components of the lower portion 310 for storage and/or processing. For example, image data from camera 324 and audio data from microphone 326 may be stored in a memory, and data captured from ambient light sensor 328 may be used in a process for adjusting the brightness of touch screen display 322. As discussed above and described in further detail below, instead of routing data around or behind the screen, embodiments of the present disclosure route data through touch screen display 322 over the modulated infrared light already utilized for the user touch input detection process. Thus, the touch screen display 322 can extend to the edges of the upper portion 320 instead of using side borders to hide the wires for transmitting/receiving data. Lines 350 and 352 illustrate where side borders of upper portion 320 would be needed to route signals from components 322, 324, and 326 to the processor/memory components contained in lower portion 310 via wires or other electrical transfer components. FIG. 4 is an illustration of a mobile computing device utilizing data transfer components of a touch screen display in accordance with an embodiment of the present disclosure. Device 400 comprises a smart phone device and includes an upper housing portion 402 (including ambient light sensor 420, speaker 422, and camera 424), a lower housing portion 404 (including user input button 426), and a touch screen interface 410 (shown displaying touch icon 412). Other components of mobile computing device 400 that are not shown include a processor, memory components, a data bus, etc., and each may be included behind touch screen interface 410 or, alternatively, included in lower housing portion 404 to reduce the device thickness. In this example, data from ambient light sensor 420 and camera 424 is to be transferred to computing components near lower housing portion 404 for storage and/or processing, and output audio data is transmitted from lower housing portion 404 to speaker 422. As discussed above and described in further detail below, instead of routing data around or behind touch screen interface 410, embodiments of the present disclosure route data through the touch screen over the modulated infrared light already utilized for the user touch input detection process. Thus, the touch screen interface 410 can extend to the edges of the device 400 instead of using a bezel to hide wires for transmitting/receiving data. Lines 430 and 432 illustrate where the side borders of device 400 would be if data were routed to/from the components of upper housing portion 402 via wires or other electrical transfer components. FIG. 5A is an illustration of a pulse modulation circuit in accordance with an embodiment of the present disclosure. In this embodiment, circuit 500 is shown to include a pulse generator 502, an encoder 504 for encoding bit data 506, an LED driver 508, and a modulator 514, which is shown receiving pulse 510 and generating modulated pulse 512. Bit data 506 is associated with one or more peripheral components of the computing device.
This data may be buffered until a data threshold is exceeded, at which point the data transfer process is initiated. This process includes encoding bit data 506 via encoder 504 to generate an encoded data signal used to modulate the pulses from pulse generator 502. For example, with Manchester encoding, the bits of the data are indicated by a transition from a high state to a low state (or vice versa). Other embodiments may use other coding techniques, such as Miller coding. Modulator 514 modulates the carrier signal 510 generated by pulse generator 502 based on the encoded signal produced by encoder 504. FIG. 5B is an illustration of modulated pulses in accordance with an embodiment of the present disclosure. Graph 520 of FIG. 5B illustrates pulse 510, and graph 530 illustrates modulated pulse 512, where the amplitude and period of pulse 512 are unchanged, but the pulse is now modulated to include the (encoded) bit data 506. In these graphs, Tp is the time duration of pulse 510 and TB is the bit time duration of modulated pulse 512. FIG. 6 is an illustration of a pulse demodulation circuit in accordance with an embodiment of the present disclosure. Circuit 600 is shown to include a transimpedance amplifier (TIA) 602, an amplitude integrator 604, an analog-to-digital converter (ADC) 606, a high-pass filter 608, a bit integrator 610, bit decision logic 612, decoder symbols 614, and a decoder 616. In this example, light is received from a photodiode at TIA 602 (i.e., a current-to-voltage converter) to obtain the transmitted modulated pulse (i.e., pulse 512 of FIG. 5B). At this stage, the pulse is split onto two different paths. The amplitude integrator 604 is configured to integrate the pulse over Tp, thereby substantially removing the modulation from the pulse so that the signal received by the ADC 606 more closely resembles the non-modulated pulse (i.e., pulse 510 of FIG. 5B); the digital output of the ADC 606 can be used for the user touch input detection process. High-pass filter 608 receives the transmitted modulated pulse to obtain the high-frequency modulation of the signal. The decoder 616 decodes the signal based on boundaries defined by the decoder symbols 614. The bit integrator 610 integrates the transitions of the modulated signal over the bit time period TB of FIG. 5B, and the bit decision logic 612 decides whether a transition represents a 0 or a 1 (e.g., for Manchester encoding, a low-to-high transition represents 0 and a high-to-low transition represents 1). FIG. 7 is a block diagram of computing components of a computing device in accordance with an embodiment of the present disclosure. It will be understood that certain components are shown generally, and not all components of such a device are shown in device 700. Further, it will be understood that any of the illustrated components can be discrete components or can be components included on a system-on-a-chip (SoC) integrated circuit (IC), and can be coupled through any direct or indirect means. Device 700 can include any computing device (mobile or otherwise) that utilizes the data transfer and touch screen user input detection processes discussed above. Device 700 includes one or more processor cores 710 that perform the primary processing operations of device 700.
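Before turning to the computing components of device 700, the two-path receive processing of FIG. 6 can be approximated in a few lines, continuing the earlier Manchester sketch; the sample handling and helper names here are hypothetical and purely illustrative, not the circuit's actual implementation:

def amplitude_path(samples):
    # Analogous to amplitude integrator 604: averaging over the pulse
    # duration Tp cancels the modulation, leaving the level used by the
    # touch input detection process.
    return sum(samples) / len(samples)

def data_path(samples, samples_per_bit=2):
    # Analogous to bit integrator 610 and bit decision logic 612: compare
    # the two half-bit integrals within each bit period TB; a low-to-high
    # transition decodes as 0 and a high-to-low transition as 1.
    bits = []
    half = samples_per_bit // 2
    for i in range(0, len(samples), samples_per_bit):
        first = sum(samples[i : i + half])
        second = sum(samples[i + half : i + samples_per_bit])
        bits.append(0 if second > first else 1)
    return bits

Running amplitude_path over the samples produced in the earlier sketch returns the constant mean level regardless of the data, while data_path recovers the original bits [1, 0, 1, 1].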
Each processor core 710 can be a SoC component or can be included in one or more physical devices, such as single- or multi-core microprocessors, application processors, microcontrollers, programmable logic devices, or other processing components. The processing operations performed by the one or more processor cores 710 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting device 700 to another device. The processing operations can also include operations related to audio I/O and/or display I/O. In one embodiment, device 700 includes audio subsystem 720, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, as well as microphone input, via any of the audio jacks described above. Devices for such functions can be integrated into device 700 or connected to device 700. In one embodiment, a user interacts with device 700 by providing audio commands that are received and processed by the one or more processor cores 710. I/O controller 740 represents hardware devices and software components related to interaction with a user. I/O controller 740 can operate to manage hardware that is part of audio subsystem 720 and/or display subsystem 730. Additionally, I/O controller 740 illustrates a connection point for additional devices that connect to device 700, through which a user might interact with the system. For example, devices that can be attached to device 700 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications, such as card readers or other devices. As mentioned above, I/O controller 740 can interact with audio subsystem 720 and/or display subsystem 730. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of device 700. Additionally, audio output can be provided instead of, or in addition to, display output. Display subsystem 730 includes a touch screen, and thus the display device also acts as an input device, which can be at least partially managed by I/O controller 740. There can also be additional buttons or switches on device 700 to provide I/O functions managed by I/O controller 740. The I/O controller can further include logic for interfacing with, or be interfaced with, the touch screen user input detection processes discussed above. In one embodiment, I/O controller 740 manages devices such as accelerometers, cameras, light sensors, or other environmental sensors, or other hardware that can be included in device 700. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting the display for brightness detection, applying a flash for a camera, or other features). In one embodiment, device 700 includes power management 750 that manages battery power usage, charging of the battery, and features related to power saving operation. Memory subsystem 760 includes memory devices for storing information in device 700.
The memory can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory 760 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 700. Memory 760 further stores firmware images related to boot path operations, and thus may include a DRAM device to store the firmware images described above. Connectivity 770 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable device 700 to communicate with external devices. The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices. Connectivity 770 can include multiple different types of connectivity. To generalize, device 700 is illustrated with cellular connectivity 772 and wireless connectivity 774. Cellular connectivity 772 refers generally to cellular network connectivity provided by wireless carriers, such as connectivity provided via GSM (Global System for Mobile Communications) or variations or derivatives, CDMA (Code Division Multiple Access) or variations or derivatives, TDM (Time Division Multiplexing) or variations or derivatives, or other cellular service standards. Wireless connectivity 774 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as Wi-Fi), and/or wide area networks (such as WiMax), or other wireless communication. Peripheral connections 780 include hardware interfaces and connectors for implementing the non-flash firmware storage support described above, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that device 700 could both be a peripheral device to other computing devices ("to" 782), as well as have peripheral devices connected to it ("from" 784). Device 700 may have a "docking" connector to connect to other computing devices, for example for purposes of managing content on device 700 (e.g., downloading and/or uploading, changing, synchronizing). Additionally, a docking connector can allow device 700 to connect to certain peripherals that allow device 700 to control content output, for example, to audiovisual or other systems. In addition to a proprietary docking connector or other proprietary connection hardware, device 700 can make peripheral connections 780 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), a DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other types. The various components described herein as processes, servers, or tools may be means for performing the functions described. Each component described herein includes software or hardware, or a combination of these. Each and all components can be implemented as logic such as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, ASICs, DSPs, etc.), embedded controllers, hardwired circuitry, hardware logic, and so forth.
Software content (e.g., data, instructions, configuration) may be provided via an article of manufacture including a non-transitory, tangible computer- or machine-readable storage medium, which provides content that represents instructions that can be executed. The content may result in a computer performing various functions/operations described herein. A computer-readable non-transitory storage medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a computer (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). The content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). A computer-readable non-transitory storage medium may also include a storage device or database from which content can be downloaded. A computer-readable medium may also include a device or product having content stored thereon at the time of sale or delivery. Thus, delivering a device with stored content, or offering content for download over a communication medium, may be understood as providing an article of manufacture with such content as described herein. Embodiments of the present disclosure describe a mobile computing device including: a touch screen display surface included in a housing; a peripheral component integrated into the housing of the touch screen display surface; a plurality of photon pulse transmitters disposed on at least one edge of the touch screen display surface; one or more pulse detectors disposed on at least one edge of the touch screen display surface and arranged to receive pulses from the plurality of photon pulse transmitters to detect user touch input on the touch screen display surface; a photonic pulse modulator to modulate a pulse to be transmitted from one of the plurality of photon pulse transmitters based, at least in part, on peripheral component data; and a photonic pulse demodulator to demodulate a modulated pulse received by the one or more pulse detectors to retrieve the peripheral component data from the modulated pulse. In some embodiments, the peripheral component data includes data to be transferred from the peripheral component. In some of these embodiments, the peripheral component includes at least one of an image sensor, an audio sensor, an ambient light sensor, or antenna circuitry. In some embodiments, the peripheral component data includes data to be transferred to the peripheral component. In some of these embodiments, the peripheral component includes at least one of antenna circuitry, an audio output component, or a haptic feedback component. In some embodiments, the pulse transmitted from one of the plurality of photon pulse transmitters is modulated according to at least one of a Miller coding scheme or a Manchester coding scheme.
In some embodiments, the plurality of photon pulse transmitters are to transmit pulses to a single pulse detector in accordance with a time division multiple access (TDMA) protocol. In some embodiments, the mobile computing device further includes a second peripheral component, wherein the photonic pulse modulator is further to modulate a pulse transmitted from one of the plurality of photon pulse transmitters based, at least in part, on data associated with the second peripheral component in accordance with a wavelength division multiplexing (WDM) protocol. In some embodiments, the mobile computing device comprises a handheld mobile computing device. In other embodiments, the mobile computing device comprises a laptop computer and further includes a flip-type chassis comprising a second housing coupled to the housing of the touch screen display surface, wherein the peripheral component data includes data exchanged with a processor included in the second housing. Embodiments of the present disclosure describe a non-transitory computer-readable storage medium containing instructions that, when executed, cause a computer to perform a method, the method comprising: receiving data related to a peripheral component of a mobile computing device; transmitting a photon pulse from a photon pulse transmitter disposed on one side of a touch screen display surface of the mobile computing device to a photon pulse detector disposed on another side of the touch screen display surface, wherein the photon pulse is modulated based, at least in part, on the peripheral component data; determining whether a user touch input occurred on the touch screen display surface based, at least in part, on an amplitude value of the modulated photon pulse; and demodulating the modulated photon pulse to retrieve the peripheral component data. In some embodiments, the peripheral component data includes data to be transferred from the peripheral component. In some of these embodiments, the peripheral component includes at least one of an image sensor, an audio sensor, an ambient light sensor, or antenna circuitry. In some embodiments, the peripheral component data includes data to be transferred to the peripheral component. In some of these embodiments, the peripheral component includes at least one of antenna circuitry, an audio output component, or a haptic feedback component. In some embodiments, the photon pulse is modulated according to at least one of a Miller coding scheme or a Manchester coding scheme. In some embodiments, the mobile computing device includes a plurality of photon pulse transmitters to transmit pulses to a single pulse detector in accordance with a time division multiple access (TDMA) protocol. In some embodiments, the mobile computing device further includes a second peripheral component, wherein the photon pulse is further modulated based, at least in part, on data from the second peripheral component in accordance with a wavelength division multiplexing (WDM) protocol. |
Embodiments of the present invention provide a method and apparatus for conserving power in an electronic device. In particular, embodiments of the present invention dynamically place the memory in self-refresh and chipset clock circuits in power down mode while keeping the isochronous streams (such as display) updated and servicing bus master cycles in a power savings mode. |
CLAIMS What is claimed is: 1. A method for conserving power in an electronic device, comprising: automatically transitioning the electronic device into a power reduced mode of operation in response to no outstanding memory requests. 2. The method claimed in claim 1, further comprising: automatically transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met. 3. The method claimed in claim 2, wherein automatically transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met further comprises: placing the memory in self-refresh in response to a deterministic set of configurations being met. 4. The method claimed in claim 3, wherein automatically transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met further comprises: placing clocks, control signals, clock trees, DLLs, or other unnecessary logic/circuits in power down mode in response to a deterministic set of configurations being met. 5. The method claimed in claim 4, wherein automatically transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met further comprises: keeping the isochronous data updated and servicing bus master data in the reduced power mode. 6. The method claimed in claim 5, wherein the power savings mode comprises a C2 power savings mode. 7. The method claimed in claim 5, wherein placing the memory in self-refresh in response to a deterministic set of configurations being met further comprises: determining whether the combination of isochronous and bus master data exceeds a predefined buffering threshold; and placing the memory in self-refresh in response to the combination not exceeding the predefined threshold. 8. The method claimed in claim 7, wherein the predefined threshold covers the maximum exit latency for memory to come out of self-refresh. 9. The method claimed in claim 8, wherein isochronous data includes display data. 10. The method claimed in claim 8, wherein determining whether the combination of isochronous and bus master data exceeds a predefined buffering threshold further comprises: accessing parameters of isochronous and bus master data; and using the parameters to precompute whether powerdown mode exit latencies fall within the predefined threshold. 11. The method claimed in claim 10, wherein accessing parameters of isochronous and bus master data further comprises: using the BIOS/driver to access isochronous and bus master data parameters. 12. The method claimed in claim 11, further comprising: representing the computation by coding of memory controller configuration registers or state machines controlling powerdown modes such as memory self refresh, DLL powerdown, or clock disabling. 13. The method claimed in claim 12, further comprising: computing on-the-fly whether powerdown exit latencies fall within the predefined threshold. 14. The method claimed in claim 8, wherein determining whether the combination of isochronous and bus master data exceeds a predefined threshold further comprises: computing the maximum powerdown exit time in accordance with: maximum powerdown exit time = self refresh exit time + exit time implementation overhead/inefficiencies + applicable fraction of DLL powerdown exit time. 15.
The method claimed in claim 14, wherein display latency tolerance is determined in accordance with FIFO size and display mode requirements. 16. The method claimed in claim 15, wherein the display latency tolerance is greater than the maximum powerdown exit time. 17. The method claimed in claim 16, wherein the isochronous latency tolerance is determined by FIFO size and minimum periodicity interval requirements. 18. The method claimed in claim 17, wherein the isochronous latency tolerance is greater than the maximum powerdown exit time. 19. A system, comprising: a memory; and power management logic to automatically transition an electronic device into a power reduced mode of operation in response to no outstanding memory requests. 20. The system claimed in claim 19, wherein the power management logic automatically transitions the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met. 21. The system claimed in claim 20, wherein the power management logic places the memory in self-refresh in response to a deterministic set of configurations being met. 22. The system claimed in claim 21, wherein the power management logic places clocks or DLLs in power down mode in response to a deterministic set of configurations being met. 23. The system claimed in claim 22, wherein the power management logic keeps isochronous data updated and services bus master data in the reduced power mode. 24. The system claimed in claim 23, wherein the power savings mode comprises a C2 power savings mode. 25. The system claimed in claim 23, wherein the power management logic determines whether the combination of isochronous and bus master data exceeds a predefined buffering threshold, and places the memory in self-refresh in response to the combination not exceeding the predefined threshold. 26. The system claimed in claim 25, wherein the predefined threshold covers the maximum exit latency for memory to come out of self-refresh. 27. The system claimed in claim 26, wherein isochronous data includes display data. 28. The system claimed in claim 26, wherein the power management logic accesses parameters of isochronous and bus master data, and uses the parameters to precompute whether powerdown mode exit latencies fall within the predefined threshold. 29. The system claimed in claim 28, wherein the power management logic uses a BIOS or driver to access isochronous and bus master data parameters. 30. The system claimed in claim 25, wherein the power management logic computes on-the-fly whether powerdown exit latencies fall within the predefined threshold. 31. A machine-accessible medium including instructions that, when executed, cause a machine to: transition an electronic device into a power reduced mode of operation in response to a lack of memory requests. 32. The machine-accessible medium claimed in claim 31, further comprising: transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met. 33. The machine-accessible medium claimed in claim 31, wherein transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met further comprises: placing the memory in self-refresh in response to a deterministic set of configurations being met. 34.
The machine-accessible medium claimed in claim 31, wherein transitioning the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met further comprises: placing clocks, control signals, clock trees, DLLs, or other unnecessary logic/circuits in power down mode in response to a deterministic set of configurations being met. 35. A system, comprising: a memory manager to automatically transition an electronic device into a power reduced mode of operation in response to no outstanding memory requests. 36. The system claimed in claim 35, wherein the memory manager transitions the electronic device into a power reduced mode of operation in response to a deterministic set of configurations being met. 37. The system claimed in claim 35, wherein the power management logic places clocks or DLLs in power down mode in response to a deterministic set of configurations being met. 38. The system claimed in claim 35, wherein the power management logic keeps isochronous data updated and services bus master data in the reduced power mode. |
METHOD AND APPARATUS FOR DYNAMIC DLL POWERDOWN AND MEMORY SELF-REFRESH BACKGROUND [0001] Computing devices, particularly portable devices, are frequently limited by the amount of time that they can run on battery power without reconnection to an AC power supply. Thus, there is a continuous effort to reduce the power consumption of various components of computers, including the central processing unit. Keeping electronic devices such as a central processing unit, a memory controller or a memory in their lowest possible power state provides a number of benefits. For example, it allows battery-operated machines to operate for longer periods of time between recharging. A reduction in power consumption also reduces thermal dissipation by the central processing unit. Reduced thermal dissipation allows the central processing unit to run at full speed for longer periods of time, while remaining within its thermal dissipation specifications. Reduced thermal dissipation also reduces the need for fans and other components used to prevent heat build-up in a computer. [0002] A standard specification used in developing power management systems is the advanced configuration and power interface (ACPI) specification (for example, rev. 2.0 dated July 27, 2000; see also ACPI Component Architecture Programmer Reference, rev. 1.05 dated February 27, 2001 available from Intel Corporation of Santa Clara, California). One goal of the ACPI is to enhance power management functionality and robustness, as well as facilitating industry-wide implementation of common power management features. [0003] The ACPI defines a number of processor power states that are processor power consumption and thermal management states within a global working state. These processor states include a (i) C0 power state, (ii) C1 power state, (iii) C2 power state, and (iv) C3 power state. In the C0 power state, the processor executes instructions and is at full power. In the C1 and C2 power states, the processor is in a non-executing power state. However, the C2 power state uses less power than the C1 state. In the C1 and C2 power states, the processor still allows the bus to snoop the processor cache memory and thereby maintain cache coherency. The C3 power state offers improved power savings over the C1 and C2 power states, but at the cost of higher power down exit latency to memory. [0004] In conventional systems, the power management logic causes the CPU to transition from a C2 power state back to a high-powered C0 power state under certain circumstances. Keeping the electronic device in a lower power state than could otherwise be achieved, and reducing the number of transitions between power states, improves system performance by reducing latencies caused by switching between designated power states, as well as keeping the overall power consumption lower. BRIEF DESCRIPTION OF THE DRAWINGS [0005] FIG. 1 illustrates a diagram of an embodiment of transitions between processor power states in the ACPI specification. [0006] FIG. 2 illustrates a flow diagram of an embodiment of a routine for placing the memory in self-refresh and memory delay locked loops (DLLs) in power down mode while keeping the display updated and maintaining use of bus masters during the C2 power state for an integrated graphics configuration. [0007] FIG.
3 is a diagram of an embodiment of an exemplary integrated graphics configuration for placing the memory in self-refresh and DLL in power down mode while maintaining use of bus masters and keeping the display updated during the C2 power state.[0008] FIGS. 4(a) and (b) illustrate flow diagrams of embodiments of routines for placing the memory in self-refresh and DLLs in power down mode while maintaining use of bus masters during the C2 power state for a discrete configuration.DETAILED DESCRIPTION[0009] Embodiments of the present invention provide a method and apparatus for conserving power in an electronic device. In particular, embodiments of the present invention dynamically place the memory in self-refresh and chipset clock circuits in power down mode while keeping the display updated and servicing bus master cycles in a power savings mode, such as C2. Maintaining the processor in a power savings mode, such as C2, saves power and reduces the power difference between integrated and non-integrated graphics chipset platforms even when snoopable bus mastering cycles are occurring (unlike in the C3 state, for example).In the detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be understood by those skilled in the art that the present invention maybe practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have been described in detail so as not to obscure the present invention.[0011] Some portions of the detailed description that follow are presented in terms of algorithms and symbolic representations of operations on data bits or binary signals within a computer. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of steps leading to a desired result. The steps include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the specification, discussions utilizing such terms as "processing" or "computing" or "calculating" or "determining" or the like, refer to the action and processes of a computer or computing system, or similar electronic computing device, that manipulate and transform data represented as physical (electronic) quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.[0012] Embodiments of the present invention may be implemented in hardware or software, or a combination of both. 
However, embodiments of the invention may be implemented as computer programs executing on programmable systems comprising at least one processor, a data storage system (including volatile and non- volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a micro-controller, an application specific integrated circuit (ASIC), or a microprocessor.[0013] The programs may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The programs may also be implemented in assembly or machine language, if desired. In fact, the invention is not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.The programs may be stored on a storage media or device (e.g., hard disk drive, floppy disk drive, read only memory (ROM), CD-ROM device, flash memory device, digital versatile disk (DVD), or other storage device) readable by a general or special purpose programmable processing system, for configuring and operating the processing system when the storage media or device is read by the processing system to perform the procedures described herein. Embodiments of the invention may also be considered to be implemented as a machine-readable storage medium, configured for use with a processing system, where the storage medium so configured causes the processing system to operate in a specific and predefined manner to perform the functions described herein.[0015] FIG. 1 illustrates a diagram of an embodiment 100 of transitions between processor power states in the ACPI specification. All states, the C0 state 102, the Cl state 104, the C2 state 106 and the C3 state 108 are encompassed within a G0 working state 110. A G0 working state is defined by the ACPI specification as a computer state where the system dispatches user mode (application) threads. In the G0 working state, these threads are executed. In this state, devices (peripherals) are dynamically having their power states changed. Within this G0 state 110, a processor transitions between various processor power states including the C0 state 102, the Cl state 104, the C2 state 106, and the C3 state 108. In the C0 state 102, the processor is at full power. In this state, the components of a typical system are powered and the clocks in the system can run at full speed. The Cl state 104 defines a non-executing state in which the processor power state has the lowest latency.[0017] The C2 state 106 is a second non-executing power state which offers improved power savings over the Cl state 104. The C2 state 106 is a common chipset mode while a computer is in a passive state (i.e. operating system idle) and connected to bus masters such as USB devices or audio ports. During the C2 state 106, discrete chipsets access memory primarily to service bus master cycles and integrated graphics chipsets access memory primarily to fetch display refresh data, service bus master cycles or continue graphics rendering. The CPU does not need to access memory. 
The DRAM memory operates in an extended power conservation mode, sometimes referred to as a stand-by mode, or self refresh. A refresh unit recharges electrical cells within DRAM memory in order to maintain data integrity.[0018] The C3 power state 108 offers improved savings over both the Cl state104, and the C2 state 106. While in the C3 state 104, the processor's caches maintain the current information state and snoops are not possible. The processor is brought back out to the CO, Cl or C2 states to handle snoopable traffic.[0019] The transitions between states occur from the C0 state 102 along path 112 to the Cl state 104 and back to the C0 state 102 along return path 114. Transitions also occur from the C0 state 102 to the C2 state 104 along path 116 and return to the C0 state 104 along path 118. Finally, transitions occur from the C0 state 104 along path 120 to the C3 state 116 and return to the C0 state along path 122. CPU inactivity for a sufficient duration will trigger a transition from the C0 state 102 to the C2 state 104 along path 116. A break event such as an interrupt will result in a transition of the system from the C2 state 104 along a path 118 to the C0 state 102.[0020] It should be recognized that although the description of this system will be described according to the ACPI specifications power states of C0, Cl, C2 and C3 for convenience, the invention is not limited by the ACPI specification. In general, for embodiments not following the ACPI specification, the C0 power state is defined for purposes of this invention as a full power state in which the CPU carries on its normal functions. The ACPI C2 power state is defined generally to be an intermediate power state between full power and the C3 power state. With an Intel processor, the C2 power state is equivalent to the STOP GRANT state. In general the C2 power state allows snooping memory accesses and maintaining cache coherency.[0021] FIG. 2 illustrates a flow diagram of an embodiment 200 of a routine for placing the memory in self-refresh and DLLs in power down mode while keeping the display updated and maintaining use of bus masters during the C2 power state for an integrated graphics configuration. Embodiments of the present invention (1) place the memory in self-refresh during idle times, rather than just in precharge power down mode and/or (2) dynamically power down the DDR clocks/DLLs. For purposes of this invention, this power savings state is referred to as "C2 self-refresh" even though more power savings are obtained than just memory going into self-refresh. In particular, since the other bus masters on the platform generally have very large latency tolerance compared to display, display updates can proceed properly as long as the buffering provided for display is sufficient to cover the maximum exit latency for memory to come out of self-refresh. If a non-isochronous bus master has started to do a very long burst to memory when a display request must be served, the completion of the bus master request can be postponed until after the display request has been serviced. As long as any isochronous streams (for example, isochronous audio) that must also get memory access are of sufficiently short burst sizes that they stay within the bounds of the other isochronous streams (for example, display) ability to handle latency, and as long as these streams request memory accesses at a rate lower than that required to exit memory self refresh, then the C2 self-refresh state can be enabled. 
Isochronous streams have the characteristic that their maximum burst sizes and minimum repetition rates are deterministic in the platform, so it is easy to know when the C2 self-refresh state is achievable.[0022] In step 202, the processor is confirmed to be in the C2 power state.[0023] In step 204, lack of memory requests from any source (bus master, display refresh) is confirmed. [0024] In step 206, the memory burst size and display FIFO threshold level are set to predefined levels conducive to the C2 power state. In particular, as shown in FIGS. 3 and 4 and discussed in detail below, the display FIFO has a threshold level that triggers a burst request when it is reached. The FIFO threshold value is set such that the memory bursts that are required for display refresh are large enough, and spaced far enough apart in time, so that substantial power down time in the C2 power state is possible before the DDR DLLs and chipset memory need to be re-enabled. In a typical configuration for an integrated graphics configuration, display logic manages a display FIFO. The threshold value is present in a threshold register. The threshold value is programmable and pre-set depending on the power savings mode. This can save power in limiting the number of memory transfers (each of which uses power) and can create idle periods during static display in which low-power devices can enter a power-savings mode. The request burst size and threshold level control the spacing in time of these requests.[0025] A rendering engine is confirmed or forced to be idle. The chipset is generally in a state that provides opportunities for entering the self-refresh state when graphics rendering is not required or is completed.[0026] In step 208, any or a combination of the following can occur: 1) system memory is placed in self-refresh with clocks and other memory control signals tri-stated for the system memory, 2) memory DLLs not needed during C2 self-refresh state can be placed in power down and/or 3) any other functional block and clock trees that are not needed during C2 self-refresh state can be placed in power down. The decision about which functions can be powered down is dependent on decision logic including comparing impact to powerdown exit latency of the powerdown features versus time available. The time available depends on the maximum latency tolerated by the display and the isochronous stream periodicity and burst size requirements.[0027] Memory DLLs may be placed in power down mode. In particular, integrated circuits such as DDR DRAMs often generate a plurality of synchronized DLL outputs (phases) and utilize a plurality of operation modes, such that the output signals produced by a circuit such as a DLL are selectively applied to circuits in the device to reduce unnecessary power consumption. In a typical implementation, the power management unit controls a clock generator that clocks other chips in the system, such as the processor, memory controller and memory. Integrated circuits, such as DDR DRAMS5 typically include DLLs that provide distributed signals, e.g., clock signals, to multiple circuits. A DLL typically receives a reference clock signal from which it generates an internal clock signal, the phase of which typically depends on the reference clock signal. DLLs are of some complexity and operate at high frequency, hence consume significant power. It may be desirable to operate a large number of circuits in synchronism with such an internal clock signal. 
If these circuits are driven in common, the total output load on the DLL can be very large, causing the DLL to consume a large amount of power. Thus, it is advantageous to power down the DLLs.[0028] In step 210, until a bus master request and/or display refresh is confirmed, the self-refresh and dynamic DLL power down remains intact.[0029] In step 212, in response to confirmation that a bus master and/or display refresh request has been executed, the system memory clock is enabled and the system memory placed in an idle mode.[0030] In step 214, the DLLs are powered up. The chipset DLL associated with the memory being used to update display refresh will optionally be kept enabled during the C2 state.[0031] In step 216, the system waits until the DLLs and system memory are both powered up.[0032] In step 218, the next memory burst is executed and the routine returns to step 204. The processor remains in the C2 power state as long as there is not a break event (for example, an interrupt).[0033] In typical implementations, the processor clock is restarted or signal to the processor de-asserted to accomplish the transition. The memory burst size and watermark levels are then set in accordance with the C0 power state requirements. During operation in the full power state, such as C0, the memory bursts are generally smaller and spaced much closer in time, in accordance with the C0 power state. The C0 state imposes a display FIFO size that is large enough to encompass the new C2 burst size and threshold level requirements of this invention. [0034] The above-described method of handling bus requests while the processor is in a low power state may be accomplished by a variety of different apparatus as described in detail below.[0035] For example, FIG. 3 is a diagram of an embodiment of an integrated graphics configuration for placing the memory in self-refresh and DLL in power down mode while maintaining use of bus masters and keeping the display updated during the C2 power state, as illustrated in FIG. 2. The computer system 300 includes processor 302, graphics and memory controller 304 including graphics engine 306, memory 308, display FIFO 310, display pipeline 312 and display device 314. Processor 302 processes data signals and may be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a process implementing a combination of instruction sets, or other processor device, such as a digital signal processor, for example. Processor 302 may be coupled to common bus 312 that transmits data signals between processor 302 and other components in the system 300.[0036] Processor 302 issues signals over common bus 312 for communicating with memory 308 or graphics and memory controller 304 in order to manipulate data as described herein. Processor 302 issues such signals in response to software instructions that it obtains from memory 308. Memory 308 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or other memory device. Memory 308 may store instructions and/or data represented by data signals that may be executed by processor 302, graphics engine 306 or some other device. The instructions and/or data may comprise code for performing any and/or all of the techniques of the present invention. Memory 308 may also contain software and/or data. 
An optional cache memory may be used to speed up memory accesses by the graphics engine 306 by taking advantage of its locality of access. In some embodiments, graphics engine 306 can offload from processor 302 many of the memory-intensive tasks required for rendering an image. Graphics engine 306 processes data signals and may be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a process implementing a combination of instruction sets, or other processor device, such as a digital signal processor, for example. Graphics engine 306 may be coupled to common bus 312 that transmits data signals between graphics engine 306 and other components in the system 300, including render cache 310 and display device 314. Graphics engine 306 includes rendering hardware that among other things writes specific attributes (e.g. colors) to specific pixels of display 314 and draw complicated primitives on display device 314. Graphics and memory controller 304 communicates with display device 314 for displaying images rendered or otherwise processed by a graphics controller 304 for displaying images rendered or otherwise processed to a user. Display device 314 may comprise a computer monitor, television set, flat panel display or other suitable display device.[0037] Memory 308 stores a host operating system that may include one or more rendering programs to build the images of graphics primitives for display. System 300 includes graphics engine 306, such as a graphics accelerator that uses customized hardware logic device or a co-processor to improve the performance of rendering at least some portion of the graphics primitives otherwise handled by host rendering programs. The host operating system program and its host graphics application program interface (API) control the graphics engine 306 through a driver program.[0038] FIFO 310 receives display data from graphics and memory controller 304 through data bus 318 and outputs display data to display pipeline 312 through data bus 320. Graphics and memory controller 304 decides which one of the devices should be granted access to memory 308. A part of the graphics engine controls block transfer of images to, from or within the memory 308. A memory address generator 322 is connected to graphics and memory controller 304 and display FIFO 310. The memory address generator 322 generates memory addresses to the graphics and memory controller 304. Graphics and memory controller 304 controls memory address generator 322 and display pipeline 312. The graphics and memory controller 304 instructs the memory address generator 322 when to start loading the FIFO 310. Display FIFO 310 is used for receiving and storing display data for the display device 314.[0039] When the FIFO level is greater than the threshold value, a memory burst request for a non-display stream can be generated without harming display. Based upon the comparison of the FIFO data level against the threshold values, a control circuit issues a request to graphics and memory controller 304 for memory access so that data can be loaded into FIFO 310, as illustrated by the flowchart in FIG. 1.FIGS. 4(a) and (b) illustrate flow diagrams of embodiments of routines for placing memory in self-refresh and DLLs in power down mode while maintaining use of bus masters during the C2 power state for a discrete configuration. 
A discrete chipset configuration has no graphics, and can put memory in self refresh as long as the isochronous constraints (i.e, isochronous periodicity must be greater than the powerdown exit latency) are met. A discrete graphics controller has a display stream to maintain. But a discrete graphics controller has no knowledge of C2 state.[0041] Referring to FIG. 4(a), in one embodiment 400, the discrete graphics controller enters its local memory related powerdown modes such as self refresh state (for reference purposes, called the graphics c2 power state) (step 404) whenever there are no outstanding requests to local memory (step 402).[0042] Referring to FIG. 4(b), in another embodiment 406, a discrete graphics controller computes the demand based on bandwidth threshold and/or duration of local memory request idleness on its local memory (step 408). In response to the demand being sufficiently low, it enters its local memory into self-refresh (step 410).[0043] Having described the invention in accordance with the requirements of the patent statutes, those skilled in the art will understand how to make changes and modifications to the present invention to meet their specific requirements or conditions. Such changes and modifications may be made without departing from the scope and spirit of the invention as set forth in the following claims. |
A processor is described that includes a quick signal path from an input of the processor to logic circuitry within the processor. The input is to receive a fast throttle down signal. The logic circuitry is to throttle down a rate at which the processor issues instructions for execution in response to the fast throttle down signal. The quick signal path is to impose practicably minimal propagationdelay of the fast throttle down signal within the processor. |
1.A processor, the processor includes:The path from the sensing circuit to the circuit of the processor,The sensing circuit is coupled between:The output of the power supply unit for supplying power to the processor; andAn input of a voltage regulator for regulating the voltage of the power supplied by the power supply unit to the processor; andThe circuit of the processor is configured to receive a throttling value in response to the sensing circuit detecting that the power draw of the processor exceeds the rated maximum power of the power supply unit of the processor, and in response to The throttle value throttles the rate at which the processor executes instructions to reduce the power draw of the processor to within a predetermined time window during which the power supply unit supplies power exceeding the rated maximum power Below the rated maximum power.2.The processor of claim 1, wherein the path includes a controlled impedance transmission line.3.The processor of claim 1, wherein the path does not include a circuit that performs processing on the throttle value.4.The processor of claim 1, further comprising a register for storing a value indicating a rate at which instructions are to be executed in response to the throttle value.5.The processor of claim 1, wherein the circuit is configured to throttle the rate at which the processor issues instructions for execution, so as to throttle the rate at which the processor executes instructions.6.The processor of claim 1, wherein the processor includes a register for specifying a time period during which the processor executes instructions at a throttled rate.7.The processor of claim 1, wherein the processor includes a register for specifying that the processor will switch to Performance status.8.7. The processor of any one of claims 1-7, wherein the predetermined time window is less than 100 microseconds.9.A computing system including:Power supply unitA processor having a path from an input of the processor to a circuit of the processor, the input for receiving in response to the power draw of the processor exceeding the rated maximum power of the power supply unit A throttling value, where the circuit is used to throttle the rate at which the processor executes instructions in response to the throttling value, so as to be within a predetermined time window when the power supply unit supplies power exceeding the rated maximum power Reducing the power draw of the processor below the rated maximum power; andA sensing circuit having an output coupled to the input, the sensing circuit coupled to a circuit path for supplying power to the processor, wherein the sensing circuit Used to detect power draw exceeding the rated maximum power and generate the throttle value in response.10.The computing system of claim 9, wherein the sensing circuit is coupled between the voltage regulator and the power supply unit along the circuit path, and the voltage regulator is coupled along the circuit path Between the sensing circuit and the processor.11.The computing system of claim 9, wherein the sensing circuit is coupled between a voltage regulator and the processor along the circuit path, and the voltage regulator is coupled along the circuit path Between the sensing circuit and the power supply unit.12.9. 
The computing system of claim 9, wherein the circuit is configured to throttle the rate at which the processor issues instructions for execution, so as to throttle the rate at which the processor executes instructions.13.The computing system of claim 9, wherein in response to the throttling of the rate at which instructions are executed to the processor, the power draw of the processor is reduced to make the corresponding power from the power supply unit The draw drops to at least a level that the power supply unit will continue after the throttling.14.9. The computing system of claim 9, further comprising a non-transitory machine-readable medium containing program code that, when processed by the computing system, causes the execution of a method, the method include:Do any of the following:Programming into the processor a value that specifies the rate when the instruction of the processor is throttled;A value specifying how long the processor will issue instructions at the throttle rate is programmed into the processor; andA value specifying the performance state of the processor that the processor will enter after the processor is no longer restricted to issue instructions at the throttle rate is programmed into the processor.15.9. The computing system of claim 9, wherein the throttle rate is a zero command per unit time.16.15. The computing system of any one of claims 9-15, wherein the predetermined time window is less than 100 microseconds.17.A method for power control, the method comprising:Asserting a value in response to detection by a sensing circuit coupled between the power supply unit and the voltage regulator that the power draw of the power supply unit exceeds the rated maximum power of the power supply unit, wherein the power draw is caused by the processor; as well asIn response to the assertion of the value, the instruction execution rate of the processor is throttled to reduce the processor's instruction execution rate within a predetermined time window during which the power supply unit supplies power exceeding the rated maximum power The power draw is reduced below the rated maximum power.18.The method of claim 17, wherein the assertion is performed between the voltage regulator and the power supply unit.19.The method of claim 17, wherein the assertion is performed between the voltage regulator and the processor.20.The method of claim 17, further comprising placing the processor in a predetermined performance state after the throttled instruction execution rate limit has been released, and the predetermined performance state is low. In the highest performance state.21.The method of claim 17, further comprising passing the value through the processor through a controlled impedance transmission line.22.The method of claim 17, further comprising passing the value through the processor without modifying the value in the processor.23.The method of claim 17, wherein the throttling includes throttling the rate at which the processor issues instructions for execution.24.22. The method of any one of claims 17-23, wherein the predetermined time window is less than 100 microseconds. |
A meter with fast power surge detection and command throttling to provide low-cost power supply units
Computing system and processorThis application is that the international filing date is 2013/6/28, the international application number is PCT/US2013/048655, and the application number entering the Chinese national phase is 201380039994.1, titled "With fast power surge detection and instruction throttling to provide low cost A divisional application for the invention patent application of "Computer System and Processor of Power Supply Unit".Technical fieldThe field of the invention relates generally to computing systems, and more specifically to computing systems and processors with fast power surge detection and command throttle down to provide low-cost power supply units.Background techniqueFIG. 1 shows a typical power supply arrangement 100 for the processor 101. As observed in FIG. 1, the power supply unit 105 and the voltage regulator 102 work together to provide the processor 101 with a specific supply voltage with sufficient supply current during the operation of the processor. The voltage regulator 102 provides a specific supply voltage to the processor at the processor supply node 103. Modern processors usually accept a variable range of supply voltage (for example, 0.6 to 0.8 volts (V)) under the control of the processor itself (for simplicity, from the processor to the voltage regulator 102 or other components that affect the supply voltage The connection is not shown).In order to provide a “stable” supply voltage to the processor 101, the voltage regulator 102 receives an input voltage higher than the supply voltage at the supply node 103 at the input 104. For example, a modern voltage regulator that supplies a +1.8V supply voltage can generally receive a voltage anywhere in the range of +4.0V to +36.0V at the input 104. The voltage regulator 102 then "steps down" the voltage received at the input 104 (e.g., +12.0V) to the supply voltage provided at the supply node 103 (e.g., +1.8V). According to one view, the step-down activity of the voltage regulator 102 permits a "stable" supply voltage at the node 103 when faced with a large swing in the current drawn from the processor 101.When the processor draws a large amount of current, the effect can be observed at the input node 104. Specifically, a sudden current draw derived from the power increase required by the processor 101 and the inefficiency of the voltage regulator 102 will be observed at node 104. For example, consider a processor that receives a +1.8V supply voltage at the supply node 103 and typically draws 36 amperes (A) of current. The supply voltage of +1.8V and the current draw of 36A correspond to the power dissipation of 65 watts (W) in the processor ((1.8V)×(36A)=65W). The power supply unit 105 will require not only sufficient power (65W) to be supplied to the processor, but also additional power to compensate for the imperfect efficiency of the voltage regulator 102.For example, if the regulator 102 is currently typically 80% effective, an additional 20% power increase from the power supply unit 105 needs to be provided to the voltage regulator 102. That is, the power supply unit 105 needs to provide ((65W)/.8)=80W to the voltage regulator 102. If the power supply unit 105 at the node 104 supplies an input voltage of +12V to the voltage regulator 102, the current draw of the voltage regulator from the power supply unit will be ((80W)/(12V))=6.67A. 
(Note that the effect of the step-down conversion from +12V to +1.8V by the voltage regulator 102 includes the current draw required by the voltage regulator 102 that is considerably lower than that of the processor 101).If the processor 101 suddenly increases its current draw requirement from 36A to 56A, the power supply unit 105 will observe that the current draw by the voltage regulator 102 increases from 6.67A to 10.42A (assuming that the voltage provided by the power supply unit is fixed at + 12V). That is, the power dissipation in the processor 101 will increase to (56A)×(1.8V)=100W. In order to account for the imperfect efficiency of the voltage regulator 102, the power supply unit will need to supply 100W/0.8=128W to the voltage regulator 102. Supplying 125W at +12V corresponds to 125W/12V = 10.42A.The above analysis confirms that due to the inefficiency of the voltage regulator 102, the power supply unit 105 is usually designed to supply much more power than the processor consumes. Generally, the power supply unit 105 is designed to provide more power, the larger and more expensive the power supply unit becomes.Description of the drawingsA better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:Figure 1 shows the traditional design of the processor power supply:Figure 2 shows the improved power supply system design;FIG. 3 shows a timeline of the operation of the power supply system of FIG. 2;Figures 4a, b show different processor and system configurations that can utilize the concepts of Figures 2 and 3;Figure 5 shows another improved power supply system design;Figure 6 shows a multi-core processor that can be used to build a multi-processor computer.Detailed waysThe problem is that when processor power consumption continues to increase (for example, due to the increased number of transistors, die size, and clock speed), the maximum power of the power supply unit 105 also continues to increase. To make matters worse, in some rare cases (for example, "optimized power virus loops"), the processor's maximum power draw can far exceed its "typical" maximum power draw ( For example, in its highest performance state under a workload of the type of more typical workload that caused the processor to enter its highest performance state). For example, the Pmax power draw of a rated processor can be 100% higher than what the processor normally draws when processing workloads that are typically typical when the processor is operating at its highest performance state.Here, Pmax is closer to the theoretical worst-case measurement of processor power draw, rather than what the processor usually draws when it is required to perform its maximum workload in real-world applications. For example, Pmax may correspond to the power drawn when the processor is required to process a continuous stream of the most energy-consuming instructions at the processor's highest supply voltage and operating frequency. In real-world applications, such a flow of instructions is impossible. However, the system is designed to handle the Pmax event if it occurs. Therefore, the power supply unit 105 tends to be designed to have a size and cost far beyond what would be sufficient under normal operating conditions.Figure 2 relates to an improved design of a smaller and/or cheaper power supply unit 205 that permits even an increased processor Pmax in future processor generations. The design perspective of the solution of FIG. 
2 is that a smaller and/or cheaper power supply unit will not be able to provide sufficient power to the voltage regulator 102 for a sustained period of time at Pmax. However, a smaller and cheaper power supply unit can provide enough power to the voltage regulator 102 for a short and limited period of time (for example, 100 μs) below the Pmax power draw.Therefore, referring to FIG. 2, a fast power sensing circuit 206 is inserted at the power supply unit output 207 to quickly detect surges in the power draw from the voltage regulator 202 that exceed the predefined power level established for the power supply unit 205. In an embodiment, the pre-established power level is below the power level that the power supply unit will be required to provide if the processor will draw power at the Pmax level.The fast power sensing circuit 206 can detect an increase in power draw at the power supply output 207 with specially designed analog and/or digital circuits that measure, for example, the current draw from the voltage regulator 202 or from the voltage regulator 202 and/ Or the current draw of the voltage provided by the power supply unit 205.In response to it quickly detecting that the power draw from the voltage regulator 202 has exceeded the pre-established threshold, the power sensing circuit 206 sends a fast throttling signal 208 to the processor 201. The fast throttle signal 208 is received at the input 211 of the processor 201 and is routed through the "fast" signal path 209 within the processor 201 to the logic circuit 210, which controls the instructions within the processor 201 in some way The rate at which the execution pipeline 213 executes instructions. For example, the logic circuit 210 may control the rate at which instructions of the pipeline 213 are fetched (for example, from the cache, system memory, or both) and/or the rate at which the fetched instructions are fed (issued) to the pipeline 213.The fast signal path 219 is designed such that the fast throttling signal 208 only lasts for a small end-to-end propagation delay from the processor input 211 to the logic circuit 210. The small propagation delay can be affected by, for example, minimizing the number of logic gates or other types of logic processing between the input 211 and the logic circuit 210. The fast signal path 209 may also be implemented at least partially as a transmission line having a controlled (e.g., specially designed) characteristic impedance to minimize signal distortion as the signal propagates through the processor.The transmission line is also driven by a driver circuit having a source impedance that substantially matches the characteristic impedance of the transmission line, and can be terminated with a termination resistance that matches the characteristic impedance of the transmission line. It can be understood that the end-to-end operation of the fast path 209 can be divided into, for example, a series of transmission line segments, each of which has its own driver and termination pair as described above.Basically, in an embodiment, one or more analog transmission lines are affected to transmit, for example, a signal from the input 211 to the logic circuit 210 as quickly as possible. By doing so, it is possible to avoid as much substantial logic processing as possible with logic gates that each have associated undesirable propagation delays. 
As a result, the propagation delay of the fast throttle signal through the processor 201 is reduced so that it reaches the logic circuit 210 as quickly as possible.Based on the above discussion, the emphasis is then on reducing the overall propagation delay through the power sensing circuit 206 and along the fast signal path 209 within the processor 201. By doing so, the logic circuit 210 causes the instruction execution pipeline 213 to reduce the rate at which instructions are executed "almost immediately" after the power draw exceeds the threshold of the power supply unit 205.Here, the less the propagation delay through these circuits 206, 209, the more, in fact the more permitting the power supply unit 205 to be smaller and cheaper. As mentioned above, the power supply unit 205 can usually handle the "power surge" exceeding its rated maximum power for a short time, but not for a continuous period of time. By designing into the system the closed-loop response that rapidly reduces the power draw of the processor 201 within the time window during which the power supply unit 205 can supply power exceeding its pre-established threshold, there may be no need to design to handle extremes in a continuous period of time. Larger and more expensive power supplies for power surges are designed into the system. As a result, the system can “get rid of” the use of the power supply unit 205 that reduces performance.Figure 3 shows the scene in a timeline. In the initial period 350, the power draw from the processor 301 is much lower than its Pmax "worst case" scenario. Therefore, the power draw of the voltage regulator 302 on the power supply unit (which, for example, may be 25% higher than the power draw of the processor 301 due to the inefficiency of the voltage regulator) is also lower than the pre-established threshold level 320 of the power supply voltage.After the time window 350, the processor suddenly approaches the worst-case Pmax power draw state. In response, the voltage regulator power draws a 302 surge. During the surge, the power draw from the voltage regulator exceeds the threshold 320 of the power supply unit 305. Soon thereafter, the power sensing circuit sends out a fast throttling signal 306, which quickly propagates through the processor and reaches the logic that starts throttling the command issuing rate 330. The processor power draw 301 begins to decrease in response 331 and eventually causes the power draw from the voltage regulator 302 to fall below the threshold 320 at 332.Regard any voltage regulator power draw below the threshold 320 as a power draw that the power supply unit can handle for a long period of time, and treat any power draw above the threshold 320 as a power draw that the power supply unit cannot handle for a long period of time but can To deal with short-term power draw, note that the fast action of the power sensing circuit and the low propagation delay path through the processor cause the power draw from the voltage regulator 302 to exceed the threshold level 320 of the power supply unit for only a short period of time 323. Therefore, power supply units that cannot meet continuous power draw (and, for example, can only meet continuous power draw at or below the threshold level 320) when the processor draws at its Pmax level, can still be implemented in the system.In an embodiment, a smaller and/or less expensive power supply unit can provide power above its threshold level 320 for a short amount of time of 100 μs. 
Therefore, in the embodiment, the time period 323 should be less than 100 μs. The high-performance sensing circuit should be able to achieve a sensing time in the range of 1-10μs.In one embodiment, a time budget of 40 μs is specified for the time period 324. Here, it should take 40 μs from the moment when the power draw from the voltage regulator 302 exceeds the threshold 320 to the moment when the power draw from the voltage regulator 302 starts to decrease. According to one scheme, the total time budget is roughly divided between the power sensing circuit and the processor. Therefore, the power sensing circuit is allocated 20 μs to issue a fast throttling signal after the power draw of the voltage regulator exceeds the threshold 320, and the processor is allocated 20 μs to start reducing its power consumption after it receives the fast throttling signal for the first time (Note that Figure 3 is not drawn to scale). This leaves 60 μs for the declining power draw from the voltage regulator 302 to fall below the threshold 320.In an embodiment, the threshold level 320 established for the power supply unit is no greater than what is expected on the power supply unit when the processor is in its highest performance state and is processing a workload of the type of workload that is normally handled by the processor in its highest performance state. The level of power draw is low (or more than a percentage of such power draw, for example 10%). In another or related embodiment, the threshold 320 is not higher than the power that would be drawn if the processor were drawing at its Pmax level. In many embodiments, the threshold level 320 should be significantly lower than this level.In order to assist the system designer, in the embodiment, the processor's published specification expresses a fast throttling signal response that specifies the propagation delay from the moment the processor receives the fast throttling signal to the moment the processor quickly reduces its power draw. In other embodiments, the published specification also specifies the rate or envelope of power draw fading or other similar information. For example, the published specification may specify one or more propagation delays, which specify the time for the processor's power draw to drop from the Pmax level to one or more lower levels after the fast throttling signal is declared at the processor input quantity.With this type of information, the system designer can determine the appropriate voltage regulator response time and power draw and power sensing circuit response time for any particular power supply unit threshold level 320. The power supply unit threshold level 320 basically determines the size and/or cost of power draw. That is, a smaller and/or cheaper power supply unit has a lower threshold level 320 than a larger and/or more expensive power supply unit. Therefore, the more the designer is motivated to integrate a smaller and/or less expensive power supply unit into the system, the designer is accordingly motivated to integrate the faster power sensing circuit 206 and the voltage regulator 202.In other embodiments, the "throttled" instruction issuance rate of the instruction execution pipeline in response to the asserted fast throttling signal is a programmable feature of the processor. This allows the system designer to control the rate at which the processor will reduce its power consumption once the fast throttling signal is declared. 
For example, the processor may include a model-specific register (MSR) space that allows an operating system (OS) instance or a virtual machine monitor (VMM) to set the MSR space to fetch and/or issue instructions per unit of time. The value of the maximum limit on the number of instructions. Note that the restriction on fetching instructions into the pipeline basically restricts the issuance of instructions. Therefore, command issuance will be used to refer to two mechanisms.Once the fast throttling signal has been asserted to be more than the higher limit, the lower limit causes the processor's power consumption to drop more quickly. Permitting the system designer to specify the rate of power reduction of the processor in response to the assertion of the fast throttling signal should provide the system designer with additional flexibility in terms of defining appropriate voltage regulators, power sensing circuits, and power supply units. In an embodiment, the specification of the processor also specifies different power reduction rates for the processor that fetches and/or issues the value of the different programmed reduced instructions.According to another solution, once the fast throttling signal is asserted, the instruction execution pipeline stops issuing instructions, so that the processor effectively stops further processing activities while reducing its power draw at or near the highest rate. The complete rest can be hardwired into the processor through a fixed design, or the user can be able to program the value of fetching/issuing 0 instructions per unit time in, for example, MSR space.Regardless of the rate at which the command is issued and throttled down, there are also different design options on how to exit the throttle mode after entering it. According to the first solution, the throttled mode exists for a fixed period of time and then switches to the established performance state of the processor. In an embodiment, the performance state is not the highest performance state. Entering a performance state below the maximum performance state should force at least one of the supply voltage and/or clock frequency of the processor to be reduced compared to the voltage/frequency that existed before the processor received the fast throttling signal.In another embodiment, the time period the processor spends in throttling mode is programmable. That is, for example, the OS instance or the VMM can enter a value in the MSR space that specifies how long the processor will remain in the throttle mode once the throttle mode is entered. In another or alternative embodiment, the specific performance state that the processor switches over when coming out of the throttling state may also be programmed into the processor in, for example, the MSR space.In yet another embodiment, receiving a fast throttling signal causes an interrupt or other type of warning flag to be issued to the software (such as an OS instance or VMM), so that, for example, a sequence of instructions that cause a power surge can be detected in a lower performance state. Branch out or be processed. Either or both of these reactions can be applied in a software-controlled manner through an appropriate registrar. Here, the processor can be designed to include a logic circuit that issues an interrupt or flag in response to the processor receiving a fast throttling signal.It is believed that the software processes discussed above can be executed by a processor, controller, microcontroller, or similar components. 
Therefore these processes can be implemented with continuous code such as machine executable instructions that cause the machine executing these instructions to perform certain functions. These processes can also be executed by an electronic circuit (or a part thereof) designed to execute the process (as an alternative to executing program code or in combination with executing program code).It is believed that any software process can be described by source-level program codes in various object-oriented or non-object-oriented computer programming languages. Articles of manufacture such as computer readable media can be used to store program codes. Products beyond the program code can be embodied as, but not limited to, one or more memories (for example, one or more flash memory, random access memory (static, dynamic or other)), optical disk, CD-ROM, DVD ROM, EPROM, EEPROM, magnetic or optical card or other type of machine-readable medium suitable for going beyond electronic instructions. The program code can also be downloaded from a remote computer (such as a server) to a requesting computer (such as a client) by means of a data signal embodied in a propagation medium (for example, via a communication link (such as a network connection)).Figure 4a shows a single processor 401 with multiple processing cores 402, where each core has multiple instruction execution pipelines 413. As observed in Figure 4a, a single fast throttling signal is routed to the logic circuit 410, which issues each individual pipeline throttling command in the processor based on a single fast throttling signal input. Here, different paths to different pipelines can be implemented according to the same design principles discussed above with reference to FIG. 2 and the single path 209 observed there. However, additional considerations may be necessary for additional stubs or branches to ensure small propagation delays to all pipelines within the processor. Examples include drivers at the input or and/or drivers for each core. Figure 4b shows multiple processors of the type that are powered by the same power supply unit as observed in Figure 4a.Figure 5 shows that a fast power sensing circuit can be inserted between the voltage regulator and the processor. In this case, the power sensing circuit detects the power draw of the processor directly from the processor instead of detecting the processor's power through the power regulator. The system designer can plan for the inefficiency of the power regulator to correlate what specific directly monitored draw of the processor corresponds to the threshold level of the power supply unit being crossed where the power supply unit can no longer provide continuous power.FIG. 6 shows the architecture of an exemplary multi-core processor 600. As observed in Figure 6, the processor includes: 1) multiple processing cores 601_1 to 601_N; 2) interconnection network 602; 3) final cache system 603; 4) memory controller 604 and I/O hub 605 . Each processing core contains one or more instruction execution pipelines for executing program code instructions. The interconnection network 602 is used to interconnect each of the cores 601_1 to 601_N to each other and other components 603, 604, 605. The last-level cache system 603 serves as a last-level cache in the processor before instructions and/or data are expelled to the system memory 606.The memory controller 604 reads/writes data and instructions from the system memory 606 to the system memory 606. 
The I/O hub 605 manages the communication between the processor and "I/O" devices (eg, non-volatile beyond devices and/or network interfaces). The port 607 is generated from the interconnection network 602 to link multiple processors so that a system with more than N cores can be implemented. The graphics processor 608 performs graphics calculations. The power management circuit 609 manages the performance and power state of the processor as a whole ("package level") and power state aspects such as the performance of individual units within the processor of individual cores. For convenience, other important functional blocks (for example, a phase-locked loop (PLL) circuit) are not shown in FIG. 6.In the foregoing specification, the present invention has been described with reference to specific exemplary embodiments thereof. However, it will be apparent that various modifications and changes can be made thereto without departing from the boundary spirit and scope of the present invention as set forth in the appended claims. Therefore, the description and drawings are considered to be illustrative rather than restrictive. |
An indication to perform an eviction operation on a cache line in a cache can be received. A determination can be made as to whether at least one sector of the cache line is associated with invalid data. In response to determining that at least one sector of the cache line is associated with invalid data, a read operation can be performed to retrieve valid data associated with the at least one sector. The at least one sector of the cache line that is associated with the invalid data can be modified based on the valid data. Furthermore, the eviction operation can be performed on the cache line with the modified at least one sector. |
1.A method including:Receive an instruction to perform an eviction operation on the cache line in the cache;Judging whether at least one of the multiple sectors of the cache line is associated with invalid data;In response to determining that at least one sector of the plurality of sectors of the cache line is associated with invalid data, the processing device performs a read operation to retrieve valid data associated with the at least one sector;Modifying the at least one sector of the cache line associated with the invalid data based on the valid data; andThe eviction operation is performed on the cache line having the modified at least one sector.2.The method according to claim 1, wherein said performing said eviction operation on said cache line having said modified at least one sector corresponds to rewriting said plurality of sectors of said modified cache line The data of the zone is written to one or more memory components.3.The method of claim 1, wherein the read operation is performed at one or more memory components, and wherein the eviction operation stores data of the cache line in the one or more memory components Place.4.The method of claim 1, wherein performing the read operation to retrieve the valid data associated with the at least one sector comprises:Determining that the valid data is stored in the second cache; andThe read operation is performed on the second cache to obtain the valid data for the at least one sector, and wherein the eviction operation stores the data of the cache line in the backing storage One or more memory components connected.5.The method of claim 1, wherein the cache line is selected from a plurality of cache lines in the cache.6.5. The method of claim 5, wherein the cache line is selected based on the cache line being the least recently used cache line among the plurality of cache lines in the cache.7.The method of claim 1, wherein each sector of the plurality of sectors of the cache line corresponds to data associated with one or more read operations or one or more write operations .8.A system including:Memory components; andA processing device, which is operatively coupled with the memory component to perform the following operations:Receive an instruction to perform an eviction operation on the cache line in the cache;Determine that the part of the cache line is associated with invalid data;In response to determining that the portion of the cache line is associated with invalid data, determining whether the second cache contains valid data for the portion of the cache line that is associated with the invalid data;In response to determining that the second cache contains the valid data, retrieving the valid data from the second cache; andBased on the retrieved valid data, the eviction operation is performed on the cache line in the cache.9.The system of claim 8, wherein the cache is a write-read cache, and the second cache is a read-only cache.10.The system of claim 9, wherein the write-read cache stores the data from the host system in response to the host system being associated with the first workload, and wherein the read-only cache is responsive to The host system is associated with a different second workload to store data from the host system.11.The system according to claim 8, wherein the processing device further performs the following operations:In response to determining that the second cache does not contain the valid data, the valid data is retrieved from the backing store.12.The system according to claim 8, wherein the 
processing device further performs the following operations:The invalid data at the portion of the cache line is replaced with the retrieved valid data, and the repetition is performed on the cache line with the retrieved valid data that has replaced the invalid data.出操作。 Out operation.13.The system according to claim 8, wherein the selected cache line is selected from the plurality of cache lines based on the cache line being the least recently used cache line among the plurality of cache lines in the cache The cache line.14.A non-transitory computer-readable medium including instructions that, when executed by a processing device, cause the processing device to perform operations including the following:Receive an instruction to perform an eviction operation on the cache line in the cache;Judging whether at least one of the multiple sectors of the cache line is associated with invalid data;In response to determining that at least one sector of the plurality of sectors of the cache line is associated with invalid data, performing a read operation to retrieve valid data associated with the at least one sector;Modifying the at least one sector of the cache line associated with the invalid data based on the valid data; andThe eviction operation is performed on the cache line having the modified at least one sector.15.The non-transitory computer-readable medium of claim 14, wherein the performing the eviction operation on the cache line having the modified at least one sector corresponds to transferring the modified cache line The data of the plurality of sectors is written to one or more memory components.16.The non-transitory computer-readable medium of claim 14, wherein the read operation is performed at one or more memory components, and wherein the eviction operation stores data of the cache line in the At one or more memory components.17.The non-transitory computer-readable medium of claim 14, wherein performing the read operation to retrieve the valid data associated with the at least one sector, the operation further comprising:Determining that the valid data is stored in the second cache; andThe read operation is performed on the second cache to obtain the valid data for the at least one sector, and wherein the eviction operation stores the data of the cache line in the backing storage One or more memory components connected.18.The non-transitory computer-readable medium of claim 14, wherein the cache line is selected from a plurality of cache lines in the cache.19.The non-transitory computer-readable medium of claim 18, wherein the cache line is selected based on the cache line being the least recently used cache line among the plurality of cache lines in the cache Row.20.The non-transitory computer-readable medium of claim 14, wherein each of the plurality of sectors of the cache line corresponds to one or more read operations or one or more write operations. Enter the data associated with the operation. |
Evict the cache line based on the modification of the cache line's sectorTechnical fieldThe present disclosure generally relates to a memory subsystem, and more specifically, relates to the eviction of cache lines based on the modification of the sectors of the cache line at the memory subsystem.Background techniqueThe memory subsystem may be a storage system, such as a solid state drive (SSD) or a hard disk drive (HDD). The memory subsystem may be a memory module, such as a dual in-line memory module (DIMM), a small form-factor DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory subsystem may include one or more memory components that store data. The memory component may be, for example, a non-volatile memory component and a volatile memory component. Generally speaking, the host system can utilize the memory subsystem to store data at and retrieve data from the memory component.Description of the drawingsThe present disclosure will be understood more fully based on the detailed description given below and the accompanying drawings of various embodiments of the present disclosure.Figure 1 illustrates an example computing environment including a memory subsystem according to some embodiments of the present disclosure.Figure 2 is a flowchart of an example method of performing an eviction operation according to some embodiments.Figure 3 illustrates the replacement of invalid data at a cache line to be evicted according to some embodiments of the present disclosure.4 is a flowchart of an example method of modifying a cache line in a cache based on valid data at another cache according to some embodiments of the present disclosure.Figure 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.Detailed waysAspects of the present disclosure relate to eviction of cache lines based on the modification of the sectors of the cache line at the memory subsystem. The memory subsystem is also referred to as "memory device" in the following. An example of a memory subsystem is a storage device coupled to a central processing unit (CPU) through peripheral interconnects (e.g., input/output bus, storage area network). Examples of storage devices include solid state drives (SSD), flash drives, universal serial bus (USB) flash drives, and hard disk drives (HDD). Another example of a memory subsystem is a memory module coupled to the CPU through a memory bus. Examples of memory modules include dual in-line memory modules (DIMMs), small form-factor DIMMs (SO-DIMMs), non-volatile dual in-line memory modules (NVDIMMs), and the like. The memory subsystem may be a hybrid memory/storage subsystem. Generally speaking, a host system can utilize a memory subsystem that includes one or more memory components. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.Conventional memory subsystems can use cache to improve the performance of the memory subsystem. A cache can be a type of memory in which data can be retrieved in a shorter time than the time to retrieve the data from the backing store of the memory subsystem (e.g., one or more of the memory components). The cache can store data from the backing store that has been recently read or written by the host system. 
For example, the cache can store data that has been recently written to the memory component or data that has been recently read from the memory component.A cache can store multiple cache lines, where each cache line contains a set of data organized into sectors. For example, each sector may include data associated with a read operation or a write operation from the host system. In some embodiments, the data of the read operation or the write operation from the host system may be divided into multiple sectors based on the size of the management unit utilized by the memory component of the memory subsystem. Each memory component of the memory subsystem can be associated with a protocol that specifies the size of the management unit used by the memory component. The host system may initially request to read 512KB of data from the memory component, but due to the protocol of the memory component, the 512KB request can be divided into smaller granular requests (for example, eight 64KB read requests). Conventional memory subsystems can perform smaller granular requests to obtain data from memory components, which can then be stored in cache and/or returned to the host system. Therefore, each cache line of the cache may contain data of multiple sectors corresponding to a read operation or a write operation of the host system.In conventional memory subsystems, cache lines can be evicted from the cache. For example, when a threshold amount of cache lines are stored at the cache, a specific cache line can be removed (ie, evicted) from the cache, and the corresponding data of the sectors in the cache line can be stored in At the backing store (eg, the one or more memory components) of the memory subsystem. Over time, the host system can invalidate the data at a particular sector (e.g., provide new data that will replace the sector or erase the data). However, if the cache line contains invalid data, when the cache line is evicted from the cache, the invalid data can be stored in the backing store along with valid data from the remaining sectors of the cache line. This storage of invalid data as a result of the eviction operation may cause inconsistencies in the data already stored at the memory subsystem.Aspects of the present disclosure solve the above and other shortcomings by evicting the cache line based on the modification of the sectors of the cache line at the memory subsystem. For example, as previously described, a cache line may include multiple data sectors. A judgment can be made as to whether any of the sectors of the cache line contains invalid data. In some embodiments, a content addressable memory (CAM) or other such indication can be used to determine whether a sector of a cache line contains invalid data. If the cache line does contain sectors with invalid data, the memory subsystem can perform a read-modify-write operation to modify the sectors with invalid data with valid data (ie, invalid sectors). For example, valid data can be retrieved or read from the backing storage, and the retrieved valid data can be written to the invalid sector of the cache line. Subsequently, the cache line with the retrieved valid data can then be evicted, and the data at each sector of the cache line can be stored or written in the backing store of the memory subsystem. Therefore, the replacement of invalid data with valid data can be performed by using a read-modify-write operation before evicting the cache line from the cache.In some embodiments, the memory subsystem may contain multiple caches. 
For example, the first cache may be a write-read cache that is used to store read operations and write operations from the host system when the workload of the host system is random The data. For example, when a combination of read operations and write operations is issued by the host system, the workload of the host system can be regarded as random. The second cache may be a read-only cache for storing read operations from the host system when the workload of the host system is based on a sequential number of read operations (ie, no write operations) The data. For example, a read-only cache may contain data from backing storage that is retrieved in response to a read operation from the host system. In some embodiments, the cache line to be evicted from the write-read cache may contain invalid sectors, and the read-only cache may contain corresponding valid sectors. For example, the host system can provide new data that will replace invalid data, and the new data can be stored in a sector of another cache line of the read-only cache. In this case, a read-modify-write operation can be performed to retrieve valid data from the read-only cache, and replace invalid data at the write-read cache with the retrieved valid data. Therefore, when valid data is available at the read-only cache, the valid data can be retrieved from the read-only cache instead of the backing store. Since the valid data can be retrieved from the read-only cache in a shorter time than the valid data from the backing store, the read-modify-write operation can also be performed in a shorter time, so when the write-read is high-speed Improves the performance of the memory subsystem when the cache evicts cache lines.The advantages of the present disclosure include, but are not limited to, improving the data consistency of the memory subsystem. For example, when a cache line is evicted from a write-read cache, invalid data can be replaced by valid data. Therefore, when the cache line is evicted from the write-read cache, invalid data is not stored in the backing store of the memory subsystem. Therefore, when the host system is associated with a random workload, the performance of the memory subsystem can be improved, because a separate write-read cache can be used, and when the cache line is evicted from the write-read cache and one by one. When the data of the outgoing cache line is stored in the backup storage of the memory subsystem, data consistency is maintained by replacing any invalid data at the sector of the cache line.Figure 1 illustrates an example computing environment 100 including a memory subsystem 110 according to some embodiments of the present disclosure. The memory subsystem 110 may include media, such as memory components 112A to 112N. The memory components 112A to 112N may be volatile memory components, non-volatile memory components, or a combination of such components. In some embodiments, the memory subsystem is a storage system. An example of a storage system is SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem. Generally speaking, the computing environment 100 may include a host system 120 that uses the memory subsystem 110. For example, the host system 120 may write data to the memory subsystem 110 and read data from the memory subsystem 110.The host system 120 may be a computing device, such as a desktop computer, a laptop computer, a web server, a mobile device, or such a computing device including memory and processing devices. 
The host system 120 may include or be coupled to the memory subsystem 110 such that the host system 120 can read data from the memory subsystem 110 or write data to the memory subsystem 110. The host system 120 may be coupled to the memory subsystem 110 through a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communication connection or a direct communication connection (for example, no intervening components), whether wired or wireless, including electrical, optical, Magnetic connection. Examples of physical host interfaces include but are not limited to Serial Advanced Technology Attachment (SATA) interface, Peripheral Component Interconnect High Speed (PCIe) interface, Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. . The physical host interface can be used to transfer data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 through a PCIe interface, the host system 120 may further utilize an NVM high-speed (NVMe) interface to access the memory components 112A to 112N. The physical host interface may provide an interface for transferring control, address, data, and other signals between the memory subsystem 110 and the host system 120.The memory components 112A to 112N may include any combination of different types of non-volatile memory components and/or volatile memory components. Examples of non-volatile memory components include NAND type flash memory. Each of the memory components 112A to 112N may include one or more arrays of memory cells, such as single-level cells (SLC) or multi-level cells (MLC) (e.g., three-level cells (TLC) or four-level cells). Unit (QLC)). In some embodiments, a particular memory component may include both the SLC portion and the MLC portion of the memory cell. Each of the memory units may store one or more data bits (e.g., blocks of data) used by the host system 120. Although a non-volatile memory component such as a NAND type flash memory is described, the memory components 112A to 112N may be based on any other type of memory, such as a volatile memory. In some embodiments, the memory components 112A to 112N may be, but are not limited to, random access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), Phase change memory (PCM), magnetic random access memory (MRAM), NOR flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. The cross-point array of the non-volatile memory can be combined with a stackable cross-grid data access array to perform bit storage based on changes in body resistance. In addition, compared with many flash-based memories, cross-point non-volatile memories can perform in-place write operations, in which non-volatile memory cells can be processed without pre-erasing the non-volatile memory cells. Programming. In addition, the memory cells of the memory components 112A to 112N may be grouped into memory pages or data blocks, and the memory pages or data blocks may refer to the cells of the memory components for storing data.The memory system controller 115 (hereinafter referred to as "controller") may communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N, and other such operations . 
The controller 115 may include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The controller 115 may be a microcontroller, a dedicated logic circuit (for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The controller 115 may include a processor (processing device) 117 configured to execute instructions stored in the local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for executing various processes, operations, logic flows, and routines that control the operations of the memory subsystem 110, including handling Communication between the memory subsystem 110 and the host system 120. In some embodiments, the local memory 119 may include memory registers that store memory pointers, extracted data, and the like. The local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, the memory subsystem 110 may not include the controller 115, and may instead rely on (e.g., external Host or external control provided by a processor or controller separate from the memory subsystem.In general, the controller 115 can receive commands or operations from the host system 120, and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 may be responsible for other operations, such as wear leveling operations, garbage collection operations, error detection and error correction (ECC) operations, encryption operations, cache operations, and logical block addresses and physical block addresses associated with the memory components 112A to 112N Address translation between. The controller 115 may further include a host interface circuit system to communicate with the host system 120 through a physical host interface. The host interface circuit may convert commands received from the host system into command instructions to access the memory components 112A to 112N, and convert responses associated with the memory components 112A to 112N into information for the host system 120.The memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 may include a cache or buffer (such as DRAM) and an address circuitry (such as a row decoder and a column decoder), which may receive addresses from the controller 115 and compare The addresses are decoded to access the memory components 112A to 112N.The memory subsystem 110 includes a cache eviction component 113 that can be used to perform an eviction operation on the cache of the memory subsystem 110. In some embodiments, the controller 115 includes at least a portion of the cache eviction component 113. For example, the controller 115 may include a processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations described herein. In some embodiments, the cache eviction component 113 is part of the host system 120, application program, or operating system.The cache eviction component 113 may receive an instruction to perform an eviction operation on the cache line in the cache. 
In response to the instruction to perform the eviction operation, the cache eviction component 113 can determine whether any sector of the cache line contains invalid data. If the sector does contain invalid data, the cache obvious component 113 may retrieve the corresponding valid data from the backing store (for example, the memory components 112A to 11N) or another cache included in the memory subsystem 110. The cache eviction component 113 may replace invalid data at the cache line with the retrieved valid data, and then may perform an eviction operation of the cache line. Other details regarding the operation of the operation interrupt component 113 are described below.Figure 2 is a flowchart of an example method 200 of performing an eviction operation in accordance with some embodiments. The method 200 may be performed by processing logic, which may include hardware (e.g., processing device, circuit system, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., on the processing device Instructions to run or execute) or a combination thereof. In some embodiments, the method 200 is performed by the cache eviction component 113 of FIG. 1. Although shown in a specific order or order, unless otherwise specified, the order of the processes can be modified. Therefore, it should be understood that the illustrated embodiments are only examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.As shown in Figure 2, at operation 210, the processing logic receives an instruction to perform an eviction operation on a cache line in the cache. When a threshold number of cache lines have been stored at the cache, an indication can be received. In some embodiments, an indication may be received when a threshold number of cache lines are stored at the cache and new data will be stored at the cache lines of the cache. The new data may be data from the most recent write operation of the host system or data from the most recent read operation of the host system. The cache line to be evicted can be selected based on the cache policy. For example, the least recently used policy may specify that when a cache line is about to store new data, the least recently used (ie, accessed) cache line can be evicted. In some embodiments, the cache line may be selected based on the least frequently used policy, which may specify that when the cache line is about to store new data, the cache line that is the least frequently accessed over a period of time may be evicted .At operation 220, the processing logic determines whether the sector of the cache line contains invalid data. In some embodiments, the memory subsystem may identify invalid data based on identifying the data structure of the data stored at the cache. For example, a content addressable memory (CAM) may contain a data structure that identifies information of data currently stored at the cache. The information can specify a logical address for the data and an indication of whether the data corresponding to the logical address is invalid or valid. For example, the bit value of the data structure of each entry can specify whether the corresponding data is valid or invalid. 
Therefore, the information in the CAM can specify whether a particular sector of the cache line contains valid data or invalid data. At operation 230, the processing logic performs a read operation to retrieve valid data for the sector of the cache line in response to determining that the sector contains invalid data. For example, the valid data may be data that has been provided by a write operation from the host system and is directed to the same logical address as the invalid data. Therefore, when compared to valid data, invalid data can be regarded as older data associated with previous read operations or write operations from the host system. Valid data can be retrieved from the backing store of the memory subsystem. For example, valid data can be retrieved based on the logical address of invalid data. In some embodiments, a data group (for example, multiple sectors) can be retrieved from the backup storage, and a portion of the retrieved data that is valid data of the logical address can be selected to replace the invalid data. For example, data of multiple logical addresses can be retrieved from the backup storage at a single time, and valid data of the logical address of invalid data can be selected. In some embodiments, as further described in connection with FIG. 4, valid data may be retrieved from another cache of the memory subsystem.At operation 240, the processing logic modifies the sectors of the cache line based on the valid data. For example, a write operation can be performed on the cache line in the cache to replace invalid data at the sector with the retrieved valid data. In some embodiments, the cache line can be retrieved from the cache and stored in the buffer, and a write operation can be performed on the cache line by changing the invalid data stored in the buffer. In addition, at operation 250, the processing logic performs an eviction operation on the cache line with the modified sector. For example, the cache line can be evicted from the cache after the invalid data at the sector has been replaced by valid data. Therefore, all valid data can be used to evict the cache line instead of invalid data. The eviction of the cache line can cause the data of the cache line to be stored in the backing store of the memory subsystem.Figure 3 illustrates the replacement of invalid data at a cache line to be evicted according to some embodiments of the present disclosure. Invalid data can be replaced by processing logic, which can include hardware (for example, processing device, circuit system, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (for example, processing device Instructions to run or execute on) or a combination thereof. In some embodiments, the replacement of invalid data is performed by the cache eviction component 113 of FIG. 1.As shown in FIG. 3, the memory subsystem may include a read-only cache 311, a write-read cache 312, and a backing store 320. The write-read cache 312 may store data associated with read operations and write operations from the host system when the workload of the host system is random. For example, the write-read cache 312 may store data that has been received from the host system as part of a write operation to write the data at the memory component 321 of the backup storage 320, and may store the data that has been received from the The host system reads data received from the memory component 321 of the backup storage 320. 
The read-only cache may be a separate cache, and when the workload of the host system is sequential, it is used to store data received from the memory component 321 of the backing memory 320 in response to a read operation from the host system. Therefore, when the workload of the host system is the first type (for example, random), the data from the read operation and the write operation can be stored in the write-read cache 312, and when the workload of the host system When it is a different second type (for example, sequential), the data from the read operation can be stored at the read-only cache 311.As previously described, an eviction operation can be performed on the write-read cache 312. For example, the third cache line can be evicted. As shown, the third cache line may contain sectors with data A1, A2, A3, A4, and A5. The eviction of the third cache line can remove the data A1 to A5 so that subsequent data can be stored at the sector of the cache line. In addition, the eviction of the third cache line may cause the data A1 to A5 to be stored at the backing store 320. In some embodiments, the data A2 of the second sector in the third cache line may be regarded as invalid data. Before evicting the third cache line, data A2 can be replaced by corresponding valid data. For example, the data A2 may be data from a specific logical address. When retrieving valid data from the memory component 321 of the backing store 320, the valid data A2 can then be retrieved by using the same logical address. The valid data retrieved from the backing store 320 may be written to the second sector of the third cache line of the write-read cache 312 so that each sector of the third cache line stores valid data. Subsequently, the third cache line can be evicted from the write-read cache 312, and the data can be stored at the backing store 320.In some embodiments, the read-only cache 311 can also store valid data. For example, as shown, the read-only cache 311 can also store valid data A2. Therefore, in some embodiments, valid data from the read-only cache 311 can be written to the second sector of the write-read cache 312. After valid data has been written to the write-read cache 312 to replace invalid data, the cache line can be evicted from the write-read cache 312. For example, the data at the sector of the cache line can be removed and stored at the backing store 320.Therefore, a judgment can be made whether another cache (for example, a read-only cache) contains valid data that can be used to replace invalid data at the write-read cache. If the read-only cache contains valid data, the valid data can be retrieved from the read-only cache so that invalid data can be replaced at the write-read cache. Otherwise, if the read-only cache does not contain valid data, then valid data can be retrieved from the backing store.4 is a flowchart of an example method 400 of modifying a cache line in a cache based on valid data at another cache according to some embodiments of the present disclosure. The method 400 may be performed by processing logic, which may include hardware (e.g., processing device, circuit system, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., on the processing device Instructions to run or execute) or a combination thereof. In some embodiments, the method 400 is performed by the cache eviction component 113 of FIG. 1. Although shown in a specific order or order, unless otherwise specified, the order of the processes can be modified. 
Therefore, it should be understood that the illustrated embodiments are only examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.As shown in Figure 4, at operation 410, the processing logic receives an instruction to perform an eviction operation on a cache line in the cache. For example, the write-read can be evicted when a threshold amount of cache line in the write-read cache contains valid data and new data is stored at the cache line of the write-read cache The cache line of the cache. Therefore, it is possible to perform the step by step when the threshold amount of cache lines of the write-read cache contains at least some valid data and new data will be stored in one or more of the cache lines of the write-read cache.出操作。 Out operation. At operation 420, the processing logic determines that a portion of the cache line is associated with invalid data. For example, the sectors of the cache line can be identified as containing invalid data as previously described. Invalid data can be data that is no longer current or used by the host system. At operation 430, the processing logic determines whether the second cache contains valid data for the portion of the cache line. For example, a read-only cache separate from the write-read cache can be checked to verify whether valid data exists at the read-only cache. For example, as previously described, a CAM or other such data structure can be used to determine whether the second cache contains valid data for the logical address of the invalid data. Therefore, in some embodiments, the content address memory of the read-only cache may be searched to determine whether valid data currently exists at the read-only cache.At operation 440, the processing logic retrieves valid data from the second cache in response to determining that the second cache contains valid data. For example, a read operation can be performed on the second cache to retrieve cache lines containing valid data. Then valid data can be selected or removed from the cache line. In addition, at operation 450, the processing logic replaces invalid data at the portion of the cache line with the retrieved valid data. For example, a write operation may be performed on the cache to replace invalid data with valid data that has been retrieved from the second cache. Therefore, the invalid data at a specific sector of the cache line can be replaced with valid data from the second cache. In addition, at operation 460, the processing logic evicts the cache line with valid data at the portion of the cache line. For example, a cache line can be evicted from the cache after valid data has been written to the portion of the cache line that already contains invalid data. The eviction of the cache line can cause the data of the cache line to be stored in the backing store of the memory subsystem.Figure 5 illustrates an example machine of a computer system 500 within which a set of instructions for causing the machine to perform any one or more of the methods discussed herein can be executed. In some embodiments, the computer system 500 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 
1), or may be used to execute The operation of the controller (for example, executing the operating system to perform the operation corresponding to the cache eviction component 113 of FIG. 1). In alternative embodiments, the machine may be connected (eg, networked) to other machines in the LAN, intranet, extranet, and/or the Internet. The machine can be used as a peer machine in a peer-to-peer (or decentralized) network environment or as a server or client machine in a cloud computing infrastructure or environment while in the capacity of a server or client machine in a client-server network environment operate.The machine can be a personal computer (PC), tablet PC, set-top box (STB), personal digital assistant (PDA), cellular phone, network equipment, server, network router, switch or bridge, digital or non-digital circuit system, or capable ( Sequentially or otherwise) any machine that executes a set of instructions that specify actions to be taken by the machine. In addition, although a single machine is described, the term "machine" should also be considered to encompass any collection of machines that individually or collectively execute one (or more) sets of instructions to perform any of the methods discussed herein Or multiple.The example computer system 500 includes a processing device 502, a main memory 504 (for example, read only memory (ROM), flash memory, such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.) that communicate with each other via a bus 530. Fetch memory (DRAM), static memory 506 (for example, flash memory, static random access memory (SRAM), etc.), and data storage system 518.The processing device 502 represents one or more general processing devices, such as a microprocessor, a central processing unit, and so on. More specifically, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets , Or a processor that implements a combination of instruction sets. The processing device 502 may also be one or more dedicated processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, and so on. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 may further include a network interface device 508 to communicate via the network 520.The data storage system 518 may include a machine-readable storage medium 524 (also referred to as a computer-readable medium), on which one or more sets of instructions 526 are stored or embodied in the methods or functions described herein Any one or more of the software. The instructions 526 may also completely or at least partially reside in the main memory 504 and/or in the processing device 502 during execution by the computer system 500, and the main memory 504 and the processing device 502 also constitute machine-readable storage media. The machine-readable storage medium 524, the data storage system 518, and/or the main memory 504 may correspond to the memory subsystem 110 of FIG.In one embodiment, instructions 526 include instructions to implement functionality corresponding to a cache eviction component (eg, cache eviction component 113 of FIG. 1). 
Although the machine-readable storage medium 524 is shown as a single medium in the example embodiment, the term "machine-readable storage medium" should be considered to include a single medium or multiple media that store one or more sets of instructions. The term "machine-readable storage medium" shall also be considered to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure. Therefore, the term "machine-readable storage medium" should be considered to include, but is not limited to, solid-state memory, optical media, and magnetic media.Some parts of the previous detailed description have been presented based on the algorithm and symbolic representation of the operation of the data bits in the computer memory. These algorithm descriptions and representations are the most effective way for those skilled in the data processing field to convey the main idea of their work to other technicians in the field. Algorithms are here and are generally considered to be self-consistent sequences of operations that lead to the desired result. Operations are operations that require physical manipulation of physical quantities. These quantities are usually but not necessarily in the form of electrical or magnetic signals that can be stored, combined, compared, and otherwise manipulated. Sometimes, mainly for general reasons, it has proven convenient to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, etc.However, it should be borne in mind that all these and similar terms should be associated with the appropriate physical quantities and are only convenient labels applied to these quantities. The present disclosure may refer to the manipulation and transformation of data expressed as physical (electronic) quantities in the registers and memories of the computer system into computer system memories or registers or other data similarly expressed as physical quantities in other such information storage systems The actions and processes of a computer system or similar electronic computing device.The present disclosure also relates to equipment for performing the operations herein. This apparatus may be specially constructed for the intended purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such computer programs can be stored in computer-readable storage media, such as but not limited to floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memory (ROM), which are each coupled to the computer system bus. , Random Access Memory (RAM), EPROM, EEPROM, magnetic or optical card, any type of magnetic disk or any type of medium suitable for storing electronic instructions.The algorithms and displays presented in this article are not essentially related to any particular computer or other device. Various general-purpose systems may be used with programs according to the teachings herein, or it may prove convenient to construct more specialized devices to perform the methods described. The structure of a variety of these systems will be presented as set forth in the description below. In addition, the present disclosure is not described with reference to any specific programming language. 
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.The present disclosure may be provided as a computer program product or software, which may include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (eg, a computer). In some embodiments, the machine-readable (eg, computer-readable) medium includes machine (eg, computer)-readable storage media, such as read-only memory ("ROM"), random access memory ("RAM"), disk storage media , Optical storage media, flash memory components, etc.In the foregoing specification, the embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be apparent that various modifications can be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the appended claims. Therefore, the description and drawings should be viewed in an illustrative rather than restrictive sense. |
Providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media. In this regard, in one aspect, an apparatus comprising an MMU is provided. The MMU comprises a translation cache providing a plurality of translation cache entries defining address translation mappings. The MMU further comprises a partition descriptor table providing a plurality of partition descriptors defining a corresponding plurality of partitions each comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a partition translation circuit configured to receive a memory access request from a requestor. The partition translation circuit is further configured to determine a translation cache partition identifier (TCPID) of the memory access request, identify one or more partitions of the plurality of partitions based on the TCPID, and perform the memory access request on a translation cache entry of the one or more partitions. |
An apparatus comprising a memory management unit (MMU) for providing partitioned translation caches, comprising: a translation cache configured to provide a plurality of translation cache entries each defining an address translation mapping; a partition descriptor table configured to provide a plurality of partition descriptors defining a corresponding plurality of partitions of the translation cache, each partition of the plurality of partitions comprising one or more translation cache entries of the plurality of translation cache entries; and a partition translation circuit configured to: receive a memory access request from a requestor; determine a translation cache partition identifier (TCPID) of the memory access request; identify one or more partitions of the plurality of partitions based on the TCPID; and perform a cache operation on a translation cache entry of the one or more translation cache entries of the one or more partitions. The apparatus of claim 1, wherein the partition descriptor table is configured to provide the plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and an end pointer to an ending translation cache entry of the corresponding partition. The apparatus of claim 1, wherein the partition descriptor table is configured to provide the plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and a count indicator indicative of a count of the one or more translation cache entries of the corresponding partition. 19 WO 2016/195869 PCT/US2016/03004. The apparatus of claim 1, wherein the partition translation circuit is configured to determine the TCPID by deriving the TCPID based on an attribute of the memory access request. The apparatus of claim 1, wherein the partition translation circuit is configured to determine the TCPID by retrieving a requestor-supplied TCPID provided by the memory access request. The apparatus of claim 1, further comprising a partition remapping table configured to provide a plurality of remapping entries each defining a remapping of an input TCPID to an output TCPID; wherein the partition translation circuit is configured to: determine the TCPID by identifying a remapping entry of the plurality of remapping entries, in which the input TCPID of the remapping entry corresponds to the TCPID of the memory access request; and identify the one or more partitions of the plurality of partitions based on the output TCPID of the remapping entry. The apparatus of claim 1, wherein: the memory access request comprises a source indicator indicating a source type of the requestor; and the partition translation circuit is configured to determine the TCPID by deriving the TCPID based on the source indicator. The apparatus of claim 1, further comprising a partition selection table comprising a plurality of partition selection entries, each defining at least one of a search control indicator and an eviction control indicator, and each corresponding to one or more partitions of the plurality of partitions; and wherein the partition translation circuit is configured to identify the one or more partitions of the plurality of partitions based on a partition selection entry of the plurality of partition selection entries. WO 2016/195869 PCT/US2016/03004. 
The apparatus of claim 8, wherein the partition translation circuit is configured to perform the cache operation by determining that the one or more translation cache entries of the one or more partitions are eligible for searching based on the search control indicator of the partition selection entry for the one or more partitions. The apparatus of claim 8, wherein the partition translation circuit is configured to perform the cache operation by determining that the one or more translation cache entries of the one or more partitions are eligible for eviction based on the eviction control indicator of the partition selection entry for the one or more partitions. The apparatus of claim 1 integrated into an integrated circuit (IC). The apparatus of claim 1 integrated into a device selected from the group consisting of a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player. A memory management unit (MMU) comprising: a means for providing a plurality of translation cache entries each defining an address translation mapping; a means for providing a plurality of partition descriptors defining a corresponding plurality of partitions of a translation cache of the MMU, each partition of the plurality of partitions comprising one or more translation cache entries of the plurality of translation cache entries; a means for receiving a memory access request from a requestor; a means for determining a translation cache partition identifier (TCPID) of the memory access request; a means for identifying one or more partitions of the plurality of partitions based on the TCPID; and 21 WO 2016/195869 PCT/US2016/030040 a means for performing a cache operation on a translation cache entry of the one or more translation cache entries of the one or more partitions. A method for providing partitioned translation caches, comprising: receiving, by a memory management unit (MMU), a memory access request from a requestor; determining a translation cache partition identifier (TCPID) of the memory access request; identifying, based on the TCPID, one or more partitions of a plurality of partitions of a translation cache of the MMU; and performing a cache operation on a translation cache entry of one or more translation cache entries of the one or more partitions. The method of claim 14, wherein identifying the one or more partitions of the plurality of partitions is further based on a corresponding plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and an end pointer to an ending translation cache entry of the corresponding partition. 
The method of claim 14, wherein identifying the one or more partitions of the plurality of partitions is further based on a corresponding plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and a count indicator indicative of a count of the one or more translation cache entries of the corresponding partition. The method of claim 14, wherein determining the TCPID comprises deriving the TCPID based on an attribute of the memory access request. 22 WO 2016/195869 PCT/US2016/03004. The method of claim 14, wherein determining the TCPID comprises retrieving a requestor-supplied TCPID provided by the memory access request. The method of claim 14, comprising: determining the TCPID by identifying a remapping entry among a plurality of remapping entries each defining a remapping of an input TCPID to an output TCPID, in which the input TCPID of the remapping entry corresponds to the TCPID of the memory access request; and identifying the one or more partitions of the plurality of partitions based on the output TCPID of the remapping entry. The method of claim 14, wherein: the memory access request comprises a source indicator indicating a source type of the requestor; and determining the TCPID comprises deriving the TCPID based on the source indicator. The method of claim 14, further comprising identifying the one or more partitions of the plurality of partitions based on a partition selection entry of a plurality of partition selection entries, each defining at least one of a search control indicator and an eviction control indicator and corresponding to one or more partitions of the plurality of partitions. The method of claim 21, wherein performing the cache operation based on the partition selection entry for the one or more partitions comprises determining that the one or more translation cache entries of the one or more partitions are eligible for searching based on the search control indicator of the partition selection entry for the one or more partitions. The method of claim 21, wherein performing the cache operation based on the partition selection entry for the one or more partitions comprises determining that the one or more translation cache entries of the one or more partitions are eligible for 23 WO 2016/195869 PCT/US2016/030040 eviction based on the eviction control indicator of the partition selection entry for the one or more partitions. A non-transitory computer-readable medium having stored thereon computer executable instructions which, when executed by a processor, cause the processor to: receive a memory access request from a requestor; determine a translation cache partition identifier (TCPID) of the memory access request; identify, based on the TCPID, one or more partitions of a plurality of partitions of a translation cache of a memory management unit (MMU); and perform a cache operation on a translation cache entry of one or more translation cache entries of the one or more partitions. 
The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to identify the one or more partitions of the plurality of partitions based a corresponding plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and an end pointer to an ending translation cache entry of the corresponding partition. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to identify the one or more partitions of the plurality of partitions based a corresponding plurality of partition descriptors each comprising: a start pointer to a starting translation cache entry of a corresponding partition defined by the partition descriptor; and a count indicator indicative of a count of the one or more translation cache entries of the corresponding partition. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause 24 WO 2016/195869 PCT/US2016/030040 the processor to determine the TCPID by deriving the TCPID based on an attribute of the memory access request. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to determine the TCPID by retrieving a requestor-supplied TCPID provided by the memory access request. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to: determine the TCPID by identifying a remapping entry among a plurality of remapping entries each defining a remapping of an input TCPID to an output TCPID, in which the input TCPID of the remapping entry corresponds to the TCPID of the memory access request; and identify the one or more partitions of the plurality of partitions based on the output TCPID of the remapping entry. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to determine the TCPID by deriving the TCPID based on a source indicator of the memory access request indicating a source type of the requestor. The non-transitory computer-readable medium of claim 24 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to identify the one or more partitions of the plurality of partitions based on a partition selection entry of a plurality of partition selection entries, each defining at least one of a search control indicator and an eviction control indicator and corresponding to one or more partitions of the plurality of partitions. 
The non-transitory computer-readable medium of claim 31 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to perform the cache operation based on the partition selection entry for WO 2016/195869 PCT/US2016/030040 the one or more partitions by determining that the one or more translation cache entries of the one or more partitions are eligible for searching based on the search control indicator of the partition selection entry for the one or more partitions. The non-transitory computer-readable medium of claim 31 having stored thereon computer-executable instructions which, when executed by the processor, further cause the processor to perform the cache operation based on the partition selection entry for the one or more partitions by determining that the one or more translation cache entries of the one or more partitions are eligible for eviction based on the eviction control indicator of the partition selection entry for the one or more partitions. 26. |
PROVIDING MEMORY MANAGEMENT UNIT (MMU) PARTITIONED TRANSLATION CACHES, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA

PRIORITY APPLICATION

[0001] The present application claims priority to U.S. Patent Application Serial No. 14/725,882, filed on May 29, 2015 and entitled "PROVIDING MEMORY MANAGEMENT UNIT (MMU) PARTITIONED TRANSLATION CACHES, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA," which is incorporated herein by reference in its entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to translation caches provided by memory management units (MMUs).

II. Background

[0003] Virtual memory is a memory management technique provided by most modern computing systems. Using virtual memory, a central processing unit (CPU) or a peripheral device of the computing system may access a memory buffer using a virtual memory address mapped to a physical memory address within a physical memory space. In this manner, the CPU or peripheral device may be able to address a larger physical address space than would otherwise be possible, and/or may utilize a contiguous view of a memory buffer that is, in fact, physically discontiguous across the physical memory space.

[0004] Virtual memory is conventionally implemented through the use of a memory management unit (MMU) for translation of virtual memory addresses to physical memory addresses. The MMU may be integrated into the CPU of the computing system (a CPU MMU), or may comprise a separate circuit providing memory management functions for peripheral devices (a system MMU, or SMMU). In conventional operation, the MMU receives memory access requests from "upstream" devices, such as direct memory access (DMA) agents, video accelerators, and/or display engines, as non-limiting examples. For each memory access request, the MMU translates the virtual memory addresses included in the memory access request to a physical memory address, and the memory access request is then processed using the translated physical memory address.

[0005] Because an MMU may be required to translate a same virtual memory address repeatedly within a short time interval, performance of the MMU and the computing system overall may be improved by caching address translation data within the MMU. In this regard, the MMU may include a structure known as a translation cache (also referred to as a translation lookaside buffer, or TLB). The translation cache provides translation cache entries in which previously generated virtual-to-physical memory address translation mappings may be stored for later access. If the MMU subsequently receives a request to translate a virtual memory address stored in the translation cache, the MMU may retrieve the corresponding physical memory address from the translation cache rather than retranslating the virtual memory address.

[0006] However, the performance benefits achieved through use of the translation cache may be lost in scenarios in which the MMU provides address translation services for multiple upstream devices. Because the upstream devices must share the resources of the MMU's translation cache, competition for the limited number of translation cache entries may result in "thrashing," in which two or more upstream devices repeatedly evict each other's translation cache entries in favor of their own.
In a worst-case scenario, the additional overhead resulting from thrashing may cancel out the benefits of caching. A larger translation cache may mitigate the effects of inter-device competition for translation cache entries, but may also result in increased power consumption and a larger physical footprint.

SUMMARY OF THE DISCLOSURE

[0007] Aspects disclosed in the detailed description include providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media. In this regard, an MMU is provided for enabling translation cache partitioning. The MMU includes a translation cache that provides translation cache entries, each of which stores a virtual-to-physical address mapping determined by a previous address translation operation. To enable partitioning, the MMU provides a partition descriptor table, and, optionally, a partition remapping table and/or a partition selection table. The partition descriptor table includes partition descriptors that each define a partition containing one or more translation cache entries of the translation cache. Upon receiving a memory access request from a requestor, a partition translation circuit of the MMU determines a translation cache partition identifier (TCPID) of the memory access request, and identifies one or more of the partitions based on the TCPID. In some aspects, determining the TCPID may include using the partition remapping table to locate the TCPID of the memory access request as an input TCPID associated with an output TCPID. The output TCPID, in turn, may then be used to identify the one or more partitions using the partition selection table. Once the one or more partitions are identified, a cache operation (e.g., a cache search operation and/or a cache eviction operation) is performed on a translation cache entry of the one or more translation cache entries of the one or more partitions. In this manner, the translation cache of the MMU may be effectively partitioned among multiple requestors, resulting in reduced competition between requestors for translation cache entries.

[0008] In one aspect, an apparatus is provided, comprising an MMU for providing partitioned translation caches. The MMU comprises a translation cache configured to provide a plurality of translation cache entries each defining an address translation mapping. The MMU further comprises a partition descriptor table configured to provide a plurality of partition descriptors defining a corresponding plurality of partitions of the translation cache, each partition of the plurality of partitions comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a partition translation circuit. The partition translation circuit is configured to receive a memory access request from a requestor. The partition translation circuit is further configured to determine a translation cache partition identifier (TCPID) of the memory access request. The partition translation circuit is also configured to identify one or more partitions of the plurality of partitions based on the TCPID. The partition translation circuit is additionally configured to perform a cache operation on a translation cache entry of the one or more translation cache entries of the one or more partitions.

[0009] In another aspect, an MMU is provided.
The MMU comprises a means for providing a plurality of translation cache entries each defining an address translation mapping. The MMU further comprises a means for providing a plurality of partition descriptors defining a corresponding plurality of partitions of a translation cache of the MMU, each partition of the plurality of partitions comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a means for receiving a memory access request from a requestor. The MMU additionally comprises a means for determining a TCPID of the memory access request. The MMU further comprises a means for identifying one or more partitions of the plurality of partitions based on the TCPID. The MMU also comprises a means for performing a cache operation on a translation cache entry of the one or more translation cache entries of the one or more partitions.

[0010] In another aspect, a method for providing partitioned translation caches is provided. The method comprises receiving, by an MMU, a memory access request from a requestor. The method further comprises determining a TCPID of the memory access request. The method also comprises identifying, based on the TCPID, one or more partitions of a plurality of partitions of a translation cache of the MMU. The method additionally comprises performing a cache operation on a translation cache entry of one or more translation cache entries of the one or more partitions.

[0011] In another aspect, a non-transitory computer-readable medium is provided, having stored thereon computer-executable instructions. When executed by a processor, the computer-executable instructions cause the processor to receive a memory access request from a requestor. The computer-executable instructions further cause the processor to determine a TCPID of the memory access request. The computer-executable instructions also cause the processor to identify, based on the TCPID, one or more partitions of a plurality of partitions of a translation cache of an MMU. The computer-executable instructions additionally cause the processor to perform a cache operation on a translation cache entry of one or more translation cache entries of the one or more partitions.

BRIEF DESCRIPTION OF THE FIGURES

[0012] Figure 1 is a block diagram illustrating an exemplary computing system illustrating communications flows from upstream devices to a memory management unit (MMU) providing address translation services;

[0013] Figure 2 is a block diagram illustrating an exemplary MMU for providing a partitioned translation cache;

[0014] Figures 3A and 3B are block diagrams illustrating exemplary aspects of a partition descriptor illustrated in Figure 2 for defining a translation cache partition;

[0015] Figure 4 is a block diagram illustrating exemplary aspects of a memory access request and a partition translation circuit illustrated in Figure 2 for determining a translation cache partition identifier (TCPID);

[0016] Figure 5 is a flowchart illustrating exemplary operations of the MMU of Figure 2 for providing partitioned translation caches;

[0017] Figures 6A-6C are flowcharts illustrating further exemplary operations for providing partitioned translation caches, including TCPID remapping and use of partition selection entries; and

[0018] Figure 7 is a block diagram of an exemplary processor-based system that can include the MMU of Figure 2.
DETAILED DESCRIPTION

[0019] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0020] Before discussing exemplary apparatuses and methods for providing MMU partitioned translation caches as disclosed herein, a conventional computing system providing virtual-to-physical memory address translation is described. In this regard, Figure 1 is a block diagram illustrating an exemplary computing system 100 in which a central processing unit (CPU) MMU 102 provides address translation services for a CPU 104, and a system MMU (SMMU) 106 provides address translation services for upstream devices 108, 110, and 112. It is to be understood that the computing system 100 and the elements thereof may encompass any one of known digital logic elements, semiconductor circuits, processing cores, and/or memory structures, among other elements, or combinations thereof. Aspects described herein are not restricted to any particular arrangement of elements, and the disclosed techniques may be easily extended to various structures and layouts on semiconductor dies or packages.

[0021] As seen in Figure 1, the computing system 100 includes the upstream devices 108, 110, and 112 having master ports (M) 114, 116, and 118, respectively, that are connected to corresponding slave ports (S) 120, 122, and 124 of an interconnect 126. In some aspects, each of the upstream devices 108, 110, and 112 may comprise a peripheral device such as a direct memory access (DMA) agent, a video accelerator, and/or a display engine, as non-limiting examples. The interconnect 126 may receive memory access requests (not shown) from the upstream devices 108, 110, and 112, and may transfer the memory access requests from a master port (M) 128 to a slave port (S) 130 of the SMMU 106. After receiving each memory access request, the SMMU 106 may perform virtual-to-physical memory address translation, and, based on the address translation, may access a memory 132 and/or a slave device 134 via a system interconnect 136. As shown in Figure 1, a master port (M) 138 of the SMMU 106 communicates with a slave port (S) 140 of the system interconnect 136. The system interconnect 136, in turn, communicates via master ports (M) 142 and 144 with slave ports (S) 146 and 148, respectively, of the memory 132 and the slave device 134. In some aspects, the memory 132 and/or the slave device 134 may comprise a system memory, system registers, and/or memory-mapped input/output (I/O) devices, as non-limiting examples. It is to be understood that, while the SMMU 106 serves the upstream devices 108, 110, and 112, some aspects may provide that the SMMU 106 may serve more or fewer upstream devices than illustrated in Figure 1.

[0022] As noted above, the computing system 100 also includes the CPU 104 having integrated therein the CPU MMU 102. The CPU MMU 102 may provide address translation services for CPU memory access requests (not shown) of the CPU MMU 102 in much the same manner that the SMMU 106 provides address translation services to the upstream devices 108, 110, and 112.
After performing virtual-to-physical memory address translation of a CPU memory access request, the CPU MMU 102 may access the memory 132 and/or the slave device 134 via the system interconnect 136. In particular, a master port (M) 150 of the CPU 104 communicates with a slave port (S) 152 of the system interconnect 136. The system interconnect 136 then communicates via the master ports (M) 142 and 144 with the slave ports (S) 146 and 148, respectively, of the memory 132 and the slave device 134.

[0023] To improve performance, an MMU, such as the CPU MMU 102 and/or the SMMU 106, may provide a translation cache (not shown) for storing previously generated virtual-to-physical memory address translation mappings. However, in the case of an MMU that is shared among multiple upstream devices, such as the SMMU 106, the upstream devices may be forced to compete for the limited resources of the translation cache. This may result in thrashing, as the upstream devices repeatedly evict each other's translation cache entries in favor of their own. In a worst-case scenario, the extra overhead incurred by thrashing may cancel out the benefits of the translation cache.

[0024] In this regard, Figure 2 is provided to illustrate an exemplary MMU 200 for providing a partitioned translation cache. In some aspects, the MMU 200 may be employed in a computing system, such as the computing system 100 of Figure 1, in place of the CPU MMU 102 and/or the SMMU 106. The MMU 200 includes a translation cache 202 providing translation cache entries 204(0)-204(X). In some aspects, each of the translation cache entries 204(0)-204(X) defines an address translation mapping (not shown), such as a virtual-to-physical memory address translation mapping, as a non-limiting example. It is to be understood that some aspects may provide that the translation cache 202 may include more or fewer translation cache entries 204(0)-204(X) than illustrated in Figure 2. The translation cache 202 is also referred to herein as "a means for providing a plurality of translation cache entries each defining an address translation mapping."

[0025] The MMU 200 further includes a partition descriptor table 206. The partition descriptor table 206 provides partition descriptors 208(0)-208(N), which define corresponding partitions 210(0)-210(N). As shown in Figure 2, each of the partitions 210(0)-210(N) includes one or more of the translation cache entries 204(0)-204(X) of the translation cache 202. For instance, in the example of Figure 2, the partition 210(0) includes translation cache entries 204(0)-204(2), while the partition 210(1) includes translation cache entries 204(3)-204(5) and the partition 210(N) includes translation cache entries 204(6)-204(X). According to some aspects, the partition descriptor table 206 may include more or fewer partition descriptors 208(0)-208(N) than illustrated in Figure 2. The partition descriptor table 206 is also referred to herein as "a means for providing a plurality of partition descriptors defining a corresponding plurality of partitions of a translation cache of the MMU." Exemplary mechanisms that may be used by the partition descriptors 208(0)-208(N) to define the corresponding partitions 210(0)-210(N) are discussed below in greater detail with respect to Figures 3A and 3B.
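To make the structures just described concrete, the following is a minimal C sketch of one way the translation cache 202 and the partition descriptor table 206 might be represented. All type names, field names, and sizes here are illustrative assumptions for exposition, not structures taken from the disclosure.

    #include <stdint.h>
    #include <stdbool.h>

    #define TC_ENTRIES 64  /* illustrative translation cache size (X + 1) */
    #define PARTITIONS  4  /* illustrative number of partitions (N + 1)   */

    /* One translation cache entry: a cached address translation mapping. */
    typedef struct {
        uint64_t virt_page;  /* virtual page number used as the lookup tag */
        uint64_t phys_page;  /* previously translated physical page number */
        bool     valid;      /* entry currently holds a usable mapping     */
    } tc_entry_t;

    /* One partition descriptor: a contiguous group of cache entries,
     * encoded as in Figure 3A (start pointer plus end pointer).          */
    typedef struct {
        uint16_t start;      /* index of the partition's first entry      */
        uint16_t end;        /* index of the partition's last entry       */
    } partition_desc_t;

    /* The translation cache together with its partition descriptor table. */
    typedef struct {
        tc_entry_t       entries[TC_ENTRIES];
        partition_desc_t partitions[PARTITIONS];
    } translation_cache_t;

Under this layout the partitions are purely logical: the entries array is one physical structure, and each descriptor merely marks off the index range a given requestor may use, which is the view taken in the aspects that follow.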
[0026] In some aspects, the partitions 210(0)-210(N) may be regarded as logical constructs defined by the partition descriptors 208(0)-208(N). Some aspects may provide that the partition descriptors 208(0)-208(N) may be configured at design time. Accordingly, in such aspects, the number of the partitions 210(0)-210(N) and the number of the translation cache entries 204(0)-204(X) allocated to each of the partitions 210(0)-210(N) may be determined at design time. In some aspects, the partition descriptors 208(0)-208(N) may be programmable by software at run time, thus permitting the number of the partitions 210(0)-210(N) and the number of the translation cache entries 204(0)-204(X) for each of the partitions 210(0)-210(N) to be dynamically configured.

[0027] With continuing reference to Figure 2, the MMU 200 also includes a partition translation circuit 212. In exemplary operation, the partition translation circuit 212 receives a memory access request (not shown) from a requestor, such as one of the upstream devices 108, 110, 112 of Figure 1. The partition translation circuit 212 may then determine a TCPID (not shown) of the memory access request. As discussed in greater detail below with respect to Figure 4, the TCPID may be expressly provided by the requestor as part of the memory access request, and/or may be derived by the partition translation circuit 212 based on the source type and/or attributes of the memory access request itself. The partition translation circuit 212 then identifies one or more of the partitions 210(0)-210(N) based on the TCPID, and performs a cache operation on one or more of the translation cache entries 204(0)-204(X) corresponding to the identified one or more of the partitions 210(0)-210(N). In some aspects, performing a cache operation may comprise searching the translation cache entries 204(0)-204(X), writing to one or more of the translation cache entries 204(0)-204(X), and/or evicting contents of one or more of the translation cache entries 204(0)-204(X), as non-limiting examples. The partition translation circuit 212 may be referred to herein as "a means for receiving a memory access request from a requestor," "a means for determining a TCPID of the memory access request," "a means for identifying one or more partitions of the plurality of partitions based on the TCPID," and/or "a means for performing a cache operation on a translation cache entry."

[0028] The partition translation circuit 212 thus may ensure that, in response to the memory access request from the requestor, the partition translation circuit 212 performs a cache operation only on the particular translation cache entries 204(0)-204(X) that are associated with the one or more of the partitions 210(0)-210(N) identified by the TCPID. For example, if the TCPID identifies the partition 210(0), the partition translation circuit 212 may be able to perform a cache operation only on the translation cache entries 204(0)-204(2) associated with the partition 210(0). In effect, the partition translation circuit 212 may use the partitions 210(0)-210(N) to provide an access control mechanism to the translation cache entries 204(0)-204(X), preventing requestors associated with different TCPIDs from negatively affecting each other's translation cache entries 204(0)-204(X).
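The receive, determine, identify, and operate sequence of the two preceding paragraphs can be sketched as a single lookup routine. Continuing the illustrative C structures above, the request fields below are assumptions, determine_tcpid is elaborated in a later sketch, and the naive victim choice stands in for whatever replacement policy an implementation would actually use.

    #include <stddef.h>

    /* Illustrative view of a memory access request as seen by the
     * partition translation circuit (all fields are assumptions).     */
    typedef struct {
        uint64_t virt_page;          /* page to translate                 */
        bool     has_tcpid;          /* requestor supplied a TCPID?       */
        uint32_t tcpid;              /* requestor-supplied TCPID, if any  */
        bool     src_is_stage1_mmu;  /* source indicator: first-stage MMU */
        uint32_t master_id;          /* attribute naming the requestor    */
        bool     is_write;           /* read/write attribute              */
    } mem_request_t;

    uint32_t determine_tcpid(const mem_request_t *req);   /* later sketch */

    /* Hypothetical page-table walk; identity mapping stands in for the
     * real translation.                                                 */
    static uint64_t walk_page_tables(uint64_t virt_page) { return virt_page; }

    /* Miss handling: fill a victim entry chosen from inside the same
     * partition, so the fill cannot evict another requestor's entries.  */
    static uint64_t walk_and_fill(translation_cache_t *tc,
                                  const partition_desc_t *p,
                                  const mem_request_t *req)
    {
        uint64_t phys = walk_page_tables(req->virt_page);
        tc_entry_t *victim = &tc->entries[p->start];   /* naive choice    */
        victim->virt_page = req->virt_page;
        victim->phys_page = phys;
        victim->valid     = true;
        return phys;
    }

    /* Top-level flow: the cache operation is confined to the entries of
     * the partition selected by the TCPID.                              */
    uint64_t mmu_translate(translation_cache_t *tc, const mem_request_t *req)
    {
        uint32_t tcpid = determine_tcpid(req);
        const partition_desc_t *p = &tc->partitions[tcpid % PARTITIONS];

        for (uint16_t i = p->start; i <= p->end; i++)   /* search step    */
            if (tc->entries[i].valid &&
                tc->entries[i].virt_page == req->virt_page)
                return tc->entries[i].phys_page;        /* cache hit      */

        return walk_and_fill(tc, p, req);               /* cache miss     */
    }

The simple tcpid % PARTITIONS selection is only a placeholder for the remapping and selection machinery described next.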
[0029] In some aspects, circumstances may arise under which it may be desirable to map the TCPID received within or derived from the memory access request to an "output" TCPID that is actually used to identify one or more of the partitions 210(0)-210(N). For example, providing TCPID remapping may facilitate software reconfiguration of the partition descriptors 208(0)-208(N). In this regard, in some aspects the partition translation circuit 212 may optionally provide a partition remapping table 214 containing one or more remapping entries 216(0)-216(M). The remapping entries 216(0)-216(M) each map a corresponding input TCPID 218(0)-218(M) (i.e., a TCPID that identifies a translation cache partition or set of partitions that an upstream requestor specifies to use for address translation) to a corresponding output TCPID 220(0)-220(M) (i.e., a TCPID that identifies a translation cache partition or set of partitions actually used for address translation). The partition translation circuit 212 may thus perform TCPID remapping after determining the TCPID received from or derived from the memory access request.

[0030] To do so, the partition translation circuit 212 first identifies one of the remapping entries 216(0)-216(M) in which the input TCPID 218(0)-218(M) corresponds to the TCPID of the memory access request. In some aspects, the TCPID of the memory access request may be software programmable, or may be hard-coded such that software cannot modify the values of the TCPID of the memory access request. The partition translation circuit 212 may then retrieve the output TCPID 220(0)-220(M) from the remapping entry 216(0)-216(M) containing the input TCPID 218(0)-218(M), and may use the output TCPID 220(0)-220(M) to identify one or more of the partitions 210(0)-210(N) as the target of the cache operation. In this manner, the partition remapping table 214 may enable programmatic remapping of the TCPID received as part of the memory access request, which may allow software performance optimization, system performance tuning, and/or correction of hardware issues resulting from incorrect requestor-specified TCPIDs, as non-limiting examples.

[0031] According to some aspects, the MMU 200 may also optionally provide a partition selection table 222 to facilitate selection of the translation cache entries 204(0)-204(X) that are active and eligible for cache searching and/or cache eviction. To this end, the partition selection table 222 includes partition selection entries 224(0)-224(Y) corresponding to the partitions 210(0)-210(N). Each of the partition selection entries 224(0)-224(Y) may correspond to one or more of the partitions 210(0)-210(N). In the example of Figure 2, for instance, the partition selection entry 224(0) corresponds to the partitions 210(0) and 210(1), while the partition selection entry 224(Y) corresponds to the partition 210(N). In some aspects, the partition selection entries 224(0)-224(Y) may be selected using one of the output TCPIDs 220(0)-220(M) retrieved from the partition remapping table 214. Each of the partition selection entries 224(0)-224(Y) may include one or both of a search control indicator (SRCH) 226(0)-226(Y) and an eviction control indicator (EVCT) 228(0)-228(Y). In some aspects, the search control indicators 226(0)-226(Y) and/or the eviction control indicators 228(0)-228(Y) may comprise bit indicators, flags, and/or other state indicators as known in the art.
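A hedged C sketch of how the remapping table 214 and the selection table 222 might be modeled in the same illustrative style follows; the fall-through behavior for an unmapped input TCPID is an assumption of this sketch, not something stated in the disclosure.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Remapping entry: the input TCPID as supplied or derived, and the
     * output TCPID actually used to select partitions.                 */
    typedef struct {
        uint32_t input_tcpid;
        uint32_t output_tcpid;
    } remap_entry_t;

    /* Partition selection entry: search (SRCH) and eviction (EVCT)
     * eligibility for the partition or partitions it covers.           */
    typedef struct {
        bool search_ok;  /* entries may be searched for a hit           */
        bool evict_ok;   /* entries may be victimized on a fill         */
    } select_entry_t;

    /* Resolve an input TCPID through the remapping table; an unmapped
     * TCPID passes through unchanged (an assumption for this sketch).  */
    uint32_t remap_tcpid(const remap_entry_t *map, size_t n, uint32_t in)
    {
        for (size_t i = 0; i < n; i++)
            if (map[i].input_tcpid == in)
                return map[i].output_tcpid;
        return in;
    }

The eligibility gates would then be consulted before the search loop of the earlier mmu_translate sketch: a partition whose search_ok indicator is clear is simply skipped, and a fill may victimize only entries of a partition whose evict_ok indicator is set.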
[0032] The partition translation circuit 212 may be configured to identify one or more of the partitions 210(0)-210(N) as targets for a cache operation based on a corresponding partition selection entry 224(0)-224(Y) for the one or more partitions 210(0)-210(N). For example, before performing a cache search operation on the partitions 210(0) and 210(1), the partition translation circuit 212 may first determine whether the partitions 210(0) and 210(1) are eligible for searching based on the search control indicator 226(0) of the partition selection entry 224(0) corresponding to the partitions 210(0) and 210(1). Similarly, the partition translation circuit 212 may determine whether the partitions 210(0) and 210(1) are eligible for eviction based on the eviction control indicator 228(0) of the partition selection entry 224(0) corresponding to the partitions 210(0) and 210(1).

[0033] As noted above, the partition descriptors 208(0)-208(N) of the partition descriptor table 206 may be provided to define corresponding partitions 210(0)-210(N) of the translation cache 202. Figures 3A and 3B are block diagrams 300 and 302, respectively, showing two exemplary partition descriptors illustrating different mechanisms for defining a partition such as the partition 210(0) of Figure 2 (not shown). In Figures 3A and 3B, the translation cache 202 of Figure 2 provides the translation cache entries 204(0)-204(X), as discussed above. Figures 3A and 3B also provide partition descriptors 304 and 306, respectively, each defining the partition 210(0) including the translation cache entries 204(0)-204(2) of Figure 2. The partition descriptors 304 and 306 may thus correspond in functionality to the partition descriptor 208(0) of Figure 2.

[0034] In Figure 3A, the partition descriptor 304 defines the partition 210(0) using a start pointer 308 and an end pointer 310. The start pointer 308 indicates a starting translation cache entry 204(0) for the partition 210(0), as shown by arrow 312. Similarly, the end pointer 310 indicates an ending translation cache entry 204(2) for the partition 210(0), as shown by arrow 314.

[0035] The partition descriptor 306 of Figure 3B illustrates an alternate partition definition mechanism. In Figure 3B, the partition descriptor 306 provides a start pointer 316 and a count indicator 318. The start pointer 316 indicates the starting translation cache entry 204(0) for the partition 210(0), as shown by arrow 320. The count indicator 318 provides a value ("3") indicating a count of the translation cache entries 204(0)-204(2) contained in the partition 210(0), as indicated by arrow 322.
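The two descriptor encodings of Figures 3A and 3B describe the same contiguous range of entries, as this small illustrative C sketch shows (the tagged-union representation is an assumption for exposition):

    #include <stdint.h>
    #include <stdbool.h>

    /* A partition descriptor in either encoding: Figure 3A uses a start
     * pointer plus an end pointer; Figure 3B uses a start pointer plus
     * a count indicator.                                                */
    typedef struct {
        bool     uses_count;  /* false: Figure 3A form; true: Figure 3B  */
        uint16_t start;       /* index of the starting cache entry       */
        union {
            uint16_t end;     /* inclusive index of the ending entry     */
            uint16_t count;   /* number of entries in the partition      */
        } u;
    } partition_descriptor_t;

    /* Both forms resolve to the same (first entry, entry count) pair.   */
    static uint16_t first_entry(const partition_descriptor_t *d)
    {
        return d->start;
    }

    static uint16_t entry_count(const partition_descriptor_t *d)
    {
        return d->uses_count ? d->u.count
                             : (uint16_t)(d->u.end - d->start + 1);
    }

For the partition 210(0) above, both encodings yield first_entry equal to 0 and entry_count equal to 3.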
[0036] Figure 4 provides a diagram 400 to illustrate exemplary aspects of the memory access request and the partition translation circuit 212 of the MMU 200 of Figure 2 for determining a TCPID. In Figure 4, the partition translation circuit 212 receives a memory access request 402 from a requestor 404. In some aspects, the requestor 404 may comprise one of the upstream devices 108, 110, 112 of Figure 1. Some aspects may provide that the MMU 200 is a second-stage MMU, and the requestor 404 is a first-stage MMU. As seen in Figure 4, the memory access request may include a source indicator 406 that is indicative of a source type of the requestor 404. As a non-limiting example, the source indicator 406 may be a flag indicating whether the requestor 404 is one of the upstream devices 108, 110, 112 of Figure 1, or whether the requestor 404 is a first-stage MMU. The partition translation circuit 212 may then derive the TCPID based on the source indicator 406. This may allow the partition translation circuit 212 to allocate a portion of the translation cache 202 for exclusive use by the first-stage MMU, as a non-limiting example.

[0037] The memory access request 402 may also include an optional requestor-supplied TCPID 408 provided by the requestor 404. When the requestor-supplied TCPID 408 is received as part of the memory access request 402, the partition translation circuit 212 may retrieve the requestor-supplied TCPID 408, and use it as a TCPID 410 for identifying one or more of the partitions 210(0)-210(N) of Figure 2 as a target for a cache operation, as indicated by arrow 412. Some aspects may provide that, in addition to or instead of using the requestor-supplied TCPID 408 as the TCPID 410, the partition translation circuit 212 may derive the TCPID 410 based on an attribute 414 of the memory access request 402, as shown by arrow 416. As non-limiting examples, the TCPID 410 may be determined based on one or more attributes 414 such as a master identifier (ID) attribute that uniquely identifies the requestor 404, a read/write attribute, a secure/non-secure attribute, a memory type attribute, a cacheability attribute, and/or a shareable attribute of the memory access request 402. In some aspects, the partition translation circuit 212 may optionally remap the TCPID 410 using the partition remapping table 214, as shown by arrow 418.

[0038] To illustrate exemplary operations of the MMU 200 of Figure 2 for providing partitioned translation caches, Figure 5 is provided. For the sake of brevity, elements of Figures 2 and 4 are referenced in describing Figure 5. In Figure 5, operations begin with the MMU 200 (in particular, the partition translation circuit 212) receiving the memory access request 402 from the requestor 404 (block 500). In some aspects, the requestor 404 may comprise a first-stage MMU, or may comprise an upstream device such as the upstream devices 108, 110, 112 of Figure 1.

[0039] The partition translation circuit 212 determines a TCPID 410 of the memory access request 402 (block 502). The partition translation circuit 212 next identifies one or more partitions, such as the partitions 210(0)-210(1), of the plurality of partitions 210(0)-210(N) of the translation cache 202 of the MMU 200 based on the TCPID 410 (block 504). The partition translation circuit 212 then performs a cache operation on a translation cache entry, such as the translation cache entry 204(0), of the one or more translation cache entries 204(0)-204(5) of the one or more partitions 210(0)-210(1) (block 506). Some aspects may provide that performing the cache operation may comprise searching the translation cache entries 204(0)-204(5), writing to one or more of the translation cache entries 204(0)-204(5), and/or evicting contents of one or more of the translation cache entries 204(0)-204(5), as non-limiting examples. It is to be understood that the selection of the translation cache entries 204(0)-204(5) in this example is a non-limiting example, and that other or additional translation cache entries 204(0)-204(X) may be selected based on the partitions 210(0)-210(N) identified by the TCPID 410.
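Tying the preceding aspects together, here is a hedged sketch of the determine_tcpid helper assumed in the earlier flow; the priority order and the attribute-based derivation rule are illustrative assumptions (the disclosure leaves the exact derivation open), and STAGE1_TCPID is a hypothetical constant.

    #define STAGE1_TCPID 0u  /* hypothetical TCPID reserved for a
                                first-stage MMU                         */

    /* Determine the TCPID 410 of a request: use a requestor-supplied
     * TCPID 408 when present; otherwise derive one from the source
     * indicator 406 or from attributes 414 (here, the master ID and
     * the read/write attribute).                                       */
    uint32_t determine_tcpid(const mem_request_t *req)
    {
        if (req->has_tcpid)
            return req->tcpid;       /* requestor-supplied TCPID 408    */

        if (req->src_is_stage1_mmu)
            return STAGE1_TCPID;     /* derived from source indicator 406 */

        /* Derived from attributes 414: one TCPID per requestor, split
         * by read/write (a purely illustrative rule).                  */
        return (req->master_id << 1) | (req->is_write ? 1u : 0u);
    }

The result could then be passed through the remap_tcpid routine of the earlier sketch before partitions are selected, mirroring arrow 418.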
[0040] Figures 6A-6C are flowcharts illustrating further exemplary operations for providing partitioned translation caches. In particular, Figure 6A includes operations of the partition translation circuit 212 for TCPID remapping, while Figure 6B provides operations of the partition translation circuit 212 for using exemplary partition definition mechanisms. Figure 6C illustrates operations of the partition translation circuit 212 for employing partition selection entries in performing a cache operation. Elements in Figures 2-4 are referenced in describing Figures 6A-6C for the sake of brevity.

[0041] In Figure 6A, operations begin with the MMU 200 (in particular, the partition translation circuit 212) receiving the memory access request 402 from the requestor 404 (block 600). The partition translation circuit 212 next determines a TCPID 410 of the memory access request 402 (block 602). Some aspects may provide that the operations of block 602 for determining the TCPID 410 may comprise deriving the TCPID 410 based on an attribute 414 of the memory access request 402 (block 604). In some aspects, the operations of block 602 for determining the TCPID 410 may include retrieving a requestor-supplied TCPID 408 provided by the memory access request 402 (block 606). According to some aspects, the operations of block 602 for determining the TCPID 410 may comprise identifying a remapping entry, such as the remapping entry 216(0), among a plurality of remapping entries 216(0)-216(M) defining a remapping of an input TCPID 218(0) to an output TCPID 220(0), in which the input TCPID 218(0) of the remapping entry 216(0) corresponds to the TCPID 410 of the memory access request 402 (block 608). In some aspects, the operations of block 602 for determining the TCPID 410 may comprise deriving the TCPID 410 based on a source indicator 406 of the memory access request 402 indicating a source type of the requestor 404 (block 609).

[0042] The partition translation circuit 212 next identifies one or more partitions, such as the partitions 210(0)-210(1), of a plurality of partitions 210(0)-210(N) of a translation cache 202 of the MMU 200 based on the TCPID 410 (block 610). In some aspects, the operations of block 610 for identifying the partitions 210(0)-210(1) may be based on the output TCPID 220(0) of the remapping entry 216(0) (block 611). Some aspects may also provide that the operations of block 610 for identifying the one or more partitions 210(0)-210(1) may be based on a partition selection entry such as the partition selection entry 224(0) of the plurality of partition selection entries 224(0)-224(Y) (block 612). Each of the partition selection entries 224(0)-224(Y) may define at least one of a search control indicator 226(0) and an eviction control indicator 228(0), and may correspond to the one or more partitions 210(0)-210(1) of the plurality of partitions 210(0)-210(N), as a non-limiting example. In some aspects, the partition selection entry 224(0) may be selected based on an output TCPID such as the output TCPID 220(0), as a non-limiting example. Processing may then resume at block 613 of Figure 6B.

[0043] Referring now to Figure 6B, the partition translation circuit 212, according to some aspects, may identify the one or more partitions 210(0)-210(1) further based on a corresponding plurality of partition descriptors 208(0)-208(N) (block 613).
According to some aspects, each of the plurality of partition descriptors 208(0)-208(N) may comprise a start pointer, such as the start pointer 308, to a starting translation cache entry 204(0) of a corresponding partition 210(0) defined by the partition descriptor 208(0), and an end pointer, such as the end pointer 310, to an ending translation cache entry 204(X) of the corresponding partition 210(0) (block 614). In other aspects, each of the partition descriptors 208(0)-208(N) may comprise a start pointer, such as the start pointer 316, to a starting translation cache entry 204(0) of a corresponding partition 210(0) defined by the partition descriptor 208(0), and a count indicator, such as the count indicator 318, indicative of a count of the one or more translation cache entries 204(0)-204(X) of the corresponding partition 210(0) (block 616). Processing then resumes at block 620 of Figure 6C.

[0044] Turning now to Figure 6C, the partition translation circuit 212 next performs a cache operation on a translation cache entry 204(0) of one or more translation cache entries 204(0)-204(2) of the one or more partitions 210(0)-210(1) (block 620). In some aspects, the operations of block 620 for performing the cache operation may be based on the source indicator 406 of the TCPID 410 indicating a source type of the requestor 404 (block 622). Some aspects may provide that the operations of block 620 for performing the cache operation may be based on the partition selection entry 224(0) for the one or more partitions 210(0)-210(1) (block 624). The operations of block 624 for performing the cache operation based on the partition selection entry 224(0) may include, in some aspects, determining that the one or more translation cache entries 204(0)-204(2) of the one or more partitions 210(0)-210(1) are eligible for searching based on the search control indicator 226(0) for the one or more partitions 210(0)-210(1) (block 626). The operations of block 624 for performing the cache operation based on the partition selection entry 224(0) according to some aspects may include determining that the one or more translation cache entries 204(0)-204(2) of the one or more partitions 210(0)-210(1) are eligible for eviction based on the eviction control indicator 228(0) for the one or more partitions 210(0)-210(1) (block 628).

[0045] Providing MMU partitioned translation caches, and related apparatuses, methods, and computer-readable media, according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.

[0046] In this regard, Figure 7 illustrates an example of a processor-based system 700 that may employ the MMU 200 illustrated in Figure 2. In this example, the processor-based system 700 includes one or more central processing units (CPUs) 702, each including one or more processors 704.
The CPU(s) 702 may have cache memory 706 coupled to the processor(s) 704 for rapid access to temporarily stored data. The CPU(s) 702 further includes a CPU MMU 707 for providing address translation services for CPU memory access requests. The CPU(s) 702 is coupled to a system bus 708 and can intercouple master and slave devices included in the processor-based system 700. As is well known, the CPU(s) 702 communicates with these other devices by exchanging address, control, and data information over the system bus 708. For example, the CPU(s) 702 can communicate bus transaction requests to a memory system 710, which provides memory units 712(0)-712(N). In the example of Figure 7, SMMUs 713 and 714 are also coupled to the system bus 708. It is to be understood that one or more of the CPU MMU 707 and the SMMUs 713 and 714 may comprise the MMU 200 of Figure 2. It is to be further understood that the processor-based system 700 may include multiple SMMUs 713 and 714.

[0047] Other master and slave devices can be connected to the system bus 708 via the SMMUs 713 and 714. As illustrated in Figure 7, these devices can include a memory controller 715, one or more input devices 716, one or more output devices 718, one or more network interface devices 720, and one or more display controllers 722, as examples. The input device(s) 716 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 718 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 720 can be any devices configured to allow exchange of data to and from a network 724. The network 724 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), and the Internet. The network interface device(s) 720 can be configured to support any type of communications protocol desired.

[0048] The CPU(s) 702 may also be configured to access the display controller(s) 722 over the system bus 708 to control information sent to one or more displays 726. The display controller(s) 722 sends information to the display(s) 726 to be displayed via one or more video processors 728, which process the information to be displayed into a format suitable for the display(s) 726. The display(s) 726 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.

[0049] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0050] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0051] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

[0052] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0053] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
A memory controller is described that comprises a compression map cache. The compression map cache is to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information. A processor and a memory controller integrated on a same semiconductor die is also described. The memory controller comprises a compression map cache. The compression map cache is to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information. |
Claims

What is claimed is:

1. A memory controller comprising a compression map cache, said compression map cache to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information.

2. A processor and a memory controller integrated on a same semiconductor die, said memory controller comprising a compression map cache, said compression map cache to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information. |
PROCESSOR AND MEMORY CONTROLLER CAPABLE OF USE IN COMPUTING SYSTEM THAT EMPLOYS COMPRESSED CACHE LINES' WORTH OF INFORMATION

Field of Invention

[0001] The field of invention relates generally to computing systems; and, more particularly, to a processor and memory controller capable of use in a computing system that employs compressed cache lines' worth of information.

Background

[0002] Figure 1 shows a portion of an architecture for a basic computing system that includes: 1) a processor 101; 2) a cache 102; 3) a memory controller 103; and, 4) a system memory 104. The processor 101 implements software routines by executing instructions that perform various operations on elements of data. The instructions and data elements are stored in the cache 102 and/or system memory 104. When the processor 101 needs a specific instruction or data element it looks to the cache 102 for the desired instruction or data element before requesting it from system memory 104. Generally, cache 102 is deemed to be "faster" than the system memory 104. Better said, the processor 101 waits less time for an instruction or data element that resides in the cache 102 than for an instruction or data element that resides in the system memory 104. This disparity in waiting time as between the cache 102 and system memory 104 typically arises as a consequence of the cache 102 being implemented with inherently faster memory cells (e.g., SRAM cells) than those of which the system memory is implemented (e.g., DRAM cells).

[0004] Per bit of storage space an SRAM type cache 102 is more expensive than a DRAM type system memory 104. The computing system architecture of Figure 1 therefore attempts to optimize both cost and performance by being designed to store more frequently used instructions and data elements in the cache 102 and less frequently used instructions and data elements in the system memory 104. By storing the more frequently used instructions and data elements in the cache, the processor should endure acceptable "timing penalty hits" in the form of wasted time waiting for instructions/data to be fetched from system memory 104 because a significant percentage of the instructions/data needed by the processor will be found in the cache 102.

[0005] In order to enhance the percentage of "cache hits" (i.e., the instances where a needed instruction or data element is found in the cache 102), notions of "temporal locality" and "spatial locality" come into play. Temporal locality is the notion that a single instruction or data element is apt to be used soon after it has already been used. Spatial locality is the notion that instructions and data elements that are located near each other in memory (i.e., have similar addresses) tend to be used at about the same time. Temporal locality is accounted for by keeping instructions and data elements in cache 102 for at least some period of time after they are first transferred from system memory 104 into cache 102. Spatial locality is accounted for by designing the cache 102 to be loaded with a block of data from system memory 104 (i.e., multiple instructions or data elements) whose content is proximate to (e.g., "surrounds") any single instruction or data element that needs to be fetched from system memory 104.
For example, if an instruction at address X is needed from system memory 104, instead of transferring only the needed instruction from system memory 104 to cache 102, a block of content corresponding to a plurality of addresses that are related to address X is transferred from system memory 104 to cache 102. Figure 2 attempts to depict such a situation by showing that a first contiguous "block" of content 105 (which is referenced through multiple system memory addresses) is loaded into a single cache line 107; and, that a second contiguous "block" of content 106 (which is referenced through a different set of multiple system memory addresses) is loaded into another single cache line 108.

For simplicity, Figure 2 shows the cache 204 as a single structure. Various computing systems are designed with different levels of cache, however. For example, many types of computing systems have two levels of caches (a level one (L1) cache and a level two (L2) cache) where the first level cache (L1) corresponds to less processor waiting time than the second level cache (L2). The L1 cache is supposed to store the most frequently used data elements and instructions while the L2 cache is supposed to store data elements and instructions that are used less frequently than those in L1 cache but more frequently than those in system memory. Traditionally, both cache levels are implemented with a faster memory type as compared to system memory (e.g., both L1 and L2 cache are implemented with SRAM memory cells); however, the L1 cache is integrated onto the same semiconductor die as the processor while the L2 cache is implemented with a different semiconductor die than the processor. As "on chip" cache accesses are faster than "off chip" cache accesses, accesses to the L1 cache correspond to less waiting time for the processor than accesses to the L2 cache.

The memory controller 103 is responsible for taking requests from the processor 101 for data that are not satisfied by the cache, and managing the process of servicing those requests in system memory 104. There may be many different kinds of requests, such as load requests for data that is not present in the cache, and evictions of data from the cache that need to be stored back into memory. Typically, the memory controller is able to pipeline requests, so that many requests may be outstanding, and can be serviced in parallel with a much shorter average latency. The memory controller is responsible for interfacing with the details of a particular memory technology, and isolates the system memory from the processor in a modular fashion. The memory controller may either be integrated with the processor, e.g. on the same die, or may be separated, e.g. in a chipset. The system memory is typically implemented with a specific type of system memory (e.g., EDO RAM, SDRAM, DDR, etc.).
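As a small illustration of the spatial locality behavior described above, a cache fill fetches the entire aligned block containing the missed address rather than a single word. A minimal C sketch follows; the 64-byte line size is an assumed example.

    #include <stdint.h>

    #define LINE_BYTES 64u  /* assumed cache line size for this example */

    /* On a miss at address x, the cache is filled with the aligned
     * block that contains x, not just the word at x itself.           */
    static inline uint64_t block_base(uint64_t x)
    {
        return x & ~(uint64_t)(LINE_BYTES - 1);  /* round down to line start */
    }

For instance, block_base(0x1234) is 0x1200, so the instructions surrounding address 0x1234 arrive together with it.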
Figures

[0011] The present invention is illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which like references indicate similar elements and in which:

[0012] Figure 1 shows a portion of a computing system's architecture;

[0013] Figure 2 shows that a block of content from system memory is loaded into a single cache line;

[0014] Figure 3a shows an improved approach in which aligned blocks of system memory content can be compressed into a single cache line;

[0015] Figure 3b shows an exemplary resulting map of substantive system memory content after compressed cache lines are evicted from cache and stored into system memory;

[0016] Figure 4a shows a portion of a single processor architecture capable of using compressed cache lines;

[0017] Figure 4b shows a compression map that can be utilized by a system memory controller to keep track of those blocks within its system memory that have been compressed;

[0018] Figure 4c shows a memory controller having a compression map cache and compression/decompression logic;

[0019] Figure 4d shows a memory controller having register space for identifying physical system memory allocation for a compression map;

[0020] Figure 5a shows a pair of methods that can be utilized by the memory controller of Figure 4a during a write of a cache line into system memory;

[0021] Figure 5b shows a trio of methods that can be utilized by a memory controller during a read of a cache line from system memory;

[0022] Figure 5c shows a trio of methods related to referring to the contents of a compression map;

[0023] Figure 6a shows a first embodiment of a memory controller;

[0024] Figure 6b shows a second embodiment of a memory controller;

[0025] Figure 6c shows a third embodiment of a memory controller;

[0026] Figure 7a illustrates a traditional memory address implemented in a cache;

[0027] Figure 7b illustrates one embodiment of a memory address implemented in a cache capable of compressing/decompressing cache lines' worth of information;

[0028] Figure 8 illustrates one embodiment of a tag array entry for a cache capable of compressing/decompressing cache lines' worth of information;

[0029] Figure 9 is a block diagram illustrating one embodiment of a cache controller;

[0030] Figure 10 illustrates one embodiment of a set and way selection mechanism in a cache capable of compressing/decompressing cache lines' worth of information;

[0031] Figure 11 illustrates one embodiment of byte selection logic.

Detailed Description

Compression of Cache Lines' Worth of Information

[0032] As a matter of clarification, a cache line is a basic unit of storage space in a cache. For example, in many applications a unique tag and set address are used to specially identify a single cache line within a computing system's cache. A cache line is therefore implemented with specific electrical circuitry elements (e.g., SRAM cells). By contrast, "a cache line's worth of information" or "line of information" is an amount of information (e.g., data elements or instructions) that can fill a cache line. Here, recalling the discussion of Figure 2, the amount of information stored at "block" 105 corresponds to a cache line's worth of information because the content of block 105 fills cache line 107.

[0033] Figure 3a demonstrates an approach that expands upon the notion of spatial locality so as to compress, into a single cache line 307, a pair of aligned system memory 304 blocks 305, 309 that would ordinarily occupy a pair of cache lines
(i.e., the information of a pair of cache lines' worth of information is compressed into a single cache line). A second instance is also observed in Figure 3a in which the content of another pair of aligned memory blocks 306, 310 that would ordinarily occupy a pair of cache lines are compressed together so as to occupy a second single cache line 308. Compression is a technique that reduces the amount of data needed to express information (such as an instruction or a data element) without impacting the substantive content of the message itself (i.e., without eliminating the ability to recapture the "number" used to represent the instruction or data element). The ability to compress a pair of aligned blocks into a single cache line should result in faster computing system performance because the effective size of the cache is increased (and, therefore, the likelihood of needing to incur the longer access latency to a slower cache level or system memory is decreased). Moreover, as described in more detail below, computing system bandwidth can be enhanced by suppressing access to information because it is compressed with other information that has already been accessed.

In an embodiment, referring to Figures 3a and 4a, the processor's cache controller 410 is fitted with compression/decompression logic 411 that compresses two cache lines' worth of information together if: 1) the cache lines' worth of information represent aligned, contiguous blocks of memory; and, 2) the informational content of the pair of cache lines' worth of information is capable of compression into a single cache line. The type of compression employed may take on various forms such as Lempel-Ziv, Wilson-Kaplan, X-Match or perhaps other known or proprietary types of compression.

In an embodiment, to say that companion blocks are aligned means that the lowest address associated with the companion blocks is a multiple of the combined size of the companion blocks. For example, if each cache line's worth of information is 64 bytes, then the base address, N, of two contiguous cache lines' worth of information (i.e., a first at N referred to as the "lower" cache line's worth of information and a second at N+64 referred to as the "higher" or "upper" cache line's worth of information) is divisible by 128 (i.e., the remainder of N/128 is 0). As a further example, referring to Figure 3a, block 305 would be addressable with a base address of N; and, block 309 would be addressable with an address of N+64. For convenience, aligned contiguous cache lines' worth of information are referred to as "companions" of one another.

Thus, in light of the preceding paragraph, a pair of companion cache lines' worth of information are compressed by the compression/decompression logic 411 if their substantive content is capable of compression. Likewise, the compression/decompression logic 411 is capable of decompressing a compressed cache line's worth of information into two separate companion cache lines' worth of information if a write occurs to the content of a cache line that causes the content to no longer be compressible into a single cache line.
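The companion-alignment rule just stated lends itself to simple bit arithmetic. The following C sketch (64-byte lines assumed, as in the example above) checks the alignment condition and locates a line's companion.

    #include <stdint.h>
    #include <stdbool.h>

    #define LINE_BYTES 64u                /* one cache line's worth          */
    #define PAIR_BYTES (2u * LINE_BYTES)  /* combined size of two companions */

    /* True when n is the base of the lower companion: n must be a
     * multiple of the combined size, i.e. n % 128 == 0 for 64-byte
     * lines, exactly as in the example above.                          */
    static inline bool is_lower_companion(uint64_t n)
    {
        return (n % PAIR_BYTES) == 0;
    }

    /* Companion of a line-aligned address: flipping the pair-selection
     * bit maps N to N+64 and N+64 back to N.                           */
    static inline uint64_t companion_of(uint64_t line_addr)
    {
        return line_addr ^ LINE_BYTES;
    }

For block 305 at base address N with N % 128 == 0, companion_of(N) yields N+64, the address of block 309.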
[0038] Once information has been compressed into a single cache line's worth of information, the single cache line's worth of information may be treated as any "normal" uncompressed cache line of information such as: 1) being read/written from/to a cache (including a particular level of cache); 2) being read/written from/to a system memory; and, 3) being handled by any structure designed to transport a cache line's worth of information (such as, to name a few: a front side bus or point-to-point link that transports cache lines' worth of information between a processor and a memory controller that controls access to a system memory; and, in a multiprocessor environment, a pair of processors that share cached information).

As an example of a possible transfer of a cache line's worth of compressed information, referring to Figure 4a, consider a situation where a cache line's worth of information is evicted from the cache 402 (e.g., because it has not been used frequently enough to be deemed worthy of continued storage in the cache 402) and transferred to system memory 404. Here, the cache line's worth of compressed information can be stored in the system memory addressing space of a single block that can store a single cache line's worth of information.

For example, Figure 3b shows a depiction of the utilization of system memory 304, with respect to the substantive content of blocks 305 and 309 of Figure 3a, after the compressed content of cache line 307 has been evicted from the cache 302, 402. Figure 3b shows that upon eviction from cache 302, 402 and storage into system memory 304, 404 the content of cache line 307 is stored so as to occupy only memory block 305. This is in stark contrast to the utilization of the system memory that existed prior to compression, shown in Figure 3a, for storing the same amount of information.

Note that even though two cache lines' worth of data may be stored in the space normally occupied by a single cache line's worth of data, when stored in system memory in compacted form, this does not imply an effective increase in system physical memory capacity, as is true for compressed caches. This is because in system memory, the address space is not compacted. Compacting the address space requires modifications to page tables and thus it requires operating system support, which the schemes presented here are capable of avoiding entirely. That is, after compression only block 305 is needed to store the informational content of that which was earlier stored in blocks 305 and 309 prior to compression.
Figure 3b also demonstrates that, upon eviction, the compressed contents of cache line 308 are stored in system memory 304 so as to only occupy block 306 even though blocks 306 and 310 were used to store the same information prior to compression.

If one of the "compressed content" blocks 305, 306 of Figure 3b is needed again by the processor 401, it is read from system memory 304, 404 by memory controller 403a as a single cache line's worth of information and is transferred (again as a single cache line's worth of information) from memory controller 403a to processor 401 and written (again as a single cache line's worth of information) into the processor's cache 402.

Memory Controller

[0042] In the context of single processor environments the memory controller may behave largely without any recognition or cognizance of the compression/decompression activity taking place. That is, for example, the processor 401 may "keep track of" and manipulate those cache lines worth of information that are compressed and those that are not compressed; and, by contrast, the memory controller is designed to simply read and write blocks of data in accordance with identifiers or labels assigned by the processor 401.

However, a more sophisticated memory controller 403a that takes into account which blocks of system memory are used to store content that corresponds to compressed cache lines worth of information (and/or which blocks of system memory are used to store content that corresponds to non-compressed cache lines worth of information) may be able to reduce the demand for system memory accesses so as to make the system memory's usage more efficient within the computing system. For example, by refusing to read a second block of data because its substantive content has just been read from a compressed, first block of data, the demand that is exercised on the system memory is effectively reduced.

As a more detailed example, consider a multiprocessor environment where the processors are capable of compressing information into their cache lines. Here, a first processor (e.g., processor 401 in Figure 4a) may compress information into a cache line and then subsequently evict it from its cache 402 so that it is stored into system memory 404. If a second processor in the multiprocessor system (not shown in Figure 4a), without knowledge of the first processor's compression activity, desires to read from system memory 404 information stored in both companions of the compressed information, the memory controller 403a may be designed to be "smart enough" to only read the compressed cache line's worth of information in response to receiving a pair of read requests from the second processor (i.e., a first request for the first companion and a second request for the second companion). Here, the compressed cache line's worth of information will be sufficient to satisfy both requests made by the second processor.

Compression Map

[0045] Figure 4b provides a trio of embodiments 412a, 412b, 412c for a body of information, referred to as a compression map 412, that may be used by the memory controller 403a to recognize the existence of compressed information within its system memory 404.
Firstly, referring to "basic embodiment" 412a, note that the compression map 412a may be stored as a bit map in system memory 404 that identifies, for each block of information in system memory 404, whether that block's corresponding cache line's worth of information is currently stored in system memory 404 in a compressed format or in a non-compressed format. In a typical implementation, an address column is not actually included in the compression map (e.g., in cases where the map covers the whole memory). Figure 4b shows an address column in each of embodiments 412a, 412b, 412c so that the reader can easily understand a compression map's organization and structure. Specifically, some bits have been given an active value "1" (while others have been given an inactive value "0") in the context of examples that are based upon the system memory shown in Figure 3b and that are discussed immediately below. As such, the compression map may be implemented as a data structure organized to have specific values at locations (e.g., data fields) that correspond to specific system memory blocks.

Compression map embodiment 412a of Figure 4b is depicted so as to apply to the system memory observed in Figure 3b. Specifically, recall that the system memory of Figure 3b stores information in block 305 that corresponds to the compression of information that existed in blocks 305 and 309 prior to compression. Because the information of blocks 305 and 309 of Figure 3a have been compressed together, the compression map 412a of Figure 4b provides an indication (a "1") for each of these blocks 305, 309. Likewise, because the information of blocks 306 and 310 of Figure 3a have been compressed together (into block 306), the compression map 412a of Figure 4b provides an indication for each of these blocks 306, 310 as well. Referring to Figure 4a, note that the compression map 412a may be stored in the system memory itself 404.

A "more elaborate" compression map embodiment 412b of Figure 4b includes bitmap information as described above with respect to embodiment 412a as well as additional information in the form of: 1) information (e.g., in select cases such as instance 414) that provides the substantive content of a cache line's worth of information; 2) indication(s) 415 of the type of compression used for each cache line's worth of information that is stored in a compressed format. The former additional information 414 corresponds to an extreme form of compression that may be applied: a) to the content of system memory blocks having non-compressed cache lines' worth of information; and/or b) "on top of" those cache lines' worth of information that are already stored in system memory in a compressed format (embodiment 412b indicates a single instance of the former). For example, if the cache line's worth of information that is stored in a particular system memory block is "all zeroes", then a single "zero" (e.g., zero 414) may be stored at the particular block's location in the compression map 412b. Similar indications may be used for any type of constant value (e.g., "all 1s"). Here, the memory controller would be expected to include logic (such as summation logic; e.g., the sum of all zeroes will be zero) that identifies those cache lines having a constant value. The latter form of additional information 415 indicates a specific type of compression. Here, recall that different types of compression may be employed (e.g., Lempel-Ziv, Wilson-Kaplan, X-Match, etc.).
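Before turning to the remaining embodiments, the flat bit vector of basic embodiment 412a is straightforward to model in software. The sketch below is an illustration only; the bytearray backing store, the block size, and the class name are assumptions, not details from the disclosure:

```python
class CompressionMap:
    """One bit per system memory block, set when the block's cache
    line's worth of information is stored in a compressed format."""

    def __init__(self, memory_bytes: int, block_size: int = 64):
        self.block_size = block_size
        nblocks = memory_bytes // block_size
        self.bits = bytearray((nblocks + 7) // 8)

    def _locate(self, addr: int):
        block = addr // self.block_size
        return block >> 3, 1 << (block & 7)

    def mark(self, addr: int, compressed: bool) -> None:
        byte, mask = self._locate(addr)
        if compressed:
            self.bits[byte] |= mask
        else:
            self.bits[byte] &= ~mask

    def is_compressed(self, addr: int) -> bool:
        byte, mask = self._locate(addr)
        return bool(self.bits[byte] & mask)

# Mirroring Figure 3b: blocks compressed together have both bits set.
cmap = CompressionMap(memory_bytes=1 << 20)
cmap.mark(0x1000, True)
cmap.mark(0x1040, True)
assert cmap.is_compressed(0x1040)
```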
Not only may compression of only a single type exist within any one particular computing system (e.g., a single system that only uses Lempel-Ziv); but also, embodiments may be crafted where a single system is capable of implementing different types of compression (e.g., a single system that can use any of the Lempel-Ziv, Wilson-Kaplan, X-Match and perhaps other compression algorithms).

Both of the compression map embodiments 412a, 412b show a bit that provides compressed/uncompressed status for each aligned block in system memory that can store a cache line's worth of information. By contrast, embodiment 412c uses only one bit to represent the compressed/uncompressed status of each pair of aligned system memory blocks. Here it is worthy to note that compression ratios other than 2:1 may be employed (such as 4:1); and that the size of a compression map that is implemented according to the approach of embodiment 412c will become smaller as the compression ratio increases. That is, for 2:1 compression, a bit is used to represent every aligned pair of memory blocks; while, if a 4:1 compression ratio were used, there would be a bit for every group of four aligned memory blocks. Note also that the more elaborate information of embodiment 412b can be added to embodiment 412c.

Alternate embodiments of the compression map could use selective, hierarchical schemes, rather than a flat bit vector. A flat bit vector must have one bit for every block in memory. Compression may be applied selectively to only certain regions of memory, and thus the compression map could be made to cover only those regions of memory that are subject to compression. Likewise, compression may actually have occurred (so far) in only a subset of memory regions, even though additional regions may be subject to compression. The various sections of the compression map that cover the regions which have been fully or partially compressed can be linked together as a linked list, or worked into a hierarchy of data structures that cover progressively smaller regions and sub-regions of memory.

Recalling that a condition for the compression of a pair of companions is that the substantive content of the companions "be compressible" into a single cache line's worth of information, and owing to the different mathematical techniques employed across different compression schemes, a particular compression technique may regard a pair of particular companions to be compressible while other compression schemes may not regard the same companions to be compressible (e.g., the substantive content of a pair of companions may be compressible under Lempel-Ziv but not under Wilson-Kaplan or X-Match). As such, more companions are likely to be compressed in a computing system that "offers" different types of compression as compared to a computing system that offers only a single type of compression. The compression type indication 415 of the enhanced bit map embodiment 412b of Figure 4b can be used in such a system (noting that it indicates compression type "A" was used for blocks 305, 309 and compression type "B" was used for blocks 306, 310). Therefore, compression/decompression logic 411 of Figure 4a should be understood to be capable of performing singular or multiple types of compression depending on the particular embodiment. Also, note from Figure 4a that the compression map may be stored in system memory 404.
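The per-group indexing of embodiment 412c, and the way the map shrinks as the compression ratio grows, can be made concrete with a short sketch (illustrative only; the function name and default sizes are assumptions):

```python
def map_bit_index(addr: int, block_size: int = 64, ratio: int = 2) -> int:
    # Embodiment 412c keeps one bit per aligned group of `ratio` blocks,
    # so a 4:1 ratio yields a map one quarter the size of a per-block map.
    return addr // (block_size * ratio)

assert map_bit_index(0x1000) == map_bit_index(0x1040)                    # 2:1 companions share a bit
assert map_bit_index(0x1000, ratio=4) == map_bit_index(0x10C0, ratio=4)  # 4:1 group shares a bit
```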
In an embodiment, the memory controller 403a is designed to fetch a portion of the compression map 412 from system memory 404 at an appropriate moment to check upon the compression/decompression status of one or more system memory blocks. In order to reduce the efficiency penalty associated with accessing system memory 404 in order to fetch a portion of the compression map 412, note also that the memory controller 403a is designed to include a compression map cache 413. The compression map cache 413 contains one or more recently fetched portions of the compression map. Similar to a normal cache, compression map information may be continuously updated in the compression map cache 413 until evicted to system memory 404. As described in more detail below with respect to Figures 5a through 5c, the compression map cache 413 is referred to when compression map information is desired. If the desired information is not found in the compression map cache 413, the information is fetched from the compression map 412 that resides in system memory 404.

[0056] Figure 4c demonstrates that a memory controller 403b configured to work with a compression map 412 may be instrumented not only in a computing system having a single processor 420, but also in a computing system having one or more processors (such as processor 420 and perhaps other processors not shown in Figure 4c) that do not possess the ability to compress/decompress their cached information. Thus, the memory controller 403b of Figure 4c is capable of being the main (and perhaps only) component in the computing system that is conscious of any compression activity. The depiction of Figure 4c therefore, in contrast to Figure 4a, shows that the memory controller 403b itself can be retrofitted with the appropriate compression/decompression logic 416 used for compressing and decompressing cache lines (noting also that processor 420 is devoid of such logic). The compression/decompression logic 416 may support one or more types of compression/decompression techniques. The memory controller 403b may further include a compression map cache 413 as described above in reference to Figure 4a. In working with processor(s) that do not maintain any cognizance of compression/decompression activity, the memory controller 403b presents/receives uncompressed cache lines worth of data to/from the processor(s). Specific methodologies that may be executed by a memory controller 403b that is operating in an environment where the processor(s) can't operate with compressed cache lines are described in more detail further below.

Figure 4d is meant to convey that the compression map 412 may be stored within a "physical" contiguous addressing range of the system memory 404 rather than being implemented in a "virtual" fashion across unrelated memory locations (with, for example, link listing techniques that are managed in software). By implementing the compression map 412 across a physical addressing space, the Operating System (OS) may operate without awareness of the compression activity; which, in turn, saves the OS from being bogged down with executing instructions for managing or recognizing which locations of system memory 404 are to be used for the compression map 412. As such, a significant degree of overhead is avoided from being imparted upon the OS.
By configuring the compression map to be implemented across a physical range of the system memory's addressing space, the compression map should also be capable of being managed and controlled by the computing system's hardware rather than its operating system. As discussed above, this should "free up" the OS so as to be substantially unburdened with overhead relating to the compression map.

In an embodiment, the Basic Input Output System (BIOS) 430 indicates what specific physical address range of the system memory 404 is to be used for the compression map 412 by causing a pair of registers 431, 432 to be written into. For example, a first address might be stored into register 431 that defines the starting address of the compression map; and, a second address might be stored into register 432 that defines the ending address of the compression map. Alternatively, the size of the compression map might be stored into one of registers 431, 432 while a starting or ending address is stored in the other of registers 431, 432 (noting that the size of the compression map might vary depending on whether a 2:1, 4:1 or another compression ratio is employed). Subsequent to the loading of registers 431 and 432 the hardware is capable of refraining from storing non-compression-map information into the addressing space identified through registers 431 and 432; and, likewise, directing the compression map only toward the same addressing space. The registers 431, 432 may alternatively be located in a processor. If the compression map is physically distributed across multiple local memories, or a compression map scheme is used that does not require each portion of the compression map to reside in physically contiguous memory, more than one pair of registers may be used to communicate from the BIOS to the hardware where the compression map resides.

It is also worthwhile to note that storing the compression map across a contiguous physical address range that is hidden from, and not paged by, the operating system should permit the compression map to be referenced using physical addresses without having to handle changes in the virtual address and page faults that may occur as the operating system swaps pages out of physical memory and into virtual memory and back again. This is another way in which this scheme avoids the need for OS support, and is transparent to software.
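As a rough illustration of the register scheme just described, the sketch below computes the map's footprint from the memory size and compression ratio and derives the start/end values that firmware might program. All names, the base address, and the start/end variant are assumptions for illustration; nothing here is taken from the disclosure itself:

```python
def size_compression_map(mem_bytes: int, block_size: int = 64, ratio: int = 2) -> int:
    # One bit per aligned group of `ratio` blocks, rounded up to whole bytes.
    map_bits = mem_bytes // (block_size * ratio)
    return (map_bits + 7) // 8

def program_map_registers(mem_bytes: int, map_base: int = 0x0010_0000):
    map_bytes = size_compression_map(mem_bytes)
    reg_431 = map_base                  # starting physical address of the map
    reg_432 = map_base + map_bytes - 1  # ending physical address of the map
    return reg_431, reg_432

start, end = program_map_registers(1 << 30)   # 1 GiB of system memory
assert end - start + 1 == 1 << 20             # at 2:1, the map occupies 1 MiB
```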
Compression Map Uses

[0063] As discussed above, the compression map represents whether particular cache lines' worth of information stored in main memory are compressed or uncompressed. In various embodiments it is updated with each write to memory that changes the compression state of that memory. A compression map can be used for at least the following three purposes: 1) to effectively change the target address of an upper cache line's worth of information that has been compressed in a non-duplicative scheme; 2) to decide whether a cache line's worth of information that has just been read from system memory should be decompressed or not by a memory controller that performs decompression; and, 3) to suppress a system memory access if requests for separate companions are recognized and the companions have been compressed. Each of these is discussed more fully below in the context of writes to system memory and reads from system memory.

System Memory Writes

[0064] Figure 5a shows a pair of memory controller methods 551, 552 for writing a cache line's worth of information into a block of system memory. Each of the methods 551, 552 of Figure 5a invokes a compression map. According to the first methodology 551, a compressed cache line's worth of information is received by the memory controller (e.g., as sent from a processor) 501. The compressed cache line's worth of information is presumed to be identified to the memory controller as being in a compressed format (e.g., with a set bit in a control header or an activated line). In response to the reception of the compressed cache line's worth of information, the memory controller updates 502 the compression map to reflect that the received cache line's worth of information is compressed. Any of embodiments 412a-412c of Figure 4b or variants thereof can be used to implement the compression map.

In order to perform the update 502, referring to Figure 4a, the memory controller 403a refers to the compression map cache 413. If the section of the compression map that is correlated with the system memory block to which the received compressed cache line's worth of information is associated resides within the compression map cache 413, then only the compression map cache 413 is updated (so as to avoid accessing the compression map 412 in system memory 404). If the appropriate portion of the compression map is not within the compression map cache 413, the appropriate portion is fetched from system memory 404 and updated 502.

Note also that in an embodiment (such as that depicted in Figure 4c) where the memory controller 403b is coupled to a processor that does not use cache lines with compressed information, process 501 would be slightly modified such that: 1) only an uncompressed cache line's worth of information would be received at box 501; 2) between boxes 501 and 502 the memory controller 403b would determine that the received cache line's worth of information is compressible with its companion (e.g., by referring to the substantive content of its companion in an inbound or outbound queue of the memory controller 403b); and, 3) prior to execution of box 503 the memory controller 403b would compress the received cache line's worth of information with its companion.

Recall that two companion cache lines worth of information correspond to a pair of aligned blocks of address space in main memory. Here, the combination of a pair of aligned blocks can be viewed as a larger "macro block" of memory space; where one companion occupies the "lower half" of the macro block, and the other occupies the "upper half" of the macro block, when they are each uncompressed. When the companions are compressed, the substantive content of the entire macro block can be referenced with the addressing information used for only one of the smaller companion blocks (e.g., the addressing information used for the lower half of the macro block). When uncompressed, the upper and lower halves of the macro block are separately addressable.
[0068] For example, referring briefly back to Figures 3a and 3b, the combination of blocks 305 and 309 can be viewed as a macro block of information where block 305 corresponds to the "lower half" of the macro block (because it is referenced using the lower addressing space of the pair of blocks 305, 309) and block 309 corresponds to the "upper half" of the macro block (because it is referenced using the higher addressing space of the pair of blocks 305, 309). When uncompressed, "lower half" 305 is separately addressable and "upper half" 309 is separately addressable. When compressed, the combined content of both halves can be accessed by addressing lower half 305.

The memory controller should be designed to recognize, for any uncompressed cache line's worth of information, which half of a macro block it is supposed to occupy and which half of a macro block its corresponding companion is supposed to occupy. For example, referring briefly back to Figures 3b and 4b, the memory controller would be designed to recognize that an uncompressed cache line's worth of information that is addressed to upper half 309 is the companion line of an uncompressed cache line's worth of information that is addressed to lower half 305. Such recognition is straightforward based upon the mathematics of the alignment scheme that defines which blocks are companions of one another. For simplicity, a lower half of a macro block will hereinafter be referred to as a lower block and a higher half of a macro block will be referred to as a higher block.

For 2:1 compression ratios, a pair of embodiments are possible as to the usage of the upper and lower blocks of a macro block when its substantive content is compressed. Referring back to Figure 5a, in a first embodiment referred to as "non-duplication", irrespective of whether a compressed cache line of information to be written into system memory was compressed by the memory controller or a processor, the write 503 of a compressed cache line's worth of information involves a write to the address space of only the lower block of the corresponding macro block. Figures 3a and 3b illustrate a "non-duplication" approach because, as originally discussed, if blocks 305 and 309 of Figure 3a are compressed together, only the lower block 305 of Figure 3b is written to (of course, alternatively, only the higher block could be written to). According to a "non-duplication" approach, as described in more detail below with respect to methodology 555 of Figure 5b, the memory controller refers to the compression map prior to a read because a request (e.g., by a system component that is unaware of any compression activity) for a higher block that has been compressed into a lower block can only be satisfied by reading from the lower block (i.e., the target specified in the request is different than the location in system memory from where a read is performed to satisfy the request).

[0072] In an alternative second embodiment, referred to as "duplication", the write 503 of a compressed cache line involves a write to the address space of all the blocks among the applicable companion set (e.g., both the lower and higher blocks among the applicable companion set for 2:1 compression). For example, for a 2:1 compression approach, if blocks 305 and 309 of Figure 3a are compressed together, both blocks 305 and 309 of Figure 3b are written to with the same compressed information. The duplication approach allows the memory controller to avoid having to retrieve information from a lower compressed block of information when a request for the upper block's information is received (as described just above with respect to the "non-duplication" embodiment). As such, the compression map does not need to be referred to for requests for "upper" blocks of information. If Figure 3b were to be modified to reflect a duplicative approach, upper block 309 would be shaded and it would be further understood that the content of upper block 309 is the same compressed content as that stored in lower block 305. Likewise, upper block 310 would be shaded and it would be further understood that the content of upper block 310 is the same compressed content as that stored in lower block 306.
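The write 503 under the two embodiments might be modeled as follows. This is a sketch only: the dict-backed memory, the map represented as a dict of per-block flags, and the `DUPLICATION` switch are assumptions for illustration:

```python
DUPLICATION = False                    # select "duplication" vs "non-duplication"
LINE = 64                              # assumed cache line size in bytes

def lower(addr: int) -> int:
    return addr & ~(2 * LINE - 1)      # lower block of the macro block

def companion(addr: int) -> int:
    return addr ^ LINE                 # the other half of the macro block

def write_compressed(mem: dict, cmap: dict, addr: int, payload) -> None:
    # Method 551 / write 503: mark both companions compressed, write the
    # lower block; under "duplication" the upper block gets the same payload,
    # so upper-block reads need no redirection.
    lo = lower(addr)
    cmap[lo] = cmap[companion(lo)] = True
    mem[lo] = payload
    if DUPLICATION:
        mem[companion(lo)] = payload

def write_uncompressed(mem: dict, cmap: dict, addr: int, payload) -> None:
    # Method 552: an uncompressible line is marked "0" in the map and
    # written to its own target block 506.
    cmap[addr] = False
    mem[addr] = payload
```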
[0074] In the second memory controller write methodology 552 of Figure 5a an uncompressed cache line is received 504 from a processor that is capable of performing compression. As such, the received, uncompressed cache line is deemed "uncompressible" for whatever reason. The compression map is therefore updated 505 (e.g., by writing a "0" in the compression map at a location that represents the uncompressed cache line's corresponding block) and the cache line is written into system memory 506.

Write methodology 552 could also be slightly modified to represent a write process in systems where the memory controller performs compression/decompression (such as a system as described in Figure 4c where the processor does not support compression). As such, unlike the immediately preceding discussion, it is unknown whether the received uncompressed cache line is compressible or uncompressible. In such a case, between boxes 504 and 505, the compression/decompression logic 416 of the memory controller decides that the received cache line is not compressible (e.g., by analyzing its content along with the content of its companion as found in an input queue or output queue of the memory controller). If it were deemed compressible, it would be compressed with its companion and write 506 would be a write of compressed information.

System Memory Reads

[0076] Figure 5b shows a trio of memory controller read methods 553, 554, 555. The first read method embodiment 553 is directed to implementations, such as that depicted in Figure 4c, where the memory controller performs the compression and decompression of cache lines and the processor(s) with whom the memory controller communicates do not use cache lines that support compressed information. As such, for any cache line's worth of information that is read 507 from system memory, the memory controller refers 508 to the compression map to see if the information being read is compressed (note that the reference 508 to the compression map is shown as being after the read 507 but may alternatively be performed in parallel with and/or prior to the read 507). If the read cache line's worth of information is compressed the memory controller decompresses it 509, 510. If the read cache line's worth of information is not compressed the memory controller does not attempt to decompress it 509, 511.
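Method 553's read path is essentially a map-gated decompression step. A minimal sketch, assuming the same dict-backed memory and map as above and a caller-supplied codec callback standing in for logic 416:

```python
def read_line(mem: dict, cmap: dict, addr: int, decompress) -> object:
    data = mem[addr]                 # read 507 from system memory
    if cmap.get(addr, False):        # reference 508 / inquiry 509
        return decompress(data)      # box 510: hand back the companions' content
    return data                      # box 511: pass the line through untouched
```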
If the memory controller happens to reside in a computing system having components that recognize the existence of compressed cache lines' worth of information, then the memory controller may be implemented without compression/decompression logic (e.g., the environment of Figure 4a is applicable rather than the environment of Figure 4c). If so, the memory controller should be designed so as to simply signify whether the read information is compressed or decompressed (e.g., by adjusting a value within a header that is appended to the cache line's worth of information) rather than actually perform decompression. To represent a read process for such a memory controller, box 510 of methodology 553 of Figure 5b should correspond to providing an indication (e.g., in a header or activated line) that the read information is compressed and box 511 should correspond to providing an indication that the read information is not compressed.

[0078] Methodologies 554 and 555 may be performed by a memory controller that has compression/decompression logic or a memory controller that does not have compression/decompression logic. The second read methodology 554, which has already been briefly alluded to, involves the memory controller being designed to be "smart enough" to avoid making a second read to system memory for a companion of an already read compressed cache line's worth of information. According to this methodology, if the memory controller recognizes that there are pending read requests for cache lines' worth of information that are companions of one another, the compression map is referred to 512, 514. If the compression map reveals that the companions are compressed together the memory controller only reads 518 the compressed cache line from system memory in order to satisfy both requests. If the compression map reveals that the companions are not compressed together, the memory controller reads both cache lines' worth of information (for a 2:1 compression scheme) separately 516, 517 from their corresponding lower and upper blocks of information in order to satisfy the requests. If there are no pending read requests for cache lines' worth of information that are companions of one another the memory controller behaves like a normal memory controller and simply performs a separate read 513 from system memory to satisfy each request.

It is worthwhile to note that the term "pending" request means that the physical memory component has not, as yet, actually responded to the memory controller that issued the request. However, it is possible for the memory controller to suppress a second request even if the physical memory component has already responded to the first (i.e., the first request is no longer "pending"). For example, the memory controller could be designed to suppress any second request for compressed information provided the data for the second request can be provided (e.g., from the memory controller) from the results of the first request. Therefore, the ability to suppress requests can be extended to situations beyond those described by methodology 554 of Figure 5b.
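The companion-coalescing logic of methodology 554 might look like the following. This is a sketch under the same assumptions as the earlier snippets; the flat list of pending addresses standing in for the request queue is itself an assumption:

```python
def service_pending_reads(mem: dict, cmap: dict, pending: list) -> dict:
    results, done = {}, set()
    for addr in pending:
        if addr in done:
            continue
        lo = addr & ~(2 * 64 - 1)           # lower block of the macro block
        other = addr ^ 64                   # the companion address
        if other in pending and cmap.get(lo, False):
            # Inquiry 512/514: companions compressed together -> one read 518
            data = mem[lo]
            results[addr] = results[other] = data
            done.update((addr, other))
        else:
            results[addr] = mem[addr]       # ordinary separate read 513/516/517
            done.add(addr)
    return results
```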
In cases where the memory controller is designed to perform decompression, the memory controller may perform both of read methods 553 and 554 together in a continuous flow; where: 1) methodology 554 is largely performed prior to the read, 2) methodology 553 is largely performed after the read, and 3) any of reads 518, 517, 513 of methodology 554 also correspond to read 507 so as to "connect" methodologies 553, 554 together. If the methodologies 553, 554 are connected in this fashion note that reference 508 may be "skipped" (i.e., not performed) if reference to the compression map 514 was made prior to the memory read. This is so because the answer to inquiry 509 that methodology 553 indicates is to be performed after the read can be gleaned from reference 514 which is made prior to the read.

Methodology 555 corresponds to a read methodology that accounts for the "non-duplication" write approach discussed above with respect to Figure 5a. Here, the compression map is referred to if the target address of the requested cache line's worth of information corresponds to the upper block of the companion pair 519, 521. If the requested cache line's worth of information has been compressed, the compressed cache line is read from the lower block 522, 520. If the requested cache line's worth of information has not been compressed, the uncompressed requested cache line's worth of information is read from the target block specified in the read request. If the target block specified in the read request is not the upper block, the memory controller simply reads a compressed or uncompressed cache line's worth of information from the system memory with addressing that corresponds to the lower block 519, 520 (i.e., no reference to the compression map is needed).

Similar to methodology 554, methodology 555 may be combined with methodology 553 for a memory read performed by a memory controller that also performs decompression. Here, either of reads 523 and 520 of method 555 can be viewed as read 507 of method 553 so as to connect the two methodologies 555, 553 together. If the execution of method 555 flows through compression map reference 521 prior to memory read 507, compression map reference 508 can be skipped because the answer to inquiry 509 can be gleaned from reference 521. Use of a duplicative scheme as discussed with respect to Figure 5a removes the need for methodology 555 because no change of target address is effected with a "yes" answer to inquiry 522.
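Under non-duplication, methodology 555 amounts to a conditional address redirection. A sketch, with the same assumed structures as above:

```python
def effective_read_address(cmap: dict, addr: int) -> int:
    # Method 555: an upper-block request whose pair is compressed must be
    # satisfied from the lower block, where the compressed content lives.
    lo = addr & ~(2 * 64 - 1)
    if addr != lo and cmap.get(lo, False):   # reference 521, inquiry 522
        return lo                            # redirected read 520
    return addr                              # read the target block as requested
```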
Methodologies 557,558 correspond to methodologies that may be performed by the memory controller in order to mitigate the timing penalty hit associated with a compression map cache miss for either of the compression map references 514,521 of Figure 5b that precede a corresponding memory read 516-518,520, 523. Both of methodologies 557,558 apply to a memory controller that performs decompression (e. g., because it works with a processor that does not use compressed cache lines as depicted in Figure 4c) and therefore performs methodology 553 of Figure 5b for all system memory reads of a cache line. Methodology 557 shows that the memory controller may be designed to perform the reference to the compression map cache 531 that occurs prior to a memory read 514,521 in the process of satisfying a second memory read request in a time period that overlaps with the read 530 of a cache line's worth of information from system memory to satisfy a first memory read request. That is, performing the pre memory read cache lookup 531 and the memory read 530 of different requests with some degree of parallelism should help mitigate the timing penalty hit if a cache lookup 531 turns out to be a miss. Here, the degree of temporal <Desc/Clms Page number 29> overlap (e. g., partial or otherwise) between the memory read and the cache lookup may vary depending on implementation. [0088] In the particular case of a pipelined memory controller and system memory (so as to be capable of servicing multiple system memory read requests in parallel), the read of a cache line's worth of information 530 to service a first request may continue in parallel with the read of compression map information 532 that is needed if the compression map lookup 531 is a miss. Methodology 557 shows such a situation in both flow chart form (subscript"1"in labels 530,531, 532) and Gantt chart form (subscript"2"in labels 530,531, 532). Methodology 558 is applicable to the"non-duplicated" embodiment discussed above with respect to Figure 5a. It shows that prediction (either"compressed"or"uncompressed") may be used in the case of a cache miss in performing references 514,521 ; and, that the subsequent reference to the compression map 508 to check if decompression is needed is used to check the validity of the prediction. According to methodology 558, if the cache lookup results in a miss 532, the state of the requested cache line is predicted to be compressed or uncompressed. In a first embodiment, the state is conservatively predicted to be uncompressed. In another embodiment the recent history of the compression map's content is used as a basis for predicting a compressed state or an uncompressed state. The cache line is then fetched in accordance with the prediction. For example, if the requested cache line's worth of information corresponds to an upper block and is predicted to be in a compressed state, a cache line's worth of information is read from the address of the lower block 533. Contrarily, if the predicted state of the cache line's worth of information is uncompressed, a cache line's worth of information is read 533 from the address of the upper block. The appropriate portion of the <Desc/Clms Page number 30> compression map is then fetched from system memory 534 (because miss 532 indicates that the compression map does not contain information for the applicable cache line's worth of information). The proper compression map information is then checked to see if the prediction was correct 535. 
If so, the remaining read request processes are performed. In a further embodiment, a compression map cache update for another request may occur after execution of inquiry 532 but before the execution of box 534. If so, box 534 may instead correspond to a"re-look"into the compression map; and, if a hit occurs, a fetch to system memory for compression map information may be eliminated altogether. Memory Controller Embodiments [0091] Figures 6a through 6c show various memory controller embodiments 603a, 603b, 603c; where, each memory controller embodiment 603 includes a compression map cache 613a, 613b, 613c. Embodiment 613a does not include any compression or decompression logic circuitry. Embodiment 613b includes decompression circuitry 616b. Embodiment 613c includes compression logic circuitry 616d and decompression logic circuitry 616c. For each of the embodiments 613a, 603b, 603c, the bus/point-to-point link interface (s) 601 correspond to an interface of the memory controller where: 1) requests for memory reads and memory writes are received; 2) responses to the requests are provided. Because requests may conceivably be received from and responded to over a bus (e. g. , a front side multidrop bus); and/or, received from and responded over a point-to-point link (e. g. , a first inbound link that receives requests and a second outbound link that sends responses), interface 601 may be interface to a bus and/or point-to-point link. The request/response queues 602 of each embodiment 603a, 603b, 603c queue requests in the inbound direction (e. g. , in a first, <Desc/Clms Page number 31> request queue). The scheduler logic circuitry 623 of each embodiment 603a, 603b, 603c schedules the servicing of these requests. The memory request queue 604 of each embodiment 603a, 603b, 603c queues requests that have been scheduled by the scheduler logic circuitry 623. The memory interface 605 of each embodiment 603a, 603b, 603c is responsible for reading/writing information from/to the particular type of memory that the memory controller is coupled to. The request/response queues 602 of each embodiment 603a, 603b, 603c also queue responses to requests in the outbound direction (e. g. , in a second, response queue). In various embodiments, the updates or references 502,505, 514,521 discussed above may be performed by the scheduler logic circuitry (or from some other appropriate location). For each of embodiments 603a, 603b, 603c, input 612 to the compression map cache 613 can be viewed in a first instance as an input that supplies compression map information from the external memory to the compression map (e. g. , in the case of a compression map cache miss). Moreover, input 612 can be viewed in a second instance as the reference to the compression map information that is performed in association with a read of a cache line's worth of information from system memory. [0094] Here, recall from the above discussion of methodology 553 of Figure 5b that if the memory controller is capable of performing decompression-e. g. , embodiments 603b, 603c apply-the compression map is referred to 508. If the read data is compressed, multiplexer 618 selects the output of the decompression logic circuitry 616b, 616c (noting that the input to the decompression logic circuitry is along a data path output of the memory interface (s) 605b, 605c). 
If the read data is not compressed, the multiplexer selects a data path that flows from the memory interface 605b, 605c without the decompression logic circuitry being invoked along the way. <Desc/Clms Page number 32> Figure 6c shows an embodiment that includes compression logic circuitry 616d as well as decompression logic circuitry 616c. The compression logic circuitry 616d is shown coupled to a memory request queue 604c. As such, any compressible companion lines worth of information that are observed (or referenced) in the memory request queue 604c can be compressed together before being written into system memory. Line 617 indicates that, additionally, any cache line's worth of information waiting to be written into system memory may be compressed with its companion even if its companion is located (or referenced) in a request queue or a response queue. Additionally or in the alternative, compression logic circuitry may be coupled to the request/response queues 602b, 603c. Cache Capable of Compressing/Decompressing Information [0096] Figure 7A illustrates an exemplary memory address implemented in an traditional cache. In a traditional cache, an address is divided according to tag, set and offset components. The set component is used to select one of the sets of lines. Similarly, the offset component is the low order bits of the address that are used to select bytes within a line. Figure 7B illustrates one embodiment of a memory address implemented for lookup in a cache capable of working with compressed information (hereinafter a"compressed cache"). Figure 7B shows the implementation of a companion bit used to map companion lines of information into the same set. The companion bit is used in instances where a line of information is not compressed. Accordingly, if a line of information is not compressed, the companion bit indicates which of the two compressed companion lines of information are to be used. In one embodiment, the window of address bits that are used for set selection is shifted to the left by one so that the companion bit lies between the set selection and byte offset bits. In this way, companion <Desc/Clms Page number 33> lines map to the same cache set since the companion bit and set selection bits do not overlap. The companion bit, which now is no longer part of the set selection bits, becomes part of the tag, though the actual tag size does not increase. In a traditional uncompressed cache, the companion bit is a part of the address and is used in set selection to determine whether an address hashes to an odd or even cache set. Figure 8 illustrates one embodiment of a tag array entry for a compressed cache. The tag array entries include the companion bit (e. g., as part of the address tag bits) and a compression bit. The compression bit causes the compressed cache tag to be one bit larger than a traditional uncompressed cache's tag. The compression bit indicates whether a line of information is compressed. Particularly, the compression bit specifies how to deal with the companion bit. If the compression bit indicates a line of information is compressed, the companion bit is treated as a part of the offset because the line is a compressed pair. If the compression bit indicates no compression, the companion bit is considered as a part of the tag array and ignored as a part of the offset. Figure 9 is a block diagram illustrating one embodiment of cache controller 904. 
Cache controller 904 includes set and way selection logic 910, byte selection logic 920 and compression logic 930. Set and way selection logic 910 is used to select cache lines within a compressed cache. Figure 10 illustrates one embodiment of set and way selection logic 910 in a compressed cache. Referring to Figure 10, set and way selection logic 910 includes tag comparison logic 1010 that receives input from a tag array to select a cache line based upon a received address. The tag comparison logic 1010 takes into account whether a cache line holds compressed data. Because cache lines can hold a variable data size, tag comparison logic 1010 is also variable length, depending on whether a <Desc/Clms Page number 34> particular line is compressed or not. Therefore, the tag match takes into account the compression bit. When compressible by at least 2: 1, the two sectors of each line are stored in a single physical cache line (e. g. , in one way). It is important to note that this differs from traditional sectored cache designs in that different logical sectors of a given logical line may be stored simultaneously in different ways when uncompressed. [0101] According to Figure 9, byte selection logic 920 selects the addressed datum within a line. According to one embodiment, byte selection logic 920 depends on the compression bit. Figure 11 illustrates one embodiment of byte selection logic 920. Byte selection logic 920 includes a decompressor 1110 to decompress a selected cache line if necessary. An input multiplexer selects between a decompressed cache line's worth of information and an uncompressed cache line's worth of information depending upon the compression bit. In one embodiment, the range of the offset depends on whether the line of information is compressed. If the line of information is compressed, the companion bit of the address is used as the high order bit of the offset. If the line of information is not compressed, decompressor 1110 is bypassed and the companion bit of the address is not used for the offset. The selected line is held in a buffer whose size is twice the physical line size to accommodate compressed data. Alternative embodiments may choose to use the companion bit to select which half of the decompressed word to store in a buffer whose length is the same as the physical line size. However, buffering the entire line of information is convenient for modifying and recompressing data after writes to the cache. Compression logic 930 may also be used to determine when a line of information is to be compressed. According to one embodiment, opportunistic compression is used to determine when a line of information <Desc/Clms Page number 35> is to be compressed. The above-described mechanism allows any two cache line's worth of information that map to the same set and that differ only in their companion bit to be compressed together into one cache line. In one embodiment, the mechanism modifies the set mapping function and selects the companion bit such that it allows adjacent memory lines of information to be compressed together, which takes advantage of spatial locality. Closing Comments [0104] Note also that embodiments of the present description may be implemented not only within a semiconductor chip but also within machine readable media. For example, the designs discussed above may be stored upon and/or embedded within machine readable media associated with a design tool used for designing semiconductor devices. 
Examples include a circuit description formatted in the VHSIC Hardware Description Language (VHDL) language, Verilog language or SPICE language. Some circuit description examples include: a behaviorial level description, a register transfer level (RTL) description, a gate level netlist and a transistor level netlist. Machine readable media may also include media having layout information such as a GDS-II file. Furthermore, netlist files or other machine readable media for semiconductor chip design may be used in a simulation environment to perform the methods of the teachings described above. Thus, it is also to be understood that embodiments of this invention may be used as or to support a software program executed upon some form of processing core (such as the Central Computing unit (CPU) of a computer) or otherwise implemented or realized upon or within a machine readable medium. A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e. g. , a computer). For example, a machine readable medium <Desc/Clms Page number 36> includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e. g. , carrier waves, infrared signals, digital signals, etc. ); etc. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
A system and method taught herein control data prefetching for a data cache by tracking prefetch hits and overall hits for the data cache. Data prefetching for the data cache is disabled based on the tracking of prefetch hits and data prefetching is enabled for the data cache based on the tracking of overall hits. For example, in one or more embodiments, a cache controller is configured to track a prefetch hit rate reflecting the percentage of hits on the data cache that involve prefetched data lines and disable data prefetching if the prefetch hit rate falls below a defined threshold. The cache controller also tracks an overall hit rate reflecting the overall percentage of data cache hits (versus misses) and enables data prefetching if the overall hit rate falls below a defined threshold. |
1.A method for controlling data prefetching of a data cache memory, including:Tracking prefetch hits to the data cache and disabling data prefetching of the data cache based on the tracking of prefetch hits; andTracking the overall hit to the data cache and enabling data prefetching to the data cache based on the tracking of the overall hit.2.The method of claim 1, further comprising, in conjunction with enabling and disabling data prefetching of the data cache, resetting one or both of the mechanisms tracking the prefetch hits and the overall hits to implement enable/disable control hysteresis.3.The method of claim 1, wherein tracking a prefetch hit to the data cache includes tracking a prefetch hit rate, the prefetch hit rate reflecting the percentage of hits to the data cache that involve prefetched data lines, and wherein tracking the overall hit to the data cache includes tracking the overall hit rate to the data cache.4.The method of claim 3, wherein tracking the prefetch hit rate comprises tracking a relationship between a number of prefetched data lines in the data cache and a total number of data lines in the data cache.5.The method of claim 3, further comprising storing indicators indicating which data lines in the data cache are prefetched data lines, and using the stored indicators to detect prefetch hits to the data cache in order to track the prefetch hit rate.6.The method of claim 3, wherein tracking the prefetch hit rate comprises incrementing a first count by one in response to detecting a data cache hit on a prefetched data line and decrementing the first count by one in response to detecting a data cache hit on a non-prefetched data line, and wherein tracking the overall hit rate includes increasing a second count by one in response to a data cache hit and decreasing the second count by one in response to a data cache miss.7.The method of claim 6, wherein disabling data prefetching of the data cache based on the tracking of prefetch hits includes disabling data prefetching of the data cache when the prefetch hit rate, as indicated by the value of the first count, falls below a defined deactivation threshold.8.The method of claim 6, wherein enabling data prefetching of the data cache based on the tracking of overall hits includes enabling data prefetching of the data cache when the overall hit rate, as indicated by the value of the second count, falls below a defined enable threshold.9.The method of claim 6, further comprising, in conjunction with enabling and disabling data cache prefetching, resetting one or both of the first count and the second count to implement enable/disable hysteresis for controlling data cache prefetching.10.The method of claim 6, further comprising maintaining the first count and the second count in first and second saturation counters configured to saturate at respective first and second maximum count values.11.The method of claim 1, wherein tracking the prefetch hit includes tracking a prefetch hit rate in a first counter, the prefetch hit rate reflecting the percentage of data cache hits involving prefetched data lines in the data cache,
The percentage of data cache hits, and wherein tracking the overall hit includes tracking the overall hit rate in a second counter, the overall hit rate reflecting the overall percentage of data cache hits versus data cache misses.12.The method of claim 1, further comprising initializing the data cache to begin operation with data cache prefetch enabled.13.The method of claim 1, further comprising prefetching a data line into the data cache according to one or more defined prefetch policies only when data prefetch is enabled, and regardless of whether data prefetch is enabled Both acquire data lines into the data cache in response to a data cache miss.14.A processor comprising:Instruction execution pipeline; andA data cache that is operatively associated with the instruction execution pipeline and includes a cache memory and a cache controller;The cache controller is configured to track prefetch hits to the data cache and disable data prefetch of the data cache based on the tracking of prefetch hits, and is configured to Tracking the overall hit to the data cache and enabling data prefetching to the data cache based on the tracking of the overall hit.15.The processor of claim 14, wherein the cache controller, in conjunction with enabling and disabling data prefetching of the data cache, is configured to track the prefetch hits and the overall hits. Either or both of the tracking mechanisms are reset to implement the enable / disable control hysteresis.16.The processor of claim 14, wherein the cache controller tracks the prefetch hit by tracking a prefetch hit rate, the prefetch hit rate reflecting the data involved in the prefetched data line. The percentage of cache hits; and tracking the overall hit by tracking the overall hit rate for the data cache.17.The processor of claim 16, wherein the cache controller tracks the relationship between the number of prefetched data lines in the cache memory and the total number of data lines in the data cache memory. To track the prefetch hit rate.18.The processor according to claim 16, wherein the cache controller maintains an indicator indicating which data lines in the cache memory are prefetched data lines, and uses the indicator to detect the The prefetch hit of the data cache is tracked to track the prefetch hit rate.19.The processor of claim 16, wherein the cache controller tracks the prefetch hit ratio by making the first cache memory response in response to detecting a data cache hit on a prefetched data line. 
The counter is incremented by one, and the first counter is decremented by one in response to detecting that a data cache hit on a prefetched data line is not detected; and tracking the overall hit rate by: responding to the data cache A memory hit increases the second counter by one, and decreases the second counter by one in response to a data cache miss.20.The processor of claim 19, wherein the cache controller disables caching the data when the prefetch hit rate falls below a defined deactivation threshold as indicated by the value of the first counter Data prefetch from memory.21.The processor of claim 19, wherein the cache controller enables data to the data cache when the overall hit rate falls below a defined enable threshold as indicated by a value of the second counter Prefetching.22.The processor of claim 19, wherein the cache controller combines enabling and disabling of data cache prefetching by setting one or both of the first counter and the second counter Reset to implement enable / disable hysteresis for controlling data cache prefetch.23.The processor of claim 19, wherein the first counter and the second counter include a first saturation counter and a second saturation counter that are saturated at a first maximum count value and a second maximum count value, respectively.24.The processor of claim 14, wherein the cache memory controller tracks the prefetch hit by tracking a prefetch hit rate in a first counter, the prefetch hit rate reflecting the cache memory The percentage of data cache hits for the prefetched data lines in; and tracking the overall hit by tracking the overall hit rate in the second counter, the overall hit rate reflecting the data cache hit versus the data cache The overall percentage of misses.25.The processor of claim 14, wherein the cache controller initializes the data cache to start operation with data cache prefetch enabled.26.The processor of claim 14, wherein the cache controller only prefetches data lines into the data cache according to one or more defined prefetch policies when data prefetch is enabled, and Whether or not data prefetch is enabled, a data line is fetched into the data cache in response to a data cache miss. |
Data prefetch adjustment

Technical field
The present invention relates generally to the field of processors, and more particularly to a system and method for controlling data prefetching in a processor.

Background
Processors use caches to alleviate processing bottlenecks associated with memory. For example, an instruction cache works by using faster-access memory to hold a selected portion of a larger program instruction set stored in slower memory (e.g., main memory or a higher-level cache memory). As a result, accessing instructions that are present in the instruction cache has lower latency than accessing the slower memory, and processors often use some form of hardware-based instruction prefetching to keep the instruction cache filled with the required instruction lines from the slower memory. Prefetching places instruction lines in the instruction cache before the instructions in those lines are needed.
Hardware-based prefetching can also be applied to data. However, successfully prefetching data may be more difficult than successfully prefetching instructions. For example, data values may be more scattered or spread out in memory than program instructions, making prediction-based prefetching more challenging. Thus, data prefetching may or may not improve performance, and the performance of data prefetching may change significantly during processor operation.
Therefore, it is known, for example, to "filter" prefetch operations. Prefetch filtering represents a "pollution" avoidance mechanism: the data cache is considered polluted when it contains prefetched data lines that are never used, i.e., data lines that are prefetched but eventually replaced before ever being accessed (hit). Prefetch filtering implies continuous data prefetching, but selectively skips some prefetches that would occur in the absence of such filtering.
In more detail, individual data prefetches may or may not be performed depending on the applied filtering criteria. Filtering criteria may reflect, for example, a history of prefetch performance formed over a range of program execution. However, determining proper filtering can require undesirable hardware complexity or resource consumption, particularly if the filtering is to produce significant performance improvements over unfiltered data prefetching.

Summary of the Invention
According to one or more embodiments, a method of controlling data prefetching for a data cache includes tracking prefetch hits on the data cache and disabling data prefetching for the data cache based on the tracking of prefetch hits. The method further includes tracking overall hits on the data cache and enabling data prefetching for the data cache based on the tracking of overall hits. Here, disabling data prefetching comprises disabling all data prefetching for the data cache, while still fetching data lines into the data cache as needed (for example, when a data cache access misses), irrespective of whether data prefetching is enabled.
In at least one embodiment taught herein, a processor includes a data cache, the data cache including a cache memory and a cache controller. The cache controller disables data prefetching for the data cache based on tracking prefetch hits on the data cache and enables data prefetching for the data cache based on tracking overall hits on the data cache.
In at least one such embodiment, the cache controller tracks the prefetch hits by tracking a prefetch hit rate and tracks the overall hits by tracking an overall hit rate (or, equivalently, an overall miss rate). With the above examples in mind, the data prefetching control taught herein provides the performance and power advantages (and other advantages) of data prefetching on a conditional basis, while permitting a simple and efficient hardware implementation.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a functional block diagram of an embodiment of a processor.
FIG. 2 is a state diagram of an embodiment of data prefetch control.
FIG. 3 is a functional block diagram of one embodiment of a counting and control circuit that can be used to control data prefetching.
FIG. 4 is a functional block diagram of an embodiment of a data cache memory including indicators representing prefetched data lines.
FIG. 5 is a functional block diagram illustrating another embodiment of indicators for prefetched data lines in a data cache.
FIGS. 6 and 7 are logic flow diagrams of one embodiment of processing logic for controlling data prefetching.

DETAILED DESCRIPTION
As a non-limiting example, FIG. 1 illustrates an embodiment of a processor 100 that includes an instruction execution pipeline 120, status/control registers 104, and a data cache 106. The data cache 106 includes a cache controller 108 and an associated cache memory 110. In operation, the data cache 106 caches one or more data lines from a higher-level memory 112, which may comprise a higher-level cache and/or main (system) memory. In at least one embodiment, the data cache 106 comprises a level 1 ("L1") data cache.
Advantageously, the (data) cache controller 108 is configured to dynamically enable and disable data cache prefetching according to a control mechanism implemented in the data cache 106 with low hardware complexity. FIG. 2 is a state diagram illustrating one embodiment of such advantageous prefetch control.
As shown in FIG. 2, a state 200 represents an operating state of the data cache 106 in which data prefetching is enabled, and a state 202 represents an operating state of the data cache 106 in which prefetching is disabled. Rather than screening or otherwise filtering individual prefetches, the cache controller 108 advantageously stops all prefetching when operating in state 202. The prefetch control embodied in FIG. 2 therefore operates like an on/off switch for data prefetching.
In one or more embodiments, the cache controller 108 transitions from state 200 (prefetch enabled) to state 202 (prefetch disabled) based on its tracking of "prefetch hits." In turn, the cache controller 108 transitions from state 202 back to state 200 based on its tracking of "overall hits." Here, a "prefetch hit" is a hit on a prefetched data line held in the cache memory 110 of the data cache 106, and an "overall hit" is a hit on any data line (whether prefetched or not) held in the cache memory 110 of the data cache 106. In this sense, the prefetch hit rate reflects the percentage of data cache hits involving prefetched data lines, and the overall hit rate reflects the overall percentage of cache hits. Equivalently, the cache controller 108 may track cache misses. For example, if the overall hit rate for the data cache 106 is ninety percent, the overall miss rate is ten percent.
In more detail, during program execution, the processor 100 first looks for required data in the data cache 106.
A data cache hit denotes the situation where the required data resides in the data cache 106. In contrast, a data cache miss denotes the situation where the required data does not reside in the data cache 106. The cache controller 108 fetches data in response to a data cache miss, which is commonly referred to as demand fetching. On the other hand, assuming prefetching is enabled, the cache controller 108 prefetches data lines from the higher-level memory 112 into the cache memory 110 of the data cache 106 according to one or more prefetch policies ("strategies"). As a non-limiting example, the cache controller 108 may use sequence-based and/or pointer-based prefetch policies.
In any case, those skilled in the art will understand that the cache memory 110 contains a mix of prefetched and fetched (non-prefetched) data lines, assuming that the data cache 106 is operating with prefetching enabled (state 200). Individual hits on the data cache 106 therefore involve either prefetched data lines or non-prefetched data lines, and tracking the prefetch hits gives the cache controller 108 knowledge of prefetch performance. Simply put, a small proportion of data cache hits involving prefetched data lines indicates that data prefetching is not helping under the current program execution conditions.
Disabling prefetching under these conditions is advantageous because it eliminates prefetch overhead (memory bus access and control). Stopping prefetching, i.e., transitioning from state 200 to state 202, thus reduces processor operating power and resource loading. Turning prefetching off under such conditions provides the further advantage of preventing the data cache 106 from being polluted with data lines that are unlikely to be used.
On the other hand, program execution conditions are subject to change, such that prefetching becomes desirable again. To that end, the cache controller 108 tracks the overall hits on the data cache 106 when operating in state 202, and enables prefetching when, for example, the overall hit rate for the data cache 106 becomes too low (equivalently, when the overall miss rate becomes too high). In other words, if the overall hit rate for the data cache 106 begins to suffer with data prefetching turned off, the cache controller 108 transitions back to state 200 to enable prefetching again.
For example, the cache controller 108 tracks prefetch hits as a prefetch hit rate and tracks overall hits as an overall hit rate. In this regard, the defined disable threshold used for the prefetch disabling decision may be a default value or a dynamically calculated value. Similarly, the defined enable threshold used for the prefetch enabling decision may be a default value or a dynamically calculated value. As a non-limiting example, the cache controller 108 may be configured to turn prefetching off when the prefetch hit rate falls below two percent, and may be configured to turn prefetching on when the overall hit rate falls below ninety-nine percent. Of course, these are only example values, and the thresholds may be adjusted or otherwise tuned based on particular processor characteristics, data cache size, and other considerations (e.g., prefetch overhead, miss penalty, etc.).
Regardless of the particular decision thresholds used, FIG. 3 illustrates one embodiment of a tracking mechanism that can be used by the cache controller 108 to track prefetch hits and overall hits. More specifically, FIG. 3 illustrates a counter control circuit 300, a first counter 302, and a second counter 304. These circuits may be included in or associated with the cache controller 108.
In one or more embodiments, the counter control circuit 300 increments the first counter 302 by one in response to the cache controller 108 detecting a hit on a prefetched data line in the cache memory 110, and decrements the first counter 302 by one in response to the cache controller 108 detecting a hit on a non-prefetched data line in the cache memory 110. In this manner, the value of the first counter 302 reflects the percentage of hits on the data cache 106 that involve prefetched data lines. Thus, the counter control circuit 300, or another circuit element within the cache controller 108, may compare the value of the first counter 302 with a defined disable threshold as the basis for determining whether to transition to state 202.
In addition, the counter control circuit 300 increments the second counter 304 by one in response to a hit (any hit) on the data cache 106 and decrements the second counter 304 by one in response to a data cache miss. In this manner, the value of the second counter 304 reflects the overall percentage of hits on the data cache: the count moves up on data cache hits and down on data cache misses, so its value reflects the hit/miss ratio for the data cache 106. Thus, the counter control circuit 300, or another circuit element within the cache controller 108, may compare the value of the second counter 304 with a defined enable threshold as the basis for determining whether to transition to state 200.
The above processing involves detecting whether an individual data cache hit is on a prefetched data line in the cache memory 110. FIGS. 4 and 5 illustrate different embodiments providing for that detection. In both figures, the cache controller 108 stores or otherwise maintains indicators denoting which data lines in the cache memory 110 are prefetched.
Specifically, FIG. 4 illustrates an embodiment in which, for each data line held in the cache memory 110, the cache memory 110 includes: a tag memory 400 for holding memory address information; a data memory 402 for holding the cached data line; and a prefetch flag 404 indicating the status of the data line as prefetched (e.g., "1") or not prefetched (e.g., "0").
In contrast, FIG. 5 illustrates an alternative embodiment where the stored (prefetch) indicators are implemented as a register bank 500 including a line identifier (ID) entry 502 for each prefetched data line in the cache memory 110. For example, an entry may be added to the register bank 500 for each data line prefetched into the cache memory 110, such that only the prefetched data lines are represented in the register bank 500. Alternatively, the register bank 500 may include entries for all data lines in the cache memory 110, with each entry indicating whether the corresponding data line in the cache memory 110 is prefetched.
FIGS. 6 and 7 together illustrate one embodiment of processing logic that uses the stored indicators (404 or 502) to detect prefetch hits. As a non-limiting example, the illustrated processing may be implemented by the cache controller 108 in digital processing logic (e.g., in a state machine).
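Before stepping through the flow diagrams of FIGS. 6 and 7, the combined effect of the counters of FIG. 3 and the per-line indicators of FIGS. 4 and 5 can be summarized in software form. The following C sketch is illustrative only: the structure layout, the 8-bit counter width, and the threshold values are assumptions chosen for exposition, not the actual hardware design of the cache controller 108.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_COUNT          255u  /* saturating 8-bit counters (assumed width) */
#define DISABLE_THRESHOLD    5u  /* low first count: prefetch hit rate too low */
#define ENABLE_THRESHOLD   253u  /* low second count: overall hit rate too low */

struct cache_line {
    uint32_t tag;         /* tag memory 400 */
    uint8_t  data[64];    /* data memory 402 */
    bool     valid;
    bool     prefetched;  /* prefetch flag 404: set when the line is prefetched */
};

struct prefetch_ctrl {
    uint8_t first_count;   /* first counter 302: tracks prefetch hits */
    uint8_t second_count;  /* second counter 304: tracks overall hits */
    bool    prefetch_on;   /* state 200 (true) versus state 202 (false) */
};

static uint8_t sat_inc(uint8_t c) { return c < MAX_COUNT ? (uint8_t)(c + 1u) : c; }
static uint8_t sat_dec(uint8_t c) { return c > 0u ? (uint8_t)(c - 1u) : c; }

void init(struct prefetch_ctrl *pc)   /* block 600: start with prefetching on */
{
    pc->prefetch_on  = true;
    pc->first_count  = MAX_COUNT;
    pc->second_count = MAX_COUNT;
}

/* Called on every data cache access; line is NULL on a miss. */
void on_access(struct prefetch_ctrl *pc, const struct cache_line *line)
{
    if (pc->prefetch_on) {
        if (line != NULL) {                  /* hit: classify via the indicator */
            pc->first_count = line->prefetched
                ? sat_inc(pc->first_count)   /* prefetch hit (block 608) */
                : sat_dec(pc->first_count);  /* non-prefetch hit (block 610) */
        }
        if (pc->first_count < DISABLE_THRESHOLD) {   /* blocks 612-616 */
            pc->prefetch_on  = false;                /* enter state 202 */
            pc->second_count = MAX_COUNT;            /* reset (block 618) */
        }
    } else {
        pc->second_count = (line != NULL)
            ? sat_inc(pc->second_count)      /* overall hit (block 704) */
            : sat_dec(pc->second_count);     /* miss (block 706) */
        if (pc->second_count < ENABLE_THRESHOLD) {   /* blocks 710-712 */
            pc->prefetch_on = true;                  /* re-enter state 200 */
            pc->first_count = MAX_COUNT;             /* reset (block 714) */
        }
    }
}

Note that each state transition resets the counter used by the other state to its maximum value; this corresponds to the counter-reset form of enable/disable hysteresis discussed below.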
Additionally, it should be noted that one or more of the illustrated processing steps may be performed in a different order than the order illustrated, may be performed concurrently with other steps, and/or may be performed as part of other processing tasks.
Consistent with the illustrated processing, the cache controller 108 uses a first count (e.g., the value of the first counter 302) to track prefetch hits on the data cache 106 and uses a second count (e.g., the value of the second counter 304) to track overall hits on the data cache 106. The first counter 302 and the second counter 304 may comprise saturating counters, such that the first and second count values saturate at corresponding maximum values. Regardless of such details, one or more embodiments of the cache controller 108 transition the data cache 106 between the prefetch-enabled condition and the prefetch-disabled condition based on the first and second count values. These counts can be initialized as part of start-up operations.
In more detail, the illustrated processing begins with data prefetching enabled for the data cache 106 (block 600). In at least one embodiment, the cache controller 108 is configured to begin operation with data prefetching enabled by default, such that starting or restarting the processor 100 turns data prefetching on.
With prefetching enabled, the cache controller 108 fetches data lines into the data cache 106 as needed, and prefetches data lines into the cache according to the current prefetch policy (block 602). Processing continues on a looped or other ongoing basis, with the cache controller 108 determining whether a data cache hit has occurred (block 604). If a data cache hit occurs ("Yes" from block 604), the cache controller 108 detects whether the hit is a prefetch hit (block 606), for example by using the stored (prefetch) indicators (404 or 502) to determine whether the particular data line involved in the cache hit is a prefetched data line.
If the hit is a prefetch hit ("Yes" from block 606), the cache controller 108 increments the first count by one (block 608). If the hit is not a prefetch hit ("No" from block 606), the cache controller 108 decrements the first count by one (block 610). The first count can be maintained in this manner by operating the first counter 302 via the counter control circuit 300.
Operation continues with evaluation of the first count (block 612) to determine whether its value has fallen below the defined disable threshold for prefetching. By way of arrangement, the disable threshold may be set to a percentage value corresponding to the point at which prefetching is regarded as undesirable. In any case, for a binary count value, the determination may be made by comparing the count value to a binary pattern corresponding to the desired threshold. In at least one embodiment, the first counter 302 is sized according to the desired counting resolution for tracking prefetch hits. Note also that the evaluation of the first count may be performed for each cache hit, or according to another schedule or trigger condition.
In any case, if the value of the first count indicates that the prefetch hit rate is too low ("Yes" from block 614), the cache controller 108 disables prefetching (block 616). From there, processing optionally continues with the first count and/or the second count being reset (block 618).
That is, one or both counts may be reset in conjunction with the transition from prefetch enabled to prefetch disabled, in a manner that reinforces the state change.
In at least one such embodiment, the second count is reset to a maximum value as part of transitioning to the prefetch-disabled state, and the first count is reset to a maximum value as part of transitioning to the prefetch-enabled state. Doing so prevents rapid state reversals (sometimes referred to as "ping-ponging"). More broadly, these counter resets represent one form of the control hysteresis contemplated herein. In one or more embodiments, enable/disable control hysteresis may be implemented, for example, by resetting the tracking mechanisms (counters or otherwise) used to track prefetch hits and overall hits, by adjusting the control thresholds, by temporarily suspending state-change processing after a state change, and so on.
Returning to the illustrated processing by following connector "B" to FIG. 7, processing continues with prefetching turned off. While prefetching is disabled, the cache controller 108 continues monitoring data cache accesses (block 700). If there is a data cache access ("Yes" from block 700), the cache controller 108 detects whether the access caused a cache hit (block 702). If the access results in a hit ("Yes" from block 702), processing continues with the cache controller 108 incrementing the second count by one (block 704). Conversely, if the cache access results in a cache miss ("No" from block 702), processing continues with the cache controller 108 decrementing the second count by one (block 706) and fetching the needed data line into the cache memory 110 (block 708).
Processing then continues with evaluation of the second count (block 710). Cache accesses and/or counter updates may be used as the trigger for the count evaluation, or another schedule or trigger may be used. In any case, the evaluation may comprise comparing the value of the second count to the defined enable threshold. In at least one such embodiment, the defined enable threshold represents a lower percentage value of data cache hits. By way of arrangement, if the percentage of cache hits tracked by the second count is at or below that lower percentage, the overall hit rate is considered low.
If the overall hit rate is not low ("No" from block 712), processing loops back to block 700. On the other hand, if the overall hit rate is low ("Yes" from block 712), processing continues by returning via connector "A" to block 600 of FIG. 6, where prefetching is enabled. (Note that the first count and/or the second count may be reset as part of transitioning back to the prefetch-enabled condition (block 714).)
In an alternative embodiment, the cache controller 108 is configured to track prefetch hits based on a count or other determination of the number of prefetched data lines in the cache memory 110, e.g., as compared to the overall number of data lines in the cache memory 110. The cache controller 108 may use the first counter 302 to count the prefetched data lines, or it may be configured with other counters and/or registers for tracking this information.
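As a software-level illustration of this alternative embodiment, the sketch below infers a low prefetch hit rate from the fraction of currently prefetched lines. It reuses the cache_line structure from the earlier sketch, and the line count and the two-percent figure (borrowed from the example thresholds above) are assumptions.

#define NUM_LINES 512u  /* illustrative cache size, in lines */

/* Count how many valid lines in the cache are currently prefetched. */
unsigned count_prefetched(const struct cache_line lines[NUM_LINES])
{
    unsigned n = 0;
    for (unsigned i = 0; i < NUM_LINES; i++)
        if (lines[i].valid && lines[i].prefetched)
            n++;
    return n;
}

/* If prefetch hits are rare, replacement gradually evicts the unused
 * prefetched lines, so a low prefetched-to-total ratio serves as a proxy
 * for a low prefetch hit rate. */
bool prefetch_rate_low(const struct cache_line lines[NUM_LINES])
{
    return count_prefetched(lines) * 100u < 2u * NUM_LINES;  /* below 2 percent */
}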
Either way, comparing the count of prefetched data lines to the overall count of data lines still reflects the prefetch hit rate in the following sense: if prefetch hits are relatively infrequent, the number of prefetched data lines in the cache memory 110 will decrease over time as a consequence of the data cache replacement policy.
Keeping the above embodiments and other variations in mind, data cache prefetch control as taught herein broadly comprises tracking prefetch hits and tracking overall hits, such that transitions out of the prefetch-enabled condition are based on the prefetch hits and transitions out of the prefetch-disabled condition are based on the overall hits. In at least one embodiment, prefetching is disabled if the prefetch hit rate falls below a defined disable threshold, and prefetching is enabled if the overall hit rate falls below a defined enable threshold. Stored indicators can be used to denote which data lines are prefetched, and various counters or other registers can be used for prefetch hit and overall hit tracking.
Therefore, although the invention has been described herein with respect to particular features, aspects, and embodiments, it should be understood that numerous changes, modifications, and other embodiments are possible within the broad scope of the invention, and all such changes, modifications, and embodiments are to be regarded as being within the scope of the invention. The present embodiments are therefore to be construed in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein. |
A first request to evict a first cache line that is stored in a cache memory may be received. The first cache line may be evicted based on a replacement policy. A second request to evict a second cache line from the cache memory may be received. Following the receipt of the second request, it is determined whether a condition associated with the replacement policy has been satisfied. If the condition associated with the replacement policy has been satisfied, then the second cache line may be evicted based on a random replacement policy. |
1. An apparatus comprising: a memory; and a cache eviction component, operatively coupled with the memory, to: receive a request to evict at least one cache line of a plurality of cache lines stored at a cache memory; determine whether a condition associated with a replacement policy has been satisfied; and in response to determining that the condition associated with the replacement policy has been satisfied, evict a second cache line of the plurality of cache lines based on a random replacement policy.
2. The apparatus of claim 1, wherein the cache eviction component is further to: identify a period of time that the replacement policy has been used to evict one or more of the plurality of cache lines stored at the cache memory, wherein the condition associated with the replacement policy has been satisfied when the period of time exceeds a threshold period of time.
3. The apparatus of claim 1, wherein the cache eviction component is further to: identify a number of times that a particular cache line of the plurality of cache lines has been evicted by the replacement policy, wherein the condition associated with the replacement policy has been satisfied when the number of times satisfies a threshold number of times that the particular cache line has been evicted by using the replacement policy.
4. The apparatus of claim 1, wherein the replacement policy is based on a least recently used cache line of the plurality of cache lines, and the random replacement policy is based on a random selection of a particular cache line of the plurality of cache lines to evict.
5. The apparatus of claim 1, wherein the cache eviction component is further to: identify a particular cache line of the plurality of cache lines; identify a status of the particular cache line; determine whether the status of the particular cache line is associated with a protected status; and in response to determining that the status of the particular cache line is associated with the protected status, determine to not evict the particular cache line and select a second cache line to evict in response to a second request.
6. The apparatus of claim 1, wherein the cache eviction component is further to: identify a status of a second cache line; and determine whether the status of the second cache line is associated with a protected status, wherein the evicting of the second cache line is based on determining that the status of the second cache line is not associated with the protected status.
7. The apparatus of claim 6, wherein the protected status indicates whether the second cache line is available to be evicted by the random replacement policy or is not available to be evicted by the random replacement policy.
8. The apparatus of claim 1, wherein, in response to determining that the condition associated with the replacement policy has not been satisfied, the cache eviction component is further to: evict the second cache line of the plurality of cache lines based on the replacement policy.
9. An apparatus comprising means to perform a method as described in any preceding claim.
10. A machine readable storage medium comprising a plurality of instructions that, when executed, implement a method or realize an apparatus as claimed in any preceding claim. |
Technical Field
Embodiments described herein generally relate to cache memory and, more specifically, to selecting a policy to evict cache lines of a cache memory.

Background
A processing device may be based on an architecture that includes a cache memory. A processor core of the processing device may store data in the cache memory. For example, instructions may access data stored in a data cache memory. The data cache memory may be used to more efficiently execute instructions associated with the processor core as opposed to executing instructions from a main memory.

Brief Description of the Drawings
FIG. 1 is a block diagram illustrating a computing system that implements a cache eviction circuit to evict cache lines according to a replacement policy in accordance with some embodiments.
FIG. 2 is a flow diagram of a method of determining a cache replacement policy to be used for a cache eviction.
FIG. 3A illustrates an example of evicting cache lines according to a replacement policy.
FIG. 3B illustrates an example of evicting cache lines according to a random replacement policy.
FIG. 4 is a flow diagram of a method of determining that a cache line is protected from eviction.
FIG. 5 is a flow diagram of a method of determining a cache replacement policy to be used for a cache eviction.
FIG. 6 illustrates a block diagram of the micro-architecture for a processor that includes logic in accordance with one embodiment of the disclosure.
FIG. 7 is a block diagram illustrating a system in which an embodiment of the disclosure may be used.
FIG. 8 is a block diagram of a system in which an embodiment of the disclosure may operate.
FIG. 9 is a block diagram of a system in which an embodiment of the disclosure may operate.
FIG. 10 is a block diagram of a System-on-a-Chip (SOC) in accordance with an embodiment of the present disclosure.
FIG. 11 is a block diagram of an embodiment of an SOC design in accordance with the present disclosure.
FIG. 12 illustrates a block diagram of one embodiment of a computer system.

Description of Embodiments
Aspects of the present disclosure are directed to a cache line replacement policy in a processing architecture. Cache memory is a fast memory that stores the most recently used main memory data. Cache memory allows for quicker access to main memory data and, in multiprocessor systems, cache memory reduces system bus and main memory traffic. The cache memory temporarily stores the most recently used data read from the main memory. If a processor requires data from the main memory, a cache controller checks to see if the required data is stored in the cache memory. In the case of the cache controller finding matching data stored in the cache (i.e., a cache hit), the data is supplied to the processor directly from the cache. In the case of the cache controller not finding matching data (i.e., a cache miss), the data is read from the main memory. Instructions and data are transferred from the main memory to the cache in fixed blocks, which are referred to as cache lines.
When the cache memory is full, data may be discarded from the cache to make room for new data from the main memory. The process of discarding data from the cache to make room for the new data is known as cache eviction. Cache replacement policies are processes that determine which cache lines are to be evicted in order to make room for the new data. An example of a cache replacement policy is a least recently used (LRU) policy.
An LRU policy tracks how recently the cache lines in the cache have been used (e.g., how much time has elapsed since a cache line was last associated with a cache hit) and evicts the least recently used cache line. The LRU policy tracks how recently cache lines have been used by keeping age bits for each cache line that indicate the last time the cache line was used, and it evicts the least recently used cache line based on those age bits. Once a cache line has been evicted, it is available to have new data written to it.
Using such replacement policies may result in particular cache lines, namely those containing less frequently used data, being evicted and having new data written to them more often than other cache lines. Cache memory, especially if designed with non-volatile memory cells, may be written to only a certain number of times before a cache line fails and the processing device or non-volatile memory must be replaced. When a particular cache line is written to more often than others, failure of that cache line, and in turn of the processing device, is likely to occur earlier than if all the cache lines in the cache memory were written to an equal number of times. Thus, the reliability and lifespan of the processing device (or of a non-volatile memory) can be increased by implementing a replacement policy that ensures all cache lines are evicted and written to an equal, or approximately equal, number of times. However, while such a replacement policy increases the reliability of the processing device, replacing cache lines whose data is frequently read by the processing device increases the number of cache misses, decreasing the performance of the processing device.
Embodiments of the present disclosure receive a request to evict cache lines from the cache memory and evict at least one cache line of the cache according to a replacement policy, such as the LRU replacement policy described above (a brief software sketch of the baseline LRU bookkeeping appears below). Upon receiving a second request to evict cache lines from the cache memory, the processing device determines whether a condition associated with the replacement policy has been satisfied. Conditions associated with the replacement policy are discussed in more detail in conjunction with Fig. 2 below. If the processor determines that the condition associated with the replacement policy has been satisfied, then cache lines are evicted according to a random replacement policy. The random replacement policy is discussed in more detail in conjunction with Fig. 3B below. Once a condition associated with the random replacement policy has occurred, cache lines are once again evicted according to the replacement policy.
Such a process may improve the reliability of the processing device without having a significant impact on its performance. For example, a first series of cache evictions may be performed according to an LRU replacement policy, preventing cache lines containing data frequently accessed by the processor from being evicted, thereby reducing the number of cache misses and increasing the performance of the processing device. Once a condition associated with the LRU replacement policy has occurred, a second series of cache evictions may be performed according to a random replacement policy.
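To make the baseline bookkeeping concrete, the following C sketch shows an age-based LRU choice of the kind just described. It is a software illustration only: the set size and field names are assumptions, and real hardware typically uses compact age encodings rather than a full timestamp per line.

#include <stdint.h>

#define WAYS 8u  /* illustrative associativity */

struct lru_line {
    uint64_t last_used;  /* "age bits": refreshed on every hit to the line */
    /* tag and data fields omitted for brevity */
};

/* On a cache hit, record when the line was last used. */
void lru_touch(struct lru_line *line, uint64_t now)
{
    line->last_used = now;
}

/* On eviction, choose the way whose last use is oldest. */
unsigned lru_victim(const struct lru_line set[WAYS])
{
    unsigned victim = 0;
    for (unsigned way = 1; way < WAYS; way++)
        if (set[way].last_used < set[victim].last_used)
            victim = way;
    return victim;
}

The selected way is then free to receive the incoming data line, which is how a frequently used line escapes eviction under LRU while an infrequently used one does not.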
The random replacement policy allows for the eviction of, and writing of new data to, the cache lines containing frequently accessed data that would not be evicted under the LRU replacement policy, preventing a disproportionate number of eviction and write operations from being performed on any one of the cache lines of the cache memory. The result is an increase in the lifespan and reliability of the processing device with a minimal impact on performance.
Fig. 1 is a block diagram illustrating a computing system 100 that implements a cache eviction circuit 107 for determining when a condition associated with a replacement policy has occurred and identifying which replacement policy to use, in accordance with some embodiments. The computing system 100 is formed with a processor 102 that includes one or more execution units 108 to execute a cache eviction instruction in accordance with one or more embodiments as described herein. In short, the cache eviction circuit 107 is used by the processor 102 to determine which cache lines of the cache memory 104 to evict according to a replacement policy. In one embodiment, the cache memory may be a non-volatile memory, such as Spin Transfer-Torque Magnetic Random Access Memory (STT-MRAM). Data received from the main memory 120 is then written to the evicted cache lines. The cache eviction circuit 107 may then determine that a condition associated with the replacement policy has occurred. Following that determination, the cache eviction circuit 107 is used by the processor 102 to determine which cache lines of the cache memory 104 to evict according to a random replacement policy. In some embodiments, the cache eviction circuit 107 may include an approximate counter to track the approximate number of write operations performed on each of the cache lines. When a write operation is performed on a cache line, the counter may be incremented according to the Morris algorithm for approximate counting. Using the Morris algorithm, when the current value of the counter is n, the counter may be incremented with probability 1/(2^n), so that the counter value grows roughly as the logarithm of the number of write operations performed on the cache line. When the cache line is evicted, the counter may be reset to 0. In another embodiment, the cache eviction circuit 107 may include an accurate counter to track the exact number of write operations performed on each of the cache lines. Additional details with regard to the cache eviction circuit 107 are described in more detail below with respect to Figs. 2-5.
Computing system 100 includes a component, such as a processor 102, to employ execution units 108 including logic to perform processes for processing data in accordance with the embodiments described herein. In one embodiment, sample computing system 100 executes an operating system. Embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software.
Embodiments are not limited to computer systems. Alternative embodiments of the present disclosure can be used in other devices such as handheld devices and embedded applications. Examples of handheld devices include, but are not limited to, cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs.
Embedded applications may include, but are not limited to, a microcontroller, a digital signal processor (DSP), a system on a chip (SOC), network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.
In the illustrated embodiment of Fig. 1, processor 102 includes one or more execution units 108 to implement a process that is to perform at least one instruction. One embodiment may be described in the context of a single-processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 100 may be an example of a 'hub' system architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, an out-of-order based processor, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that transmits data signals between the processor 102 and other components in the system 100, such as main memory 120 storing instructions, data, or any combination thereof. The other components of the system 100 may include, but are not limited to, a graphics accelerator, a memory controller hub, an I/O controller hub, a wireless transceiver, a Flash BIOS, a network controller, an audio controller, a serial expansion port, and an I/O controller.
In one embodiment, the processor 102 includes a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache memory or multiple levels of internal cache memories (e.g., L1 and L2). For example, the processor 102 may include an instruction cache (e.g., an L1 instruction cache) and a data cache (e.g., an L1 data cache) as part of its L1 internal cache memory. Other embodiments include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 is to store different types of data in various registers including, but not limited to, integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, configuration registers, and instruction pointer registers.
Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. It should be noted that the execution unit may or may not have a floating point unit. The processor 102, in one embodiment, includes a microcode (µcode) ROM to store microcode, which when executed, is to perform processes for certain macroinstructions or handle complex scenarios. Here, microcode is potentially updateable to handle logic bugs/fixes for processor 102. Alternative embodiments of an execution unit 108 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits.
System 100 includes a main memory 120. Main memory 120 may include, but is not limited to, a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or other memory device.
Main memory 120 stores instructions and/or data, represented by data signals, that are to be executed by the processor 102. The processor 102 is coupled to the main memory 120 via a processor bus 110. A system logic chip, such as a memory controller hub (MCH), may be coupled to the processor bus 110 and main memory 120. An MCH can provide a high-bandwidth memory path to memory 120 for instruction and data storage and for storage of graphics commands, data, and textures. The MCH can be used to direct data signals between the processor 102, main memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, main memory 120, cache memory 104, and system I/O, for example. The MCH may be coupled to main memory 120 through a memory interface. In some embodiments, the system logic chip can provide a graphics port for coupling to a graphics controller through an Accelerated Graphics Port (AGP) interconnect. The system 100 may also include an I/O controller hub (ICH). The ICH can provide direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the main memory 120, chipset, and processor 102. Some examples are the audio controller, firmware hub (flash BIOS), wireless transceiver, data storage, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller. The data storage device can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.
For another embodiment of a system, the cache eviction circuit 107 may be used with a system on a chip. The memory for one such system may be a flash memory. The flash memory may be located on the same die as the processor and other system components. Additionally, other logic blocks, such as a memory controller or graphics controller, may also be located on a system on a chip.
Fig. 2 is a flow diagram of a method 200 of determining a cache replacement policy to be used for a cache eviction. The method 200 may be performed by the cache eviction circuit 107. For example, the method 200 may be used by the cache eviction circuit 107 of Fig. 1 to receive a request to evict data from a cache line from a processor (e.g., processor 102) and determine whether a cache line is to be evicted according to a baseline replacement policy (e.g., LRU) or a random replacement policy. The method 200 may also be performed by a processing circuit that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software, firmware, or a combination thereof. Alternatively, other components of the computing system 100 may perform some or all of the operations of the method 200.
As shown in Fig. 2, the method 200 may begin with the processing circuit receiving, at block 210, a first request to evict at least one cache line. The first request may be received in response to an indication to write data from a main memory to a cache memory. Upon receipt of the request, the processing circuit may evict at least one cache line based on a replacement policy (block 220). For example, the processing circuit may evict cache lines according to an LRU replacement policy. Additional details with regard to the replacement policy are described in more detail below with respect to Fig. 3A.
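Before turning to the condition check at block 240, the approximate write counting mentioned earlier in connection with the cache eviction circuit 107 can be made concrete. The following C sketch of a Morris-style counter is an illustration under assumptions: rand() stands in for whatever random or pseudo-random source the hardware would actually use, and the counter width and cap are arbitrary.

#include <stdint.h>
#include <stdlib.h>

/* Morris-style approximate counter: the stored value n grows roughly as the
 * base-2 logarithm of the event count, because each increment succeeds with
 * probability 1/2^n. */
void morris_increment(uint8_t *counter)
{
    if (*counter >= 31u)                    /* arbitrary cap; avoids shift overflow */
        return;
    uint32_t mask = (1u << *counter) - 1u;  /* n low bits set */
    if (((uint32_t)rand() & mask) == 0u)    /* true with probability 1/2^n */
        (*counter)++;
}

/* Estimated number of events for a stored value n is about 2^n - 1. */
uint64_t morris_estimate(uint8_t counter)
{
    return ((uint64_t)1 << counter) - 1u;
}

/* Per the disclosure, the counter resets to 0 when the cache line is evicted. */
void on_line_evicted(uint8_t *counter)
{
    *counter = 0u;
}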
After evicting the cache lines based on a replacement policy, at block 230, the processing circuit may receive a second request to evict at least one cache line. For example, the second request may be received in response to another indication to write data from the main memory to the cache memory. At block 240, the processing circuit may determine whether a condition associated with the replacement policy has been satisfied. For example, if the number of cache evictions performed using the replacement policy exceeds a threshold number of cache evictions, then the condition associated with the replacement policy may be satisfied. Other examples of conditions associated with the replacement policy include, but are not limited to, a period of time, a percentage of cache evictions, cache evictions of a cache line exceeding a threshold, and the like. For example, if the condition associated with the replacement policy is a period of 30 seconds, then once 30 seconds has elapsed the condition associated with the replacement policy may be satisfied. In another example, if the condition associated with the replacement policy is a period of 5 seconds, then once 5 seconds has elapsed the condition associated with the replacement policy may be satisfied. Thus, if the period of time that has elapsed exceeds a threshold period of time, then the condition may be considered to be satisfied. In another example, if the condition associated with the replacement policy is that 95% of cache evictions are to be performed according to the replacement policy, then the condition may be satisfied. In another example, over an interval of 1 billion cache evictions, a random replacement policy may be applied at the end of the interval for 5-10% of the total interval. Thus, if the number of cache lines that have been evicted exceeds a threshold number of cache lines of the cache memory, then the condition may be considered to be satisfied. In a final example, if the condition associated with the replacement policy is 1,000,000 cache evictions of a cache line, then when a cache line has been evicted more than 1,000,000 times the condition may be satisfied. Thus, if the number of times that a cache line has been evicted exceeds a threshold number of times, then the condition may be considered to be satisfied.
Referring to Fig. 2, at block 250, if the condition associated with the replacement policy has not been satisfied, then the processing circuit may select the replacement policy described at block 220 in response to the second cache eviction request. At block 260, the processing circuit may evict the cache lines according to the replacement policy. For example, if the condition associated with an LRU replacement policy is a period of time totaling 30 seconds and only 20 seconds has elapsed, then the second cache eviction may be performed according to the LRU replacement policy. Otherwise, if the condition associated with the replacement policy has been satisfied, then the processing circuit may select a random replacement policy (block 270). At block 280, the processing circuit may evict at least one cache line based on the random replacement policy in response to the second request. For example, if the condition associated with an LRU replacement policy is a period of time totaling 30 seconds and 35 seconds has elapsed, then the second cache eviction may be performed according to the random replacement policy.
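Pulling blocks 240 through 280 together with the protected-status check of Fig. 4 (described below), the victim selection might be modeled as in the following C sketch. The eviction-count form of the condition, the set size, and the retry loop are illustrative assumptions, as is the use of rand() for random selection; the loop presumes at least one line in the set is unprotected.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define WAYS           8u
#define EVICTION_LIMIT 1000000u  /* example condition value from the text */

struct evict_line {
    uint32_t evictions;   /* per-line eviction count (exact or approximate) */
    bool     protected_;  /* protected status flag */
    uint64_t last_used;   /* age bits for the baseline LRU policy */
};

/* Baseline policy (blocks 250-260): least recently used line in the set. */
static unsigned lru_pick(const struct evict_line set[WAYS])
{
    unsigned v = 0;
    for (unsigned w = 1; w < WAYS; w++)
        if (set[w].last_used < set[v].last_used)
            v = w;
    return v;
}

/* Blocks 240-280: once any line's eviction count exceeds the limit, switch
 * to random replacement, skipping protected lines during random selection. */
unsigned choose_victim(const struct evict_line set[WAYS])
{
    bool condition_met = false;
    for (unsigned w = 0; w < WAYS; w++)
        if (set[w].evictions > EVICTION_LIMIT)
            condition_met = true;

    if (!condition_met)
        return lru_pick(set);    /* condition not satisfied: baseline policy */

    for (;;) {                   /* random policy (blocks 270-280) */
        unsigned w = (unsigned)rand() % WAYS;
        if (!set[w].protected_)  /* Fig. 4, blocks 430-460 */
            return w;
    }
}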
Additional details regarding the random replacement policy are described below with respect to Fig. 3B. Thus, at a first time, a first replacement policy may be used to evict data from cache lines of a cache memory. After a condition associated with the first replacement policy has been satisfied, a second replacement policy (e.g., the random replacement policy) may be selected at a second time to evict data from cache lines of the cache memory.
Fig. 3A illustrates an example of evicting cache lines according to a replacement policy. In general, the cache lines of Figs. 3A and 3B may correspond to cache lines as described in relation to the cache eviction circuit 107 of Fig. 1. As shown in Fig. 3A, cache memory 300 may include cache lines 310, 320 and 330. In general, the cache memory of Figs. 3A and 3B may correspond to the cache memory 104 of Fig. 1. Upon receiving a request to evict a cache line, the processing circuit may perform a cache line eviction according to a replacement policy. Using the previous example of an LRU replacement policy, the processing circuit may determine which of the cache lines 310, 320 and 330 has been least recently used and evict that cache line. For example, if cache line 310 contains data that was least recently used, the processing circuit may evict cache line 310. Then, for a second eviction request, if cache line 320 now contains data that was least recently used, the processing circuit may evict cache line 320. For a third eviction request, if cache line 330 now contains data that was least recently used, the processing circuit may evict cache line 330. Although embodiments of the present disclosure may be described using an LRU replacement policy, it should be noted that embodiments of the present disclosure may also be utilized with any replacement policy. Examples of replacement policies include, but are not limited to, first in first out (FIFO), last in first out (LIFO), most recently used (MRU), pseudo-LRU (PLRU), segmented LRU (SLRU), least-frequently used (LFU), and the like.
As shown in Fig. 3B, cache memory 300 may include cache lines 340, 350 and 360. Upon receiving a request to evict a cache line, the processing circuit will perform a cache eviction according to a random replacement policy. The processing circuit may select a cache line at random from cache lines 340, 350 and 360 and evict that cache line. For example, for a first cache eviction request the processing circuit may randomly select and evict cache line 360. Then, for a second cache eviction request the processing circuit may randomly select and evict cache line 340. For a third cache eviction request the processing circuit may select and evict cache line 350. In one embodiment, the processing circuit may use an arbitrary set of bits to randomly select a cache line for eviction. In another embodiment, the processing circuit may include an accurate random number generator to randomly select a cache line for eviction.
Fig. 4 is a flow diagram of a method 400 of determining that a cache line is protected from eviction. The method 400 may be performed by the cache eviction circuit 107. For example, the method 400 may be used by the cache eviction circuit 107 of Fig. 1 to receive a request to evict a cache line from a processor (e.g., processor 102), identify that a cache line has been selected for eviction by the replacement policy, and determine that the selected cache line is protected.
The method 400 may also be performed by a processing circuit that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software, firmware, or a combination thereof. Alternatively, other components of the computing system 100 may perform some or all of the operations of the method 400.
As shown in Fig. 4, the method 400 may begin with the processing circuit identifying that a cache line has been selected by the random replacement policy (block 410). At block 420, the processing circuit may receive a status of the cache line that has been selected. In some embodiments, the status of the cache line may be indicated in a status bit or flag. At block 430, the processing circuit may determine whether the selected cache line has a protected status and so may not be evicted. For example, if a cache line contains data that is frequently used, it may be desirable to protect the cache line from eviction in order to prevent unnecessary write operations to that cache line. In some embodiments, the protected status may be designated by a user. In other embodiments, the protected status may be designated by the processing device. If the processing circuit determines that the selected cache line is not protected, then the cache line is evicted (block 440). For example, referring back to Fig. 3B, upon selection of cache line 350 for eviction, the processing circuit receives a status of cache line 350. The status of cache line 350 indicates whether cache line 350 is protected from eviction. If cache line 350 is not protected, then the processing circuit may evict cache line 350. Otherwise, if the selected cache line is protected, then the processing circuit determines not to evict the selected cache line (block 450). The processing circuit then evicts another cache line that is not protected (block 460). Using the above example, if cache line 350 is protected, then the processing circuit will determine not to evict cache line 350. The processing circuit may then evict cache line 340 if cache line 340 does not have a protected status.
Fig. 5 is a flow diagram of a method 500 of determining a cache replacement policy to be used for a cache eviction. The method 500 may be performed by the cache eviction circuit 107. For example, the method 500 may be used by the cache eviction circuit 107 of Fig. 1 to receive a request to evict a cache line from a processor (e.g., processor 102) and determine a condition for selecting a replacement policy. The method 500 may also be performed by a processing circuit that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software, firmware, or a combination thereof. Alternatively, other components of the computing system 100 may perform some or all of the operations of the method 500.
As shown in Fig. 5, the method 500 may begin with the processing circuit selecting a first replacement policy, such as those discussed in Figs. 3A and 3B (block 510). For example, the processing circuit may select an LRU replacement policy. The processing circuit may evict cache lines based on the first replacement policy (block 520). The processing circuit may determine that a condition to select a second replacement policy has occurred, such as one of the conditions discussed in Fig. 2 (block 530). The processing circuit may then select the second replacement policy.
For example, if the condition is a period of time lasting 30 seconds and more than 30 seconds has elapsed, then the processing circuit may select a random replacement policy as described in Fig. 3B. The processing circuit may further evict cache lines based on the second replacement policy (block 540). The processing circuit may then determine that a condition to select the first replacement policy has occurred (block 550). For example, if the condition is a number of cache evictions of a cache line exceeding 1,000,000 cache line evictions and the number of cache evictions of the cache line exceeds that threshold, then the processing circuit may select the LRU replacement policy used at block 510. As such, a first replacement policy may be used to evict cache lines. After a condition has been satisfied with respect to the first replacement policy, a second replacement policy may be used to evict cache lines. Subsequently, after another condition has been satisfied with respect to the second replacement policy, the eviction of cache lines may again be based on the first replacement policy. Figure 6 illustrates a block diagram of the micro-architecture for a processor 600 that includes hybrid cores in accordance with one embodiment of the disclosure. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, the in-order front end 601 is the part of the processor 600 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The front end 601 may include several units. In one embodiment, the instruction prefetcher 626 fetches instructions from memory and feeds them to an instruction decoder 628 which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro-ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 630 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 634 for execution. When the trace cache 630 encounters a complex instruction, the microcode ROM 632 provides the uops needed to complete the operation. Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 628 accesses the microcode ROM 632 to perform the instruction. For one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 628. In another embodiment, an instruction can be stored within the microcode ROM 632 should a number of micro-ops be needed to accomplish the operation. The trace cache 630 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 632.
After the microcode ROM 632 finishes sequencing micro-ops for an instruction, the front end 601 of the machine resumes fetching micro-ops from the trace cache 630. The out-of-order execution engine 603 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 602, slow/general floating point scheduler 604, and simple floating point scheduler 606. The uop schedulers 602, 604, 606 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 602 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution. Register files 608, 610 sit between the schedulers 602, 604, 606 and the execution units 612, 614, 616, 618, 620, 622, 624 in the execution block 611. There is a separate register file 608, 610 for integer and floating point operations, respectively. Each register file 608, 610 of one embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops. The integer register file 608 and the floating point register file 610 are also capable of communicating data with each other. For one embodiment, the integer register file 608 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 610 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. The execution block 611 contains the execution units 612, 614, 616, 618, 620, 622, 624, where the instructions are actually executed. This section includes the register files 608, 610 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 600 of one embodiment comprises a number of execution units: address generation unit (AGU) 612, AGU 614, fast ALU 616, fast ALU 618, slow ALU 620, floating point ALU 622, and floating point move unit 624. For one embodiment, the floating point execution blocks 622, 624 execute floating point, MMX, SIMD, SSE, and other operations. The floating point ALU 622 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, the ALU operations go to the high-speed ALU execution units 616, 618. The fast ALUs 616, 618 of one embodiment can execute fast operations with an effective latency of half a clock cycle.
For one embodiment, most complex integer operations go to the slow ALU 620 as the slow ALU 620 includes integer execution hardware for long latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 612, 614. For one embodiment, the integer ALUs 616, 618, 620 are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 616, 618, 620 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 622, 624 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 622, 624 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions. In one embodiment, the uop schedulers 602, 604, 606 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 600, the processor 600 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for text string comparison operations. The processor 600 also includes logic to implement store address prediction for memory disambiguation according to embodiments of the disclosure. In one embodiment, the execution block 611 of processor 600 may include a store address predictor (not shown) for implementing store address prediction for memory disambiguation. The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands.
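As a brief, purely illustrative aside (not part of the disclosure), packed data lets one instruction operate on several elements at once; for example, four pairs of 32-bit integers held in 128-bit XMM registers can be added element-wise with a single SSE2 intrinsic:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Adds four pairs of 32-bit integers in one packed operation.
void packed_add(const int32_t a[4], const int32_t b[4], int32_t out[4]) {
    __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a));
    __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b));
    __m128i vsum = _mm_add_epi32(va, vb);  // element-wise 32-bit addition
    _mm_storeu_si128(reinterpret_cast<__m128i*>(out), vsum);
}
```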
In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point are either contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers. Referring now to Figure 7, shown is a block diagram illustrating a system 700 in which an embodiment of the disclosure may be used. As shown in Figure 7, multiprocessor system 700 is a point-to-point interconnect system, and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. While shown with only two processors 770, 780, it is to be understood that the scope of embodiments of the disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given multiprocessor. In one embodiment, the multiprocessor system 700 may implement hybrid cores as described herein. Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 also includes as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in Figure 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors. Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798. Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset 790 may be coupled to a first bus 716 via an interface 796. In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited. As shown in Figure 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727 and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 7, a system may implement a multi-drop bus or other such architecture. Referring now to Figure 8, shown is a block diagram of a system 800 in which one embodiment of the disclosure may operate. The system 800 may include one or more processors 810, 815, which are coupled to a graphics memory controller hub (GMCH) 820.
The optional nature of additional processors 815 is denoted in Figure 8 with broken lines. In one embodiment, processors 810, 815 implement hybrid cores according to embodiments of the disclosure. Each processor 810, 815 may be some version of the circuit, integrated circuit, processor, and/or silicon integrated circuit as described above. However, it should be noted that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 810, 815. Figure 8 illustrates that the GMCH 820 may be coupled to a memory 840 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache. The GMCH 820 may be a chipset, or a portion of a chipset. The GMCH 820 may communicate with the processor(s) 810, 815 and control interaction between the processor(s) 810, 815 and memory 840. The GMCH 820 may also act as an accelerated bus interface between the processor(s) 810, 815 and other elements of the system 800. For at least one embodiment, the GMCH 820 communicates with the processor(s) 810, 815 via a multi-drop bus, such as a frontside bus (FSB) 895. Furthermore, GMCH 820 is coupled to a display 845 (such as a flat panel or touchscreen display). GMCH 820 may include an integrated graphics accelerator. GMCH 820 is further coupled to an input/output (I/O) controller hub (ICH) 850, which may be used to couple various peripheral devices to system 800. Shown for example in the embodiment of Figure 8 is an external graphics device 860, which may be a discrete graphics device, coupled to ICH 850, along with another peripheral device 870. Alternatively, additional or different processors may also be present in the system 800. For example, additional processor(s) 815 may include additional processor(s) that are the same as processor 810, additional processor(s) that are heterogeneous or asymmetric to processor 810, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the processor(s) 810, 815 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 810, 815. For at least one embodiment, the various processors 810, 815 may reside in the same die package. Referring now to Figure 9, shown is a block diagram of a system 900 in which an embodiment of the disclosure may operate. Figure 9 illustrates processors 970, 980. In one embodiment, processors 970, 980 may implement hybrid cores as described above. Processors 970, 980 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively, and intercommunicate with each other via point-to-point interconnect 950 between point-to-point (P-P) interfaces 978 and 988, respectively. Processors 970, 980 each communicate with chipset 990 via point-to-point interconnects 952 and 954 through the respective P-P interfaces 976 to 994 and 986 to 998 as shown. For at least one embodiment, the CL 972, 982 may include integrated memory controller units. CLs 972, 982 may include I/O control logic. As depicted, memories 932, 934 coupled to CLs 972, 982 and I/O devices 914 are also coupled to the control logic 972, 982.
Legacy I/O devices 915 are coupled to the chipset 990 via interface 996. Embodiments may be implemented in many different system types. Figure 10 is a block diagram of a SOC 1000 in accordance with an embodiment of the present disclosure. Dashed lined boxes are optional features on more advanced SOCs. In Figure 10, an interconnect unit(s) 1012 is coupled to: an application processor 1020 which includes a set of one or more cores 1002A-N and shared cache unit(s) 1006; a system agent unit 1010; a bus controller unit(s) 1016; an integrated memory controller unit(s) 1014; a set of one or more media processors 1018 which may include integrated graphics logic 1008, an image processor 1024 for providing still and/or video camera functionality, an audio processor 1026 for providing hardware audio acceleration, and a video processor 1028 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, a memory module may be included in the integrated memory controller unit(s) 1014. In another embodiment, the memory module may be included in one or more other components of the SOC 1000 that may be used to access and/or control a memory. The application processor 1020 may include a store address predictor for implementing hybrid cores as described in embodiments herein. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. In some embodiments, one or more of the cores 1002A-N are capable of multi-threading. The system agent 1010 includes those components coordinating and operating cores 1002A-N. The system agent unit 1010 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008. The display unit is for driving one or more externally connected displays. The cores 1002A-N may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 1002A-N may be in order while others are out-of-order. As another example, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. The application processor 1020 may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, Atom™ or Quark™ processor, which are available from Intel™ Corporation, of Santa Clara, Calif. Alternatively, the application processor 1020 may be from another company, such as ARM Holdings™, Ltd, MIPS™, etc. The application processor 1020 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The application processor 1020 may be implemented on one or more chips.
The application processor 1020 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. Figure 11 is a block diagram of an embodiment of a system-on-chip (SOC) design in accordance with the present disclosure. As a specific illustrative example, SOC 1100 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often a UE connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network. Here, SOC 1100 includes two cores, 1106 and 1107. Cores 1106 and 1107 may conform to an Instruction Set Architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 1106 and 1107 are coupled to cache control 1108 that is associated with bus interface unit 1109 and L2 cache 1110 to communicate with other parts of system 1100. Interconnect 1110 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which potentially implements one or more aspects of the described disclosure. In one embodiment, cores 1106, 1107 may implement hybrid cores as described in embodiments herein. Interconnect 1110 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1130 to interface with a SIM card, a boot ROM 1135 to hold boot code for execution by cores 1106 and 1107 to initialize and boot SOC 1100, an SDRAM controller 1140 to interface with external memory (e.g. DRAM 1160), a flash controller 1145 to interface with non-volatile memory (e.g. Flash 1165), a peripheral control 1150 (e.g. Serial Peripheral Interface) to interface with peripherals, video codecs 1120 and video interface 1125 to display and receive input (e.g. touch-enabled input), GPU 1115 to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the disclosure described herein. In addition, the system 1100 illustrates peripherals for communication, such as a Bluetooth module 1170, 3G modem 1175, GPS 1180, and Wi-Fi 1185. Figure 12 illustrates a diagrammatic representation of a machine in the example form of a computer system 1200 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) (such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1206 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1218, which communicate with each other via a bus 1230. Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. In one embodiment, processing device 1202 may include one or more processing cores. The processing device 1202 is configured to execute the processing logic 1226 for performing the operations and steps discussed herein. In one embodiment, processing device 1202 is the same as processor architecture 100 described with respect to Figure 1, in accordance with embodiments of the disclosure. The computer system 1200 may further include a network interface device 1208 communicably coupled to a network 1220. The computer system 1200 may also include a video display unit 1210 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1212 (e.g., a keyboard), a cursor control device 1214 (e.g., a mouse), and a signal generation device 1216 (e.g., a speaker). Furthermore, computer system 1200 may include a graphics processing unit 1222, a video processing unit 1228, and an audio processing unit 1232. The data storage device 1218 may include a machine-accessible storage medium 1224 on which is stored software 1226 implementing any one or more of the methodologies of functions described herein, such as implementing store address prediction for memory disambiguation as described above. The software 1226 may also reside, completely or at least partially, within the main memory 1204 as instructions 1226 and/or within the processing device 1202 as processing logic 1226 during execution thereof by the computer system 1200; the main memory 1204 and the processing device 1202 also constituting machine-accessible storage media. The machine-readable storage medium 1224 may also be used to store instructions 1226 implementing store address prediction for hybrid cores such as described according to embodiments of the disclosure. While the machine-accessible storage medium 1224 is shown in an example embodiment to be a single medium, the term "machine-accessible storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term "machine-accessible storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instruction for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-accessible storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.The following examples pertain to further embodiments.Example 1 is an apparatus comprising a memory and a cache eviction circuit operatively coupled to the memory, to receive a request to evict at least one cache line of a plurality of cache lines stored at a cache memory. The cache eviction circuit may further determine whether a condition associated with a replacement policy has been satisfied and in response to determining that the condition associated with the replacement policy has been satisfied, evict a second cache line of the plurality of cache lines based on a random replacement policy.In Example 2, in the apparatus of Example 1, the cache eviction circuit is further to identify a period of time that the replacement policy has been used to evict one or more of the plurality of cache lines stored at the cache memory, wherein the condition associated with the replacement policy has been satisfied when the period of time exceeds a threshold period of time.In Example 3, in the apparatus of any of Examples 1-2, the cache eviction circuit is further to identify a number of times that a particular cache line of the plurality of cache lines has been evicted by the replacement policy, wherein the condition associated with the replacement policy has been satisfied when the number of times satisfies a threshold number of times that the particular cache line has been evicted by using the replacement policy.In Example 4, in the apparatus of any of Examples 1-3, the replacement policy is based on a least recently used cache line of the plurality of cache lines, and the random replacement policy is based on a random selection of a particular cache line of the plurality of cache lines to evict.In Example 5, in the apparatus of any of Examples 1-4, the cache eviction circuit is further to identify a particular cache line of the plurality of cache lines and identify a status of the particular cache line. 
The cache eviction circuit may further determine whether the status of the particular cache line is associated with a protected status and, in response to determining that the status of the particular cache line is associated with the protected status, determine to not evict the particular cache line and select a second cache line to evict in response to a second request. In Example 6, in the apparatus of any of Examples 1-5, the cache eviction circuit is further to identify a status of a second cache line and determine whether the status of the second cache line is associated with a protected status, wherein the evicting of the second cache line is based on determining that the status of the cache line is not associated with the protected status. In Example 7, in the apparatus of any of Examples 1-6, the protected status indicates whether the second cache line is available to be evicted by the random replacement policy or is not available to be evicted by the random replacement policy. In Example 8, in the apparatus of any of Examples 1-7, in response to determining that the condition associated with the replacement policy has not been satisfied, the cache eviction circuit is further to evict the second cache line of the plurality of cache lines based on the replacement policy. Example 9 is a system that comprises a processor core and a cache eviction circuit associated with the processor core to receive a first request to evict at least one cache line of a plurality of cache lines stored at a cache memory, evict a first cache line of the plurality of cache lines based on a replacement policy in response to the first request and receive a second request to evict at least one of the plurality of cache lines from the cache memory, the second request being received after the first request. The cache eviction circuit is further to determine whether a condition associated with the replacement policy has been satisfied and, in response to determining that the condition associated with the replacement policy has been satisfied and in response to the second request, evict a second cache line of the plurality of cache lines based on a random replacement policy. In Example 10, in the system of Example 9, the cache eviction circuit is further to identify a period of time that the replacement policy has been used to evict one or more of the plurality of cache lines stored at the cache memory, wherein the condition associated with the replacement policy has been satisfied when the period of time exceeds a threshold period of time. In Example 11, in the system of any of Examples 9-10, the cache eviction circuit is further to identify a number of times that a particular cache line of the plurality of cache lines has been evicted by the replacement policy, wherein the condition associated with the replacement policy has been satisfied when the number of times satisfies a threshold number of times that the particular cache line has been evicted by using the replacement policy. In Example 12, in the system of any of Examples 9-11, the replacement policy is based on a least recently used cache line of the plurality of cache lines, and the random replacement policy is based on a random selection of a particular cache line of the plurality of cache lines to evict. In Example 13, in the system of any of Examples 9-12, the cache eviction circuit is further to identify a particular cache line of the plurality of cache lines and identify a status of the particular cache line.
The cache eviction circuit is further to determine whether the status of the particular cache line is associated with a protected status and, in response to determining that the status of the particular cache line is associated with the protected status, determine to not evict the particular cache line and select the second cache line to evict in response to the second request. In Example 14, in the system of any of Examples 9-13, the cache eviction circuit is further to identify a status of the second cache line and determine whether the status of the second cache line is associated with a protected status, wherein the evicting of the second cache line is based on determining that the status of the cache line is not associated with the protected status. In Example 15, in the system of any of Examples 9-14, the protected status indicates whether the second cache line is available to be evicted by the random replacement policy or is not available to be evicted by the random replacement policy. In Example 16, in the system of any of Examples 9-15, in response to determining that the condition associated with the replacement policy has not been satisfied and in response to the second request, the cache eviction circuit is further to evict the second cache line of the plurality of cache lines based on the replacement policy. Example 17 is a method comprising receiving a first request to evict at least one cache line of a plurality of cache lines stored at a cache memory, evicting a first cache line of the plurality of cache lines based on a replacement policy in response to the first request, receiving a second request to evict at least one of the plurality of cache lines from the cache memory, the second request being received after the first request, determining whether a condition associated with the replacement policy has been satisfied and, in response to determining that the condition associated with the replacement policy has been satisfied and in response to the second request, evicting a second cache line of the plurality of cache lines based on a random replacement policy. In Example 18, in the method of Example 17, the method further comprises identifying a period of time that the replacement policy has been used to evict one or more of the plurality of cache lines stored at the cache memory, wherein the condition associated with the replacement policy has been satisfied when the period of time exceeds a threshold period of time. In Example 19, in the method of any of Examples 17-18, the method further comprises identifying a number of times that a particular cache line of the plurality of cache lines has been evicted by the replacement policy, wherein the condition associated with the replacement policy has been satisfied when the number of times satisfies a threshold number of times that the particular cache line has been evicted by using the replacement policy. In Example 20, in the method of any of Examples 17-19, the replacement policy is based on a least recently used cache line of the plurality of cache lines, and the random replacement policy is based on a random selection of a particular cache line of the plurality of cache lines to evict. In Example 21, in the method of any of Examples 17-20, the method further comprises identifying a particular cache line of the plurality of cache lines, identifying a status of the particular cache line, determining whether the status of the particular cache line is associated with a protected status and, in response to determining that the
status of the particular cache line is associated with the protected status, determining to not evict the particular cache line and selecting the second cache line to evict in response to the second request. In Example 22, in the method of any of Examples 17-21, the method further comprises identifying a status of the second cache line and determining whether the status of the second cache line is associated with a protected status, wherein the evicting of the second cache line is based on determining that the status of the cache line is not associated with the protected status. In Example 23, in the method of any of Examples 17-22, the protected status indicates whether the second cache line is available to be evicted by the random replacement policy or is not available to be evicted by the random replacement policy. In Example 24, in the method of any of Examples 17-23, the method further comprises evicting the second cache line of the plurality of cache lines based on the replacement policy. Example 25 is a system on a chip (SOC) comprising a plurality of functional units and a controller, coupled to the functional units, to receive a request to evict at least one cache line of a plurality of cache lines stored at a cache memory, determine whether a condition associated with a replacement policy has been satisfied and, in response to determining that the condition associated with the replacement policy has been satisfied, evict a second cache line of the plurality of cache lines based on a random replacement policy. In Example 26, the SOC of Example 25 further comprises the subject matter of any of Examples 2-8. In Example 27, the SOC of any of Examples 25-26 further comprises the subject matter of any of Examples 17-24. Example 28 is an apparatus comprising means for receiving a first request to evict at least one cache line of a plurality of cache lines stored at a cache memory, means for evicting a first cache line of the plurality of cache lines based on a replacement policy in response to the first request, means for receiving a second request to evict at least one of the plurality of cache lines from the cache memory, the second request being received after the first request, means for determining whether a condition associated with the replacement policy has been satisfied and means for, in response to determining that the condition associated with the replacement policy has been satisfied and in response to the second request, evicting a second cache line of the plurality of cache lines based on a random replacement policy. In Example 29, the apparatus of Example 28 further comprises the subject matter of any of Examples 1-8 and 17-24. Example 30 is an apparatus comprising a memory and a processor coupled to the memory and comprising a controller, wherein the controller is configured to perform the method of any of Examples 17-24. In Example 31, the apparatus of Example 30 further comprises the subject matter of any of Examples 1-16. Example 32 is a non-transitory machine-readable storage medium including instructions that, when accessed by a processing device, cause the processing device to perform operations comprising identifying a particular cache line of the plurality of cache lines, identifying a status of the particular cache line, determining whether the status of the particular cache line is associated with a protected status and, in response to determining that the status of the particular cache line is associated with the protected status, determining to not evict the particular
cache line and selecting the second cache line to evict in response to the second request. In Example 33, in the non-transitory machine-readable storage medium of Example 32, the operations further comprise the subject matter of any of Examples 17-25. While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this disclosure. A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure. A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware.
In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices. Use of the phrase 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating. Furthermore, use of the phrases 'to,' 'capable of/to,' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner. A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system. Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states. The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element.
A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom. Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer). Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment. |
A system (100) for memory allocation in a multiclass memory system (101) includes a processor (102) coupleable to a plurality of memories (106, 107, 108, 109) sharing a unified memory address space, and a library (120, 620) to store a library of software functions. The processor identifies a type of a data structure (128) in response to a memory allocation function call (126) to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure. |
WHAT IS CLAIMED IS: 1. A method comprising: responsive to a memory allocation function call (126, 602, 604) to a library (120, 620) for allocating memory to a data structure (128) in a multiclass memory system (101): identifying, at a processor (102) of the multiclass memory system, a type of the data structure; and allocating, at the processor of the multiclass memory system and using the library, portions of the data structure among multiple memories (106, 107, 108, 109) of the multiclass memory system based on the type of the data structure. 2. The method of claim 1, wherein allocating portions of the data structure among multiple memories further comprises: allocating metadata (402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412) of the data structure to a first set of one or more memories of the multiple memories; and allocating data (422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432) of the data structure to a second set of one or more memories of the multiple memories. 3. The method of claim 1, wherein: the data structure comprises an ordered data structure (200, 400); and allocating portions of the ordered data structure comprises: allocating an initial portion (250) of the ordered data structure to a first set of one or more memories of the multiple memories; and allocating a final portion (252) of the ordered data structure to a second set of one or more memories of the multiple memories. 4. The method of claim 3, wherein: the ordered data structure comprises a tree structure (400); the initial portion of the ordered data structure comprises initial levels of nodes (402, 403, 404, 405, 406, 407, 408) of the tree structure; and the final portion of the ordered data structure comprises the final levels of nodes (409, 410, 411, 412) of the tree structure. 5. The method of claim 1, wherein: the memory allocation function call further comprises: a plurality of parameters (136); and allocating portions of the data structure among multiple memories of the multiclass memory system further comprises allocating portions of the data structure among multiple memories of the multiclass memory system based on the plurality of parameters. 6. The method of claim 1, wherein: the data structure comprises a linked list (200); and allocating portions of the data structure among multiple memories of the multiclass memory system further comprises: allocating an initial segment (250) of the linked list to a first set of one or more memories of the multiple memories; allocating a final segment (252) of the linked list to a second set of one or more memories of the multiple memories; and wherein the first set of one or more memories provides faster access than the second set of one or more memories. 7. The method of claim 1, wherein: the data structure comprises a map structure (300); and allocating portions of the data structure among multiple memories of the multiclass memory system further comprises: allocating a key portion (350) of the map structure to a first set of one or more memories of the multiple memories; allocating a value portion (352) of the map structure to a second set of one or more memories of the multiple memories; and wherein the first set of one or more memories provides faster access than the second set of one or more memories. 8.
The method of claim 1, wherein: the data structure comprises a graph structure (400); and allocating portions of the data structure among multiple memories of the multiclass memory system further comprises: allocating node metadata (402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412) of the graph structure to a first set of one or more memories of the multiple memories; allocating node data portions (422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432) of the graph structure to a second set of one or more memories of the multiple memories; and wherein the first set of one or more memories provides faster access than the second set of one or more memories. 9. A method comprising: executing, at a processor (102) of a multiclass memory system (101), a memory allocation function call (602, 604) having an indicator of a specified memory level of a plurality of memory levels in the multiclass memory system; and allocating memory for a data structure (128) at the specified memory level responsive to executing the memory allocation function call. 10. The method of claim 9, wherein the indicator comprises at least one of: a type of the function call, a parameter (136) passed via the function call, and a syntax indicator separate from the function call. 11. A system (100) comprising: a library (120, 620) store to store a library; and a processor (102) coupleable to a plurality of memories (106, 107, 108, 109) sharing a unified memory address space, the processor to: responsive to a memory allocation function call (126, 602, 604) to the library for allocating memory to a data structure: identify a type of the data structure; and allocate, using the library, portions of the data structure among multiple memories of the plurality of memories based on the type of the data structure. 12. The system of claim 11, wherein the processor is to allocate portions of the data structure among multiple memories by: allocating metadata (402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412) of the data structure to a first set of one or more of the multiple memories; and allocating data (422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432) of the data structure to a second set of one or more of the multiple memories. 13. The system of claim 11, wherein: the memory allocation function call further comprises a plurality of parameters (136); and the processor is to allocate portions of the data structure among multiple memories of the plurality of memories by allocating portions of the data structure among multiple memories of the plurality of memories based on the plurality of parameters. 14. The system of claim 11, wherein: the data structure comprises a linked list (200); and the processor is to allocate portions of the data structure among multiple memories of the plurality of memories by: allocating an initial segment (250) of the linked list to a first set of one or more memories of the multiple memories; allocating a final segment (252) of the linked list to a second set of one or more memories of the multiple memories; and wherein the first set of one or more memories provides faster access than the second set of one or more memories. 15.
15. The system of claim 11, wherein:
the data structure comprises a map structure (300); and
the processor is to allocate portions of the data structure among multiple memories of the plurality of memories by:
allocating a key portion (350) of the map structure to a first set of one or more memories of the multiple memories;
allocating a value portion (352) of the map structure to a second set of one or more memories of the multiple memories; and
wherein the first set of one or more memories provides faster access than the second set of one or more memories.

16. The system of claim 11, wherein:
the data structure comprises a graph structure (400); and
the processor is to allocate portions of the data structure among multiple memories of the plurality of memories by:
allocating node metadata (402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412) of the graph structure to a first set of one or more memories of the multiple memories;
allocating node data portions (422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432) of the graph structure to a second set of one or more memories of the multiple memories; and
wherein the first set of one or more memories provides faster access than the second set of one or more memories.
SYSTEM AND METHOD FOR MEMORY ALLOCATION IN A MULTICLASS MEMORY SYSTEM

BACKGROUND

Field of the Disclosure

The present disclosure relates generally to memory systems and more particularly to memory systems employing multiple memories.

Description of the Related Art

Processing systems may implement multiple types or levels of memory (e.g., combinations of volatile and nonvolatile memory architectures, or in-package and external memory) to satisfy a variety of design requirements. For example, multilevel memory may be used to take advantage of increased bandwidth, capacity, and expandability by combining memories that offer one or more of these features. Allocation of data structures among the memories of a multilevel memory system having a unified memory address space can impact the system performance.

Conventionally, the operating system or the hardware of the system determines how to allocate data structures among the memories of the multilevel memory system based on static, predefined conditions or based on a seemingly arbitrary allocation. This often can result in an inefficient or ineffective utilization of the different memories of the multilevel memory system.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.

FIG. 1 is a block diagram of a processing system employing a multiclass memory system in accordance with some embodiments.

FIG. 2 is a diagram illustrating an example memory allocation for a linked list data structure in accordance with some embodiments.

FIG. 3 is a diagram illustrating an example memory allocation for a map data structure in accordance with some embodiments.

FIG. 4 is a diagram illustrating an example memory allocation for a binary tree data structure in accordance with some embodiments.

FIG. 5 is a flow diagram illustrating a method for memory allocation for a data structure among memories of a multiclass memory system in accordance with some embodiments.

FIG. 6 is a block diagram of a processing system employing a multiclass memory system in accordance with some embodiments.

FIG. 7 is a flow diagram illustrating a method for memory allocation for a data structure among memories of a multiclass memory system in accordance with some embodiments.

DETAILED DESCRIPTION

FIGs. 1-7 illustrate example systems and techniques for allocating memory for data structures of a software program in a processing system employing a multiclass memory system. In some embodiments, the processing system comprises the multiclass memory system and a processor having processing cores and a memory controller. The multiclass memory system comprises a plurality of memories of at least two different memory classes (each class defining one or both of a level and a type) that share a unified memory address space. In response to the processor executing a memory allocation function call (e.g., malloc) to a library to allocate memory to a data structure, the library identifies the type of the data structure, and, based on the type of the data structure, an operating system allocates portions of the data structure among multiple memories of the multiclass memory system.
For example, in some embodiments, the operating system allocates portions of the data structure among the memories such that more frequently searched or accessed portions of the data structure are allocated to a memory with a faster access time, while less frequently searched or accessed portions of the data structure are allocated to a memory with a slower access time. In some embodiments, the function call comprises a plurality of parameters, such that when the processor executes the function call, the operating system allocates the portions of the data structure based on the parameters. In another embodiment, the function call comprises an indicator of a memory level of the multiclass memory system, such that when the processor executes the function call comprising the indicator, the operating system allocates an identified portion of the data structure to the indicated memory level. The described techniques allow for more efficient allocation of portions of the data structure based on how the processing cores are likely to search or access the portions of the data structure, improving performance and reducing power consumption.

FIG. 1 illustrates a block diagram of a processing system 100 employing a multiclass memory system 101 in accordance with some embodiments. The processing system 100 comprises a processor 102 and a memory hierarchy 104 comprising a plurality of memories belonging to two or more different classes, each class defining one or both of a level and a type. The memory level is based on the locational access speed of the memory. For example, between in-package memory and outside-package memory (or "on-chip" and "off-chip" memories), the access speed of the in-package memory will generally be faster. In at least one embodiment, the multiclass memory system 101 is a multilevel memory system. The memory type is based on the particular architecture of the memory, and each memory may comprise any of a variety of memory types, for example, lower granularity divisions, such as volatile memory vs. non-volatile memory or dynamic random access memory (DRAM) vs. static random access memory (SRAM) vs. phase change memory vs. memristor memory, or higher granularity divisions, such as different architectures within the same general memory architecture, for example, double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), graphics double data rate version five synchronous dynamic random access memory (GDDR5 SDRAM), and low power double data rate synchronous dynamic random access memory (LPDDR SDRAM).

Each of the memories 106, 107, 108, 109 within the unified memory address space 116 is classified into its respective memory class (denoted class "I" and class "II") based on its level, type, or both. As such, in some embodiments the memories 106, 107, 108, 109 may be classified such that memories within the same class share one or more of the same level, the same type, and other operational characteristics, such as access time, bandwidth, data transfer rate, and the like.
To illustrate, the memories 106, 107 may be classified as class I as they both are at the same level (e.g., in-package) and the memories 108, 109 may be classified as class II as they both are at the same level (e.g., outside-package); or the memories 106, 107 may be classified as class I as they both implement, for example, DRAM architectures, whereas the memories 108, 109 may be classified as class II as they both implement, for example, SRAM architectures; and the like.

While the memory hierarchy 104 is illustrated in the embodiment of FIG. 1 as two in-package memories 106, 107 and two outside-package memories 108, 109, other embodiments may employ any number of memories spanning at least two classes. Additionally, in some embodiments the memory hierarchy 104 may comprise any combination of in-package and outside-package memories, including all outside-package memories or all in-package memories. Some embodiments of the memory hierarchy 104 may implement die-stacked memory to increase capacity or otherwise take advantage of multiple memories while maintaining a smaller overall footprint. Die-stacked memory may be implemented in a vertical stacking arrangement, using through-silicon via (TSV) or other vertical interconnect technologies, or in a horizontal arrangement, whereby the memory dies are "stacked" horizontally relative to the processor or one another, such that they are connected via an interposer. In the embodiment of FIG. 1, the in-package memories 106, 107 are illustrated as being of the same class (denoted class "I"), and the outside-package memories 108, 109 are illustrated as being of the same class (denoted class "II"). Further, the multiclass memory system 101 of other embodiments may comprise memories of different levels, different types, or a combination thereof. For example, in at least one embodiment, the multiclass memory system 101 comprises memories all of the same level but of different types.

The processor 102 comprises processor cores 110, 111 and a memory controller 112. While the illustrated embodiment depicts a memory controller 112 implemented at the processor 102, in other embodiments the memory controller 112 may be implemented elsewhere, for example, at a memory interface of a stacked memory device implementing one or more of the memories 108, 109. Further, in some embodiments, the processor 102 comprises more than one memory controller 112. The memory controller 112 retrieves data from the memories 106, 107, 108, 109 in response to a memory address request based on an address space allocation. Thus, in the illustrated embodiment, the memory controller 112, and the processing system 100 as a whole, treat the memories 106, 107, 108, 109 as a single, flat, unified memory address space 116. As a result, the different classes (I, II) of memories are still logically part of the same level of the traditional memory hierarchy, in that they are all part of the same main or system memory, and are therefore all accessible through the same, unified, flat physical memory address space.

Conventionally, the operating system or the hardware of the system determines how to allocate data structures among the memories of a multiclass memory system based on static, predefined conditions or based on a seemingly arbitrary allocation. Since these conventional approaches cannot take advantage of higher-level (e.g., software, data structure, algorithm, etc.)
semantic or domain-specific knowledge of how data will be accessed, frequently accessed portions of data structures are often allocated to lower performance memories, leading to decreased efficiency and overall degraded performance. In contrast, in the illustrated embodiment, a library store comprises a library 120 which provides data structures, algorithms, and other services through an Application Programming Interface (API) 122 to a programmer or other user, such that the back-end implementation of the library 120 dynamically handles memory allocation decisions. This allows for allocation decisions based on higher-level semantic or domain-specific knowledge of how data will be accessed. For example, in some embodiments, the library 120 may use a multilevel-memory-aware software interface to selectively allocate data structures to the memories 106, 107, 108, 109 of the multiclass memory system 101, or it may maintain its own pools of memory pages from the different memory levels and explicitly handle the allocation of the data structure to these pages as it sees fit. The library 120 may be any library that transparently manages the memory allocation, for example, the C++ standard template library (STL), Java standard libraries, C# and the .NET framework, custom libraries, domain-specific libraries, and the like. Based on the memory allocation decision of the library 120, an operating system 121 of the processing system 100 allocates a unified, flat address space to the memories 106, 107, 108, 109.

In the illustrated embodiment, the processor core 111 executes a software program 124 comprising a memory allocation function call 126 to the library 120 to allocate memory to a data structure 128. The software program 124 accesses the library 120 via the API 122. In at least one embodiment, the library 120 references a data structure type table 130 to determine how to allocate the data structure 128 among the memories 106, 107, 108, 109 of the multiclass memory system 101 based on the type of the data structure 128 to be allocated. The data structure type table 130 may comprise static allocation rules, may maintain heuristics updated based on memory access history or other information, or the like. The data structure 128 may be any of a variety of data structures, for example, a linked list, a map structure, a binary tree, a graph structure, an array, a tuple, and the like. Based on the type of the data structure 128, the library 120 may decide that the operating system 121 is to allocate different portions of the data structure 128 to different memories of the multiclass memory system 101, in an effort to maintain efficient performance of the processor 102.

For example, in the illustrated embodiment, the library 120 indicates that the operating system 121 is to allocate a first portion 132 of the data structure 128 to memory 106, and a second portion 134 of the data structure 128 to memory 109. The library 120 may make such a decision based on the dynamic access patterns of the type of data structure (e.g., more frequently used portions should be allocated to memories with faster access times), the amount of memory available in each memory 106, 107, 108, 109 or class (e.g., as much of the data structure 128 as possible should be allocated to the memories with faster access times as long as they have available memory space), a combination of these, and the like.
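By way of illustration only, the following C++ sketch shows how a type-table-driven split of this kind might be expressed. The MemClass and AllocPolicy types, the contents of kTypeTable, and the mm_alloc_in_class() entry point are hypothetical stand-ins for whatever multiclass-aware interface a given library 120 and operating system 121 expose; none of these names appears in the disclosed embodiments.

    #include <cstddef>

    enum MemClass { CLASS_I, CLASS_II };

    // Placeholder: a real system would route this request to a multiclass-aware
    // OS interface; here it draws from the default heap so the sketch compiles.
    static void* mm_alloc_in_class(std::size_t bytes, MemClass) {
        return ::operator new(bytes);
    }

    struct AllocPolicy {
        MemClass metadata_class;  // placement for small, frequently touched parts
        MemClass data_class;      // placement for bulk data
    };

    // Hypothetical per-type policy table, in the spirit of type table 130.
    enum DsType { DS_LINKED_LIST, DS_MAP, DS_BINARY_TREE, DS_TYPE_COUNT };
    static const AllocPolicy kTypeTable[DS_TYPE_COUNT] = {
        { CLASS_I, CLASS_II },  // DS_LINKED_LIST
        { CLASS_I, CLASS_II },  // DS_MAP
        { CLASS_I, CLASS_II },  // DS_BINARY_TREE
    };

    struct Portions { void* metadata; void* data; };

    // Library-side entry point: consult the policy for the structure's type and
    // request each portion from the class that the policy names.
    static Portions lib_alloc(DsType type, std::size_t meta_bytes,
                              std::size_t data_bytes) {
        const AllocPolicy& p = kTypeTable[type];
        return { mm_alloc_in_class(meta_bytes, p.metadata_class),
                 mm_alloc_in_class(data_bytes, p.data_class) };
    }

In this sketch the policy is static; an implementation in the spirit of the type table 130 described above could equally update the table from access-history heuristics.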
In at least one embodiment, the portions 132, 134 represent the metadata and data, respectively, of the data structure 128, such that the metadata portion 132 is allocated to a first set of memories 106 of the multiple memories 106, 107, 108, 109 and the data portion 134 is allocated to a second set of memories 109 of the multiple memories 106, 107, 108, 109. In the illustrated example, the first portion 132 (such as the metadata of the data structure 128) is allocated to memory 106 of class I, which provides faster access than the memory 109 to which the second portion 134 (such as the data of the data structure 128) is allocated. Such an allocation may be made to improve performance of the processor 102 because the metadata is smaller than the data of the data structure 128, because the metadata is accessed more frequently than the data, a combination of these, and the like.

While the illustrated embodiment depicts the library 120 dividing the data structure 128 into two portions 132, 134 to be allocated among two memories 106, 109, respectively, other embodiments may divide the data structure 128 into more portions, allocate the data structure among more memories, or allocate the data structure 128 without dividing it into portions. Further, in some embodiments the library 120 allocates the portions 132, 134 of the data structure 128 to specific memory classes (I, II) for the operating system 121 to distribute subsections of the portions among the memories of a specific class (e.g., if portion 132 is allocated to class I, the operating system 121 distributes subsections of portion 132 among memories 106, 107 of class I) evenly, arbitrarily, or based on one or more heuristics. Further, portions and subsections may represent any portion or subsection of the data structure 128, and need not be contiguous.

In some embodiments, the library 120 may provide any of a variety of interfaces or hooks, which may be optional, to allow the programmer to provide input or direction on how the data structure 128 is to be allocated among the memories 106, 107, 108, 109 of the multiclass memory system 101. For example, in at least one embodiment, the library 120 allows the programmer or other user to provide a parameter 136 with the memory allocation function call 126, such that the operating system 121 allocates portions 132, 134 of the data structure 128 among multiple memories 106, 109 of the multiclass memory system 101 based on the parameter 136 (or plurality of parameters). The parameter 136 may indicate, for example, the type of the data structure 128, how the data structure 128 is to be divided into its portions 132, 134, how many memories 106, 107, 108, 109 are to be used, which memories 106, 107, 108, 109 are to be used, which classes (I, II) are to be used, one or more limits (e.g., only allocate the metadata separately from the data for the first n lines), or the like.

In at least one embodiment, the library 120 comprises a domain-specific library. Some examples of domain-specific libraries are routines that are specialized for basic linear algebra, such as basic linear algebra subprograms (BLAS), automatically tuned linear algebra software (ATLAS), the portable, extensible toolkit for scientific computation (PETSc), and application markup language (APPML). ATLAS, for instance, is a self-optimizing library that searches the optimization parameter space (blocking factor, unrolling) and implementation algorithms to generate highly optimized hardware-specific linear algebra routines.
An example of such libraries is the use of blocking for matrix-matrix multiply. One implementation includes configuring the library 120 to assume different blocking mechanisms for each level of the memory hierarchy 104 and to move data from the lower levels to the upper levels, which also correspond to the innermost loop. Such routines assume access to DRAM is a fixed cost, but in a system comprising multiple classes of memories, the algorithms would have to be refactored for the faster memory. Sparse matrix-vector multiply (SpMV) is another example of an algorithm that is important in the performance of many high performance computing (HPC) applications. Sparse matrices for SpMV are generally represented using the compressed sparse row (CSR) format. In CSR, the nonzero row elements are stored in a values array, the column indices are stored in a column array, and the index into the column array for the start of each row is stored in a row index array. In one embodiment, the library 120 allocates storage of the index arrays in the faster memory (e.g., class I) and the large values array in the slower memory (e.g., class II) to allow for faster searching. In addition to static optimization for a multiclass memory system 101, these libraries can insert profile-guided dynamic optimization to move components of data structures between different memory levels during execution.

FIG. 2 is a diagram illustrating an example memory allocation by the processing system 100 of FIG. 1 for a linked list data structure 200 in accordance with some embodiments. The linked list data structure 200 comprises nodes 204-215 which are linked, such that node 204 comprises a link or other reference to node 205, which comprises a link to node 206, which comprises a link to node 207, and so on, until the final node 215. A memory access to retrieve data from one or more nodes requires traversing each node of the linked list data structure 200 from the first node 204 until the desired node. For example, a memory access of node 207 would require the processor 102 to start with the first node 204, follow its link to node 205, follow its link to node 206, and finally follow its link to node 207. Conventional memory allocation arbitrarily allocates the various nodes 204-215 among the memories 106, 107, 108, 109 of the multiclass memory system 101, such that nodes 204, 205, 206, 207 may be stored in separate memories 106, 107, 108, 109, and a memory access of node 207 would require each of the separate memories 106, 107, 108, 109 to be accessed as the processor 102 traverses the nodes 204, 205, 206, 207 of the linked list. These conventional approaches introduce inefficiencies, as multiple memories may need to be accessed multiple times to reach a node, and frequently accessed data may be stored at memories with slower access times. In contrast, in the illustrated example, the operating system 121 allocates portions of the linked list data structure 200 based on the access order of the nodes 204-215, such that segments having nodes that are earlier in the access order of the linked list data structure 200 are allocated to memories with faster access times, while segments having nodes later in the access order of the linked list data structure 200 are allocated to memories with slower access times.

Responsive to the memory allocation function call 126 to the library 120 via the API 122 for allocating memory to the data structure 128, the library 120 identifies the data structure 128 as the linked list data structure 200.
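Purely as a sketch of the segmented placement just described (and of the division detailed below), the following C++ fragment allocates the first fast_count nodes of a list from the faster class and the remainder from the slower class. The Node layout, the fast_count threshold, and the mm_alloc_in_class() hook are illustrative assumptions, not disclosed interfaces.

    #include <cstddef>

    enum MemClass { CLASS_I, CLASS_II };

    // Placeholder multiclass-aware allocator; uses the default heap here.
    static void* mm_alloc_in_class(std::size_t bytes, MemClass) {
        return ::operator new(bytes);
    }

    struct Node { int value; Node* next; };

    // Build an n-node list whose first fast_count nodes (the initial segment,
    // crossed on every traversal) come from the faster class, and whose
    // remaining nodes come from the slower class.
    static Node* build_list(int n, int fast_count) {
        Node* head = nullptr;
        Node** tail = &head;
        for (int i = 0; i < n; ++i) {
            MemClass c = (i < fast_count) ? CLASS_I : CLASS_II;
            Node* node = static_cast<Node*>(mm_alloc_in_class(sizeof(Node), c));
            node->value = i;
            node->next = nullptr;
            *tail = node;            // append, preserving access order
            tail = &node->next;
        }
        return head;
    }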
Based on the type table 130, one or more parameters 136 provided by the program 124, or the data structure 128 itself, the library 120 determines how the linked list data structure 200 is to be divided into portions and allocated among the multiclass memory system 101. In the illustrated embodiment, two portions are depicted: the first representing an initial segment 250 of the linked list data structure 200, and the second representing a final segment 252 of the linked list data structure 200. Since the nodes 204-208 of the initial segment 250 will be accessed at least as frequently (and likely more frequently) than the nodes 209-215 of the final segment 252, the operating system 121 allocates the initial segment 250 to memory class I comprising memories 106, 107 with relatively faster access times and the final segment 252 to memory class II comprising memories 108, 109 with relatively slower access times. As a result, a memory access of node 207 will only require accessing one or more memories 106, 107 of class I, while a memory access of node 213 will require accessing one or more memories 106, 107 of class I for nodes 204-208 as well as one or more memories 108, 109 of class II for nodes 209-213. Since nodes 204-208 are accessed from memories 106, 107 having relatively faster access times, the processor 102 is able to traverse the initial segment 250 of the list relatively quickly, allowing for more efficient memory accesses of the linked list data structure 200. This linked list memory allocation technique may be applied to any type of linked list, for example, a singly linked list, a doubly linked list, and the like. The linked list data structure 200 may or may not be allocated contiguously within a given memory.

While the illustrated example depicts the linked list data structure 200 divided into two portions representing the initial segment 250 and the final segment 252 allocated to two different memory classes (I, II), the library 120 may determine any number of portions of the linked list data structure 200, and may allocate the portions to any number of memory classes or individual memories. Further, in some embodiments, the library 120 may make its allocation decisions based on one or more parameters 136 provided by the program 124. For example, the parameter 136 may indicate how to divide the linked list data structure 200 into portions, how many portions should be created, which memory classes (I, II) to use, which memories 106, 107, 108, 109 to use, which portions of the linked list data structure 200 should be allocated to which memories 106, 107, 108, 109 or which classes (I, II), the initial node 204 of the linked list data structure 200, or the like.

FIG. 3 is a diagram illustrating an example memory allocation by the processing system 100 of FIG. 1 for a map data structure 300 in accordance with some embodiments. The map data structure 300 comprises a plurality of keys 302-311 bound to a plurality of values 312-321, such that a memory access requires a lookup operation of a key (e.g., key 305) to retrieve the corresponding value (e.g., value 315). Conventional memory allocation arbitrarily allocates the various keys 302-311 and values 312-321 among the memories 106, 107, 108, 109 of the multiclass memory system 101.
These conventional approaches introduce inefficiencies, as multiple memories may need to be accessed multiple times to reach a value (e.g., due to linear chaining or other hash conflict handling techniques), and frequently accessed data may be stored at memories with slower access times. In contrast, in the illustrated example, the operating system 121 allocates portions of the map data structure 300 such that the keys 302-311 of the map data structure 300 are allocated to memories with faster access times, while the corresponding values 312-321 of the map data structure 300 are allocated to memories with slower access times.

Responsive to the memory allocation function call 126 to the library 120 via the API 122 for allocating memory to the data structure 128, the library 120 identifies the data structure 128 as the map data structure 300. Based on the type table 130, one or more parameters 136 provided by the program 124, or the data structure 128 itself, the library 120 determines how the map data structure 300 is to be divided into portions and allocated among the multiclass memory system 101. In the illustrated embodiment, two portions are depicted: the first representing a key portion 350 of the map data structure 300, and the second representing a value portion 352 of the map data structure 300. The operating system 121 allocates the key portion 350 to memory class I comprising memories 106, 107 with relatively faster access times and the value portion 352 to memory class II comprising memories 108, 109 with relatively slower access times. As a result, the key lookup operations may proceed quickly, and then the memory controller 112 may retrieve the corresponding value from a memory with slower access times. The processing system 100 will further realize the efficiencies of such a memory allocation in situations involving multiple lookups. Further, allocation to a memory having a slower access time but an increased capacity may be beneficial if the map data structure 300 comprises one or more values 312-321 of a relatively large size. This map data structure memory allocation technique may be applied to any type of map or other associative array data. The keys 302-311 and values 312-321 of the map data structure 300 may or may not be allocated contiguously within a given memory.

While the illustrated example depicts the map data structure 300 divided into two portions representing the key portion 350 and the value portion 352 allocated to two different memory classes (I, II), the library 120 may determine any number of portions of the map data structure 300, and may allocate the portions to any number of memory classes or individual memories. Further, in some embodiments, the library 120 may make its allocation decisions based on one or more parameters 136 provided by the program 124. For example, the parameter 136 may indicate how to divide the map data structure 300 into portions, how many portions should be created, which memory classes (I, II) to use, which memories 106, 107, 108, 109 to use, which portions of the map data structure 300 should be allocated to which memories 106, 107, 108, 109 or which classes (I, II), or the like.

FIG. 4 is a diagram illustrating an example memory allocation by the processing system 100 of FIG. 1 for a binary tree data structure 400 in accordance with some embodiments.
The binary tree data structure 400 comprises a plurality of nodes, with each node storing node metadata 402-412 (e.g., information regarding node ID, keys, pointers, or links to other nodes) and node data 422-432. A memory access to retrieve node data, such as node data 426, typically requires traversal of multiple nodes of the binary tree data structure 400 in accordance with any of a variety of traversal schemes. For example, in the case of an in-order traversal scheme, a memory access to retrieve node data 426 would require that the processor 102 traverse the binary tree 400 beginning with node metadata 409, then node metadata 405, node metadata 403, and finally node metadata 406 to retrieve node data 426. In the case of a level-order traversal scheme, a memory access to retrieve node data 426 would require that the processor 102 traverse the binary tree 400 beginning with the root node metadata 402, then node metadata 403, node metadata 404, node metadata 405, and finally node metadata 406 to retrieve node data 426. Conventional memory allocation arbitrarily allocates the node metadata 402-412 and the node data 422-432 among the memories 106, 107, 108, 109 of the multiclass memory system 101, such that nodes that will be traversed consecutively according to the traversal scheme may be allocated to separate memories, so that traversal of the binary tree data structure 400 may require each of the separate memories 106, 107, 108, 109 to be accessed. These conventional approaches introduce inefficiencies, as multiple memories may need to be accessed multiple times to reach the requested node, and frequently accessed portions of the binary tree data structure 400 may be stored at memories with slower access times. In contrast, in the illustrated example, the operating system 121 allocates portions of the binary tree data structure 400 such that the node metadata 402-412 of the binary tree data structure 400 is allocated to memories with faster access times, while the corresponding node data 422-432 of the binary tree data structure 400 is allocated to memories with slower access times.

Responsive to the memory allocation function call 126 to the library 120 via the API 122 for allocating memory to the data structure 128, the library 120 identifies the data structure 128 as the binary tree data structure 400. Based on the type table 130, one or more parameters 136 provided by the program 124, or the data structure 128 itself, the library 120 determines how the binary tree data structure 400 is to be divided into portions and allocated among the multiclass memory system 101. In the illustrated embodiment, two portions are depicted: the first representing a node metadata portion 450 of the binary tree data structure 400, and the second representing a node data portion 452 of the binary tree data structure 400. For ease of illustration, the node metadata portion 450 and the node data portion 452 only indicate select nodes of the binary tree data structure 400; however, the node metadata portion 450 represents all of the node metadata 402-412 and the node data portion 452 represents all of the node data 422-432.

The operating system 121 allocates the node metadata portion 450 to memory class I comprising memories 106, 107 with relatively faster access times and the node data portion 452 to memory class II comprising memories 108, 109 with relatively slower access times.
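The following C++ sketch shows one plausible node-level realization of this metadata/data split, with the small traversal-critical fields placed in the faster class and the bulk payload reached through an indirection into the slower class. The NodeMeta and NodeData layouts and the mm_alloc_in_class() hook are assumptions made for illustration only.

    #include <cstddef>

    enum MemClass { CLASS_I, CLASS_II };

    // Placeholder multiclass-aware allocator; uses the default heap here.
    static void* mm_alloc_in_class(std::size_t bytes, MemClass) {
        return ::operator new(bytes);
    }

    struct NodeData { char payload[256]; };  // bulk per-node data
    struct NodeMeta {                        // small traversal-critical fields
        int       key;
        NodeMeta* left;
        NodeMeta* right;
        NodeData* data;                      // indirection into slower memory
    };

    // Metadata (touched on every comparison during traversal) goes to the
    // faster class; the payload (touched once, when the search terminates at
    // this node) goes to the slower, larger class.
    static NodeMeta* alloc_node(int key) {
        NodeMeta* m = static_cast<NodeMeta*>(
            mm_alloc_in_class(sizeof(NodeMeta), CLASS_I));
        m->key = key;
        m->left = nullptr;
        m->right = nullptr;
        m->data = static_cast<NodeData*>(
            mm_alloc_in_class(sizeof(NodeData), CLASS_II));
        return m;
    }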
As a result, the traversal of the binary tree data structure 400 may proceed quickly, since the node metadata 402-412 will be accessed from one or more memories 106, 107 with faster access times, and then the memory controller 112 may retrieve the requested node data from a memory with slower access times. Further, allocation to a memory having a slower access time but an increased capacity may be beneficial for nodes of the binary tree data structure 400 comprising node data 422-432 of a relatively large size.

In another embodiment, the operating system 121 allocates portions of the binary tree data structure 400 based on the traversal order of the nodes, such that segments having nodes that are earlier in the traversal order according to the traversal scheme of the binary tree data structure 400 are allocated to memories with faster access times, while segments having nodes later in the traversal order according to the traversal scheme of the binary tree data structure 400 are allocated to memories with slower access times. For example, in the context of a level-order traversal scheme, since the node metadata of higher levels (i.e., closer to the root node) will be accessed at least as frequently (and likely more frequently) than the node metadata of lower levels (i.e., closer to the branches), the operating system 121 may allocate the first three levels (comprising node metadata 402-408) to memory class I comprising memories 106, 107 with relatively faster access times and the branch level (comprising node metadata 409-412) to memory class II comprising memories 108, 109 with relatively slower access times. As a result, a memory access of node data 427 will only require accessing one or more memories 106, 107 of class I, while a memory access of node data 430 will require accessing one or more memories 106, 107 of class I for node metadata 402-408 as well as one or more memories 108, 109 of class II for node metadata 409, 410. Since node metadata 402-408 is accessed from memories 106, 107 having relatively faster access times, the processor 102 is able to traverse the first three levels relatively quickly, allowing for more efficient memory accesses of the binary tree data structure 400. This binary tree data structure memory allocation technique may be applied to any type of graph data structure, for example, a ternary tree structure, a B+ tree structure, a directed acyclic graph (DAG), or the like. Further, the node metadata 402-412 and the node data 422-432 may or may not be allocated contiguously within a given memory.

While the illustrated example depicts the binary tree data structure 400 divided into two portions representing the node metadata portion 450 and the node data portion 452 allocated to two different memory classes (I, II), the library 120 may determine any number of portions of the binary tree data structure 400, and may allocate the portions to any number of memory classes or individual memories. Further, in some embodiments, the library 120 may make its allocation decisions based on one or more parameters 136 provided by the program 124. For example, the parameter 136 may indicate how to divide the binary tree data structure 400 into portions, how many portions should be created, which memory classes (I, II) to use, which memories 106, 107, 108, 109 to use, which portions of the binary tree data structure 400 should be allocated to which memories 106, 107, 108, 109 or which classes (I, II), the traversal scheme, or the like.
FIG. 5 is a flow diagram illustrating an example method 500 for memory allocation for a data structure among memories of a multiclass memory system in accordance with some embodiments. For ease of reference, the method 500 is described below in the example context of the multiclass memory system 101 of FIG. 1. The method 500 initiates at block 502, whereby the processing system 100 receives the memory allocation function call 126 when the processor core 111 executes the software program 124 comprising the memory allocation function call 126 to the library 120 to allocate memory to a data structure 128.

At block 504, the processing system 100 accesses the library 120 via the API 122. The library 120 provides data structures, algorithms, and other services through the API 122 to the programmer or other user, such that the back-end implementation of the library 120 dynamically handles memory allocation decisions. This allows for allocation decisions based on higher-level semantic or domain-specific knowledge of how data will be accessed. For example, in some embodiments, the library 120 may use a multilevel-memory-aware software interface to selectively allocate data structures to the memories 106, 107, 108, 109 of the multiclass memory system 101, or it may maintain its own pools of memory pages from the different memory levels and explicitly handle the allocation of the data structure to these pages as it sees fit. The library 120 may be any library that transparently manages the memory allocation, for example, the C++ standard template library (STL), Java standard libraries, C# and the .NET framework, custom libraries, domain-specific libraries, and the like.

At block 506, the library 120 identifies the type of the data structure 128 based on, for example, one or more parameters 136 included with the memory allocation function call, heuristics, or the like. In at least one embodiment, the library 120 references the data structure type table 130 to determine information related to allocation of the data structure 128 among the memories 106, 107, 108, 109 of the multiclass memory system 101. For example, the library 120 may use the type table 130 to identify portions of the data structure 128 in accordance with block 508. At block 508, the library 120 identifies portions 132, 134 of the data structure 128 based on the data structure type. In some embodiments, the library 120 identifies portions 132, 134 of the data structure 128 based on one or more parameters 136 provided by the program 124. The portions may be determined based on access frequency, data size, or the like. The library 120 indicates to the operating system 121 how the data structure 128 is to be allocated based on the portions 132, 134.

At block 510, the operating system 121 allocates the portions 132, 134 of the data structure among multiple memories 106, 109 of the multiclass memory system 101. The allocation may be based on the dynamic access patterns of the type of data structure (e.g., more frequently used portions should be allocated to memories with faster access times), the amount of memory available in each memory 106, 107, 108, 109 or class (e.g., as much of the data structure 128 as possible should be allocated to the memories with faster access times as long as they have available memory space), a combination of these, and the like.
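One simple policy consistent with the space-availability heuristic above is fast-first allocation with fallback, sketched below in C++; mm_bytes_available() and mm_alloc_in_class() are assumed interfaces rather than disclosed ones.

    #include <cstddef>

    enum MemClass { CLASS_I, CLASS_II };

    // Placeholders; a real system would query and allocate via the OS.
    static std::size_t mm_bytes_available(MemClass) { return 1u << 20; }
    static void* mm_alloc_in_class(std::size_t bytes, MemClass) {
        return ::operator new(bytes);
    }

    // Prefer the faster class while it has room; otherwise fall back to the
    // slower class rather than failing the request outright.
    static void* alloc_fast_first(std::size_t bytes) {
        if (mm_bytes_available(CLASS_I) >= bytes)
            return mm_alloc_in_class(bytes, CLASS_I);
        return mm_alloc_in_class(bytes, CLASS_II);
    }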
In at least one embodiment, the portions 132, 134 represent the metadata and data, respectively, of the data structure 128, such that the metadata portion 132 is allocated to a first set of memories 106 of the multiple memories 106, 107, 108, 109 and the data portion 134 is allocated to a second set of memories 109 of the multiple memories 106, 107, 108, 109. Such an allocation may be made to improve performance of the processor 102 because the metadata is smaller than the data of the data structure 128, because the metadata is accessed more frequently than the data, a combination of these, and the like.

FIG. 6 is a block diagram of the processing system 100 of FIG. 1 performing a memory allocation for allocating the data structure 128 among the memories 106, 107, 108, 109 of the multiclass memory system 101 in accordance with some embodiments. In the illustrated embodiment, a library 620 provides an interface for software 624 to communicate memory locality information, preferences, and the like to the underlying system software (e.g., operating system (OS), hypervisor, etc.). That is, the library provides data structures, algorithms, and other services through an API 622, such that a programmer or other user may use function calls 602, 604 to indicate how an operating system 621 is to allocate the data structure 128 among the memories 106, 107, 108, 109 of the multiclass memory system 101. For example, in at least one embodiment, the API 622 includes a function call 602 for each memory 106, 107, 108, 109, each memory class (I, II), each memory level, or each memory type. The function call 602 comprises a memory indicator (depicted as "I-1") to indicate a memory allocation to memory 106. The function call 602 may further include parameters to indicate which data structure or other memory object is to be allocated to this memory 106, data size criteria, which memory (if the function call memory indicator indicates a memory class, level, or type rather than a specific memory), or the like.

In some embodiments, the API 622 includes a general memory allocation function call 604 that accepts parameters, including the data structure 128 (depicted as "DS" in function call 604) to be allocated and a memory indicator ("I-1") to indicate which memory 106 the data structure 128 is to be allocated to. In some embodiments, the memory indicator ("I-1") may indicate a memory class (I, II), multiple memories, multiple memory classes, or the like. Further, different embodiments may allow or require any of a number of parameters, for example, data structure type, data structure portions, allocation size limits, and the like.

As illustrated, when the processor core 111 executes the software program 624 comprising any of the memory allocation function calls 602, 604, 606 to the library 620 via the API 622, the library 620 indicates that the operating system 621 is to allocate the data structure 128 to the memory 106 identified by the memory indicator ("I-1"). While the illustrated embodiment uses the standard C library "malloc" for the memory allocation function calls 602, 604, these techniques may easily be applied to other programming languages and their respective memory allocation interfaces as well. In some embodiments, a directive 606 or other annotation-like syntax is used to specify a memory allocation by specifying a particular memory or memory class directly to the compiler via a memory indicator.
For example, in some embodiments, the directive 606 is processed by a compiler, and the information is passed to the library 620 or the operating system 621.

Different embodiments may employ different conventions for handling the allocations. For example, in some embodiments, a memory allocation specified by the memory indicator of the function call 602, 604 or directive 606 is a strict requirement, such that if the indicated memory 106 does not have enough available memory space to satisfy the memory allocation request, the allocation would fail (e.g., a "NULL" pointer may be returned by the function calls 602, 604 with an "I-1" memory indicator). In other embodiments, the memory allocation specified by the function call 602, 604 or the directive 606 is treated more as a suggestion, such that if the indicated memory 106 does not have enough available memory space to satisfy the memory allocation request, the operating system 621 allocates the data structure 128 other than as specified, for example, according to other heuristics, arbitrarily, or the like. In at least one embodiment, if the memory allocation specified by the function call or directive is not followed, the processing system 100 returns additional information to the programmer or other user regarding the actual allocation.

Some embodiments of the library 620 provide a "realloc" or "remap" function call that instructs (or suggests to) the OS that an existing allocation should be reallocated to a new level of memory (optionally resizing the allocation at the same time). Variants may include an interface to allow subsets or regions of an existing memory allocation to be remapped. Further, some embodiments of the library 620 provide additional interface functions to help differentiate where an allocation came from. For example, in one embodiment, the function call "type whichMemory(ptr)" returns "I-1" if "ptr" is associated with a physical memory location in memory 106. In some embodiments, these memory allocation techniques are used in combination with Non-Uniform Memory Access (NUMA) based memory allocation schemes.

FIG. 7 is a flow diagram illustrating an example method 700 for memory allocation for a data structure among memories of a multiclass memory system in accordance with some embodiments. For ease of reference, the method 700 is described below in the example context of the multiclass memory system 101 of FIG. 6. The method 700 initiates at block 702, whereby the processing system 100 receives the memory allocation function call 602, 604 when the processor core 111 executes the software program 624 comprising the memory allocation function call 602, 604 to the library 620 to allocate memory to a data structure 128.

At block 704, the processing system 100 accesses the library 620 via the API 622. The library 620 provides data structures, algorithms, and other services through the API 622 to the programmer or other user, such that it acts as an interface for the software 624 to communicate memory locality information, preferences, and the like to the underlying system software. As such, the library 620 allows a programmer or other user to specify the memory allocation via the function calls.

At block 706, the processing system 100 identifies the memory indicator (depicted as "I-1" in FIG. 6) of the function call 602, 604 to determine the specified location for the allocation.
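To make the memory indicator concrete, the following C++ sketch shows hypothetical indicator-based allocation, reallocation, and query calls in the spirit of function calls 602, 604 and the whichMemory interface described above. The disclosure does not prescribe exact names or signatures, so malloc_class(), realloc_class(), which_memory(), and the indicator constants are all illustrative.

    #include <cstddef>
    #include <cstdlib>

    enum MemIndicator { MEM_I_1, MEM_I_2, MEM_II_1, MEM_II_2 };

    // Placeholder bodies so the sketch compiles; a real implementation would
    // route each request to the memory identified by the indicator.
    static void* malloc_class(std::size_t bytes, MemIndicator) {
        return std::malloc(bytes);
    }
    static void* realloc_class(void* p, std::size_t bytes, MemIndicator) {
        return std::realloc(p, bytes);
    }
    static MemIndicator which_memory(const void*) { return MEM_I_1; }

    static void example() {
        // Strict placement: a null return signals that the request could not
        // be met in the fast in-package memory, so fall back to slower memory.
        void* ds = malloc_class(4096, MEM_I_1);
        if (ds == nullptr)
            ds = malloc_class(4096, MEM_II_1);

        // Later, migrate (and grow) the allocation to a slower, larger memory,
        // then query where the allocation actually ended up.
        ds = realloc_class(ds, 8192, MEM_II_1);
        MemIndicator where = which_memory(ds);
        (void)where;
        std::free(ds);  // placeholder cleanup for the sketch
    }

The null-return fallback mirrors the strict-placement convention described above, while realloc_class() mirrors the remap-style migration between memory levels.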
For example, the memory indicator ("I-1") may specify one or more memories 106, 107, 108, 109, one or more classes (I, II), one or more memory levels, one or more memory types, or the like. The memory indicator ("I-1") may comprise a parameter passed via the function call, a syntax indicator separate from the function call, or the function call itself.

At block 708, the processing system 100 identifies portions of the data structure 128 based on parameters of the function call 602, 604. In some embodiments, the parameter may specify the portions of the data structure 128 by identifying the type of the data structure 128, the boundaries of the portions, the data size of the portions, the data type for the portions, or the like. The data structure 128 may be divided into any number of data portions of any size, including a single portion representing the entire data structure 128.

At block 710, the operating system 621 allocates portions of the data structure 128 among multiple memories 106, 107, 108, 109 of the multiclass memory system 101 based on the memory indicator ("I-1"). For example, in response to the function call 604 comprising the memory indicator "I-1" and the parameter "DS," the operating system 621 allocates the entire data structure 128 to the first memory 106 of class I. In some embodiments, the processing system 100 may treat the function call 602, 604, and its specified memory indicator "I-1" and parameters, as a suggestion rather than a requirement. Generally, the method 700 facilitates efficient utilization of a multiclass memory system by allowing programmers or other users (including application software) to manage the allocation of data structures among the multiple memories of the multiclass memory system using function calls or directives comprising a memory indicator.

In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM), or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.

Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below.
Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.

Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all of the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.